<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/mgmt/glusterd/src/glusterd-server-quorum.c, branch v7.0rc2</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>glusterd: Fix coverity defects &amp; put coverity annotations</title>
<updated>2019-05-02T08:06:06+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2019-04-25T06:30:52+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=91c19c50a3c7b336177dda186500568be43626b8'/>
<id>91c19c50a3c7b336177dda186500568be43626b8</id>
<content type='text'>
Along with fixing a few defects, put the required annotations on the defects
which are marked ignore/false positive/intentional as per the coverity defect
sheet. This should avoid the per-component graph showing many defects as open
on the coverity glusterfs web page.

Updates: bz#789278
Change-Id: I19461dc3603a3bd8f88866a1ab3db43d783af8e4
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Along with fixing a few defects, put the required annotations on the defects
which are marked ignore/false positive/intentional as per the coverity defect
sheet. This should avoid the per-component graph showing many defects as open
on the coverity glusterfs web page.
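
For illustration, an annotation is a structured comment placed just before
the flagged line, naming the Coverity event whose report should stay closed
(a hypothetical snippet, not a hunk from this patch):

    /* The return value is deliberately ignored here; the annotation
     * below keeps the corresponding report marked as intentional.
     * The event tag shown is illustrative. */
    /* coverity[unused_value] */
    ret = sys_unlink (sockfpath);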

Updates: bz#789278
Change-Id: I19461dc3603a3bd8f88866a1ab3db43d783af8e4
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>libglusterfs: Move devel headers under glusterfs directory</title>
<updated>2018-12-05T21:47:04+00:00</updated>
<author>
<name>ShyamsundarR</name>
<email>srangana@redhat.com</email>
</author>
<published>2018-11-29T19:08:06+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=20ef211cfa5b5fcc437484a879fdc5d4c66bbaf5'/>
<id>20ef211cfa5b5fcc437484a879fdc5d4c66bbaf5</id>
<content type='text'>
libglusterfs devel package headers are referenced in code using program
include semantics. While this works, it can be better, especially when
dealing with out-of-tree xlator builds or out-of-tree devel package usage
in general.

Towards this, the following changes are made:
- Moved all devel headers under a glusterfs directory
- Included these headers using system header notation &lt;&gt; in all
code outside of libglusterfs
- Included these headers using own program notation "" within
libglusterfs

This change, although big, just moves the headers around and corrects how
they are included from other sources.

This helps us correctly include the libglusterfs headers without
namespace conflicts.

Change-Id: Id2a98854e671a7ee5d73be44da5ba1a74252423b
Updates: bz#1193929
Signed-off-by: ShyamsundarR &lt;srangana@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
libglusterfs devel package headers are referenced in code using program
include semantics. While this works, it can be better, especially when
dealing with out-of-tree xlator builds or out-of-tree devel package usage
in general.

Towards this, the following changes are made:
- Moved all devel headers under a glusterfs directory
- Included these headers using system header notation &lt;&gt; in all
code outside of libglusterfs
- Included these headers using own program notation "" within
libglusterfs

This change, although big, just moves the headers around and corrects how
they are included from other sources.

This helps us correctly include the libglusterfs headers without
namespace conflicts.
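
Concretely, after this change a consumer outside libglusterfs picks the
headers up from the installed include path (header names here are
illustrative):

    #include &lt;glusterfs/glusterfs.h&gt;
    #include &lt;glusterfs/xlator.h&gt;

while code within libglusterfs itself keeps program notation:

    #include "glusterfs/glusterfs.h"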

Change-Id: Id2a98854e671a7ee5d73be44da5ba1a74252423b
Updates: bz#1193929
Signed-off-by: ShyamsundarR &lt;srangana@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: perform rcu_read_lock/unlock() under cleanup_lock mutex</title>
<updated>2018-12-03T17:03:57+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2018-11-28T10:43:58+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=2bb0e89e4bb113a93c6e786446a140cd99261af8'/>
<id>2bb0e89e4bb113a93c6e786446a140cd99261af8</id>
<content type='text'>
Problem: glusterd should not try to acquire locks on any resources
once it has received a SIGTERM and cleanup has started. Otherwise we
might hit a segfault, since the thread going through the cleanup path
will be freeing up the resources while some other thread might be
trying to acquire locks on the already freed resources.

Solution: perform rcu_read_lock/unlock() under cleanup_lock mutex.

fixes: bz#1654270
Change-Id: I87a97cfe4f272f74f246d688660934638911ce54
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem: glusterd should not try to acquire locks on any resources
once it has received a SIGTERM and cleanup has started. Otherwise we
might hit a segfault, since the thread going through the cleanup path
will be freeing up the resources while some other thread might be
trying to acquire locks on the already freed resources.

Solution: perform rcu_read_lock/unlock() under cleanup_lock mutex.
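
A minimal sketch of the idea, assuming a process-wide cleanup_lock
mutex in glusterd's configuration (the actual patch hides this behind
wrapper macros):

    /* Take cleanup_lock around the RCU read-side calls so a reader
     * can never interleave with the SIGTERM cleanup path. */
    pthread_mutex_lock (&amp;conf-&gt;cleanup_lock);
    {
        rcu_read_lock ();
    }
    pthread_mutex_unlock (&amp;conf-&gt;cleanup_lock);

    /* ... RCU read-side critical section ... */

    pthread_mutex_lock (&amp;conf-&gt;cleanup_lock);
    {
        rcu_read_unlock ();
    }
    pthread_mutex_unlock (&amp;conf-&gt;cleanup_lock);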

fixes: bz#1654270
Change-Id: I87a97cfe4f272f74f246d688660934638911ce54
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>Land part 2 of clang-format changes</title>
<updated>2018-09-12T12:22:45+00:00</updated>
<author>
<name>Gluster Ant</name>
<email>bugzilla-bot@gluster.org</email>
</author>
<published>2018-09-12T12:22:45+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=e16868dede6455cab644805af6fe1ac312775e13'/>
<id>e16868dede6455cab644805af6fe1ac312775e13</id>
<content type='text'>
Change-Id: Ia84cc24c8924e6d22d02ac15f611c10e26db99b4
Signed-off-by: Nigel Babu &lt;nigelb@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Change-Id: Ia84cc24c8924e6d22d02ac15f611c10e26db99b4
Signed-off-by: Nigel Babu &lt;nigelb@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: connect to an existing brick process when quorum status is NOT_APPLICABLE_QUORUM</title>
<updated>2018-01-05T07:31:43+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2018-01-03T08:59:51+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=01caa839ebda29c2fe209c4767626f2f49ea3e71'/>
<id>01caa839ebda29c2fe209c4767626f2f49ea3e71</id>
<content type='text'>
First of all, this patch reverts commit 635c1c3, as it causes a
regression where bricks do not come up on time when a node is rebooted.
This patch then fixes the problem in a different way, by just trying to
connect to an existing running brick when quorum status is not
applicable.

Change-Id: I0efb5901832824b1c15dcac529bffac85173e097
BUG: 1509845
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
First of all, this patch reverts commit 635c1c3, as it causes a
regression where bricks do not come up on time when a node is rebooted.
This patch then fixes the problem in a different way, by just trying to
connect to an existing running brick when quorum status is not
applicable.
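
A sketch of the intended control flow (the function name follows the
glusterd sources, but the hunk is illustrative, not literal):

    if (quorum_status == NOT_APPLICABLE_QUORUM) {
        /* The brick process may still be running from before the
         * glusterd restart; attach to it instead of spawning a
         * duplicate. */
        ret = glusterd_brick_connect (volinfo, brickinfo, socketpath);
    }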

Change-Id: I0efb5901832824b1c15dcac529bffac85173e097
BUG: 1509845
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: restart the brick if quorum status is NOT_APPLICABLE_QUORUM</title>
<updated>2017-11-09T05:10:36+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2017-11-06T07:53:32+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=635c1c3691a102aa658cf1219fa41ca30dd134ba'/>
<id>635c1c3691a102aa658cf1219fa41ca30dd134ba</id>
<content type='text'>
If a volume does not have server quorum enabled, and all the glusterd
instances from the other peers in the trusted storage pool are down,
then on restarting glusterd the brick start is never triggered,
resulting in the brick not coming up.

Change-Id: If1458e03b50a113f1653db553bb2350d11577539
BUG: 1509845
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
If a volume does not have server quorum enabled, and all the glusterd
instances from the other peers in the trusted storage pool are down,
then on restarting glusterd the brick start is never triggered,
resulting in the brick not coming up.

Change-Id: If1458e03b50a113f1653db553bb2350d11577539
BUG: 1509845
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: Fix a few coverity errors</title>
<updated>2017-11-06T03:56:48+00:00</updated>
<author>
<name>Prashanth Pai</name>
<email>ppai@redhat.com</email>
</author>
<published>2017-11-03T06:23:12+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=f461d75b226a5bc6c088280e43a96f9b54f33af5'/>
<id>f461d75b226a5bc6c088280e43a96f9b54f33af5</id>
<content type='text'>
Fixes issues 810, 248, 491, 499, 85, 786, 811, 43, and 44
from the report at [1].

[1]: https://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-10-30-9aa574a5/html/

BUG: 789278
Change-Id: I27ebae2ffb2256b8eef0757d768cc46e5a942e9f
Signed-off-by: Prashanth Pai &lt;ppai@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Fixes issues 810, 248, 491, 499, 85, 786, 811, 43, and 44
from the report at [1].

[1]: https://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-10-30-9aa574a5/html/

BUG: 789278
Change-Id: I27ebae2ffb2256b8eef0757d768cc46e5a942e9f
Signed-off-by: Prashanth Pai &lt;ppai@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: fix brick restart parallelism</title>
<updated>2017-11-01T03:41:36+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2017-10-26T08:56:30+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=82be66ef8e9e3127d41a4c843daf74c1d8aec4aa'/>
<id>82be66ef8e9e3127d41a4c843daf74c1d8aec4aa</id>
<content type='text'>
glusterd's brick restart logic is not always sequential, as there are
at least three different ways bricks get restarted:
1. through friend-sm and glusterd_spawn_daemons ()
2. through friend-sm and handling volume quorum action
3. through friend handshaking when there is a mismatch on quorum on
friend import.

In a brick multiplexing setup, glusterd ended up trying to spawn the
same brick process a couple of times, as two threads hit
glusterd_brick_start () within a fraction of a millisecond of each
other, leaving glusterd no way of rejecting either of them since the
brick start criteria were met in both cases.

As a solution, this race is controlled by two different guards: one
is a boolean called start_triggered, which indicates a brick start
has been triggered and continues to be true till the brick dies or is
killed; the second is a mutex lock to ensure that for a particular
brick we don't end up getting into glusterd_brick_start () more than
once at the same point of time.

Change-Id: I292f1e58d6971e111725e1baea1fe98b890b43e2
BUG: 1506513
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
glusterd's brick restart logic is not always sequential, as there are
at least three different ways bricks get restarted:
1. through friend-sm and glusterd_spawn_daemons ()
2. through friend-sm and handling volume quorum action
3. through friend handshaking when there is a mismatch on quorum on
friend import.

In a brick multiplexing setup, glusterd ended up trying to spawn the
same brick process a couple of times, as two threads hit
glusterd_brick_start () within a fraction of a millisecond of each
other, leaving glusterd no way of rejecting either of them since the
brick start criteria were met in both cases.

As a solution, this race is controlled by two different guards: one
is a boolean called start_triggered, which indicates a brick start
has been triggered and continues to be true till the brick dies or is
killed; the second is a mutex lock to ensure that for a particular
brick we don't end up getting into glusterd_brick_start () more than
once at the same point of time.
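
A minimal sketch of the two guards (field names mirror the commit
message; the surrounding structures are simplified):

    pthread_mutex_lock (&amp;brickinfo-&gt;restart_mutex);
    {
        if (brickinfo-&gt;start_triggered) {
            /* Another code path already started this brick. */
            ret = 0;
        } else {
            brickinfo-&gt;start_triggered = _gf_true;
            ret = glusterd_volume_start_glusterfs (volinfo, brickinfo,
                                                   wait);
            if (ret)
                /* Start failed; allow a future retry. */
                brickinfo-&gt;start_triggered = _gf_false;
        }
    }
    pthread_mutex_unlock (&amp;brickinfo-&gt;restart_mutex);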

Change-Id: I292f1e58d6971e111725e1baea1fe98b890b43e2
BUG: 1506513
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: persist brickinfo's port change into glusterd's store</title>
<updated>2017-10-31T04:34:10+00:00</updated>
<author>
<name>Gaurav Yadav</name>
<email>gyadav@redhat.com</email>
</author>
<published>2017-10-27T10:34:46+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=15fe99995b8650a677b097028bc14d61a5dd5e1b'/>
<id>15fe99995b8650a677b097028bc14d61a5dd5e1b</id>
<content type='text'>
Problem:
Consider a case where a node reboot is performed and, prior to the
reboot, the brick was listening on port 49153. Post reboot, glusterd
assigned 49152 to the brick and started the brick process, but the new
port was never persisted. Now when glusterd restarts, it always reads
the port from its persisted store, i.e. 49153, whereas the pmap signin
happens with the correct port, i.e. 49152.

Fix:
Make sure that when glusterd_brick_start is called,
glusterd_store_volinfo is eventually invoked.

Change-Id: Ic0efbd48c51d39729ed951a42922d0e59f7115a1
BUG: 1506589
Signed-off-by: Gaurav Yadav &lt;gyadav@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem:
Consider a case where a node reboot is performed and, prior to the
reboot, the brick was listening on port 49153. Post reboot, glusterd
assigned 49152 to the brick and started the brick process, but the new
port was never persisted. Now when glusterd restarts, it always reads
the port from its persisted store, i.e. 49153, whereas the pmap signin
happens with the correct port, i.e. 49152.

Fix:
Make sure that when glusterd_brick_start is called,
glusterd_store_volinfo is eventually invoked.
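
A sketch of the resulting call shape (simplified; the store call
persists the refreshed brickinfo, including its port):

    ret = glusterd_brick_start (volinfo, brickinfo, wait);
    if (ret)
        goto out;

    /* Persist the possibly re-assigned brick port so that a later
     * glusterd restart reads the same port the brick actually
     * signed in with. */
    ret = glusterd_store_volinfo (volinfo,
                                  GLUSTERD_VOLINFO_VER_AC_NONE);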

Change-Id: Ic0efbd48c51d39729ed951a42922d0e59f7115a1
BUG: 1506589
Signed-off-by: Gaurav Yadav &lt;gyadav@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: streamline logic flow in glusterd_validate_quorum()</title>
<updated>2017-08-01T10:11:37+00:00</updated>
<author>
<name>Michael Adam</name>
<email>obnox@samba.org</email>
</author>
<published>2017-06-13T12:44:39+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=3023c4bd7651d4503830c36676cd08864e315cdf'/>
<id>3023c4bd7651d4503830c36676cd08864e315cdf</id>
<content type='text'>
Exit earlier when the volume does not use server quorum, thereby
sparing the actual quorum calculation in that case. This also
improves the overall readability of the function.

The patch is best seen with the --patience diff option to
understand what it does (e.g. "git show --patience").

Change-Id: Ifce50bc3f73d79d3d6226473661a83696d65149a
BUG: 1476719
Signed-off-by: Michael Adam &lt;obnox@samba.org&gt;
Reviewed-on: https://review.gluster.org/17924
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Exit earlier when the volume does not use server quorum, thereby
sparing the actual quorum calculation in that case. This also
improves the overall readability of the function.

The patch is best seen with the --patience diff option to
understand what it does (e.g. "git show --patience").
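
The shape of the change is the classic early-return refactor (a
schematic sketch, not the literal hunk):

    if (!glusterd_is_volume_in_server_quorum (volinfo)) {
        /* Not a server-quorum volume: nothing to validate, so
         * skip the quorum calculation entirely. */
        ret = 0;
        goto out;
    }
    /* The actual quorum calculation runs only past this point. */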

Change-Id: Ifce50bc3f73d79d3d6226473661a83696d65149a
BUG: 1476719
Signed-off-by: Michael Adam &lt;obnox@samba.org&gt;
Reviewed-on: https://review.gluster.org/17924
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
</feed>
