<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/mgmt/glusterd/src/glusterd.h, branch v6.8</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>glusterd: migrating rebalance commands to mgmt_v3 framework</title>
<updated>2018-12-18T04:01:52+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2018-11-30T10:46:55+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=0b4b111fbd80a5d400a07d61e2b99f230f9be76f'/>
<id>0b4b111fbd80a5d400a07d61e2b99f230f9be76f</id>
<content type='text'>
Current rebalance commands use the op_state machine framework.
Port them to use the mgmt_v3 framework (see the sketch below).
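
A minimal sketch of the handler change (both functions exist in the
glusterd sources; the exact arguments here are approximate):

/* old: drive the op state machine */
ret = glusterd_op_begin(req, GD_OP_REBALANCE, dict, msg, sizeof(msg));

/* new: drive the mgmt_v3 phases (lock, pre-validate, commit,
 * post-validate, unlock) in one call */
ret = glusterd_mgmt_v3_initiate_all_phases(req, GD_OP_REBALANCE, dict);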

Change-Id: I6faf4a6335c2e2f3d54bbde79908a7749e4613e7
fixes: bz#1655827
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: fix get_mux_limit_per_process to read default value</title>
<updated>2018-12-07T07:08:58+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2018-12-06T17:44:57+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=916df2c12b19ac84b7806d31226d7f832ca7e2bb'/>
<id>916df2c12b19ac84b7806d31226d7f832ca7e2bb</id>
<content type='text'>
get_mux_limit_per_process() reads the global option dictionary and,
when it doesn't find the key, assumes that the
cluster.max-bricks-per-process option isn't configured; however, the
default value should be picked up in that case (see the sketch below).
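
A minimal sketch of the intended fallback (dict_get_str() and
gf_string2int() are real libglusterfs calls; the default macro name
is hypothetical):

static int
get_mux_limit_per_process(int *mux_limit)
{
    glusterd_conf_t *priv = THIS-&gt;private;
    char *value = NULL;
    int ret = -1;

    ret = dict_get_str(priv-&gt;opts, "cluster.max-bricks-per-process",
                       &amp;value);
    if (ret) {
        /* Key absent: pick up the compiled-in default instead of
         * treating the option as unconfigured. */
        *mux_limit = DEFAULT_MUX_LIMIT; /* hypothetical macro */
        return 0;
    }

    return gf_string2int(value, mux_limit);
}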

Change-Id: I35dd8da084adbf59793d58557e818d8e6c17f9f3
Fixes: bz#1656951
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>libglusterfs: Move devel headers under glusterfs directory</title>
<updated>2018-12-05T21:47:04+00:00</updated>
<author>
<name>ShyamsundarR</name>
<email>srangana@redhat.com</email>
</author>
<published>2018-11-29T19:08:06+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=20ef211cfa5b5fcc437484a879fdc5d4c66bbaf5'/>
<id>20ef211cfa5b5fcc437484a879fdc5d4c66bbaf5</id>
<content type='text'>
libglusterfs devel package headers are referenced in code using the
include semantics of a program's own headers. While this works, it
can be improved, especially when dealing with out-of-tree xlator
builds or out-of-tree devel package usage in general.

Towards this, the following changes are done (illustrated in the
sketch below):
- Moved all devel headers under a glusterfs directory
- Included these headers using the system header notation &lt;&gt; in
all code outside of libglusterfs
- Included these headers using the program's own notation "" within
libglusterfs

This change, although big, just moves the headers around and makes
their inclusion from other sources correct.

This helps us include libglusterfs headers correctly, without
namespace conflicts.
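
The two notations after the move (these are real devel headers; the
file context is illustrative):

/* in an out-of-tree xlator, or any code outside libglusterfs */
#include &lt;glusterfs/glusterfs.h&gt;
#include &lt;glusterfs/xlator.h&gt;

/* within libglusterfs itself */
#include "glusterfs.h"
#include "xlator.h"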

Change-Id: Id2a98854e671a7ee5d73be44da5ba1a74252423b
Updates: bz#1193929
Signed-off-by: ShyamsundarR &lt;srangana@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: glusterd to regenerate volfiles when GD_OP_VERSION_MAX changes</title>
<updated>2018-12-05T21:36:25+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2018-11-20T07:02:32+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=d4723bdd30f0955ca68fec8c01bc87229c6a24c0'/>
<id>d4723bdd30f0955ca68fec8c01bc87229c6a24c0</id>
<content type='text'>
While glusterd has infrastructure that lets the rpm spec's
post-install scriptlet bring it up in an interim upgrade mode so
that all the volfiles are regenerated with the latest executable,
the container world doesn't follow the same methodology: a container
image always points to a specific gluster rpm, and that rpm doesn't
go through an upgrade process.

This fix does the following (sketched below):
1. If the glusterd.upgrade file doesn't exist, regenerate the
volfiles.
2. If the maximum operating version read from glusterd.upgrade
doesn't match GD_OP_VERSION_MAX, glusterd treats this as a version
that may introduce new options and regenerates the volfiles.
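
A rough sketch of that decision (GD_OP_VERSION_MAX is the real
macro; the helper names here are hypothetical):

int stored = 0;
gf_boolean_t regenerate = _gf_false;

if (glusterd_read_upgrade_file(&amp;stored) != 0)
    regenerate = _gf_true;  /* glusterd.upgrade doesn't exist yet */
else if (stored != GD_OP_VERSION_MAX)
    regenerate = _gf_true;  /* binary changed; new options possible */

if (regenerate) {
    glusterd_regenerate_all_volfiles(conf);          /* hypothetical */
    glusterd_write_upgrade_file(GD_OP_VERSION_MAX);  /* hypothetical */
}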

Tests done:

1. Bring up glusterd and check that the glusterd.upgrade file has
been created with the GD_OP_VERSION_MAX value.
2. After 1, restart glusterd and check that it hasn't regenerated
the volfiles, as there is no change between GD_OP_VERSION_MAX and
the op-version read from the file.
3. Bump up GD_OP_VERSION_MAX in the code by 1, recompile, and
restart glusterd; the volfiles should be regenerated again.

Note: The old way of regenerating volfiles during an rpm upgrade is
kept as-is for now, but it can eventually be sunset.

Change-Id: I75b49a1601c71e99f6a6bc360dd12dd03a96414b
Fixes: bz#1651463
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: perform rcu_read_lock/unlock() under cleanup_lock mutex</title>
<updated>2018-12-03T17:03:57+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2018-11-28T10:43:58+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=2bb0e89e4bb113a93c6e786446a140cd99261af8'/>
<id>2bb0e89e4bb113a93c6e786446a140cd99261af8</id>
<content type='text'>
Problem: glusterd should not try to acquire locks on any resources
once it has received a SIGTERM and cleanup has started. Otherwise we
might hit a segfault, since the thread going through the cleanup
path will be freeing the resources while some other thread may be
trying to acquire locks on the already-freed resources.

Solution: perform rcu_read_lock/unlock() under the cleanup_lock
mutex (see the sketch below).
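
A sketch of the idea as a macro (shape approximate; the
cleanup_lock mutex lives in the glusterfs ctx):

#define RCU_READ_LOCK                                             \
    do {                                                          \
        pthread_mutex_lock(&amp;(THIS-&gt;ctx)-&gt;cleanup_lock);     \
        rcu_read_lock();                                          \
        pthread_mutex_unlock(&amp;(THIS-&gt;ctx)-&gt;cleanup_lock);   \
    } while (0)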

fixes: bz#1654270
Change-Id: I87a97cfe4f272f74f246d688660934638911ce54
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd/mux: Optimize brick disconnect handler code</title>
<updated>2018-11-18T06:10:31+00:00</updated>
<author>
<name>Mohammed Rafi KC</name>
<email>rkavunga@redhat.com</email>
</author>
<published>2018-11-15T07:48:36+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=b4faa9e7a25bdf0582f8b0fd69aa1381c307a61e'/>
<id>b4faa9e7a25bdf0582f8b0fd69aa1381c307a61e</id>
<content type='text'>
Removed an unnecessary iteration in the brick disconnect handler
when brick multiplexing is enabled.

Change-Id: I62dd3337b7e7da085da5d76aaae206e0b0edff9f
fixes: bz#1650115
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: Use GF_ATOMIC to update 'blockers' counter at glusterd_conf</title>
<updated>2018-09-20T03:46:05+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawal@redhat.com</email>
</author>
<published>2018-09-19T09:02:22+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=4f6ae853ffa9d06446407f389aaef61ac0b3b424'/>
<id>4f6ae853ffa9d06446407f389aaef61ac0b3b424</id>
<content type='text'>
Problem:
Currently the glusterd code uses sync_lock/sync_unlock to update the
blockers counter, which can add delays to the overall transaction
phase, especially when glusterd processes a batch of volume stop
operations in brick multiplexing mode.

Solution: Use GF_ATOMIC to update the blockers counter so that
unnecessary context switching is avoided (see the sketch below).
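
A minimal sketch of the switch (the GF_ATOMIC_* macros are the real
libglusterfs helpers; the surrounding code is approximate):

/* before: counter bumped under the big sync lock */
synclock_lock(&amp;conf-&gt;big_lock);
conf-&gt;blockers++;
synclock_unlock(&amp;conf-&gt;big_lock);

/* after: blockers becomes a gf_atomic_t, updated lock-free */
GF_ATOMIC_INIT(conf-&gt;blockers, 0);  /* once, at init time */
GF_ATOMIC_INC(conf-&gt;blockers);
GF_ATOMIC_DEC(conf-&gt;blockers);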

Change-Id: Ie13177dfee2af66687ae7cf5c67405c152853990
Fixes: bz#1631128
Signed-off-by: Mohit Agrawal &lt;moagrawal@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: acquire lock to update volinfo structure</title>
<updated>2018-09-18T04:09:01+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2018-09-11T08:49:42+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=484f417da945cf83afdbf136bb4817311862a8d2'/>
<id>484f417da945cf83afdbf136bb4817311862a8d2</id>
<content type='text'>
Problem: With commit cb0339f92, we use a separate synctask for
restart_bricks. Two threads can therefore access and update the
same volinfo structure at the same time, which can leave volinfo
with inconsistent values and trigger assertion failures because of
unexpected values.

Solution: While updating the volinfo structure, acquire
store_volinfo_lock, and release it only once the thread has
completed its critical section (see the sketch below).
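
A sketch of the critical section (glusterd_store_volinfo() is the
real store routine; the lock field is named as in the commit
message):

pthread_mutex_lock(&amp;volinfo-&gt;store_volinfo_lock);
{
    ret = glusterd_store_volinfo(volinfo, ac);
}
pthread_mutex_unlock(&amp;volinfo-&gt;store_volinfo_lock);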

Fixes: bz#1627610
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;

Change-Id: I545e4e2368e3285d8f7aa28081ff4448abb72f5d
</content>
</entry>
<entry>
<title>Land clang-format changes</title>
<updated>2018-09-12T11:52:48+00:00</updated>
<author>
<name>Gluster Ant</name>
<email>bugzilla-bot@gluster.org</email>
</author>
<published>2018-09-12T11:52:48+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=45a71c0548b6fd2c757aa2e7b7671a1411948894'/>
<id>45a71c0548b6fd2c757aa2e7b7671a1411948894</id>
<content type='text'>
Change-Id: I6f5d8140a06f3c1b2d196849299f8d483028d33b
</content>
</entry>
<entry>
<title>glusterd: Fix Buffer size issues</title>
<updated>2018-09-04T14:01:59+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2018-08-28T18:48:32+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=8612a1ca192333c2b760455661647d83bed2fd92'/>
<id>8612a1ca192333c2b760455661647d83bed2fd92</id>
<content type='text'>
This patch fixes buffer size issue 1138522, a defect reported by
static analysis.
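
Illustrative only (not the actual hunk, which isn't quoted in this
message): buffer-size defects of this class are typically fixed by
bounding the copy with the destination's size:

char msg[1024];

/* risky: the literal bound can drift from the array size */
strncpy(msg, err_str, 2048);

/* fixed: derive the bound from the destination itself */
snprintf(msg, sizeof(msg), "%s", err_str);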

Change-Id: Ia12fc8f34f75704f8ed3efae2022c4fd67a8c76c
updates: bz#789278
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
</entry>
</feed>
