<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/mgmt/glusterd/src/glusterd-brick-ops.c, branch v7.5</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>glusterd-volgen.c: remove BD xlator from the graph</title>
<updated>2019-06-18T12:09:09+00:00</updated>
<author>
<name>Yaniv Kaul</name>
<email>ykaul@redhat.com</email>
</author>
<published>2019-05-26T08:18:05+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=2d278f0407ab7d29507dc697653b39d72ddee472'/>
<id>2d278f0407ab7d29507dc697653b39d72ddee472</id>
<content type='text'>
The BD xlator was removed some time ago. Remove it from the graph.
We can also remove the caps settings, since only the BD xlator
was using them.

Lastly, remove the document describing the translator.

Change-Id: Id0adcb2952f4832a5dc6301e726874522e07935d
updates: bz#1193929
Signed-off-by: Yaniv Kaul &lt;ykaul@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd/tier: remove tier related code from glusterd</title>
<updated>2019-05-27T07:50:24+00:00</updated>
<author>
<name>Hari Gowtham</name>
<email>hgowtham@redhat.com</email>
</author>
<published>2019-05-02T13:03:34+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=e1cc4275583dfd8ae8d0433587f39854c1851794'/>
<id>e1cc4275583dfd8ae8d0433587f39854c1851794</id>
<content type='text'>
The handler functions now point to dummy functions. The switch-case
handling for tier has also been moved to the default case, to avoid
issues if tier is ever reintroduced.

The tier changes in DHT remain as they are.
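
For illustration, the stub pattern looks roughly like this (names are
hypothetical, not the exact ones from the patch):

    /* Tier support is removed; any tier RPC now lands on a stub. */
    static int
    glusterd_handle_tier_dummy (rpcsvc_request_t *req)
    {
            return 0;
    }

and the former tier cases in op switches now fall through to default:

    switch (op) {
    /* ... other cases ... */
    default:
            /* former tier cases now land here */
            break;
    }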

updates: bz#1693692

Change-Id: I80d80c9a3eb862b4440a36b31ae82b2e9d92e4dc
Signed-off-by: Hari Gowtham &lt;hgowtham@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: Fix coverity defects &amp; put coverity annotations</title>
<updated>2019-05-02T08:06:06+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2019-04-25T06:30:52+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=91c19c50a3c7b336177dda186500568be43626b8'/>
<id>91c19c50a3c7b336177dda186500568be43626b8</id>
<content type='text'>
Along with fixing a few defects, put the required annotations on the
defects which are marked ignore/false positive/intentional as per the
coverity defect sheet. This should avoid the per-component graph
showing many defects as open on the coverity glusterfs web page.
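
For example, such an annotation is a comment placed on the line just
before the flagged code (the event tag and call below are
illustrative):

    /* coverity[EVENT_TAG] - marked intentional per the defect sheet */
    some_flagged_call (arg);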

Updates: bz#789278
Change-Id: I19461dc3603a3bd8f88866a1ab3db43d783af8e4
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: remove redundant glusterd_check_volume_exists () calls</title>
<updated>2019-04-08T13:07:42+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2019-04-03T02:57:05+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=4bd4b0a9437189cef439833a0ed70005db9f8409'/>
<id>4bd4b0a9437189cef439833a0ed70005db9f8409</id>
<content type='text'>
The following pattern was found in multiple places, where both
glusterd_check_volume_exists and glusterd_volinfo_find do the same
job. We only need one of them, not both; in a scaled environment with
many volumes, iterating over the volume list twice to find the same
volume is a bottleneck!

    exists = glusterd_check_volume_exists(volname);
    ret = glusterd_volinfo_find(volname, &amp;volinfo);
    if ((ret) || (!exists)) {
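
After the cleanup a single lookup suffices, roughly:

    ret = glusterd_volinfo_find(volname, &amp;volinfo);
    if (ret) {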

Credits: ykaul@redhat.com for finding this out

Updates: bz#1193929
Change-Id: Ie116fe5c93e261a2bddd267c28ccb20a2884a36f
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>mgmt/shd: Implement multiplexing in self heal daemon</title>
<updated>2019-04-01T03:44:23+00:00</updated>
<author>
<name>Mohammed Rafi KC</name>
<email>rkavunga@redhat.com</email>
</author>
<published>2019-02-25T04:35:32+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=bc3694d7cfc868a2ed6344ea123faf19fce28d13'/>
<id>bc3694d7cfc868a2ed6344ea123faf19fce28d13</id>
<content type='text'>
Problem:

The shd daemon is per node, which means it creates one graph with all
volumes in it. While this is great for utilizing resources, it is not
so good in terms of performance and manageability.

Self-heal daemons don't have the capability to automatically
reconfigure their graphs, so each time any configuration change
happens to a volume (replicate/disperse), we need to restart shd to
bring the change into the graph.

Because of this, all ongoing heals for all other volumes have to be
stopped midway and restarted all over again.

Solution:

This change makes shd a per-volume daemon, so that a separate graph
is generated for each volume.

When we want to start/reconfigure shd for a volume, we first search
for an existing shd running on the node; if there is none, we start a
new process. If a daemon is already running for shd, we simply detach
the volume's graph and reattach the updated graph for that volume.
This doesn't touch any of the ongoing operations for any other
volumes on the shd daemon.
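
A rough sketch of that flow (all names below are illustrative, not
the exact APIs from the patch):

    /* Start or reconfigure shd for one volume without disturbing
     * the graphs of the other volumes in the same process. */
    int
    shd_start_or_reconfigure (volinfo_t *volinfo)
    {
            shd_proc_t *proc = find_shd_process_on_node ();

            if (!proc)
                    return spawn_shd_process (volinfo);

            detach_volume_graph (proc, volinfo);
            return attach_volume_graph (proc, volinfo);
    }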

Example of an shd graph when it is per volume:

                           graph
                     -----------------------
                     |     debug-iostat    |
                     -----------------------
                    /         |             \
                   /          |              \
               ---------    ---------      ---------
               | AFR-1 |    | AFR-2 |      | AFR-3 |
               ---------    ---------      ---------

A running shd daemon with 3 volumes will look like this:

                           graph
                     -----------------------
                     |     debug-iostat    |
                     -----------------------
                    /           |           \
                   /            |            \
              ------------   ------------  ------------
              | volume-1 |   | volume-2 |  | volume-3 |
              ------------   ------------  ------------

Change-Id: Idcb2698be3eeb95beaac47125565c93370afbd99
fixes: bz#1659708
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
</content>
</entry>
<entry>
<title>Multiple files: remove HAVE_BD_XLATOR related code.</title>
<updated>2019-03-25T05:34:50+00:00</updated>
<author>
<name>Yaniv Kaul</name>
<email>ykaul@redhat.com</email>
</author>
<published>2019-03-19T14:45:06+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=4acf03d304e88ca3d10d3d7076208f1462228bbb'/>
<id>4acf03d304e88ca3d10d3d7076208f1462228bbb</id>
<content type='text'>
The BD translator was removed some time ago
(in commit a907e468e724c32b9833ce59806fc215c7122d63).
This completes the work.
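
The remaining code was guarded by the HAVE_BD_XLATOR macro, e.g.
(an illustrative sketch, not a specific removed hunk):

    #ifdef HAVE_BD_XLATOR
            /* BD-xlator-specific handling, now removed */
    #endif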

Compile-tested only!
updates: bz#1635688
Signed-off-by: Yaniv Kaul &lt;ykaul@redhat.com&gt;

Change-Id: I999df52e479a72d3cc9523f22f9056de17eb559c
</content>
</entry>
<entry>
<title>libglusterfs: Move devel headers under glusterfs directory</title>
<updated>2018-12-05T21:47:04+00:00</updated>
<author>
<name>ShyamsundarR</name>
<email>srangana@redhat.com</email>
</author>
<published>2018-11-29T19:08:06+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=20ef211cfa5b5fcc437484a879fdc5d4c66bbaf5'/>
<id>20ef211cfa5b5fcc437484a879fdc5d4c66bbaf5</id>
<content type='text'>
libglusterfs devel package headers are referenced in code using
program include semantics. While this works, it can be better,
especially when dealing with out-of-tree xlator builds or, in
general, with out-of-tree devel package usage.

Towards this, the following changes are done (see the example
after this list):
- Moved all devel headers under a glusterfs directory
- Included these headers using system header notation &lt;&gt; in all
code outside of libglusterfs
- Included these headers using program notation "" within
libglusterfs
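
For example (a sketch of the resulting notation), out-of-tree code
now pulls the headers in as system headers:

    #include &lt;glusterfs/glusterfs.h&gt;
    #include &lt;glusterfs/xlator.h&gt;

while libglusterfs itself keeps the quoted "" program notation for
its own headers.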

This change, although big, just moves the headers around and makes
the inclusion of these headers correct when they are used from other
sources.

This helps us correctly include libglusterfs headers without
namespace conflicts.

Change-Id: Id2a98854e671a7ee5d73be44da5ba1a74252423b
Updates: bz#1193929
Signed-off-by: ShyamsundarR &lt;srangana@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: perform rcu_read_lock/unlock() under cleanup_lock mutex</title>
<updated>2018-12-03T17:03:57+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2018-11-28T10:43:58+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=2bb0e89e4bb113a93c6e786446a140cd99261af8'/>
<id>2bb0e89e4bb113a93c6e786446a140cd99261af8</id>
<content type='text'>
Problem: glusterd should not try to acquire locks on any resources
once it has received a SIGTERM and cleanup has started. Otherwise we
might hit a segfault, since the thread going through the cleanup path
will be freeing up the resources while some other thread might be
trying to acquire locks on the freed resources.

Solution: perform rcu_read_lock/unlock() under cleanup_lock mutex.
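
A minimal sketch of the idea (cleanup_lock here stands for glusterd's
cleanup mutex; the exact wrapping in the patch may differ):

    /* Serialize RCU lock/unlock against cleanup, so a reader can
     * never race with the thread that is freeing resources. */
    pthread_mutex_lock (&amp;cleanup_lock);
    rcu_read_lock ();
    pthread_mutex_unlock (&amp;cleanup_lock);

    /* ... read-side critical section ... */

    pthread_mutex_lock (&amp;cleanup_lock);
    rcu_read_unlock ();
    pthread_mutex_unlock (&amp;cleanup_lock);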

fixes: bz#1654270
Change-Id: I87a97cfe4f272f74f246d688660934638911ce54
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: allow shared-storage to use bricks under glusterd working directory</title>
<updated>2018-11-08T11:53:57+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2018-11-06T14:14:16+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=bdb4ca184913c82ccf9552298f5d5b597794f2aa'/>
<id>bdb4ca184913c82ccf9552298f5d5b597794f2aa</id>
<content type='text'>
With commit 44e4db, we do not allow a user to create a volume using
glusterd's working directory, or any subdirectory under it, as a
brick. This has broken shared-storage, since the volume
"gluster-shared-storage" is created using bricks under glusterd's
working directory.

With this patch, we let the "gluster-shared-storage" volume use
bricks under glusterd's working directory.
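
The exception looks roughly like this (helper and variable names are
hypothetical):

    /* Skip the working-directory restriction only for the internal
     * shared-storage volume. */
    if (strcmp (volname, "gluster-shared-storage") != 0 &amp;&amp;
        brick_is_under (brickpath, glusterd_workdir)) {
            ret = -1;   /* reject the brick path */
            goto out;
    }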

fixes: bz#1647029
Change-Id: Ifcbcf4576eea12cf46f199dea287b29bd3ec3bfd
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
</entry>
<entry>
<title>stripe: remove the translator from build and glusterd</title>
<updated>2018-10-31T02:24:49+00:00</updated>
<author>
<name>Amar Tumballi</name>
<email>amarts@redhat.com</email>
</author>
<published>2018-10-03T10:00:10+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=733139551322e49e7e5617356cf96e30780d2749'/>
<id>733139551322e49e7e5617356cf96e30780d2749</id>
<content type='text'>
Based on the proposal to remove a few features that are not actively
maintained [1], remove the stripe translator from the build. Also
make sure there are no regression tests involving the stripe
translator.

[1] https://lists.gluster.org/pipermail/gluster-users/2018-July/034400.html

Note that this patch aims at removing the translator from the build;
a follow-up patch is needed to remove the code from the repository.

Updates: bz#1364707
Change-Id: I235b305338f138e29e9f30cba65bc0dadbebbbd5
Signed-off-by: Amar Tumballi &lt;amarts@redhat.com&gt;
</content>
</entry>
</feed>
