<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/mgmt/glusterd/src/glusterd-volume-ops.c, branch v7.8</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>glusterd: IPv6 hostname address is not parsed correctly</title>
<updated>2019-09-06T08:05:10+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawal@redhat.com</email>
</author>
<published>2019-09-02T05:16:10+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=056d8d65397129c7a580a9fa6e76c2b9f1ca22ff'/>
<id>056d8d65397129c7a580a9fa6e76c2b9f1ca22ff</id>
<content type='text'>
Problem: IPv6 hostname addresses are not parsed correctly in the function
         glusterd_check_brick_order

Solution: Update the code so that the hostname/address is parsed correctly
          even when it is an IPv6 address containing ':' characters
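
As a rough illustration of the idea (a hypothetical helper, not the actual
change in glusterd_check_brick_order): the host part has to be split from
the brick path at the last ':', because an IPv6 address itself contains
':' characters.

    #include &lt;stdio.h&gt;
    #include &lt;string.h&gt;

    /* Sketch only: split "host:brick-path" at the last ':' so that an
     * IPv6 host such as 2001:db8::1:/data/brick1 stays intact. */
    static int
    split_brick (const char *brick, char *host, size_t hlen,
                 char *path, size_t plen)
    {
        const char *sep = strrchr (brick, ':');

        if (!sep || sep == brick)
            return -1;                 /* no brick path present */

        snprintf (host, hlen, "%.*s", (int)(sep - brick), brick);
        snprintf (path, plen, "%s", sep + 1);
        return 0;
    }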

&gt; Change-Id: Ifb2f83f9c6e987b2292070e048e97eeb51b728ab
&gt; Fixes: bz#1747746
&gt; Credits: Amgad Saleh &lt;amgad.saleh@nokia.com&gt;
&gt; Signed-off-by: Mohit Agrawal &lt;moagrawal@redhat.com&gt;
&gt; (cherry picked from commit 6563ffb04d7ba51a89726e7c5bbb85c7dbc685b5)

Change-Id: Ifb2f83f9c6e987b2292070e048e97eeb51b728ab
Fixes: bz#1749664
Credits: Amgad Saleh &lt;amgad.saleh@nokia.com&gt;
Signed-off-by: Mohit Agrawal &lt;moagrawal@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem: IPv6 hostname addresses are not parsed correctly in the function
         glusterd_check_brick_order

Solution: Update the code so that the hostname/address is parsed correctly
          even when it is an IPv6 address containing ':' characters

&gt; Change-Id: Ifb2f83f9c6e987b2292070e048e97eeb51b728ab
&gt; Fixes: bz#1747746
&gt; Credits: Amgad Saleh &lt;amgad.saleh@nokia.com&gt;
&gt; Signed-off-by: Mohit Agrawal &lt;moagrawal@redhat.com&gt;
&gt; (cherry picked from commit 6563ffb04d7ba51a89726e7c5bbb85c7dbc685b5)

Change-Id: Ifb2f83f9c6e987b2292070e048e97eeb51b728ab
Fixes: bz#1749664
Credits: Amgad Saleh &lt;amgad.saleh@nokia.com&gt;
Signed-off-by: Mohit Agrawal &lt;moagrawal@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd/thin-arbiter: Thin-arbiter integration with GD1</title>
<updated>2019-07-04T07:42:11+00:00</updated>
<author>
<name>Vishal Pandey</name>
<email>vpandey@redhat.com</email>
</author>
<published>2019-04-24T08:07:16+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=08c87ae4208b73f4f183f7b54ebcb373e8bc0ede'/>
<id>08c87ae4208b73f4f183f7b54ebcb373e8bc0ede</id>
<content type='text'>
gluster volume create &lt;VOLNAME&gt; replica 2 thin-arbiter 1 &lt;host1&gt;:&lt;brick1&gt; &lt;host2&gt;:&lt;brick2&gt;
&lt;thin-arbiter-host&gt;:&lt;path-to-store-replica-id-file&gt; [force]

The changes have been made in a way that the last brick in the bricks list
will be treated as the thin-arbiter.
GD1 will be manipulated to treat the replica count as 2 and continue creating
the volume like any other replica 2 volume, but since thin-arbiter volumes
need ta-brick client xlator entries for each subvolume in the fuse volfile,
volfile generation is modified to inject these entries separately into the
volfile for every subvolume.

A few more additions -
1- Save the volinfo with new fields ta_bricks list and thin_arbiter_count.
2- Introduce a new option client.ta-brick-port to add remote-port to ta-brick xlator entry
   in fuse volfiles. The option can be set using the following CLI syntax -
   gluster volume set &lt;VOLNAME&gt; client.ta-brick-port &lt;PORTNO.&gt;
3- Volume Info will contain a Thin-Arbiter-path entry to distinguish
   from other replicate volumes.
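
For example (the host names, brick paths and port number below are just
placeholder values):

    gluster volume create testvol replica 2 thin-arbiter 1 server1:/bricks/b1 \
        server2:/bricks/b2 arbiter-node:/bricks/ta-id
    gluster volume set testvol client.ta-brick-port 49152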

Change-Id: Ib434e2313b29716f32476c6c211d282c4ef39406
Updates #687
Signed-off-by: Vishal Pandey &lt;vpandey@redhat.com&gt;
(cherry picked from commit 9b223b15ab69fce4076de036ee162f36a058bcd2)
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
gluster volume create &lt;VOLNAME&gt; replica 2 thin-arbiter 1 &lt;host1&gt;:&lt;brick1&gt; &lt;host2&gt;:&lt;brick2&gt;
&lt;thin-arbiter-host&gt;:&lt;path-to-store-replica-id-file&gt; [force]

The changes have been made in a way that the last brick in the bricks list
will be treated as the thin-arbiter.
GD1 will be manipulated to treat the replica count as 2 and continue creating
the volume like any other replica 2 volume, but since thin-arbiter volumes
need ta-brick client xlator entries for each subvolume in the fuse volfile,
volfile generation is modified to inject these entries separately into the
volfile for every subvolume.

A few more additions -
1- Save the volinfo with new fields ta_bricks list and thin_arbiter_count.
2- Introduce a new option client.ta-brick-port to add remote-port to ta-brick xlator entry
   in fuse volfiles. The option can be set using the following CLI syntax -
   gluster volume set &lt;VOLNAME&gt; client.ta-brick-port &lt;PORTNO.&gt;
3- Volume Info will contain a Thin-Arbiter-path entry to distinguish
   from other replicate volumes.

Change-Id: Ib434e2313b29716f32476c6c211d282c4ef39406
Updates #687
Signed-off-by: Vishal Pandey &lt;vpandey@redhat.com&gt;
(cherry picked from commit 9b223b15ab69fce4076de036ee162f36a058bcd2)
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd-volgen.c: remove BD xlator from the graph</title>
<updated>2019-06-18T12:09:09+00:00</updated>
<author>
<name>Yaniv Kaul</name>
<email>ykaul@redhat.com</email>
</author>
<published>2019-05-26T08:18:05+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=2d278f0407ab7d29507dc697653b39d72ddee472'/>
<id>2d278f0407ab7d29507dc697653b39d72ddee472</id>
<content type='text'>
The BD xlator was removed some time ago. Remove it from the graph.

We can also remove the caps settings and the document describing the
translator - only the BD xlator was using them.

Change-Id: Id0adcb2952f4832a5dc6301e726874522e07935d
updates: bz#1193929
Signed-off-by: Yaniv Kaul &lt;ykaul@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The BD xlator was removed some time ago. Remove it from the graph.

We can also remove the caps settings and the document describing the
translator - only the BD xlator was using them.

Change-Id: Id0adcb2952f4832a5dc6301e726874522e07935d
updates: bz#1193929
Signed-off-by: Yaniv Kaul &lt;ykaul@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd/tier: remove tier related code from glusterd</title>
<updated>2019-05-27T07:50:24+00:00</updated>
<author>
<name>Hari Gowtham</name>
<email>hgowtham@redhat.com</email>
</author>
<published>2019-05-02T13:03:34+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=e1cc4275583dfd8ae8d0433587f39854c1851794'/>
<id>e1cc4275583dfd8ae8d0433587f39854c1851794</id>
<content type='text'>
The handler functions are pointed to dummy functions.
The switch-case handling for tier has also been moved to the
default case to avoid issues if it is reintroduced.

The tier changes in DHT remain as they are.
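
A sketch of the pattern described above (the handler name is illustrative,
not glusterd's actual symbol):

    /* tier op-codes no longer get their own case labels; they fall
     * through to the default branch, and the old handler is a stub. */
    static int
    glusterd_handle_tier_dummy (void *req)
    {
        (void)req;
        return 0;            /* tier support removed: nothing to do */
    }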

updates: bz#1693692

Change-Id: I80d80c9a3eb862b4440a36b31ae82b2e9d92e4dc
Signed-off-by: Hari Gowtham &lt;hgowtham@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The handler functions are pointed to dummy functions.
The switch-case handling for tier has also been moved to the
default case to avoid issues if it is reintroduced.

The tier changes in DHT remain as they are.

updates: bz#1693692

Change-Id: I80d80c9a3eb862b4440a36b31ae82b2e9d92e4dc
Signed-off-by: Hari Gowtham &lt;hgowtham@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: remove glusterd_check_volume_exists() call</title>
<updated>2019-04-11T15:19:30+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2019-04-08T13:24:46+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=7108913337f24d3d7a39d919a29872fe50072cf4'/>
<id>7108913337f24d3d7a39d919a29872fe50072cf4</id>
<content type='text'>
The same functionality is already covered by glusterd_volinfo_find.

Updates: bz#1193929
Change-Id: I2308c5fa9b2ca9edaa95f172d0bd914103808c36
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The same functionality is already covered by glusterd_volinfo_find.

Updates: bz#1193929
Change-Id: I2308c5fa9b2ca9edaa95f172d0bd914103808c36
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: remove redundant glusterd_check_volume_exists () calls</title>
<updated>2019-04-08T13:07:42+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2019-04-03T02:57:05+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=4bd4b0a9437189cef439833a0ed70005db9f8409'/>
<id>4bd4b0a9437189cef439833a0ed70005db9f8409</id>
<content type='text'>
The following pattern was found in multiple places, where both
glusterd_check_volume_exists and glusterd_volinfo_find do the same job.
We just need one of them, not both. In a scaled environment with many
volumes, iterating over the volume list twice to find a volume is a
bottleneck!

    exists = glusterd_check_volume_exists(volname);
    ret = glusterd_volinfo_find(volname, &amp;volinfo);
    if ((ret) || (!exists)) {
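
With the duplicate lookup dropped, a single call is enough; a sketch of
the resulting pattern:

    ret = glusterd_volinfo_find(volname, &amp;volinfo);
    if (ret) {
        /* volume does not exist */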

Credits: ykaul@redhat.com for finding this out

Updates: bz#1193929
Change-Id: Ie116fe5c93e261a2bddd267c28ccb20a2884a36f
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The following pattern was found in multiple places, where both
glusterd_check_volume_exists and glusterd_volinfo_find do the same job.
We just need one of them, not both. In a scaled environment with many
volumes, iterating over the volume list twice to find a volume is a
bottleneck!

    exists = glusterd_check_volume_exists(volname);
    ret = glusterd_volinfo_find(volname, &amp;volinfo);
    if ((ret) || (!exists)) {

Credits: ykaul@redhat.com for finding this out

Updates: bz#1193929
Change-Id: Ie116fe5c93e261a2bddd267c28ccb20a2884a36f
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>mgmt/shd: Implement multiplexing in self heal daemon</title>
<updated>2019-04-01T03:44:23+00:00</updated>
<author>
<name>Mohammed Rafi KC</name>
<email>rkavunga@redhat.com</email>
</author>
<published>2019-02-25T04:35:32+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=bc3694d7cfc868a2ed6344ea123faf19fce28d13'/>
<id>bc3694d7cfc868a2ed6344ea123faf19fce28d13</id>
<content type='text'>
Problem:

The shd daemon is per node, which means it creates one graph
with all volumes on it. While this is great for utilizing
resources, it is not so good in terms of performance and manageability.

That is because self-heal daemons don't have the capability to
automatically reconfigure their graphs. So each time any configuration
change happens to a volume (replicate/disperse), we need to restart
shd to bring the change into the graph.

Because of this, all ongoing heals for all other volumes have to be
stopped in the middle and restarted all over again.

Solution:

This change makes shd a per-volume daemon, so that a graph
is generated for each volume.

When we want to start/reconfigure shd for a volume, we first search
for an existing shd running on the node. If there is none, we
start a new process. If a daemon is already running for shd, we
simply detach the graph for the volume and reattach the updated
graph for the volume. This doesn't touch any of the ongoing operations
for any other volumes on the shd daemon.

Example of an shd graph when it is per volume

                           graph
                     -----------------------
                     |     debug-iostat    |
                     -----------------------
                    /         |             \
                   /          |              \
              ---------    ---------      ----------
              | AFR-1 |    | AFR-2 |      |  AFR-3 |
              --------     ---------      ----------

A running shd daemon with 3 volumes will look like this:

                           graph
                     -----------------------
                     |     debug-iostat    |
                     -----------------------
                    /           |           \
                   /            |            \
              ------------   ------------  ------------
              | volume-1 |   | volume-2 |  | volume-3 |
              ------------   ------------  ------------

Change-Id: Idcb2698be3eeb95beaac47125565c93370afbd99
fixes: bz#1659708
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem:

The shd daemon is per node, which means it creates one graph
with all volumes on it. While this is great for utilizing
resources, it is not so good in terms of performance and manageability.

That is because self-heal daemons don't have the capability to
automatically reconfigure their graphs. So each time any configuration
change happens to a volume (replicate/disperse), we need to restart
shd to bring the change into the graph.

Because of this, all ongoing heals for all other volumes have to be
stopped in the middle and restarted all over again.

Solution:

This change makes shd a per-volume daemon, so that a graph
is generated for each volume.

When we want to start/reconfigure shd for a volume, we first search
for an existing shd running on the node. If there is none, we
start a new process. If a daemon is already running for shd, we
simply detach the graph for the volume and reattach the updated
graph for the volume. This doesn't touch any of the ongoing operations
for any other volumes on the shd daemon.

Example of an shd graph when it is per volume

                           graph
                     -----------------------
                     |     debug-iostat    |
                     -----------------------
                    /         |             \
                   /          |              \
              ---------    ---------      ----------
              | AFR-1 |    | AFR-2 |      |  AFR-3 |
              --------     ---------      ----------

A running shd daemon with 3 volumes will look like this:

                           graph
                     -----------------------
                     |     debug-iostat    |
                     -----------------------
                    /           |           \
                   /            |            \
              ------------   ------------  ------------
              | volume-1 |   | volume-2 |  | volume-3 |
              ------------   ------------  ------------

Change-Id: Idcb2698be3eeb95beaac47125565c93370afbd99
fixes: bz#1659708
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>Multiple files: remove HAVE_BD_XLATOR related code.</title>
<updated>2019-03-25T05:34:50+00:00</updated>
<author>
<name>Yaniv Kaul</name>
<email>ykaul@redhat.com</email>
</author>
<published>2019-03-19T14:45:06+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=4acf03d304e88ca3d10d3d7076208f1462228bbb'/>
<id>4acf03d304e88ca3d10d3d7076208f1462228bbb</id>
<content type='text'>
The BD translator was removed some time ago
(in commit a907e468e724c32b9833ce59806fc215c7122d63).
This completes the work.
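
Illustration of the kind of code this cleanup removes (the guarded call is
hypothetical, not an actual glusterd snippet):

    #ifdef HAVE_BD_XLATOR
        ret = bd_validate_brick (brickinfo);    /* BD-only code path */
        if (ret)
            goto out;
    #endif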

Compile-tested only!
updates: bz#1635688
Signed-off-by: Yaniv Kaul &lt;ykaul@redhat.com&gt;

Change-Id: I999df52e479a72d3cc9523f22f9056de17eb559c
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The BD translator was removed some time ago
(in commit a907e468e724c32b9833ce59806fc215c7122d63).
This completes the work.

Compile-tested only!
updates: bz#1635688
Signed-off-by: Yaniv Kaul &lt;ykaul@redhat.com&gt;

Change-Id: I999df52e479a72d3cc9523f22f9056de17eb559c
</pre>
</div>
</content>
</entry>
<entry>
<title>libglusterfs: Move devel headers under glusterfs directory</title>
<updated>2018-12-05T21:47:04+00:00</updated>
<author>
<name>ShyamsundarR</name>
<email>srangana@redhat.com</email>
</author>
<published>2018-11-29T19:08:06+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=20ef211cfa5b5fcc437484a879fdc5d4c66bbaf5'/>
<id>20ef211cfa5b5fcc437484a879fdc5d4c66bbaf5</id>
<content type='text'>
libglusterfs devel package headers are referenced in code using
program ("") include semantics. While this works, it can be done better,
especially when dealing with out-of-tree xlator builds or, in general,
out-of-tree devel package usage.

Towards this, the following changes are done:
- moved all devel headers under a glusterfs directory
- Included these headers using system header notation &lt;&gt; in all
code outside of libglusterfs
- Included these headers using own program notation "" within
libglusterfs
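
For example, with xlator.h as a sample header (the header name is only an
illustration):

    /* outside libglusterfs (xlators, glusterfsd, out-of-tree code): */
    #include &lt;glusterfs/xlator.h&gt;

    /* within libglusterfs itself: */
    #include "glusterfs/xlator.h"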

This change, although big, just moves the headers around and corrects
how they are included from other sources.

This helps us correctly include libglusterfs headers without
namespace conflicts.

Change-Id: Id2a98854e671a7ee5d73be44da5ba1a74252423b
Updates: bz#1193929
Signed-off-by: ShyamsundarR &lt;srangana@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
libglusterfs devel package headers are referenced in code using
program ("") include semantics. While this works, it can be done better,
especially when dealing with out-of-tree xlator builds or, in general,
out-of-tree devel package usage.

Towards this, the following changes are done:
- moved all devel headers under a glusterfs directory
- Included these headers using system header notation &lt;&gt; in all
code outside of libglusterfs
- Included these headers using own program notation "" within
libglusterfs

This change, although big, just moves the headers around and corrects
how they are included from other sources.

This helps us correctly include libglusterfs headers without
namespace conflicts.

Change-Id: Id2a98854e671a7ee5d73be44da5ba1a74252423b
Updates: bz#1193929
Signed-off-by: ShyamsundarR &lt;srangana@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: volume-ops calls naked malloc</title>
<updated>2018-11-28T04:22:43+00:00</updated>
<author>
<name>Kaleb S. KEITHLEY</name>
<email>kkeithle@redhat.com</email>
</author>
<published>2018-11-27T19:24:15+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=9f9f46ff2d30ff87a6a4f8c2af491ea1aa92fbb2'/>
<id>9f9f46ff2d30ff87a6a4f8c2af491ea1aa92fbb2</id>
<content type='text'>
libglusterfs provides wrapper functions MALLOC/__gf_default_malloc,
CALLOC/__gf_default_calloc, and REALLOC/__gf_default_realloc for those
few places outside of mempool.c that need to call malloc/calloc/realloc
directly.
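
A sketch of the substitution (assuming the wrappers are declared in
libglusterfs' mem-pool.h; the caller below is made up):

    #include &lt;string.h&gt;
    #include "mem-pool.h"

    static char *
    dup_name (const char *name)
    {
        size_t len = strlen (name) + 1;
        char *copy = MALLOC (len);      /* was: malloc (len) */

        if (copy)
            memcpy (copy, name, len);
        return copy;
    }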

Notable exceptions are "contrib" code, e.g. rbtree and timer-wheel,
and perhaps parsers generated by yacc+lex. But even parsers can be
fixed to at least call the wrappers mentioned above, if not our own
allocators.

Change-Id: Ie6156307b6d2183be9c9aff153afb7598974f4e4
updates: bz#1193929
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
libglusterfs provides wrapper functions MALLOC/__gf_default_malloc,
CALLOC/__gf_default_calloc, and REALLOC/__gf_default_realloc for those
few places outside of mempool.c that need to call malloc/calloc/realloc
directly.

Notable exceptions are "contrib" code, e.g. rbtree and timer-wheel,
and perhaps parsers generated by yacc+lex. But even parsers can be
fixed to at least call the wrappers mentioned above, if not our own
allocators.

Change-Id: Ie6156307b6d2183be9c9aff153afb7598974f4e4
updates: bz#1193929
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
</pre>
</div>
</content>
</entry>
</feed>
