<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/mgmt/glusterd/src/glusterd-store.h, branch v7.2</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>glusterd/thin-arbiter: Thin-arbiter integration with GD1</title>
<updated>2019-07-04T07:42:11+00:00</updated>
<author>
<name>Vishal Pandey</name>
<email>vpandey@redhat.com</email>
</author>
<published>2019-04-24T08:07:16+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=08c87ae4208b73f4f183f7b54ebcb373e8bc0ede'/>
<id>08c87ae4208b73f4f183f7b54ebcb373e8bc0ede</id>
<content type='text'>
gluster volume create &lt;VOLNAME&gt; replica 2 thin-arbiter 1 &lt;host1&gt;:&lt;brick1&gt; &lt;host2&gt;:&lt;brick2&gt;
&lt;thin-arbiter-host&gt;:&lt;path-to-store-replica-id-file&gt; [force]

The changes have been made so that the last brick in the bricks list is
treated as the thin-arbiter.
GD1 is manipulated to treat the replica count as 2 and to create the
volume like any other replica 2 volume, but since thin-arbiter volumes
need a ta-brick client xlator entry for each subvolume in the fuse
volfile, volfile generation is modified to inject these entries
separately into the volfile for every subvolume.

A few more additions -
1- Save the volinfo with new fields ta_bricks list and thin_arbiter_count.
2- Introduce a new option, client.ta-brick-port, to add the remote-port to
   the ta-brick xlator entry in fuse volfiles. The option can be set using
   the following CLI syntax -
   gluster volume set &lt;VOLNAME&gt; client.ta-brick-port &lt;PORTNO&gt;
3- Volume Info will contain a Thin-Arbiter-path entry to distinguish
   from other replicate volumes.
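
For example, a concrete invocation (hostnames, brick paths and port
number hypothetical):
gluster volume create testvol replica 2 thin-arbiter 1 server1:/bricks/brick1 server2:/bricks/brick2 server3:/bricks/ta force
gluster volume set testvol client.ta-brick-port 24007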

Change-Id: Ib434e2313b29716f32476c6c211d282c4ef39406
Updates #687
Signed-off-by: Vishal Pandey &lt;vpandey@redhat.com&gt;
(cherry picked from commit 9b223b15ab69fce4076de036ee162f36a058bcd2)
</content>
</entry>
<entry>
<title>glusterd-volgen.c: remove BD xlator from the graph</title>
<updated>2019-06-18T12:09:09+00:00</updated>
<author>
<name>Yaniv Kaul</name>
<email>ykaul@redhat.com</email>
</author>
<published>2019-05-26T08:18:05+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=2d278f0407ab7d29507dc697653b39d72ddee472'/>
<id>2d278f0407ab7d29507dc697653b39d72ddee472</id>
<content type='text'>
The BD xlator was removed some time ago. Remove it from the graph, along
with the caps settings (only the BD xlator was using them) and the
document describing the translator.

Change-Id: Id0adcb2952f4832a5dc6301e726874522e07935d
updates: bz#1193929
Signed-off-by: Yaniv Kaul &lt;ykaul@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd/tier: remove tier related code from glusterd</title>
<updated>2019-05-27T07:50:24+00:00</updated>
<author>
<name>Hari Gowtham</name>
<email>hgowtham@redhat.com</email>
</author>
<published>2019-05-02T13:03:34+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=e1cc4275583dfd8ae8d0433587f39854c1851794'/>
<id>e1cc4275583dfd8ae8d0433587f39854c1851794</id>
<content type='text'>
The handler functions now point to dummy functions, and the switch-case
handling for tier has been moved to the default case, to avoid issues if
tier is ever reintroduced.

The tier changes in DHT still remain as such.
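
Illustratively (hypothetical names and values, not the actual glusterd
symbols), the pattern looks like this:

#include &lt;stdio.h&gt;

/* stand-in for a removed tier handler: reject instead of handling */
static int
handle_tier_dummy (void *req)
{
        (void) req;
        return -1;
}

static int
dispatch (int op)
{
        switch (op) {
        case 1:            /* a still-supported op */
                return 0;
        default:           /* former tier ops now land here */
                return handle_tier_dummy (NULL);
        }
}

int
main (void)
{
        printf ("%d\n", dispatch (42));
        return 0;
}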

updates: bz#1693692

Change-Id: I80d80c9a3eb862b4440a36b31ae82b2e9d92e4dc
Signed-off-by: Hari Gowtham &lt;hgowtham@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd/store: store all key-values in one shot</title>
<updated>2019-05-08T06:46:24+00:00</updated>
<author>
<name>Yaniv Kaul</name>
<email>ykaul@redhat.com</email>
</author>
<published>2019-04-28T19:05:44+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=1fa089e7a2b180e0bdcc1e7e09a63934a2a0c0ef'/>
<id>1fa089e7a2b180e0bdcc1e7e09a63934a2a0c0ef</id>
<content type='text'>
Instead of saving each key-value pair separately, which is slow
(especially as we fflush() after each one!), store them all as one
string and write them out together.
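
A minimal sketch of the idea (hypothetical keys and file name, not the
actual glusterd store code):

#include &lt;stdio.h&gt;
#include &lt;string.h&gt;

int
main (void)
{
        const char *keys[] = { "type", "count" };
        const char *vals[] = { "2", "3" };
        char buf[4096] = "";

        /* build one string holding all key-value pairs ... */
        for (int i = 0; i &lt; 2; i++) {
                strcat (buf, keys[i]);
                strcat (buf, "=");
                strcat (buf, vals[i]);
                strcat (buf, "\n");
        }

        /* ... then one write and one fflush, instead of one per pair */
        FILE *fp = fopen ("info.tmp", "w");
        if (!fp)
                return 1;
        fwrite (buf, 1, strlen (buf), fp);
        fflush (fp);
        fclose (fp);
        return 0;
}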

Implements https://github.com/gluster/glusterfs/issues/629

Change-Id: Ie77a272446b0b6785584b710a4fdd9c613dd9578
updates: bz#1193929
Signed-off-by: Yaniv Kaul &lt;ykaul@redhat.com&gt;
</content>
</entry>
<entry>
<title>libglusterfs: Move devel headers under glusterfs directory</title>
<updated>2018-12-05T21:47:04+00:00</updated>
<author>
<name>ShyamsundarR</name>
<email>srangana@redhat.com</email>
</author>
<published>2018-11-29T19:08:06+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=20ef211cfa5b5fcc437484a879fdc5d4c66bbaf5'/>
<id>20ef211cfa5b5fcc437484a879fdc5d4c66bbaf5</id>
<content type='text'>
libglusterfs devel package headers are referenced in code using the
include semantics of a program's own headers. While this works, it can
be improved, especially when dealing with out-of-tree xlator builds or,
in general, out-of-tree devel package usage.

Towards this, the following changes are done:
- Moved all devel headers under a glusterfs directory
- Included these headers using system header notation &lt;&gt; in all
  code outside of libglusterfs
- Included these headers using own program notation "" within
  libglusterfs

This change, although big, just moves the headers around and corrects
how they are included from other sources.

This helps us correctly include libglusterfs headers without namespace
conflicts.
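
For example, code outside of libglusterfs (such as an out-of-tree
xlator) now does:

#include &lt;glusterfs/xlator.h&gt;     /* was: #include "xlator.h" */
#include &lt;glusterfs/logging.h&gt;    /* was: #include "logging.h" */

while sources inside libglusterfs use the own-program form:

#include "glusterfs/xlator.h"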

Change-Id: Id2a98854e671a7ee5d73be44da5ba1a74252423b
Updates: bz#1193929
Signed-off-by: ShyamsundarR &lt;srangana@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: glusterd to regenerate volfiles when GD_OP_VERSION_MAX changes</title>
<updated>2018-12-05T21:36:25+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2018-11-20T07:02:32+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=d4723bdd30f0955ca68fec8c01bc87229c6a24c0'/>
<id>d4723bdd30f0955ca68fec8c01bc87229c6a24c0</id>
<content type='text'>
While glusterd has an infra that allows the post-install step of the spec
to bring it up in an interim upgrade mode, so that all the volfiles are
regenerated with the latest executable, the container world does not
follow the same methodology: the container image always points to a
specific gluster rpm, and that rpm doesn't go through an upgrade process.

This fix does the following (sketched below):
1. If the glusterd.upgrade file doesn't exist, regenerate the volfiles.
2. If the maximum-operating-version read from glusterd.upgrade doesn't
match GD_OP_VERSION_MAX, glusterd detects a version where new options
were introduced and regenerates the volfiles.
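
A rough sketch of the check (hypothetical file format and helper logic,
not the actual glusterd code):

#include &lt;stdio.h&gt;

#define GD_OP_VERSION_MAX 70200    /* hypothetical value */

int
main (void)
{
        int stored = -1;
        FILE *fp = fopen ("/var/lib/glusterd/glusterd.upgrade", "r");

        if (!fp || fscanf (fp, "maximum-operating-version=%d",
                           &amp;stored) != 1 || stored != GD_OP_VERSION_MAX) {
                /* file missing or op-version changed: regenerate the
                 * volfiles and rewrite glusterd.upgrade */
                printf ("regenerating volfiles\n");
        }
        if (fp)
                fclose (fp);
        return 0;
}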

Tests done:

1. Bring up glusterd and check that the glusterd.upgrade file has been
created with the GD_OP_VERSION_MAX value.
2. After 1, restart glusterd and check that glusterd hasn't regenerated
the volfiles, as there is no change between GD_OP_VERSION_MAX and the
op_version read from the file.
3. Bump up GD_OP_VERSION_MAX in the code by 1 and, post compilation,
restart glusterd; the volfiles should be regenerated again.

Note: The old way of having volfiles regenerated during an rpm upgrade
is kept as-is for now, but it can be sunset eventually.

Change-Id: I75b49a1601c71e99f6a6bc360dd12dd03a96414b
Fixes: bz#1651463
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>Land clang-format changes</title>
<updated>2018-09-12T11:52:48+00:00</updated>
<author>
<name>Gluster Ant</name>
<email>bugzilla-bot@gluster.org</email>
</author>
<published>2018-09-12T11:52:48+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=45a71c0548b6fd2c757aa2e7b7671a1411948894'/>
<id>45a71c0548b6fd2c757aa2e7b7671a1411948894</id>
<content type='text'>
Change-Id: I6f5d8140a06f3c1b2d196849299f8d483028d33b
</content>
</entry>
<entry>
<title>Coverity Issue: PW.INCLUDE_RECURSION in several files</title>
<updated>2017-11-09T13:21:11+00:00</updated>
<author>
<name>Girjesh Rajoria</name>
<email>grajoria@redhat.com</email>
</author>
<published>2017-11-02T21:12:23+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=0821a57bd2e7518d1c8df2d4403a2dfbb8ee5b6b'/>
<id>0821a57bd2e7518d1c8df2d4403a2dfbb8ee5b6b</id>
<content type='text'>
Coverity ID: 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417,
418, 419, 423, 424, 425, 426, 427, 428, 429, 436, 437, 438, 439,
440, 441, 442, 443

Issue: Event include_recursion

Removed redundant, recursive includes from the files.
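
Illustratively (hypothetical headers, not the actual files): if a.h
includes b.h, and b.h needlessly includes a.h back, the include graph
recurses; the fix is to drop the redundant line:

/* a.h */
#ifndef A_H
#define A_H
#include "b.h"
#endif

/* b.h, before the fix */
#ifndef B_H
#define B_H
#include "a.h"    /* redundant, recursive include - removed */
#endif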

Change-Id: I920776b1fa089a2d4917ca722d0075a9239911a7
BUG: 789278
Signed-off-by: Girjesh Rajoria &lt;grajoria@redhat.com&gt;
</content>
</entry>
<entry>
<title>snapshot: Issue with other processes accessing the mounted brick</title>
<updated>2017-10-23T10:05:02+00:00</updated>
<author>
<name>Sunny Kumar</name>
<email>sunkumar@redhat.com</email>
</author>
<published>2017-08-16T08:34:45+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=2b3b3edee2d849b4aee314048987dc995d9679a1'/>
<id>2b3b3edee2d849b4aee314048987dc995d9679a1</id>
<content type='text'>
Added code to unmount an activated snapshot brick during the snapshot
deactivation process, which makes sense, as the mount point for
deactivated bricks should not exist.

Removed the code that mounts a newly created snapshot, as newly created
snapshots should not be mounted until they are activated.

Added code for mount point creation and snapshot mount during snapshot
activation.

Added validation during glusterd init for mounting only those snapshots
whose status is either STARTED or RESTORED.

During snapshot restore, the mount point for a stopped snap should exist,
as it is required to set the extended attribute.

During handshake, after getting updates from a friend, the mount point
should exist for an activated snapshot and should not exist for a
deactivated one.

While getting snap status we should show relevant information for
deactivated snapshots; after this patch, the 'gluster snap status'
command will show output like -

Snap Name : snap1
Snap UUID : snap-uuid

	Brick Path        :   server1:/run/gluster/snaps/snap-vol-name/brick
	Volume Group      :   N/A (Deactivated Snapshot)
	Brick Running     :   No
	Brick PID         :   N/A
	Data Percentage   :   N/A
	LV Size           :   N/A
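
For reference, the commands involved (snapshot name hypothetical):

gluster snapshot deactivate snap1
gluster snapshot status snap1
gluster snapshot activate snap1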

Fixes: #276

Change-Id: I65783488e35fac43632615ce1b8ff7b8e84834dc
BUG: 1482023
Signed-off-by: Sunny Kumar &lt;sunkumar@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: glusterd fails to start when peer's network interface is down</title>
<updated>2017-07-28T04:47:24+00:00</updated>
<author>
<name>Gaurav Yadav</name>
<email>gyadav@redhat.com</email>
</author>
<published>2017-07-18T10:53:18+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=1477fa442a733d7b1a5ea74884cac8f29fbe7e6a'/>
<id>1477fa442a733d7b1a5ea74884cac8f29fbe7e6a</id>
<content type='text'>
Problem:
glusterd fails to start on nodes where glusterd tries to come up even
before the network is up.

Fix:
On startup glusterd tries to resolve the brick path, which is based on
the hostname/IP, but in the above scenario, when the network interface
is not up, glusterd is not able to resolve the brick path using the IP
address or hostname. With this fix glusterd will use the UUID to resolve
the brick path.
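
A rough sketch of the fallback (hypothetical helper names, not the
actual glusterd code):

#include &lt;stdio.h&gt;
#include &lt;string.h&gt;

/* stand-ins: hostname resolution fails while the NIC is down */
static int
resolve_by_hostname (const char *host)
{
        (void) host;
        return -1;
}

static int
resolve_by_uuid (const char *brick_uuid, const char *peer_uuid)
{
        return strcmp (brick_uuid, peer_uuid) == 0 ? 0 : -1;
}

int
main (void)
{
        const char *uuid = "d2f49ab8-0000-0000-0000-000000000000";

        if (resolve_by_hostname ("server1") != 0 &amp;&amp;
            resolve_by_uuid (uuid, uuid) == 0)
                printf ("brick resolved via UUID\n");
        return 0;
}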

Change-Id: Icfa7b2652417135530479d0aa4e2a82b0476f710
BUG: 1472267
Signed-off-by: Gaurav Yadav &lt;gyadav@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17813
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Prashanth Pai &lt;ppai@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
</feed>
