<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/glusterfsd/src/glusterfsd-mgmt.c, branch v3.8.7</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>glusterfsd: Continuous errors appear in mount logs while glusterd is down</title>
<updated>2016-11-23T10:38:15+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawa@redhat.com</email>
</author>
<published>2016-10-26T11:01:58+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=a4f1f5fcc0dfd533279ad1c41235665af82539d1'/>
<id>a4f1f5fcc0dfd533279ad1c41235665af82539d1</id>
<content type='text'>
Problem: When glusterd is down, continuous mgmt_rpc_notify error
         messages are logged in the volume mount log every 3 seconds,
         which consumes disk space.

Solution: Use GF_LOG_OCCASIONALLY to reduce the frequency of the error
          messages.
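
A minimal sketch of the throttling idea (counter name and message text are
illustrative; GF_LOG_OCCASIONALLY comes from libglusterfs logging.h and
emits the log only once every GF_UNIVERSAL_ANSWER (42) invocations):

    /* static counter persists across reconnect attempts; the macro
       logs only when (counter++ % GF_UNIVERSAL_ANSWER) == 0, so one
       message roughly every 42 attempts instead of one every 3 seconds */
    static int log_ctr = 0;

    GF_LOG_OCCASIONALLY (log_ctr, "glusterfsd-mgmt", GF_LOG_ERROR,
                         "failed to connect to remote-host: %s",
                         ctx-&gt;cmd_args.volfile_server);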

&gt; BUG: 1388877
&gt; Change-Id: I6cf24c6ddd9ab380afd058bc0ecd556d664332b1
&gt; Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;
&gt; Reviewed-on: http://review.gluster.org/15732
&gt; NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt; Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Raghavendra Talur &lt;rtalur@redhat.com&gt;
&gt; Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
&gt; Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
&gt; (cherry picked from commit 7874ed245bcc80658547992205f8396f4dd3c76a)

Change-Id: I32c7a2271333686b27b43e54a334c58d3da60a1d
BUG: 1394108
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15822
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-by: Prashanth Pai &lt;ppai@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterfsd: explicitly turn on encryption for volfile fetch</title>
<updated>2016-09-28T14:55:34+00:00</updated>
<author>
<name>Kaushal M</name>
<email>kaushal@redhat.com</email>
</author>
<published>2016-05-05T08:49:55+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=7ade01fc75e35eede7071acb4381f5580102e6c2'/>
<id>7ade01fc75e35eede7071acb4381f5580102e6c2</id>
<content type='text'>
Problem: With encrypted transport, RPC clients are not able to
         reconnect. Because of this, daemons (glustershd etc.) are
         not able to fetch the volfile and do not start.

Solution: Explicitly turning on encryption for the volfile fetch
          resolves the issue.
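
A minimal sketch of the idea (the option key is the socket transport's
ssl-enabled knob; exact placement in the mgmt init path is illustrative):

    /* force SSL on in the transport options used for the volfile-fetch
       RPC connection whenever management encryption is requested */
    if (ctx-&gt;secure_mgmt) {
            ret = dict_set_str (options,
                                "transport.socket.ssl-enabled", "on");
            if (ret)
                    goto out;
    }
    /* ... then create the mgmt RPC client with these options ... */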

&gt; Change-Id: I58e1fe7f5edf0abb5732432291ff677e81429b79
&gt; BUG: 1333317
&gt; Signed-off-by: Kaushal M &lt;kaushal@redhat.com&gt;
&gt; Reviewed-on: http://review.gluster.org/14253
&gt; Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
&gt; NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt; CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
&gt; Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
&gt; (cherry picked from commit 60d235515e582319474ba7231aad490d19240642)

Change-Id: I15193837dc692b0cd7df942843bcf27a1c47e695
BUG: 1379216
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15567
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>glusterd-client: switch volfile server in case the existing connection breaks</title>
<updated>2016-04-12T12:14:22+00:00</updated>
<author>
<name>Prasanna Kumar Kalever</name>
<email>prasanna.kalever@redhat.com</email>
</author>
<published>2016-03-17T08:20:31+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=05bc8bfd2a11d280fe0aaac6c7ae86ea5ff08164'/>
<id>05bc8bfd2a11d280fe0aaac6c7ae86ea5ff08164</id>
<content type='text'>
Problem:
Currently, say we have a 10-node gluster volume, mounted using
Node 1 (N1) as the volfile server and the rest as backup volfile servers:

$ mount -t glusterfs -obackup-volfile-servers=&lt;N2&gt;:&lt;N3&gt;:...:&lt;N10&gt; &lt;N1&gt;:/vol /mnt

If N1 goes down we are still able to access the same mount point,
but if we add or remove bricks while the volume's volfile server
(in our case N1) is down, that information is not passed on to the
client, because the connection between glusterfs and glusterd (of N1)
is broken. As a result we cannot store files on the newly added
bricks until N1 comes back.

Solution:
If N1 goes down, iterate through the nodes specified in the
backup-volfile-servers list and try to establish a connection between
glusterfs and glusterd, so we do not have to wait until N1 comes
back to store files on bricks that were successfully added while
N1 was down.
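
A minimal sketch of the failover on disconnect (field and helper names are
illustrative; the real handling lives in mgmt_rpc_notify):

    case RPC_CLNT_DISCONNECT:
            /* current volfile server is unreachable: advance to the
               next entry in the servers list (built from -s and
               backup-volfile-servers) and retarget the connection */
            server = next_volfile_server (ctx-&gt;cmd_args.curr_server);
            ctx-&gt;cmd_args.volfile_server = server-&gt;volfile_server;
            ret = dict_set_str (rpc_trans-&gt;options, "remote-host",
                                server-&gt;volfile_server);
            break;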

Change-Id: I653c9f081a84667630608091bc243ffc3859d5cd
BUG: 1289916
Signed-off-by: Prasanna Kumar Kalever &lt;prasanna.kalever@redhat.com&gt;
Reviewed-on: http://review.gluster.org/13002
Tested-by: Prasanna Kumar Kalever &lt;pkalever@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Poornima G &lt;pgurusid@redhat.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: Bug fixes for IPv6 support</title>
<updated>2016-02-20T17:16:42+00:00</updated>
<author>
<name>Nithin D</name>
<email>nithind1988@yahoo.in</email>
</author>
<published>2015-11-15T16:44:43+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=46bd29e0f2a7fc9278068a06d12066d614f365ec'/>
<id>46bd29e0f2a7fc9278068a06d12066d614f365ec</id>
<content type='text'>
Problem:
glusterd is not working using the IPv6 transport. The idea is that, with a
proper glusterd.vol configuration:
1. glusterd needs to listen on the default port (24007) as an IPv6 TCP
   listener.
2. Volume creation/deletion/mounting/add-bricks/delete-bricks/peer-probe
   needs to work using IPv6 addresses.
3. Bricks need to listen on IPv6 addresses.
All of the above functionality is needed to say that glusterd supports the
IPv6 transport, and all of it is broken.

Fix:
When "option transport.address-family inet6" option is present in glusterd.vol
file, it is made sure that glusterd creates listeners using ipv6 sockets only and also the same information is saved
inside brick volume files used by glusterfsd brick process when they are starting.
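
For reference, a minimal glusterd.vol carrying the option (the other lines
are the stock defaults and may differ per installation):

    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option transport-type socket
        option transport.address-family inet6
    end-volume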

Tests Run:
Regression tests using ./run-tests.sh
    IPv4: ran manually up to tests/basic/rpm.t.
    IPv6: (needs the above-mentioned config, plus an entry for
          "hostname ::1" in /etc/hosts)
        Ran successfully up to ./tests/basic/glusterd/arbiter-volume-probe.t,
        where the run started failing.

Unit tests using IPv6:
    peer probe
    add-bricks
    remove-bricks
    create volume
    replace-bricks
    start volume
    stop volume
    delete volume

Change-Id: Iebc96e6cce748b5924ce5da17b0114600ec70a6e
BUG: 1117886
Signed-off-by: Nithin D &lt;nithind1988@yahoo.in&gt;
Reviewed-on: http://review.gluster.org/11988
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterfsd: destroy frame after rebalance callback has completed</title>
<updated>2016-02-16T02:40:22+00:00</updated>
<author>
<name>Sakshi Bansal</name>
<email>sabansal@redhat.com</email>
</author>
<published>2016-01-20T04:01:00+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=888efdf5e0ab7d22c3fc227dc0dbfaec91b1d5d9'/>
<id>888efdf5e0ab7d22c3fc227dc0dbfaec91b1d5d9</id>
<content type='text'>
Rebalance destroys the frame immediately after sending a status
notification. The frame seen by its callback is then corrupted, and
rebalance crashes when this corrupted frame is accessed. To avoid
this we must destroy the frame only after the callback has completed.
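
A minimal self-contained sketch of the ordering fix (types and function
names are illustrative, not the actual glusterfsd-mgmt.c code):

    #include &lt;stdio.h&gt;
    #include &lt;stdlib.h&gt;

    typedef struct frame { int op; } frame_t;

    /* illustrative async notify: hands the frame to the callback */
    static void send_status_notify (frame_t *frame,
                                    void (*cbk) (frame_t *))
    {
            cbk (frame);
    }

    static void status_cbk (frame_t *frame)
    {
            printf ("status for op %d handled\n", frame-&gt;op);
            free (frame);   /* fixed: destroy only after the callback
                               has finished with the frame */
    }

    int main (void)
    {
            frame_t *frame = malloc (sizeof (*frame));
            frame-&gt;op = 1;
            /* the buggy ordering freed the frame right after sending
               the notification, so the callback read freed memory;
               instead, ownership passes to the callback: */
            send_status_notify (frame, status_cbk);
            return 0;
    }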

Change-Id: If383017a61f09275256e51c44a1efa28feace87b
BUG: 1300152
Signed-off-by: Sakshi &lt;sabansal@redhat.com&gt;
Reviewed-on: http://review.gluster.org/13262
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
</content>
</entry>
<entry>
<title>cli/afr: op_ret for index heal launch</title>
<updated>2016-02-12T07:31:00+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2016-01-18T12:16:31+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=da33097c3d6492e3b468b4347e47c70828fb4320'/>
<id>da33097c3d6492e3b468b4347e47c70828fb4320</id>
<content type='text'>
Problem:
If index heal is launched when some of the bricks are down, glustershd of that
node sends a -1 op_ret to glusterd which eventually propagates it to the CLI.
Also, glusterd sometimes sends an err_str and sometimes not (depending on
whether the failure happens in the brick-op phase or the commit-op phase).
So the message that gets displayed varies from case to case:

"Launching heal operation to perform index self heal on volume testvol has been
unsuccessful"
                (OR)
"Commit failed on &lt;host&gt;. Please check log file for details."

Fix:
1. Modify afr_xl_op() to return -1 if index healing of at least one brick
fails.
2. Ignore glusterd's error string in gf_cli_heal_volume_cbk and print a more
meaningful message.

The patch also fixes a bug in glusterfs_handle_translator_op(): if we
encountered an error in the notify of one xlator, we broke out of the
loop instead of sending the notify to the remaining xlators.
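
A minimal sketch of the loop fix (surrounding names are illustrative):
record the failure and keep notifying the remaining xlators instead of
breaking out of the loop on the first error.

    /* buggy: if (ret) break;  -- remaining xlators never notified */
    for (i = 0; i &lt; count; i++) {
            ret = xlator_notify (xlators[i],
                                 GF_EVENT_TRANSLATOR_OP, dict);
            if (ret)
                    saved_ret = ret;   /* remember, but keep going */
    }
    ret = saved_ret;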

Change-Id: I957f6c4b4d0a45453ffd5488e425cab5a3e0acca
BUG: 1302291
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Reviewed-on: http://review.gluster.org/13303
Reviewed-by: Anuradha Talur &lt;atalur@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterfsd: support volfile-server-transport type "unix"</title>
<updated>2015-11-20T03:59:36+00:00</updated>
<author>
<name>Mohamed Ashiq</name>
<email>mliyazud@redhat.com</email>
</author>
<published>2015-11-09T15:13:17+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=7fbc38531acbc39369d3b91ba126fc4147ab89d1'/>
<id>7fbc38531acbc39369d3b91ba126fc4147ab89d1</id>
<content type='text'>
glusterfsd fails if glusterd is bound to a specific IP address.
This patch helps glusterfsd get the volfile over a Unix domain socket:
glusterfs -s &lt;unix socket path&gt; --volfile-server-transport unix
          --volfile-id &lt;volume-name&gt; &lt;mount-point&gt;
The patch checks whether volfile-server-transport is of type "unix";
if it is, it uses rpc_transport_unix_options_build to get the volfile.
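
A minimal sketch of the branch (argument lists abbreviated;
rpc_transport_unix_options_build fills the options dict from the
socket path):

    if (cmd_args-&gt;volfile_server_transport &amp;&amp;
        !strcmp (cmd_args-&gt;volfile_server_transport, "unix")) {
            ret = rpc_transport_unix_options_build
                            (&amp;options, cmd_args-&gt;volfile_server, 0);
    } else {
            ret = rpc_transport_inet_options_build
                            (&amp;options, host, port);
    }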

Change-Id: I81b881e7ac5a3a4f2ac83c789c385cf547f0d53e
BUG: 1279484
Signed-off-by: Mohamed Ashiq &lt;mliyazud@redhat.com&gt;
Signed-off-by: Humble Devassy Chirammal &lt;hchiramm@redhat.com&gt;
Reviewed-on: http://review.gluster.org/12556
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
</entry>
<entry>
<title>glusterd: cli command implementation for bitrot scrub status</title>
<updated>2015-11-20T03:41:43+00:00</updated>
<author>
<name>Gaurav Kumar Garg</name>
<email>ggarg@redhat.com</email>
</author>
<published>2015-03-25T12:37:24+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=097e131481d25e5b1f859f4ea556b8bf56155472'/>
<id>097e131481d25e5b1f859f4ea556b8bf56155472</id>
<content type='text'>
The CLI command for bitrot scrub status will be:

gluster volume bitrot &lt;volname&gt; scrub status

The above command shows the statistics of the bitrot scrubber.

Upon execution it shows some common scrubber tunable values of
volume &lt;VOLNAME&gt;, followed by the scrubber statistics of the
individual nodes.

Sample output for a single node:

Volume name : &lt;VOLNAME&gt;

State of scrub: Active

Scrub frequency: biweekly

Bitrot error log location: /var/log/glusterfs/bitd.log

Scrubber error log location: /var/log/glusterfs/scrub.log

=========================================================

Node name:

Number of Scrubbed files:

Number of Unsigned files:

Last completed scrub time:

Duration of last scrub:

Error count:

=========================================================

This is just the infrastructure. The list of bad files, last scrub
time, and error count values will be taken care of by the
http://review.gluster.org/#/c/12503/ and
http://review.gluster.org/#/c/12654/ patches.

Change-Id: I3ed3c7057c9d0c894233f4079a7f185d90c202d1
BUG: 1207627
Signed-off-by: Gaurav Kumar Garg &lt;ggarg@redhat.com&gt;
Reviewed-on: http://review.gluster.org/10231
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
</entry>
<entry>
<title>core: use syscall wrappers instead of direct syscalls - miscellaneous</title>
<updated>2015-10-28T20:38:42+00:00</updated>
<author>
<name>Kaleb S. KEITHLEY</name>
<email>kkeithle@redhat.com</email>
</author>
<published>2015-10-01T20:31:19+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=3066a21caafab6305527991de11c8eb43ec0044c'/>
<id>3066a21caafab6305527991de11c8eb43ec0044c</id>
<content type='text'>
Various xlators and other components are invoking system calls
directly instead of using the libglusterfs/syscall.[ch] wrappers.

Where a system call wrapper is not used, there should be a comment
in the source explaining why.
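
A minimal illustration of the convention (sys_open and friends come from
libglusterfs/src/syscall.h; the justification comment is an example):

    /* preferred: go through the wrapper */
    fd = sys_open (path, O_RDONLY, 0);

    /* direct syscall is acceptable only with a comment, e.g.:
       open() used directly because this code runs before
       libglusterfs is initialized */
    fd = open (path, O_RDONLY);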

Change-Id: I1f47820534c890a00b452fa61f7438eb2b3f667c
BUG: 1267967
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-on: http://review.gluster.org/12276
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterfsd: Initialize ctx, cmd_args</title>
<updated>2015-10-10T00:42:11+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2015-10-07T13:09:42+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=069661f8497bbe459cb0a195f115d84c2ebd37aa'/>
<id>069661f8497bbe459cb0a195f115d84c2ebd37aa</id>
<content type='text'>
Change-Id: I9c71ae264665b7bba609c7f86cf42a52a6b47260
BUG: 1269696
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
Reviewed-on: http://review.gluster.org/12311
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</content>
</entry>
</feed>
