<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/mgmt/glusterd/src/glusterd.c, branch v3.9.0rc2</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>glusterd : Introduce reset brick</title>
<updated>2016-08-30T02:55:53+00:00</updated>
<author>
<name>Anuradha Talur</name>
<email>atalur@redhat.com</email>
</author>
<published>2016-08-22T17:22:03+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=936f8aeac3252951e7fa0cdaa5d260fad3bd5ea0'/>
<id>936f8aeac3252951e7fa0cdaa5d260fad3bd5ea0</id>
<content type='text'>
This command essentially allows a replace-brick operation in which the
source and destination bricks are the same.

Usage:
gluster v reset-brick &lt;volname&gt; &lt;hostname:brick-path&gt; start
This command kills the brick to be reset. Once it is run, the admin can
perform any manual operations needed, such as configuring options for
the brick. After that, resetting the brick can be completed with the
following command.

gluster v reset-brick &lt;vname&gt; &lt;hostname:brick&gt; &lt;hostname:brick&gt; commit {force}

This does the actual resetting of the brick. The 'force' option should
be used when the brick already contains the volinfo id.
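
For illustration, a hypothetical end-to-end run (the volume name testvol
and the brick myhost:/bricks/brick1 are made up for this example) could
look like:

gluster v reset-brick testvol myhost:/bricks/brick1 start
  ... replace the disk, remount it at the same path, set any brick options ...
gluster v reset-brick testvol myhost:/bricks/brick1 myhost:/bricks/brick1 commit force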

Problem: When replacing the disk of a brick in a replicate volume,
the following two scenarios may occur:

a) reads may be served from the replaced-disk brick, which leads to
empty reads; b) potential data loss if subsequent writes succeed only
on the replaced brick and heal is then done to the other bricks from it.

Solution: After disk replacement, make sure the reset-brick command is
run for that brick so that pending markers are set for it and it is not
chosen as a source for reads or heal. However, as of now, replace-brick
for the same brick-path is not allowed. To fix the problem described
above, same-brick-path replace-brick is needed.
With this patch, reset-brick commit {force} will be allowed even when
the source and destination &lt;hostname:brickpath&gt; are identical, as long as
1) the destination brick is not alive, and
2) the source and destination bricks have the same brick uuid and path.
Also, after replace-brick the destination brick will use the same port
as the source brick.

Change-Id: I440b9e892ffb781ea4b8563688c3f85c7a7c89de
BUG: 1266876
Signed-off-by: Anuradha Talur &lt;atalur@redhat.com&gt;
Reviewed-on: http://review.gluster.org/12250
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Ashish Pandey &lt;aspandey@redhat.com&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: fix unused variable warnings/errors</title>
<updated>2016-08-29T16:36:53+00:00</updated>
<author>
<name>Kaleb S. KEITHLEY</name>
<email>kkeithle@redhat.com</email>
</author>
<published>2016-08-22T17:22:03+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=b231acedb8696ef298a405ffedd36dba8a7183be'/>
<id>b231acedb8696ef298a405ffedd36dba8a7183be</id>
<content type='text'>
http://review.gluster.org/14085 fixes the "leak" - via the
generated rpc/xdr headers - of the pragmas that mask these warnings.

However, 14085 won't pass the smoke test until all the warnings are
fixed.

Change-Id: Id6f7555e35fd2bc37e8b9f81ac37d5384624501c
BUG: 1369124
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15285
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Prashanth Pai &lt;ppai@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: Fix gsyncd upgrade issue</title>
<updated>2016-07-13T15:02:54+00:00</updated>
<author>
<name>Kotresh HR</name>
<email>khiremat@redhat.com</email>
</author>
<published>2016-07-11T19:09:31+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=1b998788ece8c8b52657e8b9aae65d3279690c5b'/>
<id>1b998788ece8c8b52657e8b9aae65d3279690c5b</id>
<content type='text'>
Problem:
    A gluster upgrade does not generate new volfiles.

Cause:
During an upgrade, "glusterd --xlator-option *.upgrade=on -N"
is run to generate new volfiles. It is run after the 'glusterfs'
rpm is installed. The above command fails during upgrade
if geo-replication is installed, because on glusterd start the
'gsyncd' binary is invoked to configure geo-replication.
Since the 'glusterfs' rpm is installed prior to the 'geo-rep'
rpm, the 'gsyncd' binary used by the glusterd upgrade command
is still the old version, and hence the command fails before
generating new volfiles.

Solution:
Don't call the geo-replication configuration during upgrade/downgrade.
Geo-replication configuration happens during the start of glusterd
after the upgrade.

Change-Id: Id58ea44ead9f69982f86fb68dc5b9ee3f6cd11a1
BUG: 1355628
Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14898
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>glusterd: spawn daemons from init() on a single or two node setup</title>
<updated>2016-07-05T11:51:28+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2016-06-30T06:12:54+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=9a9f37440cb07ce2a1130ce39ea0d3461078f3a8'/>
<id>9a9f37440cb07ce2a1130ce39ea0d3461078f3a8</id>
<content type='text'>
Allow glusterd to spawn the daemons at initialization time when the peer
count is less than 2. This is required if a user wants to set up a two-node
cluster without server-side quorum and wants the bricks to come up on a node
while the other node is down. However, this behaviour will be overridden when
server-side quorum is enabled.

Change-Id: I21118e996655822467eaf329f638eb9a8bf8b7d5
BUG: 1352277
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14848
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd/geo-rep: slave volume uuid to identify a geo-rep session</title>
<updated>2016-05-13T06:22:59+00:00</updated>
<author>
<name>Saravanakumar Arumugam</name>
<email>sarumuga@redhat.com</email>
</author>
<published>2015-12-29T13:52:36+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=a9128cda34b1f696b717ba09fa0ac5a929be8969'/>
<id>a9128cda34b1f696b717ba09fa0ac5a929be8969</id>
<content type='text'>
Problem:
Currently, it is possible to create multiple geo-rep sessions from
the Master host to Slave host(s), where the Slave host(s) belong
to the same volume.

For example:
Consider Master host M1 with volume tv1 and Slave volume tv2,
which spans two Slave hosts S1 and S2.
Currently, it is possible to create a geo-rep session from
M1(tv1) to S1(tv2) as well as from M1(tv1) to S2(tv2).
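
For instance, with the hosts and volumes named above, both of the
following commands (hypothetical invocations; exact options may vary)
are accepted today even though they point at the same slave volume:

gluster volume geo-replication tv1 S1::tv2 create push-pem
gluster volume geo-replication tv1 S2::tv2 create push-pem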

When only the Slave host is modified, it is identified as a new geo-rep
session (since the slave host and slave volume together identify the
Slave side).

Also, it is possible to create both root and non-root geo-rep sessions between
the same Master volume and Slave volume. This should also be avoided.

Solution:
Creating multiple geo-rep sessions like this must be avoided; to do so,
use the Slave volume uuid to identify a Slave.
This way, we can detect whether a session has already been created for
the same Slave volume and avoid creating it again (via a different host).

When session creation is forced in the above scenario, rename
the existing geo-rep session directory to the new Slave host mentioned.

Change-Id: I9239759cbc0d15dad63c48b8cf62950bb687c7c8
BUG: 1294813
Signed-off-by: Saravanakumar Arumugam &lt;sarumuga@redhat.com&gt;
Signed-off-by: Aravinda VK &lt;avishwan@redhat.com&gt;
Reviewed-on: http://review.gluster.org/13111
Reviewed-by: Kotresh HR &lt;khiremat@redhat.com&gt;
Tested-by: Kotresh HR &lt;khiremat@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
</entry>
<entry>
<title>glusterd: register rpc notification for unix sockets</title>
<updated>2016-01-29T06:19:47+00:00</updated>
<author>
<name>vmallika</name>
<email>vmallika@redhat.com</email>
</author>
<published>2016-01-05T12:20:09+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=2fa5f7267801c3bff25f813ff6eb44305e56b0b5'/>
<id>2fa5f7267801c3bff25f813ff6eb44305e56b0b5</id>
<content type='text'>
Previously only the CLI used a unix socket to connect to glusterd,
and there was no need to register rpc callback notifications.
Now the auxiliary mount process is started with the unix socket option.

So we need to register rpc notifications for unix sockets as
well.

Change-Id: I985839fc91c5c2674d85a7ec94ae24f47898c22d
BUG: 1295763
Signed-off-by: vmallika &lt;vmallika@redhat.com&gt;
Reviewed-on: http://review.gluster.org/13174
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Tested-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
</content>
</entry>
<entry>
<title>core: use syscall wrappers instead of direct syscalls -- glusterd</title>
<updated>2015-10-28T13:55:04+00:00</updated>
<author>
<name>Kaleb S. KEITHLEY</name>
<email>kkeithle@redhat.com</email>
</author>
<published>2015-10-16T17:52:28+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=063d4ead61ee47433793de81a1c77e0ba69e6e07'/>
<id>063d4ead61ee47433793de81a1c77e0ba69e6e07</id>
<content type='text'>
Various xlators and other components invoke system calls
directly instead of using the libglusterfs/syscall.[ch] wrappers.

If the system call wrappers are not used, there should be a comment
in the source explaining why.
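
As a minimal sketch of the kind of conversion this asks for (the helper
remove_stale_socket() is made up for illustration; sys_unlink() follows
the sys_*() naming used by libglusterfs/syscall.h):

#include "syscall.h"

/* Call the wrapper instead of unlink(2) directly. */
static int
remove_stale_socket (const char *sockpath)
{
        /* was: return unlink (sockpath); */
        return sys_unlink (sockpath);
}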

Change-Id: I28bf2a5f7730b35914e7ab57fed91e1966b30073
BUG: 1267967
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-on: http://review.gluster.org/12379
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</content>
</entry>
<entry>
<title>Tiering: change in status for remove brick and rebalance</title>
<updated>2015-09-21T14:37:37+00:00</updated>
<author>
<name>hari gowtham</name>
<email>hgowtham@redhat.com</email>
</author>
<published>2015-09-09T13:47:17+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=2ebfc3d0a61ab88bc0e4bad07157f69f618ad54f'/>
<id>2ebfc3d0a61ab88bc0e4bad07157f69f618ad54f</id>
<content type='text'>
When we trigger a detach-tier start on a tiered volume,
the volume status task shows "remove brick" instead of "Detach tier".

Status of volume: vol1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick 10.70.42.171:/data/gluster/hbr1       49154     0          Y       25098
Cold Bricks:
Brick 10.70.42.171:/data/gluster/p1         49152     0          Y       25101
Brick 10.70.42.171:/data/gluster/p2         49153     0          Y       25112
NFS Server on localhost                     N/A       N/A        N       N/A

Task Status of Volume vol1
------------------------------------------------------------------------------
Task                 : Tier migrate
ID                   : e11d5a3d-b1ae-4c3f-8f95-b28993c60939
Status               : in progress

Status of volume: vol1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick 10.70.42.171:/data/gluster/hbr1       49154     0          Y       25098
Cold Bricks:
Brick 10.70.42.171:/data/gluster/p1         49152     0          Y       25101
Brick 10.70.42.171:/data/gluster/p2         49153     0          Y       25112
NFS Server on localhost                     N/A       N/A        N       N/A

Task Status of Volume vol1
------------------------------------------------------------------------------
Task                 : Detach tier
ID                   : 76d700b1-5bbd-43ed-95fd-1640b2b4af31
Status               : completed

Change-Id: I4bd3b340d4e700e8afed00e1478b8a8b54dfe2e2
BUG: 1261837
Signed-off-by: hari gowtham &lt;hgowtham@redhat.com&gt;
Signed-off-by: Hari Gowtham &lt;hgowtham@redhat.com&gt;
Reviewed-on: http://review.gluster.org/12149
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Dan Lambright &lt;dlambrig@redhat.com&gt;
Tested-by: Dan Lambright &lt;dlambrig@redhat.com&gt;
</content>
</entry>
<entry>
<title>all: reduce "inline" usage</title>
<updated>2015-09-01T11:55:15+00:00</updated>
<author>
<name>Jeff Darcy</name>
<email>jdarcy@redhat.com</email>
</author>
<published>2015-07-28T16:11:12+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=0773ca67fdb60a142207759fa6c07a69882ce59c'/>
<id>0773ca67fdb60a142207759fa6c07a69882ce59c</id>
<content type='text'>
There are three kinds of inline functions: plain inline, extern inline,
and static inline.  All three have been removed from .c files, except
those in "contrib" which aren't our problem.  Inlines in .h files, which
are overwhelmingly "static inline" already, have generally been left
alone.  Over time we should be able to "lower" these into .c files, but
that has to be done in a case-by-case fashion requiring more manual
effort.  This part was easy to do automatically without (as far as I can
tell) any ill effect.

In the process, several pieces of dead code were flagged by the
compiler, and were removed.
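
A hypothetical illustration of the "lowering" described above (gf_double
is invented for this sketch, not a function touched by the patch):

/* Before, a header might carry the definition directly:
 *     static inline int gf_double (int x) { return 2 * x; }
 * After lowering, the header keeps only the declaration: */
int gf_double (int x);

/* ...and the .c file holds the ordinary, non-inline definition. */
int
gf_double (int x)
{
        return 2 * x;
}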

Change-Id: I56a5e614735c9e0a6ee420dab949eac22e25c155
BUG: 1245331
Signed-off-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11769
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Dan Lambright &lt;dlambrig@redhat.com&gt;
Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-by: Raghavendra Bhat &lt;raghavendra@redhat.com&gt;
Reviewed-by: Venky Shankar &lt;vshankar@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: initialize the daemon services on demand</title>
<updated>2015-07-27T07:14:01+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2015-07-01T09:17:48+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=d49b4b1086119873664b77ea7d7f2a5dd8f671ee'/>
<id>d49b4b1086119873664b77ea7d7f2a5dd8f671ee</id>
<content type='text'>
As of now, all the daemon services are initialized in the glusterd init path.
Since the socket file path of a per-node daemon requires the uuid of the node,
the MY_UUID macro is invoked as part of this initialization.

The above flow breaks use cases where a gluster image is built from a template,
be it a Dockerfile, Vagrantfile, or any other kind of virtualization
environment: instances brought up from such an image would all have the same
node UUID, resulting in peer probe failure.

The solution is to lazily initialize the services on demand.
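
A generic lazy-initialization sketch of that idea (svc_inited,
svc_real_init and svc_ensure_inited are illustrative names, not the
actual glusterd code):

static int svc_inited = 0;

static int
svc_real_init (void)
{
        /* In glusterd this is the point where per-node state such as
         * MY_UUID would be resolved, instead of at process init. */
        return 0;
}

static int
svc_ensure_inited (void)
{
        if (!svc_inited) {
                if (svc_real_init () != 0)
                        return -1;
                svc_inited = 1;
        }
        return 0;
}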

Change-Id: If7caa533026c83e98c7c7678bded67085d0bbc1e
BUG: 1238135
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11488
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Gaurav Kumar Garg &lt;ggarg@redhat.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
</content>
</entry>
</feed>
