<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/mgmt/glusterd/src/glusterd-rebalance.c, branch v3.6.4beta2</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>glusterd/uss: Create rebalance volfile.</title>
<updated>2015-01-08T09:14:43+00:00</updated>
<author>
<name>Avra Sengupta</name>
<email>asengupt@redhat.com</email>
</author>
<published>2014-11-24T08:24:24+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=c71dc72b7c4df6e1ea7193227d69ce0db18b5b93'/>
<id>c71dc72b7c4df6e1ea7193227d69ce0db18b5b93</id>
<content type='text'>
Create a new rebalance volfile, which will not contain
snap-view client translators, irrespective of the status
of USS.

This volfile will be created and regenerated every time
the fuse volfile is generated, and will be consumed
by the rebalance process.

Change-Id: I514a8e88d06c0b8fb6949c3a3e6dc4dbe55e38af
BUG: 1175758
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/9190
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Tested-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Signed-off-by: Sachin Pandit &lt;spandit@redhat.com&gt;
Reviewed-on: http://review.gluster.org/9339
Reviewed-by: Raghavendra Bhat &lt;raghavendra@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Create a new rebalance volfile, which will not contain
snap-view client translators, irrespective of the status
of USS.

This volfile will be created and regenerated every time
the fuse volfile is generated, and will be consumed
by the rebalance process.

Change-Id: I514a8e88d06c0b8fb6949c3a3e6dc4dbe55e38af
BUG: 1175758
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/9190
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Tested-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Signed-off-by: Sachin Pandit &lt;spandit@redhat.com&gt;
Reviewed-on: http://review.gluster.org/9339
Reviewed-by: Raghavendra Bhat &lt;raghavendra@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: Handle rpc_connect failure in the event handler</title>
<updated>2014-06-05T17:11:45+00:00</updated>
<author>
<name>Vijaikumar M</name>
<email>vmallika@redhat.com</email>
</author>
<published>2014-05-23T09:12:08+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=42b956971c47fd0708cbbd17ce8c78c2ed79bfba'/>
<id>42b956971c47fd0708cbbd17ce8c78c2ed79bfba</id>
<content type='text'>
Currently rpc_connect calls the notification function in the same
thread on failure. The glusterd notification function holds the
big_lock, and hence the big_lock is released before calling rpc_connect.

In snapshot creation, releasing the big_lock before completing the
operation can cause problems like deadlock or memory corruption.

Bricks are started as part of the snapshot create operation.
brick_start releases the big_lock when doing brick_connect, and this
might cause a glusterd crash.
There is a similar issue in bug# 1088355.

The solution is to let the event handler handle the failure rather
than doing it in rpc_connect.

Change-Id: I088d44092ce845a07516c1d67abd02b220e08b38
BUG: 1101507
Signed-off-by: Vijaikumar M &lt;vmallika@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7843
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Tested-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Currently rpc_connect calls the notification function in the same
thread on failure. The glusterd notification function holds the
big_lock, and hence the big_lock is released before calling rpc_connect.

In snapshot creation, releasing the big_lock before completing the
operation can cause problems like deadlock or memory corruption.

Bricks are started as part of the snapshot create operation.
brick_start releases the big_lock when doing brick_connect, and this
might cause a glusterd crash.
There is a similar issue in bug# 1088355.

The solution is to let the event handler handle the failure rather
than doing it in rpc_connect.

Change-Id: I088d44092ce845a07516c1d67abd02b220e08b38
BUG: 1101507
Signed-off-by: Vijaikumar M &lt;vmallika@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7843
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Tested-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd : Port glusterd sync log messages to gf_msg API</title>
<updated>2014-05-07T04:48:03+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2014-05-03T07:22:44+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=199435aac3be170f6dadd4e88a576cec808ee419'/>
<id>199435aac3be170f6dadd4e88a576cec808ee419</id>
<content type='text'>
Change-Id: Ic3ed2c96d8fc3a15fedaa80517a2c79c0c858963
BUG: 1075611
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7652
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Change-Id: Ic3ed2c96d8fc3a15fedaa80517a2c79c0c858963
BUG: 1075611
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7652
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: port network failure log messages to gf_msg API</title>
<updated>2014-05-07T04:46:57+00:00</updated>
<author>
<name>Krishnan Parthasarathi</name>
<email>kparthas@redhat.com</email>
</author>
<published>2014-05-02T09:39:25+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=451246a58dbbc1ec777f379a6b779be374379abd'/>
<id>451246a58dbbc1ec777f379a6b779be374379abd</id>
<content type='text'>
Change-Id: I23df6d179e9d66a71721e9844a34c5b96586f90f
BUG: 1075611
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7462
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Change-Id: I23df6d179e9d66a71721e9844a34c5b96586f90f
BUG: 1075611
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7462
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: Differentiate rebalance status and remove-brick status messages</title>
<updated>2014-05-02T16:31:45+00:00</updated>
<author>
<name>ggarg</name>
<email>ggarg@redhat.com</email>
</author>
<published>2014-04-21T13:29:00+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=dd5e318e020fab5914567885c1b83815b39d46f9'/>
<id>dd5e318e020fab5914567885c1b83815b39d46f9</id>
<content type='text'>
Previously, when the user triggered 'gluster volume remove-brick VOLNAME
BRICK start', the command 'gluster volume rebalance &lt;volname&gt; status'
showed output even though the user had not triggered "rebalance start".
Likewise, when the user triggered 'gluster volume rebalance &lt;volname&gt;
start', the command 'gluster volume remove-brick VOLNAME BRICK status'
showed output even though the user had not run remove-brick start.

A regression test failed with the previous patch. The files test/dht.rc
and test/bug/bug-973073 were edited to avoid the regression test failure.

With this fix, rebalance and remove-brick status messages are now
differentiated.

Signed-off-by: ggarg &lt;ggarg@redhat.com&gt;

Change-Id: I7f92ad247863b9f5fbc0887cc2ead07754bcfb4f
BUG: 1089668
Reviewed-on: http://review.gluster.org/7517
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-by: Humble Devassy Chirammal &lt;humble.devassy@gmail.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Previously, when the user triggered 'gluster volume remove-brick VOLNAME
BRICK start', the command 'gluster volume rebalance &lt;volname&gt; status'
showed output even though the user had not triggered "rebalance start".
Likewise, when the user triggered 'gluster volume rebalance &lt;volname&gt;
start', the command 'gluster volume remove-brick VOLNAME BRICK status'
showed output even though the user had not run remove-brick start.

A regression test failed with the previous patch. The files test/dht.rc
and test/bug/bug-973073 were edited to avoid the regression test failure.

With this fix, rebalance and remove-brick status messages are now
differentiated.

Signed-off-by: ggarg &lt;ggarg@redhat.com&gt;

Change-Id: I7f92ad247863b9f5fbc0887cc2ead07754bcfb4f
BUG: 1089668
Reviewed-on: http://review.gluster.org/7517
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-by: Humble Devassy Chirammal &lt;humble.devassy@gmail.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>rpc: transport may be destroyed while rpc isn't</title>
<updated>2014-03-06T05:26:59+00:00</updated>
<author>
<name>Krishnan Parthasarathi</name>
<email>kparthas@redhat.com</email>
</author>
<published>2014-01-21T18:11:07+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=d6c1468b2779b6247e44b75276436021a3469a59'/>
<id>d6c1468b2779b6247e44b75276436021a3469a59</id>
<content type='text'>
The rpc_clnt object is destroyed after the corresponding transport object
is destroyed. But rpc_clnt_reconnect, a timer-driven function, refers to
the transport object beyond its 'life'. Using the embedded connection
object instead prevents a use-after-free problem with the transport object.

Also, access the transport object under conn-&gt;lock.

Change-Id: Iae28e8a657d02689963c510114ad7cb7e6764e62
BUG: 962619
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6751
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The rpc_clnt object is destroyed after the corresponding transport object
is destroyed. But rpc_clnt_reconnect, a timer-driven function, refers to
the transport object beyond its 'life'. Using the embedded connection
object instead prevents a use-after-free problem with the transport object.

Also, access the transport object under conn-&gt;lock.

Change-Id: Iae28e8a657d02689963c510114ad7cb7e6764e62
BUG: 962619
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6751
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: Volume locks and transaction specific opinfos</title>
<updated>2014-02-11T07:25:40+00:00</updated>
<author>
<name>Avra Sengupta</name>
<email>asengupt@redhat.com</email>
</author>
<published>2014-02-06T07:33:58+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=97ce783de326b51fcba65737f07db2c314d1e218'/>
<id>97ce783de326b51fcba65737f07db2c314d1e218</id>
<content type='text'>
With this patch we are replacing the existing cluster-wide
lock taken on glusterds across the cluster with volume locks,
which are also taken on glusterds across the cluster but are
volume specific. With volume locks we are able to perform
more than one gluster operation at the same time, as long as the
operations are being performed on different volumes.

We maintain a global list of volume locks (using a dict for the list),
where the key is the volume name and the value is the uuid of the
originator glusterd. These locks are held and released per volume
transaction.

In order to achieve multiple gluster operations occurring at the
same time, we also separate opinfos in the op-state-machine as
part of this patch. To do so, we generate a unique transaction-id
(uuid) per gluster transaction. An opinfo is then associated with
this transaction-id, which is used throughout the transaction. We
maintain a run-time global list (using a dict) of transaction-ids
and their respective opinfos to achieve this.

Upstream Feature Page: http://www.gluster.org/community/documentation/index.php/Features/glusterd-volume-locks

Change-Id: Iaad505a854bac8de8f83beec0357eb6cde3f7ea8
BUG: 1011470
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/5994
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
With this patch we are replacing the existing cluster-wide
lock taken on glusterds across the cluster with volume locks,
which are also taken on glusterds across the cluster but are
volume specific. With volume locks we are able to perform
more than one gluster operation at the same time, as long as the
operations are being performed on different volumes.

We maintain a global list of volume locks (using a dict for the list),
where the key is the volume name and the value is the uuid of the
originator glusterd. These locks are held and released per volume
transaction.

In order to achieve multiple gluster operations occurring at the
same time, we also separate opinfos in the op-state-machine as
part of this patch. To do so, we generate a unique transaction-id
(uuid) per gluster transaction. An opinfo is then associated with
this transaction-id, which is used throughout the transaction. We
maintain a run-time global list (using a dict) of transaction-ids
and their respective opinfos to achieve this.

Upstream Feature Page: http://www.gluster.org/community/documentation/index.php/Features/glusterd-volume-locks

Change-Id: Iaad505a854bac8de8f83beec0357eb6cde3f7ea8
BUG: 1011470
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/5994
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: Relocate rebalance sockfile</title>
<updated>2014-01-10T10:08:37+00:00</updated>
<author>
<name>Kaushal M</name>
<email>kaushal@redhat.com</email>
</author>
<published>2013-12-30T04:29:18+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=2edf1ec797e6f56515d0208be152d18ca6e71456'/>
<id>2edf1ec797e6f56515d0208be152d18ca6e71456</id>
<content type='text'>
The defrag sockfile was moved from priv-&gt;workdir to
DEFAULT_VAR_RUN_DIRECTORY. The format for the new path of the defrag
sockfile is 'DEFAULT_VAR_RUN_DIRECTORY/gluster-rebalance-&lt;vol-id&gt;.sock'.

This was needed because the earlier location didn't have a fixed length
and could exceed UNIX_PATH_MAX characters. This could lead to the
rebalance process failing to start as the socket file could not be
created.

Also, to keep backward compatibility, glusterd_rebalance_rpc_create
will try both the new and old sockfile locations when attempting
reconnection.

Change-Id: I6740ea665de84ebce1ef7199c412f426de54e3d0
BUG: 1049726
Signed-off-by: Kaushal M &lt;kaushal@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6616
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The defrag sockfile was moved from priv-&gt;workdir to
DEFAULT_VAR_RUN_DIRECTORY. The format for the new path of the defrag
sockfile is 'DEFAULT_VAR_RUN_DIRECTORY/gluster-rebalance-&lt;vol-id&gt;.sock'.

This was needed because the earlier location didn't have a fixed length
and could exceed UNIX_PATH_MAX characters. This could lead to the
rebalance process failing to start as the socket file could not be
created.

Also, to keep backward compatibility, glusterd_rebalance_rpc_create
will try both the new and old sockfile locations when attempting
reconnection.

Change-Id: I6740ea665de84ebce1ef7199c412f426de54e3d0
BUG: 1049726
Signed-off-by: Kaushal M &lt;kaushal@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6616
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: rebalance to ref volinfo before starting</title>
<updated>2013-12-20T08:55:41+00:00</updated>
<author>
<name>Krishnan Parthasarathi</name>
<email>kparthas@redhat.com</email>
</author>
<published>2013-12-16T19:42:05+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=79d5a31279825bdc61ad036b30fbe7e41b76fe5e'/>
<id>79d5a31279825bdc61ad036b30fbe7e41b76fe5e</id>
<content type='text'>
Change-Id: Ib316897dcbd0748bfb3bfcda186b9fe30c07f80f
BUG: 1038051
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6522
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Change-Id: Ib316897dcbd0748bfb3bfcda186b9fe30c07f80f
BUG: 1038051
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6522
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>rpc,glusterd: Use rpc_clnt notifyfn to cleanup mydata</title>
<updated>2013-12-16T13:03:19+00:00</updated>
<author>
<name>Kaushal M</name>
<email>kaushal@redhat.com</email>
</author>
<published>2013-08-08T10:20:31+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=40e13bc5b44d0b0cdaf7833c848d4a52352e0a13'/>
<id>40e13bc5b44d0b0cdaf7833c848d4a52352e0a13</id>
<content type='text'>
rpc:
- On an RPC_TRANSPORT_CLEANUP event, rpc_clnt_notify calls the registered
  notifyfn with an RPC_CLNT_DESTROY event. The notifyfn should properly
  clean up the saved mydata on this event.
- Break the reconnect chain when an rpc client is disabled. This will
  prevent new disconnect events, which can lead to crashes.

glusterd:
- Added support for RPC_CLNT_DESTROY in glusterd_brick_rpc_notify.
- Use a common glusterd_rpc_clnt_unref() function throughout glusterd in
  place of rpc_clnt_unref(). This function correctly gives up the
  big-lock before performing the unref.

Change-Id: I93230441c5089039643fc9f5632477ef1b695348
BUG: 962619
Signed-off-by: Kaushal M &lt;kaushal@redhat.com&gt;
Reviewed-on: http://review.gluster.org/5512
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
rpc:
- On an RPC_TRANSPORT_CLEANUP event, rpc_clnt_notify calls the registered
  notifyfn with an RPC_CLNT_DESTROY event. The notifyfn should properly
  clean up the saved mydata on this event.
- Break the reconnect chain when an rpc client is disabled. This will
  prevent new disconnect events, which can lead to crashes.

glusterd:
- Added support for RPC_CLNT_DESTROY in glusterd_brick_rpc_notify.
- Use a common glusterd_rpc_clnt_unref() function throughout glusterd in
  place of rpc_clnt_unref(). This function correctly gives up the
  big-lock before performing the unref.

Change-Id: I93230441c5089039643fc9f5632477ef1b695348
BUG: 962619
Signed-off-by: Kaushal M &lt;kaushal@redhat.com&gt;
Reviewed-on: http://review.gluster.org/5512
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</pre>
</div>
</content>
</entry>
</feed>
