<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/rpc/xdr/src/cli1-xdr.x, branch v3.8.12</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>glusterd/quota: upgrade quota.conf file during an upgrade</title>
<updated>2016-11-08T16:46:41+00:00</updated>
<author>
<name>Manikandan Selvaganesh</name>
<email>mselvaga@redhat.com</email>
</author>
<published>2016-08-30T12:23:09+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=9580864838c7c6044368cd969d1c11a2aad37c3c'/>
<id>9580864838c7c6044368cd969d1c11a2aad37c3c</id>
<content type='text'>
Problem
=======
When quota is enabled on 3.6, quota.conf carries quota conf version
v1.1. When the node is upgraded to 3.7, it still has quota conf
version v1.1 until a quota enable/disable/set-limit operation is
initiated. If no such operation is initiated and this node tries to
peer probe a node that is a fresh install of 3.7 (which has quota conf
version v1.2), the result is a "Peer rejected" state. This patch fixes
the issue.

Solution
========
When an upgrade happens from 3.6 to 3.7, the quota.conf file needs to
be modified as well. With 3.6, the version in quota.conf is v1.1, and
from 3.7 onwards it needs to be v1.2, because the inode quota feature
is introduced in 3.7. So when an op-version bump-up happens, quota.conf
needs to be upgraded to quota conf version v1.2, and every 16-byte UUID
needs to be changed to a 17-byte UUID as well.

Previously, quota.conf did get upgraded when the cluster version was
upgraded to 3.7, but the upgrade was done only when a quota
enable/disable/set-limit operation was performed. With this patch, the
upgrade is also done during a cluster op-version bump-up.
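
To illustrate the conversion, here is a minimal C sketch, assuming a
simple record layout (a header line followed by fixed-size UUID
records); the header strings, the type byte, and the function name are
hypothetical, not the actual glusterd code:

/* Hedged sketch: rewrite a v1.1 quota.conf (16-byte UUIDs) as v1.2
 * (16-byte UUID + 1-byte type). Header strings and the type value
 * are assumptions for illustration. */
#include &lt;stdio.h&gt;
#include &lt;string.h&gt;

#define UUID_SIZE  16
#define TYPE_USAGE 1   /* assumed marker for a usage-limit record */
#define HDR_V11    "GlusterFS Quota conf | version: v1.1\n"
#define HDR_V12    "GlusterFS Quota conf | version: v1.2\n"

int upgrade_quota_conf(const char *src, const char *dst)
{
    char hdr[sizeof(HDR_V11)] = {0};
    unsigned char uuid[UUID_SIZE], type = TYPE_USAGE;
    FILE *in = fopen(src, "rb");
    FILE *out = in ? fopen(dst, "wb") : NULL;

    if (!in || !out)
        goto err;
    /* Verify the old header, then emit the new one. */
    if (fread(hdr, 1, strlen(HDR_V11), in) != strlen(HDR_V11) ||
        strcmp(hdr, HDR_V11) != 0)
        goto err;
    fputs(HDR_V12, out);
    /* Each v1.1 record is a bare UUID; append a type byte for v1.2. */
    while (fread(uuid, 1, UUID_SIZE, in) == UUID_SIZE) {
        fwrite(uuid, 1, UUID_SIZE, out);
        fwrite(&amp;type, 1, 1, out);
    }
    fclose(in);
    fclose(out);
    return 0;
err:
    if (in)
        fclose(in);
    if (out)
        fclose(out);
    return -1;
}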

&gt; Reviewed-on: http://review.gluster.org/15352
&gt; Tested-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
&gt; NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt; CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
(cherry picked from commit 4b2cff614462508eef529c5d128e0974720e3f50)

Change-Id: Idb5ba29d3e1ea0e45c85d87c952c75da9e0f99f0
BUG: 1392716
Signed-off-by: Manikandan Selvaganesh &lt;mselvaga@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15791
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Manikandan Selvaganesh &lt;manikandancs333@gmail.com&gt;
Tested-by: Manikandan Selvaganesh &lt;manikandancs333@gmail.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>snapshot/cli: Fix snapshot status xml output</title>
<updated>2016-08-24T10:15:12+00:00</updated>
<author>
<name>Avra Sengupta</name>
<email>asengupt@redhat.com</email>
</author>
<published>2016-04-12T06:56:54+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=106a107ec5c8ecc48c3e048a9b8ad9ac809e278d'/>
<id>106a107ec5c8ecc48c3e048a9b8ad9ac809e278d</id>
<content type='text'>
    Backport of http://review.gluster.org/#/c/14018/

snap status --xml errors out if a brick is down and does not have a
PID. The snap status CLI handles this scenario by displaying "N/A";
this patch handles it the same way in the XML output.

snap status &lt;snapname&gt; --xml fails because the writer is not
initialised for it. GF_SNAP_STATUS_TYPE_ITER is now used instead of
GF_SNAP_STATUS_TYPE_SNAP for the status of all snaps, to differentiate
between the two scenarios.

Added test case volume-snapshot-xml.t to check the XML output of all
snapshot commands.
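
As an illustration, here is a minimal sketch of the N/A handling using
libxml2's xmlTextWriter (the writer the cli XML output is built on);
the element name and helper function are illustrative, not the exact
cli schema:

#include &lt;libxml/xmlwriter.h&gt;

/* Emit the brick pid element; a brick that is down has no PID, so
 * write "N/A" instead of erroring out. */
static int
write_brick_pid(xmlTextWriterPtr writer, int pid)
{
    if (pid &lt;= 0)
        return xmlTextWriterWriteElement(writer, (xmlChar *)"pid",
                                         (xmlChar *)"N/A");
    return xmlTextWriterWriteFormatElement(writer, (xmlChar *)"pid",
                                           "%d", pid);
}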

&gt; Reviewed-on: http://review.gluster.org/14018
&gt; Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;

Change-Id: I99563e8f3e84f1aaeabd865326bb825c44f5c745
BUG: 1369372
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15291
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
</content>
</entry>
<entry>
<title>tier/glusterd: make the new tier detach command throw warnings</title>
<updated>2015-12-16T03:45:42+00:00</updated>
<author>
<name>hari gowtham</name>
<email>hgowtham@redhat.com</email>
</author>
<published>2015-12-04T13:04:36+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=55f4e8a74e89d61c97e79474c4488ba0bf40a3c1'/>
<id>55f4e8a74e89d61c97e79474c4488ba0bf40a3c1</id>
<content type='text'>
For detach tier, validation was done using the string "detach-tier",
but the new commands use the string "tier". Comparing against "tier"
alone creates a problem, since both tier status and tier detach contain
the keyword "tier". So tier detach and tier status were separated, and
strtok was used to prevent the condition from passing when the volume
name contains "tier" as a substring (only the second word of the string
is taken and checked to see whether the feature is tier).

Problem: the new detach tier command does not throw warnings like "not
a tier volume" or "detach tier not started"; instead it prints empty
output.

Fix: during validation, the volume is checked to see whether it is a
tiered volume; if so, it is checked whether detach tier has been
started, and otherwise the appropriate warning is thrown.
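
A minimal sketch of the second-word check described above (the
function name and buffer size are illustrative, not the actual
glusterd code):

#include &lt;stdio.h&gt;
#include &lt;string.h&gt;

/* Compare only the second whitespace-separated word of the command
 * string against "tier", so a volume whose name merely contains the
 * substring "tier" cannot cause a false match. */
static int
is_tier_command(const char *cmdline)
{
    char buf[256];
    char *word;

    snprintf(buf, sizeof(buf), "%s", cmdline);
    word = strtok(buf, " ");        /* first word */
    if (word)
        word = strtok(NULL, " ");   /* second word: the feature */
    return word &amp;&amp; strcmp(word, "tier") == 0;
}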

Change-Id: I94246d53b18ab0e9406beaf459eaddb7c5b766c2
BUG: 1288517
Signed-off-by: hari gowtham &lt;hgowtham@redhat.com&gt;
Reviewed-on: http://review.gluster.org/12883
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: cli command implementation for bitrot scrub status</title>
<updated>2015-11-20T03:41:43+00:00</updated>
<author>
<name>Gaurav Kumar Garg</name>
<email>ggarg@redhat.com</email>
</author>
<published>2015-03-25T12:37:24+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=097e131481d25e5b1f859f4ea556b8bf56155472'/>
<id>097e131481d25e5b1f859f4ea556b8bf56155472</id>
<content type='text'>
The CLI command for bitrot scrub status is:

gluster volume bitrot &lt;volname&gt; scrub status

The above command shows the statistics of the bitrot scrubber.

On execution, the command shows some common scrubber tunable values of
volume &lt;VOLNAME&gt;, followed by scrubber statistics for the
individual nodes.

Sample output for a single node:

Volume name : &lt;VOLNAME&gt;

State of scrub: Active

Scrub frequency: biweekly

Bitrot error log location: /var/log/glusterfs/bitd.log

Scrubber error log location: /var/log/glusterfs/scrub.log

=========================================================

Node name:

Number of Scrubbed files:

Number of Unsigned files:

Last completed scrub time:

Duration of last scrub:

Error count:

=========================================================

This is just infrastructure. The list of bad files, the last scrub
time, and the error count will be taken care of by the
http://review.gluster.org/#/c/12503/ and
http://review.gluster.org/#/c/12654/ patches.

Change-Id: I3ed3c7057c9d0c894233f4079a7f185d90c202d1
BUG: 1207627
Signed-off-by: Gaurav Kumar Garg &lt;ggarg@redhat.com&gt;
Reviewed-on: http://review.gluster.org/10231
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
</entry>
<entry>
<title>cluster/tier: add pause tier for snapshots</title>
<updated>2015-10-21T22:44:35+00:00</updated>
<author>
<name>Dan Lambright</name>
<email>dlambrig@redhat.com</email>
</author>
<published>2015-10-05T19:52:02+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=36974c36fa4231df3f0e9428a9da6d1aa33348ab'/>
<id>36974c36fa4231df3f0e9428a9da6d1aa33348ab</id>
<content type='text'>
Snaps of tiered volumes cannot handle files undergoing migration. We
implement a helper mechanism to "pause" migration: any in-flight file
migrations are aborted, and cleanup removes the sticky bits and the
data at the destination. Migration is restarted after the snap
completes.

For testing, an internal switch is added. It is not exposed externally:

gluster volume set vol1 tier-pause [true|false]
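
As an illustration, a minimal sketch of the pause gate in a migration
worker, assuming a shared pause flag and a hypothetical cleanup helper
(names are illustrative, not the actual tier translator internals):

#include &lt;stdbool.h&gt;

static volatile bool tier_paused;   /* set while a snap is taken */

/* Hypothetical helper: drop partial data and the sticky bit at the
 * destination so the source copy stays canonical. */
static void
cleanup_destination(const char *path)
{
    (void)path;
}

static int
migrate_one_file(const char *path)
{
    if (tier_paused)
        return -1;              /* do not start new migrations */
    /* ... copy the file's data to the destination tier ... */
    if (tier_paused) {
        /* A snap began mid-copy: abort and clean up. */
        cleanup_destination(path);
        return -1;
    }
    return 0;
}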

Change-Id: Ia85bbf89ac142e9b7e73fcbef98bb9da86097799
BUG: 1267950
Signed-off-by: Dan Lambright &lt;dlambrig@redhat.com&gt;
Reviewed-on: http://review.gluster.org/12304
Reviewed-by: N Balachandran &lt;nbalacha@redhat.com&gt;
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
</entry>
<entry>
<title>gluster/cli: snapshot delete all does not work with xml</title>
<updated>2015-08-29T05:10:58+00:00</updated>
<author>
<name>Rajesh Joseph</name>
<email>rjoseph@redhat.com</email>
</author>
<published>2015-08-26T02:58:59+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=fd47635a4ffab621a2357c99cd1edd0482940bd5'/>
<id>fd47635a4ffab621a2357c99cd1edd0482940bd5</id>
<content type='text'>
Problem: the snapshot delete all command fails with the --xml option.
Fix: provided XML support for the delete all command.

Change-Id: I77cad131473a9160e188c783f442b6a38a37f758
BUG: 1257533
Signed-off-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Reviewed-on: http://review.gluster.org/12027
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
</content>
</entry>
<entry>
<title>features/bitrot: tunable object signing waiting time value for bitrot</title>
<updated>2015-06-15T12:23:46+00:00</updated>
<author>
<name>Gaurav Kumar Garg</name>
<email>ggarg@redhat.com</email>
</author>
<published>2015-06-05T08:28:28+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=554fa0c1315d0b4b78ba35a2d332d7ac0fd07d48'/>
<id>554fa0c1315d0b4b78ba35a2d332d7ac0fd07d48</id>
<content type='text'>
 Currently, bitrot uses a 120-second waiting time for an object to be
 signed after all fops are released. This signing waiting time value
 should be tunable.

 The command for changing the signing waiting time is:
 #gluster volume bitrot &lt;VOLNAME&gt; signing-time &lt;waiting time value in seconds&gt;
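
 As an illustration, a minimal sketch of reading such a tunable, with
 120 seconds kept as the default when the option is unset (the option
 plumbing and names are illustrative):

#include &lt;stdlib.h&gt;

#define SIGNING_TIME_DEFAULT 120   /* seconds: the old fixed value */

/* Return the configured signing waiting time, falling back to the
 * 120-second default when the option is unset or invalid. */
static unsigned int
get_signing_time(const char *opt_value)
{
    long v;

    if (!opt_value)
        return SIGNING_TIME_DEFAULT;
    v = strtol(opt_value, NULL, 10);
    return v &gt; 0 ? (unsigned int)v : SIGNING_TIME_DEFAULT;
}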

Change-Id: I89f3121564c1bbd0825f60aae6147413a2fbd798
BUG: 1228680
Signed-off-by: Gaurav Kumar Garg &lt;ggarg@redhat.com&gt;
Signed-off-by: Venky Shankar &lt;vshankar@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11105
</content>
</entry>
<entry>
<title>cli/tiering: Enhance cli output for tiering</title>
<updated>2015-05-09T04:27:45+00:00</updated>
<author>
<name>Mohammed Rafi KC</name>
<email>rkavunga@redhat.com</email>
</author>
<published>2015-05-02T12:01:07+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=2676c402bc47ee89b763393e496a013e82d76e54'/>
<id>2676c402bc47ee89b763393e496a013e82d76e54</id>
<content type='text'>
Fix for handling the CLI output of attach-tier and detach-tier.

Change-Id: I4d17f4b09612754fe1b8cec6c2e14927029b9678
BUG: 1211562
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
Reviewed-on: http://review.gluster.org/10284
Reviewed-by: Dan Lambright &lt;dlambrig@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Tested-by: NetBSD Build System
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: remove replace-brick with data migration support from cli/glusterd</title>
<updated>2015-05-07T07:06:23+00:00</updated>
<author>
<name>Gaurav Kumar Garg</name>
<email>ggarg@redhat.com</email>
</author>
<published>2015-03-27T09:50:03+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=07e3f407b311c80e3437b1f650cae62f814d995b'/>
<id>07e3f407b311c80e3437b1f650cae62f814d995b</id>
<content type='text'>
The replace-brick operation with data migration support has been
deprecated from gluster.

With this fix, the replace-brick command supports only one command:

gluster volume replace-brick &lt;VOLNAME&gt; &lt;SOURCE-BRICK&gt; &lt;NEW-BRICK&gt; {commit force}

Change-Id: Ib81d49e5d8e7eaa4ccb5830cfec2bc081191b43b
BUG: 1094119
Signed-off-by: Gaurav Kumar Garg &lt;ggarg@redhat.com&gt;
Reviewed-on: http://review.gluster.org/10101
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
</content>
</entry>
<entry>
<title>quota/marker: turn off inode quotas by default</title>
<updated>2015-05-06T17:48:46+00:00</updated>
<author>
<name>vmallika</name>
<email>vmallika@redhat.com</email>
</author>
<published>2015-04-30T11:20:47+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=b054985d2f7db9ab72759c988db11feb855a1b5e'/>
<id>b054985d2f7db9ab72759c988db11feb855a1b5e</id>
<content type='text'>
Inode quota is a new feature implemented in glusterfs-3.7. If quota is
enabled in an older version and the cluster is upgraded to the new
version, we can hit a setxattr spike during self-heal of inode quotas.
So, when quota is enabled, inode quotas are turned off by default via
an xlator option.

With this patch, we still account for inode quotas, but only when a
write operation is performed on a particular file. Users will be able
to query inode quotas once the inode-quota xlator option is enabled.
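
As an illustration, a minimal sketch of the gating logic described
above, assuming a per-volume config flag (the struct and names are
illustrative, not the actual marker xlator code):

#include &lt;stdbool.h&gt;

struct quota_conf {
    bool quota_on;         /* quota feature enabled */
    bool inode_quota_on;   /* new xlator option, off by default */
};

/* Decide whether to account inode quota for this fop: with the
 * option off, accounting still happens, but only on the write path,
 * which avoids the self-heal setxattr spike. */
static bool
should_account_inode_quota(const struct quota_conf *conf,
                           bool is_write_fop)
{
    if (!conf-&gt;quota_on)
        return false;
    return conf-&gt;inode_quota_on || is_write_fop;
}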

Change-Id: I52fb28bf7024989ce7bb08ac63a303bf3ec1ec9a
BUG: 1209430
Signed-off-by: vmallika &lt;vmallika@redhat.com&gt;
Signed-off-by: Sachin Pandit &lt;spandit@redhat.com&gt;
Reviewed-on: http://review.gluster.org/10152
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
</feed>
