<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/mgmt/glusterd/src/glusterd-store.h, branch v4.1.1</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>Coverity Issue: PW.INCLUDE_RECURSION in several files</title>
<updated>2017-11-09T13:21:11+00:00</updated>
<author>
<name>Girjesh Rajoria</name>
<email>grajoria@redhat.com</email>
</author>
<published>2017-11-02T21:12:23+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=0821a57bd2e7518d1c8df2d4403a2dfbb8ee5b6b'/>
<id>0821a57bd2e7518d1c8df2d4403a2dfbb8ee5b6b</id>
<content type='text'>
Coverity ID: 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417,
418, 419, 423, 424, 425, 426, 427, 428, 429, 436, 437, 438, 439,
440, 441, 442, 443

Issue: Event include_recursion

Removed redundant, recursive includes from the files.
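
For illustration only (these are not the actual headers touched by this
patch), a recursive include of the kind Coverity flags looks like this;
the include guards stop infinite recursion, but the cycle itself is
redundant and is what gets removed:

/* a.h */
#ifndef _A_H
#define _A_H
#include "b.h"    /* b.h in turn includes a.h again */
#endif

/* b.h */
#ifndef _B_H
#define _B_H
#include "a.h"    /* recursive include -- dropping this breaks the cycle */
#endif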

Change-Id: I920776b1fa089a2d4917ca722d0075a9239911a7
BUG: 789278
Signed-off-by: Girjesh Rajoria &lt;grajoria@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Coverity ID: 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417,
418, 419, 423, 424, 425, 426, 427, 428, 429, 436, 437, 438, 439,
440, 441, 442, 443

Issue: Event include_recursion

Removed redundant, recursive includes from the files.

Change-Id: I920776b1fa089a2d4917ca722d0075a9239911a7
BUG: 789278
Signed-off-by: Girjesh Rajoria &lt;grajoria@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>snapshot: Issue with other processes accessing the mounted brick</title>
<updated>2017-10-23T10:05:02+00:00</updated>
<author>
<name>Sunny Kumar</name>
<email>sunkumar@redhat.com</email>
</author>
<published>2017-08-16T08:34:45+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=2b3b3edee2d849b4aee314048987dc995d9679a1'/>
<id>2b3b3edee2d849b4aee314048987dc995d9679a1</id>
<content type='text'>
Added code to unmount the activated snapshot brick during the snapshot
deactivation process, which makes sense as the mount point for
deactivated bricks should not exist.

Removed code for mounting a newly created snapshot, as newly created
snapshots should not be mounted until they are activated.

Added code for mount point creation and snapshot mount during snapshot
activation.
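
A minimal sketch of what activation and deactivation do at the mount
level (the filesystem type, paths and helper names below are
illustrative, not glusterd's actual functions):

#include <sys/mount.h>
#include <sys/stat.h>

/* activation: create the brick mount point and mount the snapshot LV */
static int
snap_activate_mount (const char *lv_device, const char *brick_path)
{
        mkdir (brick_path, 0755);
        return mount (lv_device, brick_path, "xfs", 0, NULL);
}

/* deactivation: unmount so the brick mount point no longer exists */
static int
snap_deactivate_umount (const char *brick_path)
{
        return umount2 (brick_path, MNT_DETACH);
}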

Added validation during glusterd init so that only those snapshots
whose status is either STARTED or RESTORED are mounted.

During snapshot restore, the mount point for a stopped snap should
exist, as it is required to set extended attributes.

During handshake, after getting updates from a friend, the mount point
should exist for an activated snapshot and should not exist for a
deactivated one.

While getting snap status we should show relevant information for
deactivated snapshots; after this patch the 'gluster snap status'
command will show output like:

Snap Name : snap1
Snap UUID : snap-uuid

	Brick Path        :   server1:/run/gluster/snaps/snap-vol-name/brick
	Volume Group      :   N/A (Deactivated Snapshot)
	Brick Running     :   No
	Brick PID         :   N/A
	Data Percentage   :   N/A
	LV Size           :   N/A

Fixes: #276

Change-Id: I65783488e35fac43632615ce1b8ff7b8e84834dc
BUG: 1482023
Signed-off-by: Sunny Kumar &lt;sunkumar@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Added code to unmount the activated snapshot brick during the snapshot
deactivation process, which makes sense as the mount point for
deactivated bricks should not exist.

Removed code for mounting a newly created snapshot, as newly created
snapshots should not be mounted until they are activated.

Added code for mount point creation and snapshot mount during snapshot
activation.

Added validation during glusterd init so that only those snapshots
whose status is either STARTED or RESTORED are mounted.

During snapshot restore, the mount point for a stopped snap should
exist, as it is required to set extended attributes.

During handshake, after getting updates from a friend, the mount point
should exist for an activated snapshot and should not exist for a
deactivated one.

While getting snap status we should show relevant information for
deactivated snapshots; after this patch the 'gluster snap status'
command will show output like:

Snap Name : snap1
Snap UUID : snap-uuid

	Brick Path        :   server1:/run/gluster/snaps/snap-vol-name/brick
	Volume Group      :   N/A (Deactivated Snapshot)
	Brick Running     :   No
	Brick PID         :   N/A
	Data Percentage   :   N/A
	LV Size           :   N/A

Fixes: #276

Change-Id: I65783488e35fac43632615ce1b8ff7b8e84834dc
BUG: 1482023
Signed-off-by: Sunny Kumar &lt;sunkumar@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd : glusterd fails to start when  peer's network interface is down</title>
<updated>2017-07-28T04:47:24+00:00</updated>
<author>
<name>Gaurav Yadav</name>
<email>gyadav@redhat.com</email>
</author>
<published>2017-07-18T10:53:18+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=1477fa442a733d7b1a5ea74884cac8f29fbe7e6a'/>
<id>1477fa442a733d7b1a5ea74884cac8f29fbe7e6a</id>
<content type='text'>
Problem:
glusterd fails to start on nodes where it tries to come up even
before the network is up.

Fix:
On startup glusterd tries to resolve the brick path, which is based on
hostname/ip, but in the above scenario, when the network interface is
not up, glusterd is not able to resolve the brick path using the
ip_address or hostname. With this fix glusterd will use the UUID to
resolve the brick path.
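
A rough sketch of the idea (the names below are illustrative, not
glusterd's actual API): whether a brick is local is decided by
comparing its stored peer UUID against this node's UUID, which needs
no address resolution and therefore works before the network is up:

#include <stdbool.h>
#include <uuid/uuid.h>

static bool
brick_is_local (const uuid_t brick_peer_uuid, const uuid_t my_node_uuid)
{
        /* no getaddrinfo()/DNS lookup involved */
        return uuid_compare (brick_peer_uuid, my_node_uuid) == 0;
}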

Change-Id: Icfa7b2652417135530479d0aa4e2a82b0476f710
BUG: 1472267
Signed-off-by: Gaurav Yadav &lt;gyadav@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17813
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Prashanth Pai &lt;ppai@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem:
glusterd fails to start on nodes where it tries to come up even
before the network is up.

Fix:
On startup glusterd tries to resolve the brick path, which is based on
hostname/ip, but in the above scenario, when the network interface is
not up, glusterd is not able to resolve the brick path using the
ip_address or hostname. With this fix glusterd will use the UUID to
resolve the brick path.

Change-Id: Icfa7b2652417135530479d0aa4e2a82b0476f710
BUG: 1472267
Signed-off-by: Gaurav Yadav &lt;gyadav@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17813
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Prashanth Pai &lt;ppai@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>posix: option to handle the shared bricks for statvfs()</title>
<updated>2017-07-24T15:34:34+00:00</updated>
<author>
<name>Amar Tumballi</name>
<email>amarts@redhat.com</email>
</author>
<published>2017-06-23T07:40:56+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=febf5ed4848ad705a34413353559482417c61467'/>
<id>febf5ed4848ad705a34413353559482417c61467</id>
<content type='text'>
Currently the 'storage/posix' xlator has an option `export-statfs-size no`,
which exports zero as the value for a few fields in `struct statvfs`.
In the case of a backend brick shared between multiple brick processes,
the values of these fields should be
`field_value / number-of-bricks-at-node`. This way, issues such as
'min-free-disk' at different layers are also handled properly when the
statfs() sys call is made.
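
A sketch of the adjustment described above (the variable names are
assumptions, not the xlator's actual fields): with N brick processes
sharing one backend filesystem, the size/free fields are divided by N
before being returned:

#include <sys/statvfs.h>

static int
shared_brick_statvfs (const char *brick_path, unsigned int bricks_on_node,
                      struct statvfs *buf)
{
        int ret = statvfs (brick_path, buf);

        if (ret != 0 || bricks_on_node <= 1)
                return ret;

        buf->f_blocks /= bricks_on_node;   /* total data blocks */
        buf->f_bfree  /= bricks_on_node;   /* free blocks */
        buf->f_bavail /= bricks_on_node;   /* free blocks for unprivileged users */

        return ret;
}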

Fixes #241

Change-Id: I2e320e1fdcc819ab9173277ef3498201432c275f
Signed-off-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17618
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Currently the 'storage/posix' xlator has an option `export-statfs-size no`,
which exports zero as the value for a few fields in `struct statvfs`.
In the case of a backend brick shared between multiple brick processes,
the values of these fields should be
`field_value / number-of-bricks-at-node`. This way, issues such as
'min-free-disk' at different layers are also handled properly when the
statfs() sys call is made.

Fixes #241

Change-Id: I2e320e1fdcc819ab9173277ef3498201432c275f
Signed-off-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17618
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: (storhaug) remove ganesha</title>
<updated>2017-03-21T17:13:44+00:00</updated>
<author>
<name>Kaleb S. KEITHLEY</name>
<email>kkeithle@redhat.com</email>
</author>
<published>2017-02-01T11:39:03+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=843e1b04b554ab887ec656ae7b468bb93ee4e2f7'/>
<id>843e1b04b554ab887ec656ae7b468bb93ee4e2f7</id>
<content type='text'>
remove all vestiges of ganesha

The storhaug CLI is used to manage ganesha and Samba. Also any setup
and teardown of the ganesha HA is initiated using storhaug to preserve
the proper layering.

Change-Id: I0eec0016a1b7802a36e7b2d92896b86fdf8607d5
BUG: 1420713
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16504
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
remove all vestiges of ganesha

The storhaug CLI is used to manage ganesha and Samba. Also any setup
and teardown of the ganesha HA is initiated using storhaug to preserve
the proper layering.

Change-Id: I0eec0016a1b7802a36e7b2d92896b86fdf8607d5
BUG: 1420713
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16504
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>tier : Tier as a service</title>
<updated>2017-01-17T04:49:47+00:00</updated>
<author>
<name>hari gowtham</name>
<email>hgowtham@redhat.com</email>
</author>
<published>2016-07-12T11:10:28+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=3263d1c4f4b7efd1a018c17e1ba4dd9245094f48'/>
<id>3263d1c4f4b7efd1a018c17e1ba4dd9245094f48</id>
<content type='text'>
tierd is implemented by separating it from the rebalance process.

The commands affected:

1) Attach tier will trigger this process instead of the old one.
2) tier start and tier start force will also trigger this process.
3) volume status [tier] will show the tier daemon as a process instead
of a task, and normal tier status and tier detach status work.
4) tier stop is implemented.
5) detach tier is implemented separately along with a new detach tier
status.
6) volume tier volname status will work using the changes.
7) volume set works.

This patch has separated the tier translator from the legacy
DHT rebalance code. It now sends the RPCs from the CLI
to glusterd separately from the DHT rebalance code.
The daemon is now a service, similar to the snapshot daemon,
and can be viewed using the volume status command.

The code for the validation and commit phases is the same
as the earlier tier validation code in DHT rebalance.

The “brickop” phase has been changed so that the status
command can use this framework.

The service management framework is now used.
DHT rebalance does not use this framework.

This service framework takes care of:

*) spawning the daemon, killing it, and other such tasks.
*) volume set options, which are written to the volfile.
*) restart and reconfigure functions. Restart is used to restart
the daemon at two points:
        1) after gluster goes down and comes up.
        2) to stop detach tier.
*) reconfigure is used to make immediate volfile changes.
By doing this, we don’t restart the daemon.
It also has the code to rewrite the volfile for topological
changes (which come into play during add and remove brick).

With this patch the log, pid, and volfile are separated
and put into respective directories.
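
Very roughly, the service management framework describes each managed
daemon with a small object carrying start/stop/reconfigure hooks, and
tierd becomes one more instance of that object, like snapd (the names
below are invented for illustration, not glusterd's actual structs):

struct svc_example {
        const char *name;                              /* e.g. "tierd" */
        int (*start) (struct svc_example *svc);        /* spawn the daemon */
        int (*stop) (struct svc_example *svc, int sig);
        int (*reconfigure) (struct svc_example *svc);  /* volfile change, no restart */
};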

Change-Id: I3681d0d66894714b55aa02ca2a30ac000362a399
BUG: 1313838
Signed-off-by: hari gowtham &lt;hgowtham@redhat.com&gt;
Reviewed-on: http://review.gluster.org/13365
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: hari gowtham &lt;hari.gowtham005@gmail.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Dan Lambright &lt;dlambrig@redhat.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
tierd is implemented by separating it from the rebalance process.

The commands affected:

1) Attach tier will trigger this process instead of the old one.
2) tier start and tier start force will also trigger this process.
3) volume status [tier] will show the tier daemon as a process instead
of a task, and normal tier status and tier detach status work.
4) tier stop is implemented.
5) detach tier is implemented separately along with a new detach tier
status.
6) volume tier volname status will work using the changes.
7) volume set works.

This patch has separated the tier translator from the legacy
DHT rebalance code. It now sends the RPCs from the CLI
to glusterd separately from the DHT rebalance code.
The daemon is now a service, similar to the snapshot daemon,
and can be viewed using the volume status command.

The code for the validation and commit phases is the same
as the earlier tier validation code in DHT rebalance.

The “brickop” phase has been changed so that the status
command can use this framework.

The service management framework is now used.
DHT rebalance does not use this framework.

This service framework takes care of:

*) spawning the daemon, killing it, and other such tasks.
*) volume set options, which are written to the volfile.
*) restart and reconfigure functions. Restart is used to restart
the daemon at two points:
        1) after gluster goes down and comes up.
        2) to stop detach tier.
*) reconfigure is used to make immediate volfile changes.
By doing this, we don’t restart the daemon.
It also has the code to rewrite the volfile for topological
changes (which come into play during add and remove brick).

With this patch the log, pid, and volfile are separated
and put into respective directories.

Change-Id: I3681d0d66894714b55aa02ca2a30ac000362a399
BUG: 1313838
Signed-off-by: hari gowtham &lt;hgowtham@redhat.com&gt;
Reviewed-on: http://review.gluster.org/13365
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: hari gowtham &lt;hari.gowtham005@gmail.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Dan Lambright &lt;dlambrig@redhat.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: glusterd must store all rebalance related information</title>
<updated>2016-07-04T12:34:57+00:00</updated>
<author>
<name>Sakshi Bansal</name>
<email>sabansal@redhat.com</email>
</author>
<published>2016-06-29T06:39:06+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=0cd287189e5e9f876022a8c6481195bdc63ce5f8'/>
<id>0cd287189e5e9f876022a8c6481195bdc63ce5f8</id>
<content type='text'>
Change-Id: I8404b864a405411e3af2fbee46ca20330e656045
BUG: 1351021
Signed-off-by: Sakshi Bansal &lt;sabansal@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14827
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Change-Id: I8404b864a405411e3af2fbee46ca20330e656045
BUG: 1351021
Signed-off-by: Sakshi Bansal &lt;sabansal@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14827
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: persist brickinfo-&gt;real_path</title>
<updated>2016-04-29T16:17:22+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2016-04-26T09:57:43+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=f0fb05d2cefae08c143f2bfdef151084f5ddb498'/>
<id>f0fb05d2cefae08c143f2bfdef151084f5ddb498</id>
<content type='text'>
Since real_path was not persisted and is constructed at every glusterd
restart, glusterd will fail to come up if one of the bricks' underlying
file systems has crashed.

The solution is to construct real_path only once and persist it.
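
An illustrative sketch of the approach (struct and field names are
assumptions): resolve the path once when the brick is created and keep
the result, so a restart reuses the stored value instead of calling
realpath() on a possibly-crashed filesystem:

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

struct brickinfo_example {
        char path[PATH_MAX];
        char real_path[PATH_MAX];   /* persisted in the brick's store file */
};

static int
brick_fill_real_path_once (struct brickinfo_example *brick)
{
        char resolved[PATH_MAX];

        if (brick->real_path[0] != '\0')
                return 0;               /* already resolved and persisted */

        if (realpath (brick->path, resolved) == NULL)
                return -1;

        snprintf (brick->real_path, sizeof (brick->real_path), "%s", resolved);
        return 0;                       /* caller writes it to the store */
}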

Change-Id: I97abc30372c1ffbbb2d43b716d7af09172147b47
BUG: 1330481
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14075
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Since real_path was not persisted and is constructed at every glusterd
restart, glusterd will fail to come up if one of the bricks' underlying
file systems has crashed.

The solution is to construct real_path only once and persist it.

Change-Id: I97abc30372c1ffbbb2d43b716d7af09172147b47
BUG: 1330481
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14075
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>mgmt/glusterd: Store arbiter-count and restore it</title>
<updated>2015-11-04T21:39:32+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2015-10-30T10:26:14+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=1918d618a91fe9d9021916d53091173f4ad6b882'/>
<id>1918d618a91fe9d9021916d53091173f4ad6b882</id>
<content type='text'>
Problem:
1) Glusterd doesn't remember the arbiter information of a replica volume
   in the store. When glusterd goes down and comes back up, arbiter
   volumes will become plain replica volumes.

2) Glusterd doesn't import/export arbiter information to/from the other peers.

3) Volume info doesn't show any arbiter count in the output.

Fix:
1) Persist arbiter information in glusterd-store
2) Import/Export arbiter information of the volume
3) Change volume info output to show arbiter count.
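
With this change, the arbiter brick is reflected in the brick count
shown by volume info, roughly along these lines (illustrative layout,
not verbatim output):

Volume Name: testvol
Type: Replicate
Number of Bricks: 1 x (2 + 1) = 3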

Change-Id: I2db81e73d2694b01f7d07b08a17b41ad5a55c361
BUG: 1276675
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
Reviewed-on: http://review.gluster.org/12475
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem:
1) Glusterd doesn't remember the arbiter information of a replica volume
   in the store. When glusterd goes down and comes back up, arbiter
   volumes will become plain replica volumes.

2) Glusterd doesn't import/export arbiter information to/from the other peers.

3) Volume info doesn't show any arbiter count in the output.

Fix:
1) Persist arbiter information in glusterd-store
2) Import/Export arbiter information of the volume
3) Change volume info output to show arbiter count.

Change-Id: I2db81e73d2694b01f7d07b08a17b41ad5a55c361
BUG: 1276675
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
Reviewed-on: http://review.gluster.org/12475
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>quota: add version to quota xattrs</title>
<updated>2015-11-02T08:54:35+00:00</updated>
<author>
<name>vmallika</name>
<email>vmallika@redhat.com</email>
</author>
<published>2015-10-15T07:11:13+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=d90b87eed2fa68df5afcebbc54747e96f6d76cfb'/>
<id>d90b87eed2fa68df5afcebbc54747e96f6d76cfb</id>
<content type='text'>
When quota is disabled, the clean-up process may terminate without
completely cleaning up the quota xattrs. When quota is then enabled
again, this can mess up the accounting.

A version number is now suffixed to all quota xattrs, and this version
number is specific to the marker xlator, i.e. when quota xattrs are
requested by quotad/clients, marker will remove the version suffix from
the key before sending the response.
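
A minimal sketch of the suffixing scheme (the helper names below are
illustrative; the key shown is the usual quota size xattr): on-disk keys
carry a numeric version suffix, and marker strips it before handing the
key back to quotad/clients:

#include <stdio.h>
#include <string.h>

/* "trusted.glusterfs.quota.size" + version 2
 *   -> "trusted.glusterfs.quota.size.2" */
static void
quota_key_add_version (char *out, size_t outlen, const char *key, int version)
{
        snprintf (out, outlen, "%s.%d", key, version);
}

/* strip a trailing ".N" suffix before replying to quotad/clients */
static void
quota_key_strip_version (char *key)
{
        char *dot = strrchr (key, '.');

        if (dot && dot[1] >= '0' && dot[1] <= '9')
                *dot = '\0';
}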

Change-Id: I1ca2c11460645edba0f6b68db70d476d8d26e1eb
BUG: 1272411
Signed-off-by: vmallika &lt;vmallika@redhat.com&gt;
Reviewed-on: http://review.gluster.org/12386
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Manikandan Selvaganesh &lt;mselvaga@redhat.com&gt;
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
When quota is disabled, the clean-up process may terminate without
completely cleaning up the quota xattrs. When quota is then enabled
again, this can mess up the accounting.

A version number is now suffixed to all quota xattrs, and this version
number is specific to the marker xlator, i.e. when quota xattrs are
requested by quotad/clients, marker will remove the version suffix from
the key before sending the response.

Change-Id: I1ca2c11460645edba0f6b68db70d476d8d26e1eb
BUG: 1272411
Signed-off-by: vmallika &lt;vmallika@redhat.com&gt;
Reviewed-on: http://review.gluster.org/12386
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Manikandan Selvaganesh &lt;mselvaga@redhat.com&gt;
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
</pre>
</div>
</content>
</entry>
</feed>
