<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/mgmt/glusterd/src/glusterd-snapshot-utils.h, branch v4.1.9</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>snapshot: fix several coverity issues in glusterd-snapshot.c</title>
<updated>2017-12-21T04:30:28+00:00</updated>
<author>
<name>Sunny Kumar</name>
<email>sunkumar@redhat.com</email>
</author>
<published>2017-12-20T08:00:39+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=4a06f851dcad6bdd730f3d2e12bd8f26709f27fe'/>
<id>4a06f851dcad6bdd730f3d2e12bd8f26709f27fe</id>
<content type='text'>
This patch fixes issues 157, 426, 428, 431, 432, 437, 439, 482 from [1].

[1] https://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-12-13-e255385a/html/

Change-Id: Iff9df12bd9802db29434155badb1beda045aba5b
BUG: 789278
Signed-off-by: Sunny Kumar &lt;sunkumar@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
This patch fixes issues 157, 426, 428, 431, 432, 437, 439, 482 from [1].

[1] https://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-12-13-e255385a/html/

Change-Id: Iff9df12bd9802db29434155badb1beda045aba5b
BUG: 789278
Signed-off-by: Sunny Kumar &lt;sunkumar@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>snapshot: Fix several coverity issues in glusterd-snapshot-utils.c</title>
<updated>2017-12-18T15:35:23+00:00</updated>
<author>
<name>Sunny Kumar</name>
<email>sunkumar@redhat.com</email>
</author>
<published>2017-12-13T12:56:36+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=42b8df4704c93b35c8b536074df87065ca8eb5c4'/>
<id>42b8df4704c93b35c8b536074df87065ca8eb5c4</id>
<content type='text'>
This patch fixes issues 622, 627, 630, 484, 32, 33 and 34 from [1]

[1] https://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-10-30-9aa574a5/html/

Change-Id: I4c7ac2b2725474d73643367b38f8bf33eaddd8da
BUG: 789278
Signed-off-by: Sunny Kumar &lt;sunkumar@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
This patch fixes issues 622, 627, 630, 484, 32, 33 and 34 from [1]

[1] https://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-10-30-9aa574a5/html/

Change-Id: I4c7ac2b2725474d73643367b38f8bf33eaddd8da
BUG: 789278
Signed-off-by: Sunny Kumar &lt;sunkumar@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>snapshot: Issue with other processes accessing the mounted brick</title>
<updated>2017-10-23T10:05:02+00:00</updated>
<author>
<name>Sunny Kumar</name>
<email>sunkumar@redhat.com</email>
</author>
<published>2017-08-16T08:34:45+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=2b3b3edee2d849b4aee314048987dc995d9679a1'/>
<id>2b3b3edee2d849b4aee314048987dc995d9679a1</id>
<content type='text'>
Added code to unmount an activated snapshot brick during the snapshot
deactivation process, which makes sense as the mount point for
deactivated bricks should not exist.

Removed code for mounting a newly created snapshot, as newly created
snapshots should not be mounted until they are activated.

Added code for mount point creation and snapshot mount during snapshot
activation.

Added validation during glusterd init so that only those snapshots
whose status is either STARTED or RESTORED are mounted.

During snapshot restore, the mount point for a stopped snap should
exist, as it is required to set an extended attribute.

During handshake, after getting updates from a friend, the mount point
for an activated snapshot should exist and should not exist for a
deactivated snapshot.

While getting snap status we should show relevant information for
deactivated snapshots; after this patch the 'gluster snap status'
command will show output like:

Snap Name : snap1
Snap UUID : snap-uuid

	Brick Path        :   server1:/run/gluster/snaps/snap-vol-name/brick
	Volume Group      :   N/A (Deactivated Snapshot)
	Brick Running     :   No
	Brick PID         :   N/A
	Data Percentage   :   N/A
	LV Size           :   N/A

Fixes: #276

Change-Id: I65783488e35fac43632615ce1b8ff7b8e84834dc
BUG: 1482023
Signed-off-by: Sunny Kumar &lt;sunkumar@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Added code to unmount an activated snapshot brick during the snapshot
deactivation process, which makes sense as the mount point for
deactivated bricks should not exist.

Removed code for mounting a newly created snapshot, as newly created
snapshots should not be mounted until they are activated.

Added code for mount point creation and snapshot mount during snapshot
activation.

Added validation during glusterd init so that only those snapshots
whose status is either STARTED or RESTORED are mounted.

During snapshot restore, the mount point for a stopped snap should
exist, as it is required to set an extended attribute.

During handshake, after getting updates from a friend, the mount point
for an activated snapshot should exist and should not exist for a
deactivated snapshot.

While getting snap status we should show relevant information for
deactivated snapshots; after this patch the 'gluster snap status'
command will show output like:

Snap Name : snap1
Snap UUID : snap-uuid

	Brick Path        :   server1:/run/gluster/snaps/snap-vol-name/brick
	Volume Group      :   N/A (Deactivated Snapshot)
	Brick Running     :   No
	Brick PID         :   N/A
	Data Percentage   :   N/A
	LV Size           :   N/A

Fixes: #276

Change-Id: I65783488e35fac43632615ce1b8ff7b8e84834dc
BUG: 1482023
Signed-off-by: Sunny Kumar &lt;sunkumar@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: (storhaug) remove ganesha</title>
<updated>2017-03-21T17:13:44+00:00</updated>
<author>
<name>Kaleb S. KEITHLEY</name>
<email>kkeithle@redhat.com</email>
</author>
<published>2017-02-01T11:39:03+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=843e1b04b554ab887ec656ae7b468bb93ee4e2f7'/>
<id>843e1b04b554ab887ec656ae7b468bb93ee4e2f7</id>
<content type='text'>
remove all vestiges of ganesha

The storhaug CLI is used to manage ganesha and Samba. Any setup and
teardown of the ganesha HA is also initiated using storhaug to preserve
the proper layering.

Change-Id: I0eec0016a1b7802a36e7b2d92896b86fdf8607d5
BUG: 1420713
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16504
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
remove all vestiges of ganesha

The storhaug CLI is used to manage ganesha and Samba. Any setup and
teardown of the ganesha HA is also initiated using storhaug to preserve
the proper layering.

Change-Id: I0eec0016a1b7802a36e7b2d92896b86fdf8607d5
BUG: 1420713
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16504
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>snapshot: Fix the failure to recreate clones with same name</title>
<updated>2016-11-02T06:28:26+00:00</updated>
<author>
<name>Avra Sengupta</name>
<email>asengupt@redhat.com</email>
</author>
<published>2016-10-20T07:28:16+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=9a2b3fb8b9ff28edafa012dacc5f5f0e4ee1afab'/>
<id>9a2b3fb8b9ff28edafa012dacc5f5f0e4ee1afab</id>
<content type='text'>
The brick path of snapshot clones contained the clone name,
thereby failing to create newer clones with the same name
after the original clone had been deleted.

This fix creates the brick path with the clone's vol id
instead of the clone's name. Hence future clones with the
same name will not have a namespace clash.

Change-Id: I262712adc576122f051b5d1ce171d020efaefd1a
BUG: 1387160
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15683
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The brick path of snapshot clones contained the clone name,
thereby failing to create newer clones with the same name
after the original clone had been deleted.

This fix creates the brick path with the clone's vol id
instead of the clone's name. Hence future clones with the
same name will not have a namespace clash.

Change-Id: I262712adc576122f051b5d1ce171d020efaefd1a
BUG: 1387160
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15683
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd/cli: cli to get local state representation from glusterd</title>
<updated>2016-08-26T15:23:37+00:00</updated>
<author>
<name>Samikshan Bairagya</name>
<email>samikshan@gmail.com</email>
</author>
<published>2016-07-07T15:03:02+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=4a3454753f6e4ddc309c8d1cb11a6e4e432c1da6'/>
<id>4a3454753f6e4ddc309c8d1cb11a6e4e432c1da6</id>
<content type='text'>
Currently there is no CLI that can be used to get the local state
representation of the cluster, as maintained in glusterd, in a
readable as well as parseable format.

The CLI added has the following usage:

 # gluster get-state [daemon] [odir &lt;path/to/output/dir&gt;] [file &lt;filename&gt;]

This would dump data points that reflect the local state
representation of the cluster as maintained in glusterd (no other
daemons are supported as of now) to a file inside the specified
output directory. The default output directory and filename is
/var/run/gluster and glusterd_state_&lt;timestamp&gt; respectively. The
option for specifying the daemon name leaves room to add support for
other daemons in the future. Following are the data points captured
as of now to represent the state from the local glusterd pov:

 * Peer:
    - Primary hostname
    - uuid
    - state
    - connection status
    - List of hostnames

 * Volumes:
    - name, id, transport type, status
    - counts: bricks, snap, subvol, stripe, arbiter, disperse,
 redundancy
    - snapd status
    - quorum status
    - tiering related information
    - rebalance status
    - replace bricks status
    - snapshots

 * Bricks:
    - Path, hostname (this info will be shown for all bricks)
    - port, rdma port, status, mount options, filesystem type and
signed in status for bricks running locally.

 * Services:
    - name, online status for initialised services

 * Others:
    - Base port, last allocated port
    - op-version
    - MYUUID

Change-Id: I4a45cc5407ab92d8afdbbd2098ece851f7e3d618
BUG: 1353156
Signed-off-by: Samikshan Bairagya &lt;samikshan@gmail.com&gt;
Reviewed-on: http://review.gluster.org/14873
Reviewed-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Currently there is no CLI that can be used to get the local state
representation of the cluster, as maintained in glusterd, in a
readable as well as parseable format.

The CLI added has the following usage:

 # gluster get-state [daemon] [odir &lt;path/to/output/dir&gt;] [file &lt;filename&gt;]

This would dump data points that reflect the local state
representation of the cluster as maintained in glusterd (no other
daemons are supported as of now) to a file inside the specified
output directory. The default output directory and filename is
/var/run/gluster and glusterd_state_&lt;timestamp&gt; respectively. The
option for specifying the daemon name leaves room to add support for
other daemons in the future. Following are the data points captured
as of now to represent the state from the local glusterd pov:

 * Peer:
    - Primary hostname
    - uuid
    - state
    - connection status
    - List of hostnames

 * Volumes:
    - name, id, transport type, status
    - counts: bricks, snap, subvol, stripe, arbiter, disperse,
 redundancy
    - snapd status
    - quorum status
    - tiering related information
    - rebalance status
    - replace bricks status
    - snapshots

 * Bricks:
    - Path, hostname (this info will be shown for all bricks)
    - port, rdma port, status, mount options, filesystem type and
signed in status for bricks running locally.

 * Services:
    - name, online status for initialised services

 * Others:
    - Base port, last allocated port
    - op-version
    - MYUUID

Change-Id: I4a45cc5407ab92d8afdbbd2098ece851f7e3d618
BUG: 1353156
Signed-off-by: Samikshan Bairagya &lt;samikshan@gmail.com&gt;
Reviewed-on: http://review.gluster.org/14873
Reviewed-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>snapshot : copying nfs-ganesha export file</title>
<updated>2015-10-30T20:39:42+00:00</updated>
<author>
<name>Jiffin Tony Thottan</name>
<email>jthottan@redhat.com</email>
</author>
<published>2015-08-27T17:56:40+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=5583bac79851d24f0a552478b361049fe63c32b7'/>
<id>5583bac79851d24f0a552478b361049fe63c32b7</id>
<content type='text'>
While taking a snapshot, the export file used by the volume should
be copied to the snap directory, so that when the snapshot is
restored the volume can retain all its configuration for exporting
via nfs-ganesha. The export file is stored at "/etc/ganesha/export"
in the following format: "export.&lt;volname&gt;.conf"

The fix handles the given cases in the following manner:

case a: nfs-ganesha (global) is ON during snapshot and restore.
        i.) Volume was exported during snapshot. When we restore the snapshot,
            the volume should be exported back with the old configuration file.
        ii.) Volume was unexported during snapshot. When we restore the snapshot,
             the volume should be unexported again.

case b: nfs-ganesha is ON during snapshot and OFF during restore.
        Volume was exported during snapshot. When we restore the snapshot, the
        conf will be copied to the corresponding location and, if nfs-ganesha
        is enabled again, the volume will be exported.

For clones, the export conf file will be created in /etc/ganesha/export and
the volume will then be exported via ganesha.

Change-Id: Ideecda15bd4db58e991cf6c8de7bb93f3db6cd20
BUG: 1257709
Signed-off-by: Jiffin Tony Thottan &lt;jthottan@redhat.com&gt;
Reviewed-on: http://review.gluster.org/12034
Reviewed-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
While taking a snapshot, the export file used by the volume should
be copied to the snap directory, so that when the snapshot is
restored the volume can retain all its configuration for exporting
via nfs-ganesha. The export file is stored at "/etc/ganesha/export"
in the following format: "export.&lt;volname&gt;.conf"

The fix handles the given cases in the following manner:

case a: nfs-ganesha (global) is ON during snapshot and restore.
        i.) Volume was exported during snapshot. When we restore the snapshot,
            the volume should be exported back with the old configuration file.
        ii.) Volume was unexported during snapshot. When we restore the snapshot,
             the volume should be unexported again.

case b: nfs-ganesha is ON during snapshot and OFF during restore.
        Volume was exported during snapshot. When we restore the snapshot, the
        conf will be copied to the corresponding location and, if nfs-ganesha
        is enabled again, the volume will be exported.

For clones, the export conf file will be created in /etc/ganesha/export and
the volume will then be exported via ganesha.

Change-Id: Ideecda15bd4db58e991cf6c8de7bb93f3db6cd20
BUG: 1257709
Signed-off-by: Jiffin Tony Thottan &lt;jthottan@redhat.com&gt;
Reviewed-on: http://review.gluster.org/12034
Reviewed-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>snapshot:cleanup snaps during unprobe</title>
<updated>2015-08-26T10:47:48+00:00</updated>
<author>
<name>Mohammed Rafi KC</name>
<email>rkavunga@redhat.com</email>
</author>
<published>2015-03-17T14:27:47+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=e883e98998404a9e1ef18516d88520cfe2451b3f'/>
<id>e883e98998404a9e1ef18516d88520cfe2451b3f</id>
<content type='text'>
When doing an unprobe, the volume that does not
contain any brick of the particular node will be
deleted. So the snaps associated with that volume
should also be deleted.

Change-Id: I9f3d23bd11b254ebf7d7722cc1e12455d6b024ff
BUG: 1203185
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
Reviewed-on: http://review.gluster.org/9930
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
When doing an unprobe, the volume that does not
contain any brick of the particular node will be
deleted. So the snaps associated with that volume
should also be deleted.

Change-Id: I9f3d23bd11b254ebf7d7722cc1e12455d6b024ff
BUG: 1203185
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
Reviewed-on: http://review.gluster.org/9930
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd/snapshot: Return correct errno in events of failure - PATCH 2</title>
<updated>2015-06-02T09:59:34+00:00</updated>
<author>
<name>Avra Sengupta</name>
<email>asengupt@redhat.com</email>
</author>
<published>2015-05-05T12:38:25+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=2df57ab7dc7b9d7deb0eebad96036149760d607b'/>
<id>2df57ab7dc7b9d7deb0eebad96036149760d607b</id>
<content type='text'>
ENUM           RETCODE        ERROR
-------------------------------------------------------------
EG_INTRNL      30800          Internal Error
EG_OPNOTSUP    30801          Gluster Op Not Supported
EG_ANOTRANS    30802          Another Transaction in Progress
EG_BRCKDWN     30803          One or more brick is down
EG_NODEDWN     30804          One or more node is down
EG_HRDLMT      30805          Hard Limit is reached
EG_NOVOL       30806          Volume does not exist
EG_NOSNAP      30807          Snap does not exist
EG_RBALRUN     30808          Rebalance is running
EG_VOLRUN      30809          Volume is running
EG_VOLSTP      30810          Volume is not running
EG_VOLEXST     30811          Volume exists
EG_SNAPEXST    30812          Snapshot exists
EG_ISSNAP      30813          Volume is a snap volume
EG_GEOREPRUN   30814          Geo-Replication is running
EG_NOTTHINP    30815          Bricks are not thinly provisioned

Change-Id: I49a170cdfd77df11fe677e09f4e063d99b159275
BUG: 1212413
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/10588
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
ENUM           RETCODE        ERROR
-------------------------------------------------------------
EG_INTRNL      30800          Internal Error
EG_OPNOTSUP    30801          Gluster Op Not Supported
EG_ANOTRANS    30802          Another Transaction in Progress
EG_BRCKDWN     30803          One or more brick is down
EG_NODEDWN     30804          One or more node is down
EG_HRDLMT      30805          Hard Limit is reached
EG_NOVOL       30806          Volume does not exist
EG_NOSNAP      30807          Snap does not exist
EG_RBALRUN     30808          Rebalance is running
EG_VOLRUN      30809          Volume is running
EG_VOLSTP      30810          Volume is not running
EG_VOLEXST     30811          Volume exists
EG_SNAPEXST    30812          Snapshot exists
EG_ISSNAP      30813          Volume is a snap volume
EG_GEOREPRUN   30814          Geo-Replication is running
EG_NOTTHINP    30815          Bricks are not thinly provisioned

Change-Id: I49a170cdfd77df11fe677e09f4e063d99b159275
BUG: 1212413
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/10588
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd/snapshot : While snapshot restore, compute quota checksum.</title>
<updated>2015-04-10T08:19:09+00:00</updated>
<author>
<name>Sachin Pandit</name>
<email>spandit@redhat.com</email>
</author>
<published>2015-03-16T07:42:12+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=da48df4e91b69b8f586d658de9573287cad2ce64'/>
<id>da48df4e91b69b8f586d658de9573287cad2ce64</id>
<content type='text'>
Problem : During snapshot restore we copy the quota conf file;
after that we need to compute the checksum for it. If not, there
might be a checksum mismatch during the glusterd handshake.

Solution : Compute a checksum file for the quota conf file if it is
present.

Change-Id: Ic4a6567c6ede9923443abf4ca59380679be88094
BUG: 1202436
Signed-off-by: Sachin Pandit &lt;spandit@redhat.com&gt;
Reviewed-on: http://review.gluster.org/9901
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Tested-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem : During snapshot restore we copy the quota conf file;
after that we need to compute the checksum for it. If not, there
might be a checksum mismatch during the glusterd handshake.

Solution : Compute a checksum file for the quota conf file if it is
present.

Change-Id: Ic4a6567c6ede9923443abf4ca59380679be88094
BUG: 1202436
Signed-off-by: Sachin Pandit &lt;spandit@redhat.com&gt;
Reviewed-on: http://review.gluster.org/9901
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Tested-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
</pre>
</div>
</content>
</entry>
</feed>
