<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/cli, branch v3.6.3</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>features/quota : Fix XML output for quota list command.</title>
<updated>2015-03-18T11:48:21+00:00</updated>
<author>
<name>Sachin Pandit</name>
<email>spandit@redhat.com</email>
</author>
<published>2015-02-02T23:31:38+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=286faf01e9ed4387ea36d9bb7e56f4917acd4e58'/>
<id>286faf01e9ed4387ea36d9bb7e56f4917acd4e58</id>
<content type='text'>
Sample output:
---------------

Sample 1)
----------
[root@snapshot-28 glusterfs]# gluster volume quota vol1 list /dir1 /dir4 /dir5 --xml
&lt;?xml version="1.0" encoding="UTF-8" standalone="yes"?&gt;
&lt;cliOutput&gt;
  &lt;opRet&gt;0&lt;/opRet&gt;
  &lt;opErrno&gt;0&lt;/opErrno&gt;
  &lt;opErrstr/&gt;
  &lt;volQuota&gt;
    &lt;limit&gt;
      &lt;path&gt;/dir1&lt;/path&gt;
      &lt;hard_limit&gt;10.0MB&lt;/hard_limit&gt;
      &lt;soft_limit&gt;80%&lt;/soft_limit&gt;
      &lt;used_space&gt;0Bytes&lt;/used_space&gt;
      &lt;avail_space&gt;10.0MB&lt;/avail_space&gt;
      &lt;hl_exceeded&gt;No&lt;/hl_exceeded&gt;
      &lt;sl_exceeded&gt;No&lt;/sl_exceeded&gt;
    &lt;/limit&gt;
    &lt;limit&gt;
      &lt;path&gt;/dir4&lt;/path&gt;
      &lt;path&gt;No such file or directory&lt;/path&gt;
    &lt;/limit&gt;
    &lt;limit&gt;
      &lt;path&gt;/dir5&lt;/path&gt;
      &lt;path&gt;No such file or directory&lt;/path&gt;
    &lt;/limit&gt;
  &lt;/volQuota&gt;
&lt;/cliOutput&gt;

Sample 2)
---------
gluster volume quota vol1 list --xml
&lt;?xml version="1.0" encoding="UTF-8" standalone="yes"?&gt;
&lt;cliOutput&gt;
  &lt;opRet&gt;0&lt;/opRet&gt;
  &lt;opErrno&gt;0&lt;/opErrno&gt;
  &lt;opErrstr/&gt;
  &lt;volQuota/&gt;
&lt;/cliOutput&gt;
&lt;?xml version="1.0" encoding="UTF-8" standalone="yes"?&gt;
&lt;cliOutput&gt;
  &lt;volQuota&gt;
    &lt;limit&gt;
      &lt;path&gt;/dir&lt;/path&gt;
      &lt;hard_limit&gt;10.0MB&lt;/hard_limit&gt;
      &lt;soft_limit&gt;80%&lt;/soft_limit&gt;
      &lt;used_space&gt;0Bytes&lt;/used_space&gt;
      &lt;avail_space&gt;10.0MB&lt;/avail_space&gt;
      &lt;hl_exceeded&gt;No&lt;/hl_exceeded&gt;
      &lt;sl_exceeded&gt;No&lt;/sl_exceeded&gt;
    &lt;/limit&gt;
    &lt;limit&gt;
      &lt;path&gt;/dir1&lt;/path&gt;
      &lt;hard_limit&gt;10.0MB&lt;/hard_limit&gt;
      &lt;soft_limit&gt;80%&lt;/soft_limit&gt;
      &lt;used_space&gt;0Bytes&lt;/used_space&gt;
      &lt;avail_space&gt;10.0MB&lt;/avail_space&gt;
      &lt;hl_exceeded&gt;No&lt;/hl_exceeded&gt;
      &lt;sl_exceeded&gt;No&lt;/sl_exceeded&gt;
    &lt;/limit&gt;
  &lt;/volQuota&gt;
&lt;/cliOutput&gt;
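
For reference (this sketch is not part of the patch), the fixed output
is straightforward to consume from a script. The Python sketch below
runs the Sample 1 command and distinguishes a normal limit entry from
the two-&lt;path&gt; error form shown for /dir4 and /dir5; the volume
and path names are the ones used in the samples, and a running gluster
cluster is assumed.

    import subprocess
    import xml.etree.ElementTree as ET

    # capture the --xml output shown in Sample 1
    out = subprocess.run(
        ["gluster", "volume", "quota", "vol1", "list", "/dir1", "--xml"],
        capture_output=True, text=True, check=True).stdout

    root = ET.fromstring(out)
    if root.findtext("opRet") != "0":
        raise RuntimeError(root.findtext("opErrstr") or "quota list failed")

    for limit in root.iter("limit"):
        paths = [p.text for p in limit.findall("path")]
        if len(paths) == 2:
            # error entries repeat &lt;path&gt; with the message as its
            # text, e.g. "No such file or directory" above
            print(paths[0], "--", paths[1])
        else:
            print(paths[0], "used", limit.findtext("used_space"),
                  "of", limit.findtext("hard_limit"))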

Change-Id: I8a8d83cff88f778e5ee01fbca07d9f94c412317a
BUG: 1200297
Signed-off-by: Sachin Pandit &lt;spandit@redhat.com&gt;
Reviewed-on: http://review.gluster.org/9481
Reviewed-by: Vijaikumar Mallikarjuna &lt;vmallika@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
Signed-off-by: Sachin Pandit &lt;spandit@redhat.com&gt;
Reviewed-on: http://review.gluster.org/9847
Reviewed-by: Raghavendra Bhat &lt;raghavendra@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd : release cluster wide locks in op-sm during failures</title>
<updated>2015-03-04T07:31:08+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2014-10-27T06:42:03+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=b646678334f4fab78883ecc1b993ec0cb1b49aba'/>
<id>b646678334f4fab78883ecc1b993ec0cb1b49aba</id>
<content type='text'>
glusterd's op-sm infrastructure has some loopholes in handling error
cases in the locking/unlocking phases, which end up leaving stale locks
that prevent further transactions from going through.

This patch still doesn't handle all possible unlocking error cases, as
the framework has neither a retry mechanism nor a lock timeout. For
example, if unlocking fails on one of the peers, the cluster-wide lock
is not released and no further transactions can be made until the
originator node, or the node where unlocking failed, is restarted.

Following test cases were executed (with the help of gdb) after applying this
patch:

* RPC times out in lock cbk
* Decoding of RPC response in lock cbk fails
* RPC response is received from unknown peer in lock cbk
* Setting peerinfo in dictionary fails while sending the lock request
  for the first peer in the list
* Setting peerinfo in dictionary fails while sending the lock request
  for the other peers
* Lock RPC could not be sent to peers

For all of the above test cases, the success criterion is that no stale
locks are left behind.
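
The rule this patch enforces can be sketched as follows (a simplified
Python illustration of the pattern only, with hypothetical peer
objects; glusterd's actual op-sm is C and callback-driven):

    def run_transaction(peers, op):
        # acquire the cluster-wide lock on every peer, remembering who
        # actually granted it
        locked = []
        try:
            for peer in peers:
                peer.lock()           # may fail: RPC timeout, bad reply ...
                locked.append(peer)
            op()
        finally:
            # success or failure, unlock every peer that was locked, so
            # that no stale lock outlives the transaction
            for peer in locked:
                try:
                    peer.unlock()     # best effort: no retry, no timeout
                except Exception:
                    pass              # the unhandled case described above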

Patch link : http://review.gluster.org/9012

Change-Id: Ia1550341c31005c7850ee1b2697161c9ca04b01a
BUG: 1179136
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/9012
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
Reviewed-on: http://review.gluster.org/9393
Reviewed-by: Raghavendra Bhat &lt;raghavendra@redhat.com&gt;
</content>
</entry>
<entry>
<title>afr : glfs-heal implementation</title>
<updated>2015-01-06T10:04:49+00:00</updated>
<author>
<name>Anuradha</name>
<email>atalur@redhat.com</email>
</author>
<published>2015-01-05T11:07:07+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=59ba78ae1461651e290ce72013786d828545d4c1'/>
<id>59ba78ae1461651e290ce72013786d828545d4c1</id>
<content type='text'>
Backport of http://review.gluster.org/6529 and
http://review.gluster.org/9119

Change-Id: Ie420efcb399b5119c61f448b421979c228b27b15
BUG: 1173528
Signed-off-by: Anuradha &lt;atalur@redhat.com&gt;
Reviewed-on: http://review.gluster.org/9335
Reviewed-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Raghavendra Bhat &lt;raghavendra@redhat.com&gt;
</content>
</entry>
<entry>
<title>rdma: Removing RDMA tech preview cli message.</title>
<updated>2015-01-06T09:48:38+00:00</updated>
<author>
<name>Mohammed Rafi KC</name>
<email>rkavunga@redhat.com</email>
</author>
<published>2014-11-18T09:28:20+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=2163e3e8d6184c6ae8f6bf5909389783b1bd40f4'/>
<id>2163e3e8d6184c6ae8f6bf5909389783b1bd40f4</id>
<content type='text'>
Backport of http://review.gluster.org/#/c/9141/

Creating an rdma or tcp,rdma volume used to display a warning message,
since RDMA was in tech preview. This patch removes that warning message
during volume creation.

Change-Id: If4adb22cb20e2ef8d32bc798a8002c3e8e23fbdd
BUG: 1166515
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
Reviewed-on: http://review.gluster.org/9180
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Humble Devassy Chirammal &lt;humble.devassy@gmail.com&gt;
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Reviewed-by: Raghavendra Bhat &lt;raghavendra@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd/snapshot: Snapshot should be deactivated when it is created.</title>
<updated>2014-12-19T06:52:22+00:00</updated>
<author>
<name>vmallika</name>
<email>vmallika@redhat.com</email>
</author>
<published>2014-10-28T06:55:43+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=df0ff94a64bd597e61f26a2a56297de7abf80a0f'/>
<id>df0ff94a64bd597e61f26a2a56297de7abf80a0f</id>
<content type='text'>
By default, a snapshot should be deactivated when it is created, and
this should be a configurable option.

The behaviour can be configured with the command below:
gluster snapshot config activate-on-create &lt;enable|disable&gt;
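
For context (not part of this change): a snapshot created while this
option is disabled stays inactive until it is brought online with the
existing activate command, e.g.

        gluster snapshot activate &lt;snapname&gt;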

Change-Id: I1911595c32beed43bb2fca4bf99f0d264b422513
BUG: 1170921
Signed-off-by: vmallika &lt;vmallika@redhat.com&gt;
Reviewed-on: http://review.gluster.org/8985
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
Signed-off-by: Sachin Pandit &lt;spandit@redhat.com&gt;
Reviewed-on: http://review.gluster.org/9241
Reviewed-by: Raghavendra Bhat &lt;raghavendra@redhat.com&gt;
</content>
</entry>
<entry>
<title>porting: OSX build fixes</title>
<updated>2014-10-23T19:09:49+00:00</updated>
<author>
<name>Harshavardhana</name>
<email>harsha@harshavardhana.net</email>
</author>
<published>2014-08-26T21:40:01+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=8ea1a4844975940013d8704f87ee137dcb27bfb5'/>
<id>8ea1a4844975940013d8704f87ee137dcb27bfb5</id>
<content type='text'>
 - xml build
 - do not redefine AT_SYMLINK_FOLLOW

Change-Id: I516b3713904a6bad946a30f76fe4821f2ac61fd3
BUG: 1130307
Signed-off-by: Harshavardhana &lt;harsha@harshavardhana.net&gt;
Reviewed-on: http://review.gluster.org/8970
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
</content>
</entry>
<entry>
<title>Do not hardcode umount(8) path, emulate lazy umount</title>
<updated>2014-10-03T15:01:29+00:00</updated>
<author>
<name>Emmanuel Dreyfus</name>
<email>manu@netbsd.org</email>
</author>
<published>2014-09-26T00:28:15+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=89de9adbf2b7d446abe9a27c8e384d205a996176'/>
<id>89de9adbf2b7d446abe9a27c8e384d205a996176</id>
<content type='text'>
1) Use a system-dependent macro for the umount(8) location instead of
relying on $PATH to find it, for the sake of security and portability.

2) Introduce gf_umount_lazy() to replace umount -l (-l for lazy)
invocations, which are only supported on Linux; on Linux the behavior
is unchanged. On other systems, we fork an external process (umountd)
that takes care of periodically attempting the unmount, and optionally
the rmdir.
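
The retry loop at the heart of umountd can be sketched as below (a
Python illustration only; the real helper is a small C program, and
the fork/daemonize step is omitted here):

    import os
    import subprocess
    import time

    # stand-in for the configure-time macro that fixes where umount(8)
    # lives, instead of searching $PATH at run time
    UMOUNT_CMD = "/sbin/umount"

    def umount_lazy(mountpoint, rmdir=False, interval=1.0):
        # keep retrying until the filesystem is no longer busy
        while subprocess.call([UMOUNT_CMD, mountpoint]) != 0:
            time.sleep(interval)
        if rmdir:
            os.rmdir(mountpoint)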

Backport of Ia91167c0652f8ddab85136324b08f87c5ac1edd51d

BUG: 1138897
Change-Id: I9d82c87e85af0dee79f2de39bc697c486b7103c8
Signed-off-by: Emmanuel Dreyfus &lt;manu@netbsd.org&gt;
Reviewed-on: http://review.gluster.org/8863
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Csaba Henk &lt;csaba@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
<entry>
<title>cli/snapshot : Add confirmation dialog to snapshot restore operation.</title>
<updated>2014-09-23T10:14:39+00:00</updated>
<author>
<name>Sachin Pandit</name>
<email>spandit@redhat.com</email>
</author>
<published>2014-08-25T00:12:38+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=40dcfefb73e3e00e7cbd22a44c3bd795612f356a'/>
<id>40dcfefb73e3e00e7cbd22a44c3bd795612f356a</id>
<content type='text'>
When restoring a volume, the user is not prompted for confirmation.
Since restoring a volume rolls back the data to a previous point in time,
there is the potential for updates to be lost.
Hence it is better to display a confirmation dialog during the
snapshot restore operation.

Change-Id: I7b23eaeb43ad2aafa508e2ca5750d9b0fc7d6e36
BUG: 1145092
Signed-off-by: Sachin Pandit &lt;spandit@redhat.com&gt;
Reviewed-on: http://review.gluster.org/8525
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijaikumar Mallikarjuna &lt;vmallika@redhat.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
Signed-off-by: Sachin Pandit &lt;spandit@redhat.com&gt;
Reviewed-on: http://review.gluster.org/8806
</content>
</entry>
<entry>
<title>cli/snapshot : update of a snapshot delete syntax in documentation.</title>
<updated>2014-09-23T10:13:40+00:00</updated>
<author>
<name>Sachin Pandit</name>
<email>spandit@redhat.com</email>
</author>
<published>2014-08-25T04:16:14+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=cd60de7b2fd8d40be83ee75773e703e6327c47db'/>
<id>cd60de7b2fd8d40be83ee75773e703e6327c47db</id>
<content type='text'>
Change-Id: Id1a4b9684a8dd5750ee6eed841e3d5195407fb7e
BUG: 1145084
Signed-off-by: Sachin Pandit &lt;spandit@redhat.com&gt;
Reviewed-on: http://review.gluster.org/8534
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
Signed-off-by: Sachin Pandit &lt;spandit@redhat.com&gt;
Reviewed-on: http://review.gluster.org/8805
Reviewed-by: Vijaikumar Mallikarjuna &lt;vmallika@redhat.com&gt;
</content>
</entry>
<entry>
<title>feature/snapshot : Interface to delete all snapshots belonging to a system as-well-as to a particular volume</title>
<updated>2014-09-23T09:01:04+00:00</updated>
<author>
<name>Sachin Pandit</name>
<email>spandit@redhat.com</email>
</author>
<published>2014-06-23T04:05:52+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=c5aa277ec26cd7cf4109bc8854af50a254edbbd9'/>
<id>c5aa277ec26cd7cf4109bc8854af50a254edbbd9</id>
<content type='text'>
Problem :
With the current design we can only delete a single snapshot at a time,
and the deletion of a volume which contains snapshots is not allowed.
Because of that, the user may be forced to delete all the snapshots
manually before being allowed to delete the volume.

Solution:
The following is the interface with which a user can delete all the
snapshots in a system, or all those belonging to a particular volume.

        Syntax : gluster snapshot delete all

        *To delete all the snapshots present in a system

        Syntax : gluster snapshot delete volume &lt;volname&gt;

        *To delete all the snapshots present in the specified volume.

========================================================================
Sample Output:

Case 1 : Deleting a single snapshot.
[root@snapshot-24 glusterfs]# gluster snapshot delete snap1
Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
snapshot delete: snap1: snap removed successfully

-----------------------------------------------------------------
Case 2 : Deleting all the snapshots in a Volume.
[root@snapshot-24 glusterfs]# gluster snapshot delete volume vol1
Volume (vol1) contains 9 snapshot(s).
Do you still want to continue and delete them?  (y/n) y
snapshot delete: snap2: snap removed successfully
snapshot delete: snap3: snap removed successfully
snapshot delete: snap4: snap removed successfully
snapshot delete: snap5: snap removed successfully
.
.
.

-----------------------------------------------------------------
Case 3 : Deleting all the snapshots in a system.
[root@snapshot-24 glusterfs]# gluster snapshot delete all
System contains 4 snapshot(s).
Do you still want to continue and delete them?  (y/n) y
snapshot delete: snap7: snap removed successfully
snapshot delete: snap8: snap removed successfully
snapshot delete: snap9: snap removed successfully
snapshot delete: snap10: snap removed successfully
========================================================================
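
When these commands are driven from a script rather than an
interactive terminal, the CLI's script mode can be used to skip the
(y/n) question; a minimal Python sketch, reusing vol1 from Case 2:

    import subprocess

    # --mode=script makes the gluster CLI answer prompts non-interactively
    subprocess.run(
        ["gluster", "--mode=script", "snapshot", "delete", "volume", "vol1"],
        check=True)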

Change-Id: Ifec8e128ab2011cbbba208376b9c92cfbe7d8d71
BUG: 1145083
Signed-off-by: Sachin Pandit &lt;spandit@redhat.com&gt;
Reviewed-on: http://review.gluster.org/8162
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-by: Raghavendra Bhat &lt;raghavendra@redhat.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
Signed-off-by: Sachin Pandit &lt;spandit@redhat.com&gt;
Reviewed-on: http://review.gluster.org/8798
Reviewed-by: Vijaikumar Mallikarjuna &lt;vmallika@redhat.com&gt;
</content>
</entry>
</feed>
