<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/rpc, branch v3.9rc0</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>io-stats: Add stats for upcall notifications</title>
<updated>2016-09-01T03:18:42+00:00</updated>
<author>
<name>Poornima G</name>
<email>pgurusid@redhat.com</email>
</author>
<published>2016-08-17T14:49:59+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=ee0d8ca53f685f8f27c93b3d7c808f2a78c1ae43'/>
<id>ee0d8ca53f685f8f27c93b3d7c808f2a78c1ae43</id>
<content type='text'>
With this patch, there will be additional entries seen in
the profile info:

UPCALL : Total number of upcall events that were sent from
         the brick (in the brick profile), and the number of upcall
         notifications received by the client (in the client profile)

Cache invalidation events:
-------------------------
CI_IATT : Number of upcalls that were cache invalidation and
         had one of the IATT_UPDATE_FLAGS set. This indicates
         that one of the iatt values changed.

CI_XATTR : Number of upcalls that were cache invalidation, and
         had either UP_XATTR or UP_XATTR_RM set. This indicates
         that an xattr was updated or deleted.

CI_RENAME : Number of upcalls that were cache invalidation,
         resulting from the rename of a file or directory.

CI_UNLINK : Number of upcalls that were cache invalidation,
         resulting from the unlink of a file.

CI_FORGET : Number of upcalls that were cache invalidation,
         resulting from the forget of an inode on the server side.

Lease events:
------------
LEASE_RECALL : Number of lease recalls sent by the brick (in the
         brick profile), and the number of lease recalls received
         by the client (in the client profile)

Note that the sum of CI_IATT, CI_XATTR, CI_RENAME, CI_UNLINK,
CI_FORGET and LEASE_RECALL may not be equal to UPCALL, because
each cache invalidation can carry multiple flags. For example:
- Every CI_XATTR will also have CI_IATT set.
- Every CI_UNLINK will also increment CI_IATT, as the link count is
an iatt attribute.

Also, UP_PARENT_DENTRY_FLAGS is currently not accounted for,
as CI_RENAME and CI_UNLINK will always have the flag
UP_PARENT_DENTRY_FLAGS set.
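
For reference, one way these counters could be viewed, assuming a
hypothetical volume named "testvol" with upcall/cache invalidation
in use:

  gluster volume profile testvol start   # "testvol" is an example volume name
  gluster volume profile testvol info    # the new UPCALL / CI_* / LEASE_RECALL entries appear here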

Change-Id: Ieb8cd21dde2c4c7618f12d025a5e5156f9cc0fe9
BUG: 1371543
Signed-off-by: Poornima G &lt;pgurusid@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15193
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd : Introduce reset brick</title>
<updated>2016-08-30T02:55:53+00:00</updated>
<author>
<name>Anuradha Talur</name>
<email>atalur@redhat.com</email>
</author>
<published>2016-08-22T17:22:03+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=936f8aeac3252951e7fa0cdaa5d260fad3bd5ea0'/>
<id>936f8aeac3252951e7fa0cdaa5d260fad3bd5ea0</id>
<content type='text'>
This command basically allows a replace-brick operation where the
src and dst bricks are the same.

Usage:
gluster v reset-brick &lt;volname&gt; &lt;hostname:brick-path&gt; start

This command kills the brick to be reset. Once it is run, the admin
can perform any manual operations needed, like configuring some
options for the brick. Once that is done, resetting the brick can be
completed with the following command.

gluster v reset-brick &lt;vname&gt; &lt;hostname:brick&gt; &lt;hostname:brick&gt; commit {force}

This does the actual resetting of the brick. The 'force' option should
be used when the brick already contains the volinfo id.

Problem: On doing a disk replacement of a brick in a replicate volume,
the following 2 scenarios may occur:

a) there is a chance that reads are served from this replaced-disk
   brick, which leads to empty reads.
b) potential data loss if the next writes succeed only on the replaced
   brick, and heal is done to the other bricks from this one.

Solution: After disk replacement, make sure that the reset-brick
command is run for that brick, so that pending markers are set for the
brick and it is not chosen as the source for reads and heal. But, as
of now, replace-brick for the same brick-path is not allowed. In order
to fix the above-mentioned problem, same-brick-path replace-brick is
needed.
With this patch, reset-brick commit {force} will be allowed even when
the source and destination &lt;hostname:brickpath&gt; are identical, as long as
1) the destination brick is not alive, and
2) the source and destination bricks have the same brick uuid and path.
Also, the destination brick after replace-brick will use the same port
as the source brick.
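
An illustrative sequence, assuming a hypothetical replica volume
"testvol" whose disk behind the brick myhost:/bricks/b1 was replaced:

  # "testvol" and myhost:/bricks/b1 are example names
  gluster volume reset-brick testvol myhost:/bricks/b1 start
  # ... replace/remount the disk and set any brick options needed ...
  gluster volume reset-brick testvol myhost:/bricks/b1 myhost:/bricks/b1 commit force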

Change-Id: I440b9e892ffb781ea4b8563688c3f85c7a7c89de
BUG: 1266876
Signed-off-by: Anuradha Talur &lt;atalur@redhat.com&gt;
Reviewed-on: http://review.gluster.org/12250
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Ashish Pandey &lt;aspandey@redhat.com&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
</content>
</entry>
<entry>
<title>rpc: dead code fix</title>
<updated>2016-08-29T16:24:38+00:00</updated>
<author>
<name>ankit</name>
<email>anraj@redhat.com</email>
</author>
<published>2016-08-24T16:04:35+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=ec5c0ca6768b33761aa438f9948c4a6daaf4ee3c'/>
<id>ec5c0ca6768b33761aa438f9948c4a6daaf4ee3c</id>
<content type='text'>
CID: 1357866

Change-Id: I117cfd455b05df8d2bd9c8df4a081ffe4d053d09
BUG: 789278
Signed-off-by: ankit &lt;anraj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15310
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: ankitraj
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</content>
</entry>
<entry>
<title>cli,rpc : Removing logically dead code</title>
<updated>2016-08-29T12:11:46+00:00</updated>
<author>
<name>Muthu-vigneshwaran</name>
<email>mvignesh@redhat.com</email>
</author>
<published>2016-08-11T11:37:29+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=ddcf0183adebe528a3b069358a1f856eeeae5857'/>
<id>ddcf0183adebe528a3b069358a1f856eeeae5857</id>
<content type='text'>
CID = 1357865, 1357866

Change-Id: I05eb345e9357d889872d259a600759392b188bb3
BUG: 789278
Signed-off-by: Muthu-vigneshwaran &lt;mvignesh@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15148
Tested-by: Muthu Vigneshwaran
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Prashanth Pai &lt;ppai@redhat.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Manikandan Selvaganesh &lt;mselvaga@redhat.com&gt;
</content>
</entry>
<entry>
<title>rpc: fix unused variable warnings/errors</title>
<updated>2016-08-29T12:00:49+00:00</updated>
<author>
<name>Kaleb S. KEITHLEY</name>
<email>kkeithle@redhat.com</email>
</author>
<published>2016-08-22T16:11:24+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=00ee35093917f57b97b523decee9c58050d35d32'/>
<id>00ee35093917f57b97b523decee9c58050d35d32</id>
<content type='text'>
http://review.gluster.org/14085 fixes the "leak" - via the
generated rpc/xdr headers - of pragmas that mask these warnings.

However, 14085 won't pass the smoke test until all the warnings are
fixed.

Change-Id: I20d91091bee0bf8f198a307ebba4b284bc3817ff
BUG: 1369124
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15240
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>glusterd/cli: cli to get local state representation from glusterd</title>
<updated>2016-08-26T15:23:37+00:00</updated>
<author>
<name>Samikshan Bairagya</name>
<email>samikshan@gmail.com</email>
</author>
<published>2016-07-07T15:03:02+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=4a3454753f6e4ddc309c8d1cb11a6e4e432c1da6'/>
<id>4a3454753f6e4ddc309c8d1cb11a6e4e432c1da6</id>
<content type='text'>
Currently there is no CLI that can be used to get the local state
representation of the cluster, as maintained in glusterd, in a
readable as well as parseable format.

The CLI added has the following usage:

 # gluster get-state [daemon] [odir &lt;path/to/output/dir&gt;] [file &lt;filename&gt;]

This dumps data points that reflect the local state
representation of the cluster as maintained in glusterd (no other
daemons are supported as of now) to a file inside the specified
output directory. The default output directory and filename are
/var/run/gluster and glusterd_state_&lt;timestamp&gt; respectively. The
option for specifying the daemon name leaves room to add support for
other daemons in the future. Following are the data points captured
as of now to represent the state from the local glusterd point of view:

 * Peer:
    - Primary hostname
    - uuid
    - state
    - connection status
    - List of hostnames

 * Volumes:
    - name, id, transport type, status
    - counts: bricks, snap, subvol, stripe, arbiter, disperse,
      redundancy
    - snapd status
    - quorum status
    - tiering related information
    - rebalance status
    - replace bricks status
    - snapshots

 * Bricks:
    - Path, hostname (this info is shown for all bricks)
    - port, rdma port, status, mount options, filesystem type and
      signed-in status for bricks running locally.

 * Services:
    - name, online status for initialised services

 * Others:
    - Base port, last allocated port
    - op-version
    - MYUUID
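
A minimal usage sketch, assuming a hypothetical output directory and
filename (both arguments are optional, as described above):

  # /tmp/gstate and gluster_state_dump are example values
  gluster get-state glusterd odir /tmp/gstate file gluster_state_dump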

Change-Id: I4a45cc5407ab92d8afdbbd2098ece851f7e3d618
BUG: 1353156
Signed-off-by: Samikshan Bairagya &lt;samikshan@gmail.com&gt;
Reviewed-on: http://review.gluster.org/14873
Reviewed-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>feature/bitrot: Ondemand scrub option for bitrot</title>
<updated>2016-08-25T21:39:38+00:00</updated>
<author>
<name>Kotresh HR</name>
<email>khiremat@redhat.com</email>
</author>
<published>2016-08-05T03:33:22+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=0b3e4130b576c11156d6327e4cc3c9310a74c143'/>
<id>0b3e4130b576c11156d6327e4cc3c9310a74c143</id>
<content type='text'>
The bitrot scrubber takes 'hourly/daily/biweekly/monthly'
as the values for 'scrub-frequency'. There is no way
to schedule the scrubbing when the admin wants it.

Ondemand scrubbing brings in the new option 'ondemand'
with which the admin can start scrubbing ondemand.
It starts the scrubbing immediately.

Ondemand scrubbing succeeds only if the scrubber
is in the 'Active (Idle)' state (waiting for its next frequency
cycle to start scrubbing). It is not entertained when
the scrubber is 'Paused' or already running.

Here is the command-line syntax:

gluster volume bitrot &lt;vol name&gt; scrub ondemand
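
For example, assuming a hypothetical volume named "testvol" with
bitrot detection enabled:

  # "testvol" is an example volume name
  gluster volume bitrot testvol scrub ondemand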

Change-Id: I84c28904367eed827a7dae8d6a535c14b28e9f4d
BUG: 1366195
Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15111
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Venky Shankar &lt;vshankar@redhat.com&gt;
</content>
</entry>
<entry>
<title>snapshot/cli: Fix snapshot status xml output</title>
<updated>2016-08-23T07:11:54+00:00</updated>
<author>
<name>Avra Sengupta</name>
<email>asengupt@redhat.com</email>
</author>
<published>2016-04-12T06:56:54+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=efbae0fef5399a8826782b02140f44edaea0dac3'/>
<id>efbae0fef5399a8826782b02140f44edaea0dac3</id>
<content type='text'>
snap status --xml errors out if a brick is down and
doesn't have a pid. The snap status cli handles this
by displaying "N/A" in such a scenario.
This patch handles the same in the xml output.

snap status &lt;snapname&gt; --xml fails as the writer is
not initialised for it. Use GF_SNAP_STATUS_TYPE_ITER
instead of GF_SNAP_STATUS_TYPE_SNAP for the status of all
snapshots, to differentiate between the two scenarios.

Added testcase volume-snapshot-xml.t to check
the xml output of all snapshot commands.
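
For reference, the command forms covered, assuming a hypothetical
snapshot named "snap1":

  # "snap1" is an example snapshot name
  gluster snapshot status --xml
  gluster snapshot status snap1 --xml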

Change-Id: I99563e8f3e84f1aaeabd865326bb825c44f5c745
BUG: 1325831
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14018
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
</content>
</entry>
<entry>
<title>socket: SSL_shutdown should be called before socket shutdown</title>
<updated>2016-08-16T19:46:45+00:00</updated>
<author>
<name>Rajesh Joseph</name>
<email>rjoseph@redhat.com</email>
</author>
<published>2016-08-02T15:30:27+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=79e006b31a1e6d71f1af02176f8e8acaed7f8cd2'/>
<id>79e006b31a1e6d71f1af02176f8e8acaed7f8cd2</id>
<content type='text'>
SSL_shutdown shuts down an active SSL connection, but we
were calling it after the underlying socket was already disconnected.

Change-Id: Ia943179d23395f42b942450dbcf26336d4dfc813
BUG: 1362602
Signed-off-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15072
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</content>
</entry>
<entry>
<title>rpc: fixing illegal memory access</title>
<updated>2016-08-11T13:19:58+00:00</updated>
<author>
<name>Muthu-vigneshwaran</name>
<email>mvignesh@redhat.com</email>
</author>
<published>2016-08-05T08:35:36+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=d7c7f87ee3f442ef24cbffb30df289bb26b20a5d'/>
<id>d7c7f87ee3f442ef24cbffb30df289bb26b20a5d</id>
<content type='text'>
CID: 1357876

Change-Id: I34eee0bf0367f963ddf6e9d6b28b7d3a9af12c90
BUG: 789278
Signed-off-by: Muthu-vigneshwaran &lt;mvignesh@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15094
Tested-by: Muthu Vigneshwaran
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</content>
</entry>
</feed>
