<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/tests/bugs/snapshot, branch v3.13.0beta</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>tests: Update tier CLI in .t files</title>
<updated>2017-10-30T15:50:44+00:00</updated>
<author>
<name>N Balachandran</name>
<email>nbalacha@redhat.com</email>
</author>
<published>2017-10-23T07:12:32+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=62dbefde4bc5c8d9dcfb10f3be5d349db341bb30'/>
<id>62dbefde4bc5c8d9dcfb10f3be5d349db341bb30</id>
<content type='text'>
Update .t tier tests to use the new tier CLI.
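
As an illustrative sketch (the brick names are hypothetical; $CLI, $V0,
$H0 and $B0 follow the test framework's conventions), a .t test moves
from the old attach-tier/detach-tier commands to the 'gluster volume
tier' subcommand:

	# old CLI used by the tests
	TEST $CLI volume attach-tier $V0 $H0:$B0/hot1 $H0:$B0/hot2
	TEST $CLI volume detach-tier $V0 start

	# new tier CLI
	TEST $CLI volume tier $V0 attach $H0:$B0/hot1 $H0:$B0/hot2
	TEST $CLI volume tier $V0 detach start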

Change-Id: I0e7f1769071108d8266fc86378c4466bcaf96e7d
BUG: 1505253
Signed-off-by: N Balachandran &lt;nbalacha@redhat.com&gt;
</content>
</entry>
<entry>
<title>snapshot: Issue with other processes accessing the mounted brick</title>
<updated>2017-10-23T10:05:02+00:00</updated>
<author>
<name>Sunny Kumar</name>
<email>sunkumar@redhat.com</email>
</author>
<published>2017-08-16T08:34:45+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=2b3b3edee2d849b4aee314048987dc995d9679a1'/>
<id>2b3b3edee2d849b4aee314048987dc995d9679a1</id>
<content type='text'>
Added code to unmount the activated snapshot brick during the snapshot
deactivation process, which makes sense as the mount point for a
deactivated brick should not exist.

Removed the code that mounted a newly created snapshot, as a newly
created snapshot should not be mounted until it is activated.

Added code for mount point creation and snapshot mount during snapshot
activation.
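
As a minimal sketch of the resulting lifecycle (snapshot and volume
names are illustrative):

	gluster snapshot create snap1 vol1 no-timestamp
	# no brick mount exists yet for snap1
	gluster snapshot activate snap1
	# mount point is created and the snapshot brick is mounted
	gluster snapshot deactivate snap1
	# the snapshot brick is unmounted and the mount point removed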

Added validation during glusterd init to mount only those snapshots
whose status is either STARTED or RESTORED.

During snapshot restore, the mount point for a stopped snap should
exist, as it is required to set an extended attribute.

During handshake, after getting updates from a friend, the mount point
should exist for an activated snapshot and should not exist for a
deactivated one.

While getting snap status we should show relevant information for
deactivated snapshots; after this patch the 'gluster snap status'
command will show output like:

Snap Name : snap1
Snap UUID : snap-uuid

	Brick Path        :   server1:/run/gluster/snaps/snap-vol-name/brick
	Volume Group      :   N/A (Deactivated Snapshot)
	Brick Running     :   No
	Brick PID         :   N/A
	Data Percentage   :   N/A
	LV Size           :   N/A

Fixes: #276

Change-Id: I65783488e35fac43632615ce1b8ff7b8e84834dc
BUG: 1482023
Signed-off-by: Sunny Kumar &lt;sunkumar@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: Gluster should keep PID file in correct location</title>
<updated>2017-08-11T07:36:41+00:00</updated>
<author>
<name>Gaurav Kumar Garg</name>
<email>garg.gaurav52@gmail.com</email>
</author>
<published>2016-03-02T12:12:07+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=220d406ad13d840e950eef001a2b36f87570058d'/>
<id>220d406ad13d840e950eef001a2b36f87570058d</id>
<content type='text'>
Currently, Gluster keeps the pid files of all daemons and brick
processes in the Gluster configuration directory
(i.e., /var/lib/glusterd/*).

These pid files should be separate from the configuration files;
deletion of the configuration directory might otherwise result in
serious problems. Also, /var/run/gluster is the default placeholder
directory for pid files.

So, with this fix, Gluster keeps the pid files of all processes under
the /var/run/gluster/* directory.
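
As a rough illustration (the per-volume layout below is an assumption,
not the precise naming scheme):

	# old location, alongside configuration:
	#   /var/lib/glusterd/vols/<volname>/run/*.pid
	# new location, with other runtime state:
	#   /var/run/gluster/*.pid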

Change-Id: Idb09e3fccb6a7355fbac1df31082637c8d7ab5b4
BUG: 1258561
Signed-off-by: Gaurav Kumar Garg &lt;ggarg@redhat.com&gt;
Signed-off-by: Saravanakumar Arumugam &lt;sarumuga@redhat.com&gt;
Reviewed-on: https://review.gluster.org/13580
Tested-by: MOHIT AGRAWAL &lt;moagrawa@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>Fixes quota aux mount failure</title>
<updated>2017-05-08T06:15:55+00:00</updated>
<author>
<name>Sanoj Unnikrishnan</name>
<email>sunnikri@redhat.com</email>
</author>
<published>2017-03-22T09:32:12+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=2ae4b4058691b324535d802f4e6d24cce89a10e5'/>
<id>2ae4b4058691b324535d802f4e6d24cce89a10e5</id>
<content type='text'>
The aux mount is created on the first limit/remove_limit/list command
and remains until the volume is stopped or deleted, or quota is
disabled, at which point we do a lazy unmount. If the process is
terminated uncleanly, the mount entry remains and we get a (Transport
disconnected) error on subsequent attempts to run quota
list/limit-usage/remove commands.
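
For context, a sketch of the commands involved (volume name and
directory are illustrative):

	# each of these quota commands uses the aux mount; with this fix,
	# each one mounts before use and unmounts afterwards
	gluster volume quota vol1 enable
	gluster volume quota vol1 limit-usage /dir 10GB
	gluster volume quota vol1 list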

A second issue: there is also a risk of an inadvertent rm -rf on
/var/run/gluster causing data loss for the user. Ideally, /var/run is
a temporary path for application use, and removing files there should
not cause any data loss to persistent storage.

Solution:
1) Unmount the aux mount after each use.
2) Clean up any stale mount before mounting.

One caveat with doing a mount/unmount on each command is that we cannot
use the same mount point for both the list and limit commands. The
reason is that the list command needs the mount to be accessible in the
CLI after the response from glusterd, so it could be unmounted by a
limit command executed in parallel (had we used the same mount point).
Hence we use separate mount points for the list and limit commands.

Change-Id: I4f9e39da2ac2b65941399bffb6440db8a6ba59d0
BUG: 1433906
Signed-off-by: Sanoj Unnikrishnan &lt;sunnikri@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16938
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Manikandan Selvaganesh &lt;manikandancs333@gmail.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: keep snapshot bricks separate from regular ones</title>
<updated>2017-02-10T13:16:48+00:00</updated>
<author>
<name>Jeff Darcy</name>
<email>jdarcy@redhat.com</email>
</author>
<published>2017-02-03T15:51:21+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=f1c6ae24361b1bf39794a34ea35a0202a6b49fa6'/>
<id>f1c6ae24361b1bf39794a34ea35a0202a6b49fa6</id>
<content type='text'>
The problem here is that a volume's transport options can change, but
any snapshots' bricks don't follow along even though they're now
incompatible (with respect to multiplexing).  This was causing the
USS+SSL test to fail.  By keeping the snapshot bricks separate
(though still potentially multiplexed with other snapshot bricks
including those for other volumes) we can ensure that they remain
unaffected by changes to their parent volumes.

Also fixed various issues with how the test waits (or more precisely
didn't) for various events to complete before it continues.
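
As a sketch of the waiting pattern (using the test framework's
EXPECT_WITHIN helper; the checked field is illustrative):

	# asserting immediately after an asynchronous operation is racy:
	EXPECT "Started" volinfo_field $V0 'Status'
	# polling until the event completes (or a timeout expires) is not:
	EXPECT_WITHIN $PROCESS_UP_TIMEOUT "Started" volinfo_field $V0 'Status'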

Change-Id: Iab4a8a44fac5760373fac36956a3bcc27cf969da
BUG: 1385758
Signed-off-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16544
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Tested-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
</content>
</entry>
<entry>
<title>tests: Fix spurious test failure in bug-1316437.t</title>
<updated>2016-12-16T14:36:46+00:00</updated>
<author>
<name>Rajesh Joseph</name>
<email>rjoseph@redhat.com</email>
</author>
<published>2016-12-15T15:21:30+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=e9d8525a0d34130ba2a582109937b8e79eecf6ab'/>
<id>e9d8525a0d34130ba2a582109937b8e79eecf6ab</id>
<content type='text'>
After sending SIGTERM to a gluster process, we immediately check
whether the process has exited. We should wait for some time before
checking the process state.
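
A minimal sketch of the idea (the helper and $pid below are
hypothetical, not the test's actual check):

	is_process_running () {
	    kill -0 "$1" 2>/dev/null && echo "Y" || echo "N"
	}
	TEST kill -TERM $pid
	# poll instead of checking immediately:
	EXPECT_WITHIN $PROCESS_DOWN_TIMEOUT "N" is_process_running $pid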

BUG: 1404573
Change-Id: Iaba0067f6e880a7fe38e11b9fa0fe9bd103b19e2
Signed-off-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Reviewed-on: http://review.gluster.org/16162
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: N Balachandran &lt;nbalacha@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>uss: snapd should enable SSL if SSL is enabled on volume</title>
<updated>2016-12-01T09:27:07+00:00</updated>
<author>
<name>Rajesh Joseph</name>
<email>rjoseph@redhat.com</email>
</author>
<published>2016-11-29T16:27:37+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=182f0d12040dab5081ca645a3f370f65cd68b528'/>
<id>182f0d12040dab5081ca645a3f370f65cd68b528</id>
<content type='text'>
During snapd graph generation we should check whether SSL is enabled
on the main volume. This is because clients communicate with snapd as
if they were communicating with a brick.
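
An illustrative setup in which this matters (volume name hypothetical):

	gluster volume set vol1 features.uss enable   # starts snapd
	gluster volume set vol1 client.ssl on
	gluster volume set vol1 server.ssl on
	# snapd must now speak SSL too, since clients treat it like a brick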

Change-Id: I0d7fe86c567b297a8528a48faf06161d4c3cb415
Signed-off-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
BUG: 1400013
Reviewed-on: http://review.gluster.org/15979
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: clean up old port and allocate new one on every restart</title>
<updated>2016-08-04T04:43:34+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2016-07-25T13:39:08+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=c3dee6d35326c6495591eb5bbf7f52f64031e2c4'/>
<id>c3dee6d35326c6495591eb5bbf7f52f64031e2c4</id>
<content type='text'>
Until now, GlusterD blindly assumed that a brick port which was already
allocated would be available for reuse, and that assumption is wrong.

Solution: On first attempt, we thought GlusterD should check whether
the already allocated brick ports are free and, if not, allocate a new
port and pass it to the daemon. But with that approach, if a
PMAP_SIGNOUT is missed, the stale port would be given back to clients,
whose connections would then keep failing. Given that port allocation
always starts from base_port, even if a new port is allocated for the
daemons every time, the port range stays under control. So this fix
cleans up the old port using pmap_registry_remove(), if any, and then
calls pmap_registry_alloc().
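
The observable effect, sketched with illustrative output (brick ports
are allocated upward from the default base_port, 49152):

	gluster volume status vol1   # Brick ... Port 49152
	# restart the brick, e.g. via volume stop/start
	gluster volume status vol1   # Brick ... Port 49153 (freshly allocated)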

Change-Id: If54a055d01ab0cbc06589dc1191d8fc52eb2c84f
BUG: 1221623
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15005
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
</content>
</entry>
<entry>
<title>snapshot/snapd: Don't display pid when snapd is offline</title>
<updated>2016-07-27T22:07:23+00:00</updated>
<author>
<name>Avra Sengupta</name>
<email>asengupt@redhat.com</email>
</author>
<published>2016-07-22T06:10:32+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=ec6925a379c7bee071df1638bc2751b266cee346'/>
<id>ec6925a379c7bee071df1638bc2751b266cee346</id>
<content type='text'>
We were previously reading the pidfile and displaying the pid even if
the snapd daemon was not running. To fix this, we re-assign the pid
value to -1 if snapd is offline.
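
The underlying idea, sketched in shell terms (the pidfile path is
hypothetical):

	pid=$(cat /var/run/gluster/snapd-vol1.pid 2>/dev/null)
	if ! kill -0 "$pid" 2>/dev/null; then
	    pid=-1   # snapd is offline; report -1 instead of a stale pid
	fi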

Change-Id: I4baff8d489fe9380061c52aea006db90fa421cd7
BUG: 1358244
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14981
Tested-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</content>
</entry>
<entry>
<title>tests: Moving ./tests/bugs/snapshot/bug-1316437.t to bad test</title>
<updated>2016-07-25T14:51:44+00:00</updated>
<author>
<name>Avra Sengupta</name>
<email>asengupt@redhat.com</email>
</author>
<published>2016-07-25T10:53:06+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=0d3e0da4fea3fa3acaab92a58d0b7c26bae765ff'/>
<id>0d3e0da4fea3fa3acaab92a58d0b7c26bae765ff</id>
<content type='text'>
Moving ./tests/bugs/snapshot/bug-1316437.t to bad test, while mulling
over the pros and cons of the fix. Will update the bug, as we go.
Sending this patch to unblock master.

Change-Id: Ia863312913686b4fa0ee0b63da13aedc0439a835
BUG: 1359717
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15001
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: N Balachandran &lt;nbalacha@redhat.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</content>
</entry>
</feed>
