<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git, branch v3.11.0rc1</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>cluster/ec: return all node uuids from all subvolumes</title>
<updated>2017-05-22T22:41:18+00:00</updated>
<author>
<name>Xavier Hernandez</name>
<email>xhernandez@datalab.es</email>
</author>
<published>2017-05-12T07:23:47+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=c674cf28ab2de312e6bc53d65e82827667e0cad3'/>
<id>c674cf28ab2de312e6bc53d65e82827667e0cad3</id>
<content type='text'>
EC was returning the UUID of the brick with the smallest value. This
had the side effect of not evenly balancing the load between bricks
during rebalance operations.

This patch modifies the common functions that combine multiple subvolume
values into a single result to take into account the subvolume order
and, optionally, other subvolumes that could be damaged.

This makes it easier to add future features where brick order is
important. It also makes it possible to identify the originating brick
of each answer, in case some brick has a special meaning in the future.
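
A minimal sketch of the idea (illustrative types only, not the actual
EC structures): each combined answer stays tagged with the subvolume
that produced it, and combining walks the bricks in configured order.

  struct answer {
      int  subvol;     /* originating brick, in configured order */
      int  good;       /* 0 if the subvolume is damaged */
      char value[16];  /* the value being combined, e.g. a node UUID */
  };

  /* Combine in brick order instead of picking the smallest value, so
   * the origin of every surviving answer stays identifiable. */
  static int
  combine(struct answer *out, const struct answer *in, int n)
  {
      int j = 0;
      for (int i = 0; i &lt; n; i++)
          if (in[i].good)
              out[j++] = in[i];
      return j;        /* number of usable answers */
  }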

 &gt;Change-Id: Iee0a4da710b41224a6dc8e13fa8dcddb36c73a2f
 &gt;BUG: 1366817
 &gt;Signed-off-by: Xavier Hernandez &lt;xhernandez@datalab.es&gt;
 &gt;Reviewed-on: https://review.gluster.org/17297
 &gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
 &gt;NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
 &gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
 &gt;Reviewed-by: Ashish Pandey &lt;aspandey@redhat.com&gt;
 &gt;Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
 &gt;(cherry picked from commit bcc34ce05c1be76dae42838d55c15d3af5f80e48)

Change-Id: I055713c3c25b7ba99248be880414fb0e8f36a67e
BUG: 1451573
Signed-off-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17318
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>rda, glusterd: Change the max of rda-cache-limit to INFINITY</title>
<updated>2017-05-22T15:05:43+00:00</updated>
<author>
<name>Poornima G</name>
<email>pgurusid@redhat.com</email>
</author>
<published>2017-05-19T05:39:13+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=42fc1abdb41817b691cda87ddc7ea94129279475'/>
<id>42fc1abdb41817b691cda87ddc7ea94129279475</id>
<content type='text'>
Issue:
Before this patch, the max value of rda-cache-limit is 1GB.
When parallel-readdir is enabled, there will be many instances of
readdir-ahead, hence the effective rda-cache-limit depends on the
number of instances. E.g., on a volume with distribute count 4, the
effective rda-cache-limit with parallel-readdir enabled will be 4GB
instead of 1GB.
Consider the following sequence of operations:
- Enable parallel-readdir
- Set rda-cache-limit to, say, 3GB
- Disable parallel-readdir; this results in one instance of readdir-ahead
  and the rda-cache-limit max is back to 1GB, but the current value is
  3GB, so the mount stops working as 3GB &gt; max 1GB.

Solution:
To fix this, we could limit the cache to 1GB even when parallel-readdir
is enabled. But there is no need to limit the cache to 1GB; it
can be increased if the system has enough resources. Hence getting rid
of the rda-cache-limit max value is more apt. If we just change the
rda-cache-limit max to INFINITY, we will break older (&lt;3.11) clients
when rda-cache-limit is set to &gt; 1GB (as the older clients
still expect a value &lt; 1GB). To safely change the max value of
rda-cache-limit to INFINITY, add a check in glusterd to verify that all
the clients are &gt;= 3.11 if the value exceeds 1GB.
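
A minimal sketch of such a guard (hypothetical names; the real check
lives in glusterd's volume-set validation, and the op-version encoding
below is an assumption):

  #include &lt;stdint.h&gt;

  #define GB                (1024ULL * 1024 * 1024)
  #define OP_VERSION_3_11_0 31100   /* assumed op-version encoding */

  /* Allow rda-cache-limit &gt; 1GB only when every connected client
   * already understands the uncapped option range. */
  static int
  validate_rda_cache_limit(uint64_t new_limit, uint32_t min_client_op)
  {
      if (new_limit &lt;= 1 * GB)
          return 0;                 /* always safe */
      if (min_client_op &gt;= OP_VERSION_3_11_0)
          return 0;                 /* all clients are new enough */
      return -1;                    /* would break &lt;3.11 clients */
  }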

&gt;Reviewed-on: https://review.gluster.org/17338
&gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
&gt;NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;(cherry picked from commit e43b40296956d132c70ffa3aa07b0078733b39d4)

Change-Id: Id0cdda3b053287b659c7bf511b13db2e45b92032
BUG: 1453152
Signed-off-by: Poornima G &lt;pgurusid@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17354
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/dht: Fix crash in dht_selfheal_dir_setattr</title>
<updated>2017-05-22T14:17:58+00:00</updated>
<author>
<name>N Balachandran</name>
<email>nbalacha@redhat.com</email>
</author>
<published>2017-05-19T09:52:12+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=7ff24a07fa282f987b9e296a5d8308af7afe3443'/>
<id>7ff24a07fa282f987b9e296a5d8308af7afe3443</id>
<content type='text'>
Use a local variable to store the call count used in the for loop
around STACK_WIND, so as not to access local, which may be freed by
STACK_UNWIND after all fops return.
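
A self-contained sketch of the pattern (generic names, not the actual
DHT code): the loop bound is copied to the stack because the last
callback may STACK_UNWIND and free the shared local before the loop
ends.

  struct dht_local { int call_cnt; /* ... */ };

  /* wind() may complete synchronously; its callback path can free
   * local, so local-&gt;call_cnt must not be read on every iteration. */
  static void
  wind_to_all(struct dht_local *local, void (*wind)(int subvol))
  {
      int call_cnt = local-&gt;call_cnt;   /* stack copy taken once */
      for (int i = 0; i &lt; call_cnt; i++)
          wind(i);
  }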

&gt; BUG: 1452102
&gt; Signed-off-by: N Balachandran &lt;nbalacha@redhat.com&gt;
&gt; Reviewed-on: https://review.gluster.org/17343
&gt; Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
&gt; CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
(cherry picked from commit 17784aaa311494e4538c616f02bf95477ae781bc)
Change-Id: I24f49b6dbd29a2b706e388e2f6d5196c0f80afc5
BUG: 1453050
Signed-off-by: N Balachandran &lt;nbalacha@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17348
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: Don't spawn new glusterfsds on node reboot with brick-mux</title>
<updated>2017-05-22T14:17:23+00:00</updated>
<author>
<name>Samikshan Bairagya</name>
<email>samikshan@gmail.com</email>
</author>
<published>2017-05-16T09:37:21+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=671dfcd82f6a7c56fbcbfde33cba22c0b585a046'/>
<id>671dfcd82f6a7c56fbcbfde33cba22c0b585a046</id>
<content type='text'>
With brick multiplexing enabled, upon a node reboot new bricks were
not being attached to the first spawned brick process even though
there weren't any compatibility issues.

The reason for this is that upon glusterd restart after a node
reboot, since brick services aren't running, glusterd starts the
bricks in a "no-wait" mode. So after a brick process is spawned for
the first brick, there isn't enough time for the corresponding pid
file to get populated with a value before the compatibility check is
made for the next brick.

This commit solves this by iteratively waiting for the pidfile to be
populated in the brick compatibility comparison stage before checking
if the brick process is alive.
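
A rough sketch of such a wait (hypothetical retry bounds and helper
name; the real loop sits in glusterd's brick-compatibility path):

  #include &lt;stdio.h&gt;
  #include &lt;unistd.h&gt;
  #include &lt;sys/types.h&gt;

  /* Poll the pidfile until it holds a pid, for up to ~3 seconds. */
  static int
  wait_for_pidfile(const char *path, pid_t *pid)
  {
      for (int tries = 0; tries &lt; 30; tries++) {
          FILE *fp = fopen(path, "r");
          if (fp) {
              int val = 0;
              int ok = (fscanf(fp, "%d", &amp;val) == 1);
              fclose(fp);
              if (ok) {
                  *pid = val;   /* populated; caller checks liveness */
                  return 0;
              }
          }
          usleep(100000);       /* wait 100ms between attempts */
      }
      return -1;                /* pidfile never populated */
  }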

&gt; Reviewed-on: https://review.gluster.org/17307
&gt; Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
&gt; Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt; CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;

(cherry picked from commit 13e7b3b354a252ad4065f7b2f0f805c40a3c5d18)

Change-Id: Ibd1f8e54c63e4bb04162143c9d70f09918a44aa4
BUG: 1453086
Signed-off-by: Samikshan Bairagya &lt;samikshan@gmail.com&gt;
Reviewed-on: https://review.gluster.org/17351
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/afr: Return the list of node_uuids for the subvolume</title>
<updated>2017-05-19T13:20:57+00:00</updated>
<author>
<name>karthik-us</name>
<email>ksubrahm@redhat.com</email>
</author>
<published>2017-04-19T12:34:46+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=74aa9ab2f2f6b2514847457101642b359823fde5'/>
<id>74aa9ab2f2f6b2514847457101642b359823fde5</id>
<content type='text'>
Problem:
AFR was returning the node uuid of the first node for every file if
the replica set was healthy, which resulted in only one node
migrating all the files.

Fix:
With this patch AFR returns the list of node_uuids to the upper layer,
so that it can decide which node migrates which files, resulting in
improved performance. The ordering of node uuids is maintained based
on the ordering of the bricks. If a brick is down, then the node uuid
for that brick will be set to all zeros.
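
A self-contained sketch of the ordering and zeroing rule (hypothetical
helper; uuid_copy/uuid_clear are from libuuid):

  #include &lt;uuid/uuid.h&gt;

  /* Build the reply in brick order; a down brick contributes an
   * all-zero UUID so upper layers can skip it deterministically. */
  static void
  build_node_uuid_list(uuid_t *out, const uuid_t *node, const int *up,
                       int bricks)
  {
      for (int i = 0; i &lt; bricks; i++) {
          if (up[i])
              uuid_copy(out[i], node[i]);
          else
              uuid_clear(out[i]);   /* brick down: all zeros */
      }
  }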

&gt;Reviewed-on: https://review.gluster.org/17084
&gt; Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
&gt; Tested-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
&gt; Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt; CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
(cherry picked from commit 0a50167c0a8f950f5a1c76442b6c9abea466200d)

Change-Id: I73ee0f9898ae473584fdf487a2980d7a6db22f31
BUG: 1451573
Signed-off-by: karthik-us &lt;ksubrahm@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17336
Tested-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</content>
</entry>
<entry>
<title>performance/read-ahead: prevent stale data being returned to application.</title>
<updated>2017-05-18T14:53:06+00:00</updated>
<author>
<name>Raghavendra G</name>
<email>rgowdapp@redhat.com</email>
</author>
<published>2014-04-11T10:28:47+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=c1ee349754547b55aefc507f7bd056a8a2a6628d'/>
<id>c1ee349754547b55aefc507f7bd056a8a2a6628d</id>
<content type='text'>
Assume that an fd is shared by two application threads/processes.

T0 a read is triggered from app-thread t1 and the read call passes
   through write-behind.
T1 app-thread t2 issues a write. The page on which the read from t1 is
   waiting is marked stale.
T2 write-behind caches the write and indicates write completion to the
   application.
T3 app-thread t2 issues a read to the same region. Since there is
   already a page for that region (created as part of the read at T0),
   this read request waits on that page to be filled (though it is
   stale, which is a bug).
T4 the read (triggered at T0) completes from the brick (with the write
   still pending). Now both read requests from t1 and t2 are served
   this data (though the data is stale from app-thread t2's
   perspective - which is a bug).
T5 the write is flushed to the brick by write-behind.

The fix is to not serve data from a stale page, but instead initiate a
fresh read to the back-end.
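
A generic sketch of the rule (illustrative struct, not the actual
read-ahead page code): a waiting reader must not consume a page that
was invalidated while its fill was in flight.

  struct ra_page {
      int stale;   /* set when an overlapping write arrives */
      int ready;   /* set when the back-end read completes */
  };

  /* Serve cached data only from fresh pages; a stale page is dropped
   * and a fresh read is wound to the back-end instead. */
  static int
  page_can_serve(const struct ra_page *p)
  {
      return p-&gt;ready &amp;&amp; !p-&gt;stale;
  }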

&gt;Change-Id: Id6af733464fa41bb4e81fd29c7451c73d06453fb
&gt;BUG: 1414242
&gt;Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
&gt;Reviewed-on: https://review.gluster.org/7447
&gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;Reviewed-by: Csaba Henk &lt;csaba@redhat.com&gt;
&gt;NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt;Reviewed-by: Zhou Zhengping &lt;johnzzpcrystal@gmail.com&gt;
&gt;Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;

(cherry picked from commit 2ff39c5cbea6fbda0d7a442f55e6dc2a72efb171)
Change-Id: Id6af733464fa41bb4e81fd29c7451c73d06453fb
BUG: 1449311
Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17221
Reviewed-by: Zhou Zhengping &lt;johnzzpcrystal@gmail.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>afr: add gfid-mismatch-resolution-with-fav-child-policy.t to bad tests</title>
<updated>2017-05-17T23:32:34+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2017-05-15T04:45:20+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=7687ddbc6ace34226c826bbb3f29c11dcaf22112'/>
<id>7687ddbc6ace34226c826bbb3f29c11dcaf22112</id>
<content type='text'>
gfid-mismatch-resolution-with-fav-child-policy.t does a `TEST ls
$M0/f3` (line #170) to trigger healing of a file in gfid split-brain in
a rep-3 volume. But the code to trigger name heal of a gfid split-brain
file is not there yet. The test passes due to a lookup/stat on $M0,
which triggers a background entry self-heal (which has the code to heal
gfid split-brain files) that may or may not complete the heal before
line 170. If it doesn't, the lookup on f3 fails with EIO.

Add the .t to bad tests until Karthik's patch for CLI-based gfid
split-brain resolution fixes name heal as well.

&gt; BUG: 1450730
&gt; Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
&gt; Reviewed-on: https://review.gluster.org/17290

(cherry picked from commit ba0fc77947c9e873350b58a0e3e93ab51cc56b37)

BUG: 1451887
Change-Id: Iba6e9d81db386bc406aff1ecb6a18851f09bf7c0
Reviewed-on: https://review.gluster.org/17319
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
Tested-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>Fixes quota aux mount failure</title>
<updated>2017-05-17T23:26:20+00:00</updated>
<author>
<name>Sanoj Unnikrishnan</name>
<email>sunnikri@redhat.com</email>
</author>
<published>2017-03-22T09:32:12+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=683cb46bd90d0cda42e0dfd71f5a5afad818fbbd'/>
<id>683cb46bd90d0cda42e0dfd71f5a5afad818fbbd</id>
<content type='text'>
The aux mount is created on the first limit/remove_limit/list command
and it remains until the volume is stopped/deleted (or quota is
disabled), at which point we do a lazy unmount. If the process is
uncleanly terminated, the mount entry remains and we get a (Transport
disconnected) error on subsequent attempts to run quota
list/limit-usage/remove commands.

Second issue: there is also a risk of an inadvertent rm -rf on
/var/run/gluster causing data loss for the user. Ideally, /var/run is
a temp path for application use and should not cause any data loss on
persistent storage.

Solution:
1) Unmount the aux mount after each use.
2) Clean any stale mount before mounting.

One caveat with doing a mount/unmount on each command is that we cannot
use the same mount point for both list and limit commands.
The reason for this is that the list command needs the mount to be
accessible in the CLI after the response from glusterd, so it could be
unmounted by a limit command executed in parallel (had we used the same
mount point). Hence we use separate mount points for the list and limit
commands.
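
A simplified sketch of the lifecycle (hypothetical mount-point paths
and helper; umount2/MNT_DETACH are the Linux lazy-unmount API):

  #include &lt;sys/mount.h&gt;

  /* Distinct mount points, so a parallel limit command can never
   * unmount the mount a list command is still reading through. */
  #define QUOTA_LIST_MNT  "/var/run/gluster/quota_list_mnt"
  #define QUOTA_LIMIT_MNT "/var/run/gluster/quota_limit_mnt"

  static void
  cleanup_stale_mount(const char *mntpt)
  {
      /* Detach any mount left behind by an unclean exit; the caller
       * then mounts afresh and unmounts again after each command. */
      umount2(mntpt, MNT_DETACH);   /* lazy unmount; errors ignored */
  }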

&gt;Reviewed-on: https://review.gluster.org/16938
&gt;NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;Reviewed-by: Manikandan Selvaganesh &lt;manikandancs333@gmail.com&gt;
&gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
&gt;Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
&gt;(cherry picked from commit 2ae4b4058691b324535d802f4e6d24cce89a10e5)

Change-Id: I4f9e39da2ac2b65941399bffb6440db8a6ba59d0
BUG: 1449775
Signed-off-by: Sanoj Unnikrishnan &lt;sunnikri@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17240
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>nfs/nlm: remove lock request from the list after cancel</title>
<updated>2017-05-17T23:26:03+00:00</updated>
<author>
<name>Niels de Vos</name>
<email>ndevos@redhat.com</email>
</author>
<published>2017-01-13T12:02:23+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=55a23c268a2451f2f4d25c06c05e4c3bc0933d5d'/>
<id>55a23c268a2451f2f4d25c06c05e4c3bc0933d5d</id>
<content type='text'>
Once an NLM client cancels a lock request, it should be removed from the
list. The list can also be cleaned of unneeded entries once the client
no longer has any outstanding or granted lock/share requests.
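
A rough sketch of the cleanup (list_del_init/list_empty are gluster's
kernel-style list macros; the member and helper names here are
hypothetical):

  /* Unlink the cancelled request; if the client has nothing pending
   * or granted any more, release the client entry as well. */
  list_del_init(&amp;req-&gt;nlm_list);
  if (list_empty(&amp;client-&gt;pending) &amp;&amp; list_empty(&amp;client-&gt;granted))
      nlm_client_free(client);      /* hypothetical helper */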

Cherry picked from commit 71cb7f3eb4fb706aab7f83906592942a2ff2e924:
&gt; Change-Id: I2f2b666b627dcb52cddc6d5b95856e420b2b2e26
&gt; BUG: 1381970
&gt; Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
&gt; Reviewed-on: https://review.gluster.org/17188
&gt; Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
&gt; CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: jiffin tony Thottan &lt;jthottan@redhat.com&gt;

Change-Id: I2f2b666b627dcb52cddc6d5b95856e420b2b2e26
BUG: 1450377
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17268
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: jiffin tony Thottan &lt;jthottan@redhat.com&gt;
Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
</content>
</entry>
<entry>
<title>nfs/nlm: free the nlm_client upon RPC_DISCONNECT</title>
<updated>2017-05-17T23:25:59+00:00</updated>
<author>
<name>Niels de Vos</name>
<email>ndevos@redhat.com</email>
</author>
<published>2017-01-20T13:15:31+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=c71645532523191d3a364a8d1771e377db512e6b'/>
<id>c71645532523191d3a364a8d1771e377db512e6b</id>
<content type='text'>
When an NLM client disconnects, it should be removed from the list and
freed.
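
A rough sketch of the event hook (RPCSVC_EVENT_DISCONNECT is the
rpcsvc transport event; the notify signature and the free helper are
assumptions):

  static int
  nlm_rpcsvc_notify(rpcsvc_t *svc, void *mydata,
                    rpcsvc_event_t event, void *data)
  {
      /* On transport disconnect, forget the matching NLM client so
       * its entry is not leaked. */
      if (event == RPCSVC_EVENT_DISCONNECT)
          nlm_client_remove_and_free(data);   /* hypothetical */
      return 0;
  }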

&gt; Cherry picked from commit 6897ba5c51b29c05b270c447adb1a34cb8e61911:
&gt; Change-Id: Ib427c896bfcdc547a3aee42a652578ffd076e2ad
&gt; BUG: 1381970
&gt; Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
&gt; Reviewed-on: https://review.gluster.org/17189
&gt; Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
&gt; CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: jiffin tony Thottan &lt;jthottan@redhat.com&gt;

Change-Id: Ib427c896bfcdc547a3aee42a652578ffd076e2ad
BUG: 1450377
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17267
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: jiffin tony Thottan &lt;jthottan@redhat.com&gt;
Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
</content>
</entry>
</feed>
