<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators, branch v5.11</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>system/posix-acl: update ctx only if iatt is non-NULL</title>
<updated>2019-12-06T10:31:19+00:00</updated>
<author>
<name>Homma</name>
<email>homma@allworks.co.jp</email>
</author>
<published>2019-07-05T10:40:41+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=f0f042780e5e8796a6a8e8fd9935fdc7bd974527'/>
<id>f0f042780e5e8796a6a8e8fd9935fdc7bd974527</id>
<content type='text'>
We need to safeguard against possible zeroing out of the iatt
structure in the acl ctx, which can cause many issues.

Backport of:
&gt; fixes: bz#1668286
&gt; Change-Id: Ie81a57d7453a6624078de3be8c0845bf4d432773
&gt; Signed-off-by: Amar Tumballi &lt;amarts@redhat.com&gt;

(cherry-picked from commit 6bf9637a93011298d032332ca93009ba4e377e46)
fixes: bz#1767305
Change-Id: Ie81a57d7453a6624078de3be8c0845bf4d432773
Signed-off-by: Alan Orth &lt;alan.orth@gmail.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
We need to safeguard against possible zeroing out of the iatt
structure in the acl ctx, which can cause many issues.

Backport of:
&gt; fixes: bz#1668286
&gt; Change-Id: Ie81a57d7453a6624078de3be8c0845bf4d432773
&gt; Signed-off-by: Amar Tumballi &lt;amarts@redhat.com&gt;

(cherry-picked from commit 6bf9637a93011298d032332ca93009ba4e377e46)
fixes: bz#1767305
Change-Id: Ie81a57d7453a6624078de3be8c0845bf4d432773
Signed-off-by: Alan Orth &lt;alan.orth@gmail.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>Detach iot_worker to release its resources</title>
<updated>2019-11-05T08:25:48+00:00</updated>
<author>
<name>Liguang Li</name>
<email>liguang.lee6@gmail.com</email>
</author>
<published>2019-11-05T08:25:39+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=aa60bea1753bb744c892b669a3e693803b0db385'/>
<id>aa60bea1753bb744c892b669a3e693803b0db385</id>
<content type='text'>
When iot_worker terminates, its resources are not reaped, which
consumes lots of memory.

Detach iot_worker to automatically release its resources back to the
system.

Change-Id: I71fabb2940e76ad54dc56b4c41aeeead2644b8bb
fixes: bz#1718734
Signed-off-by: Liguang Li &lt;liguang.lee6@gmail.com&gt;
Signed-off-by: N Balachandran &lt;nbalacha@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
When iot_worker terminates, its resources are not reaped, which
consumes lots of memory.

Detach iot_worker to automatically release its resources back to the
system.

Change-Id: I71fabb2940e76ad54dc56b4c41aeeead2644b8bb
fixes: bz#1718734
Signed-off-by: Liguang Li &lt;liguang.lee6@gmail.com&gt;
Signed-off-by: N Balachandran &lt;nbalacha@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>cluster/afr: Heal entries when there is a source &amp; no healed_sinks</title>
<updated>2019-10-11T07:19:03+00:00</updated>
<author>
<name>karthik-us</name>
<email>ksubrahm@redhat.com</email>
</author>
<published>2019-09-05T10:44:50+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=11cbd2da4a065332d0b8eee624b654a9991a0170'/>
<id>11cbd2da4a065332d0b8eee624b654a9991a0170</id>
<content type='text'>
Problem:
In a situation where B1 blames B2, B2 blames B1, and B3 doesn't blame
anything for entry heal, the heal will not complete even though we have
a clear source and sinks. This happens because in
afr_selfheal_find_direction() only the bricks which are blamed by
non-accused bricks are considered as sinks. Later, in
__afr_selfheal_entry_finalize_source(), when it tries to mark all the
non-sources as sinks, it fails to do so because no healed_sinks are
marked, no witness is present, and there is a source.

Fix:
If there is a source and no healed_sinks, then reset all the locked
sources to 0 and healed_sinks to 1 to do a conservative merge.

Change-Id: If40d8bc95d52a52b2730f55bdcf135109b421548
Fixes: bz#1760710
Signed-off-by: karthik-us &lt;ksubrahm@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem:
In a situation where B1 blames B2, B2 blames B1, and B3 doesn't blame
anything for entry heal, the heal will not complete even though we have
a clear source and sinks. This happens because in
afr_selfheal_find_direction() only the bricks which are blamed by
non-accused bricks are considered as sinks. Later, in
__afr_selfheal_entry_finalize_source(), when it tries to mark all the
non-sources as sinks, it fails to do so because no healed_sinks are
marked, no witness is present, and there is a source.

Fix:
If there is a source and no healed_sinks, then reset all the locked
sources to 0 and healed_sinks to 1 to do a conservative merge.

Change-Id: If40d8bc95d52a52b2730f55bdcf135109b421548
Fixes: bz#1760710
Signed-off-by: karthik-us &lt;ksubrahm@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>afr/lookup: Pass xattr_req in while doing a selfheal in lookup</title>
<updated>2019-09-05T12:43:45+00:00</updated>
<author>
<name>Mohammed Rafi KC</name>
<email>rkavunga@redhat.com</email>
</author>
<published>2019-09-05T12:42:35+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=fcfcb0ad04402a34f07075d29981922a6dbc858c'/>
<id>fcfcb0ad04402a34f07075d29981922a6dbc858c</id>
<content type='text'>
We were not passing xattr_req when doing a name self-heal
or a metadata heal. Because of this, some xdata
was missing, which caused I/O errors.

Backport of &gt; https://review.gluster.org/#/c/glusterfs/+/23024/
&gt;Change-Id: Ibfb1205a7eb0195632dc3820116ffbbb8043545f
&gt;Fixes: bz#1728770
&gt;Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;

Change-Id: I740ccdbfcf2667fe4a850c12ecdc4b9eeed08293
Fixes: bz#1749352
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
We were not passing xattr_req when doing a name self-heal
or a metadata heal. Because of this, some xdata
was missing, which caused I/O errors.

Backport of &gt; https://review.gluster.org/#/c/glusterfs/+/23024/
&gt;Change-Id: Ibfb1205a7eb0195632dc3820116ffbbb8043545f
&gt;Fixes: bz#1728770
&gt;Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;

Change-Id: I740ccdbfcf2667fe4a850c12ecdc4b9eeed08293
Fixes: bz#1749352
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>protocol/client: propagate GF_EVENT_CHILD_PING only for connections to brick</title>
<updated>2019-08-09T05:03:46+00:00</updated>
<author>
<name>Raghavendra G</name>
<email>rgowdapp@redhat.com</email>
</author>
<published>2019-06-04T13:52:45+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=23e391b6d1db72563a0f8e239d7877c9839d49fd'/>
<id>23e391b6d1db72563a0f8e239d7877c9839d49fd</id>
<content type='text'>
Two reasons:
* Ping responses from glusterd may not be relevant for Halo
  replication. Instead, it might be interested only in knowing whether
  the brick itself is responsive.
* When a brick is killed, propagating GF_EVENT_CHILD_PING for the ping
  response from glusterd results in GF_EVENT_DISCONNECT being spuriously
  propagated to parent xlators. These DISCONNECT events are from the
  connections the client establishes with glusterd as part of its
  reconnect logic. Without GF_EVENT_CHILD_PING, the last event
  propagated to parent xlators would be the first DISCONNECT event
  from the brick, and hence subsequent DISCONNECTs to glusterd are not
  propagated, as protocol/client prevents the same event from being
  propagated to parent xlators consecutively. Propagating
  GF_EVENT_CHILD_PING for ping responses from glusterd would change
  last_sent_event to GF_EVENT_CHILD_PING, and hence protocol/client
  cannot prevent subsequent DISCONNECT events.

Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Fixes: bz#1739336
Change-Id: I50276680c52f05ca9e12149a3094923622d6eaef
(cherry picked from commit 5d66eafec581fb3209af74595784be8854ca40a4)
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Two reasons:
* Ping responses from glusterd may not be relevant for Halo
  replication. Instead, it might be interested only in knowing whether
  the brick itself is responsive.
* When a brick is killed, propagating GF_EVENT_CHILD_PING for the ping
  response from glusterd results in GF_EVENT_DISCONNECT being spuriously
  propagated to parent xlators. These DISCONNECT events are from the
  connections the client establishes with glusterd as part of its
  reconnect logic. Without GF_EVENT_CHILD_PING, the last event
  propagated to parent xlators would be the first DISCONNECT event
  from the brick, and hence subsequent DISCONNECTs to glusterd are not
  propagated, as protocol/client prevents the same event from being
  propagated to parent xlators consecutively. Propagating
  GF_EVENT_CHILD_PING for ping responses from glusterd would change
  last_sent_event to GF_EVENT_CHILD_PING, and hence protocol/client
  cannot prevent subsequent DISCONNECT events.

Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Fixes: bz#1739336
Change-Id: I50276680c52f05ca9e12149a3094923622d6eaef
(cherry picked from commit 5d66eafec581fb3209af74595784be8854ca40a4)
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: add GF_TRANSPORT_BOTH_TCP_RDMA in glusterd_get_gfproxy_client_volfile</title>
<updated>2019-07-18T06:26:14+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2019-06-11T04:22:06+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=c2e67ea0b7faa5f8a5875aaf7563e8658e392d96'/>
<id>c2e67ea0b7faa5f8a5875aaf7563e8658e392d96</id>
<content type='text'>
... without which volume creation fails with "volume create: &lt;xyz&gt;: failed:
Failed to create volume files"

&gt;Fixes: bz#1716812
&gt;Change-Id: I2f4c2c6d5290f066b54e1c1db19e25db9937bedb
&gt;Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;

Fixes: bz#1721106
Change-Id: I2f4c2c6d5290f066b54e1c1db19e25db9937bedb
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
... without which volume creation fails with "volume create: &lt;xyz&gt;: failed:
Failed to create volume files"

&gt;Fixes: bz#1716812
&gt;Change-Id: I2f4c2c6d5290f066b54e1c1db19e25db9937bedb
&gt;Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;

Fixes: bz#1721106
Change-Id: I2f4c2c6d5290f066b54e1c1db19e25db9937bedb
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>upcall: Avoid sending notifications for invalid inodes</title>
<updated>2019-07-04T06:27:48+00:00</updated>
<author>
<name>Soumya Koduri</name>
<email>skoduri@redhat.com</email>
</author>
<published>2019-06-07T14:03:07+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=6ed3b08223b9034ea4f488017e407d7c81b0b2d6'/>
<id>6ed3b08223b9034ea4f488017e407d7c81b0b2d6</id>
<content type='text'>
For nameless LOOKUPs, the server creates a new inode which remains
invalid until the fop is successfully processed, after which it is
linked to the inode table.

But in case there is an already linked inode for that entry, it
discards the newly created inode, which results in an upcall
notification. This may result in the client being bombarded with
unnecessary upcalls, affecting performance if the data set is huge.

This issue can be avoided by looking up and storing the upcall
context in the original linked inode (if it exists), thus saving up on
those extra callbacks.

This is a backport of the below mainline fix -
&gt; fixes: bz#1718338
&gt; patch url: https://review.gluster.org/#/c/glusterfs/+/22840/
&gt; Signed-off-by: Soumya Koduri &lt;skoduri@redhat.com&gt;

Change-Id: I044a1737819bb40d1a049d2f53c0566e746d2a17
fixes: bz#1720634
Signed-off-by: Soumya Koduri &lt;skoduri@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
For nameless LOOKUPs, the server creates a new inode which remains
invalid until the fop is successfully processed, after which it is
linked to the inode table.

But in case there is an already linked inode for that entry, it
discards the newly created inode, which results in an upcall
notification. This may result in the client being bombarded with
unnecessary upcalls, affecting performance if the data set is huge.

This issue can be avoided by looking up and storing the upcall
context in the original linked inode (if it exists), thus saving up on
those extra callbacks.

This is a backport of the below mainline fix -
&gt; fixes: bz#1718338
&gt; patch url: https://review.gluster.org/#/c/glusterfs/+/22840/
&gt; Signed-off-by: Soumya Koduri &lt;skoduri@redhat.com&gt;

Change-Id: I044a1737819bb40d1a049d2f53c0566e746d2a17
fixes: bz#1720634
Signed-off-by: Soumya Koduri &lt;skoduri@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>cluster/ec: honor contention notifications for partially acquired locks</title>
<updated>2019-06-28T11:07:08+00:00</updated>
<author>
<name>Xavi Hernandez</name>
<email>xhernandez@redhat.com</email>
</author>
<published>2019-05-09T09:07:18+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=257fee6312513498ee70765b9f343d5df77b1fce'/>
<id>257fee6312513498ee70765b9f343d5df77b1fce</id>
<content type='text'>
EC was ignoring lock contention notifications received while a lock was
being acquired. When a lock is partially acquired (some bricks have
granted the lock but some others not yet) we can receive notifications
from acquired bricks, which should be honored, since we may not receive
more notifications after that.

Since EC was ignoring them, once the lock was acquired, it was not
released until the eager-lock timeout, causing unnecessary delays on
other clients.

This fix takes into consideration the notifications received before
the full lock acquisition has completed. After that, the lock will
be released as soon as possible.

Backport of:
&gt; BUG: bz#1708156
&gt; Change-Id: I2a306dbdb29fb557dcab7788a258bd75d826cc12
&gt; Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;

Fixes: bz#1717282
Change-Id: I2a306dbdb29fb557dcab7788a258bd75d826cc12
Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
Signed-off-by: Hari Gowtham &lt;hgowtham@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
EC was ignoring lock contention notifications received while a lock was
being acquired. When a lock is partially acquired (some bricks have
granted the lock but some others not yet) we can receive notifications
from acquired bricks, which should be honored, since we may not receive
more notifications after that.

Since EC was ignoring them, once the lock was acquired, it was not
released until the eager-lock timeout, causing unnecessary delays on
other clients.

This fix takes into consideration the notifications received before
the full lock acquisition has completed. After that, the lock will
be released as soon as possible.

Backport of:
&gt; BUG: bz#1708156
&gt; Change-Id: I2a306dbdb29fb557dcab7788a258bd75d826cc12
&gt; Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;

Fixes: bz#1717282
Change-Id: I2a306dbdb29fb557dcab7788a258bd75d826cc12
Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
Signed-off-by: Hari Gowtham &lt;hgowtham@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>ec: fix truncate lock to cover the write in truncate clean</title>
<updated>2019-05-08T14:20:13+00:00</updated>
<author>
<name>Kinglong Mee</name>
<email>kinglongmee@gmail.com</email>
</author>
<published>2019-04-12T03:35:55+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=515866cd49aadf52c9949f8745fd2ba1ea9c99c3'/>
<id>515866cd49aadf52c9949f8745fd2ba1ea9c99c3</id>
<content type='text'>
ec_truncate_clean does its write under the lock granted for truncate,
but the lock range is calculated by ec_adjust_offset_up, so the write
in ec_truncate_clean falls outside the lock.

Updates: bz#1699500
Change-Id: I15ed1b0807d75c5eb817323f1c227e97d03e0e7c
Signed-off-by: Kinglong Mee &lt;kinglongmee@gmail.com&gt;
(cherry picked from commit 0e1223491e964096384edfae5032ed0d50d028ad)
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
ec_truncate_clean does its write under the lock granted for truncate,
but the lock range is calculated by ec_adjust_offset_up, so the write
in ec_truncate_clean falls outside the lock.

Updates: bz#1699500
Change-Id: I15ed1b0807d75c5eb817323f1c227e97d03e0e7c
Signed-off-by: Kinglong Mee &lt;kinglongmee@gmail.com&gt;
(cherry picked from commit 0e1223491e964096384edfae5032ed0d50d028ad)
</pre>
</div>
</content>
</entry>
<entry>
<title>performance/write-behind: remove request from wip list in wb_writev_cbk</title>
<updated>2019-05-08T14:07:56+00:00</updated>
<author>
<name>Raghavendra G</name>
<email>rgowdapp@redhat.com</email>
</author>
<published>2019-05-03T04:44:48+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=09cb7406c28052c4643b2cb6e0ab986331b9576b'/>
<id>09cb7406c28052c4643b2cb6e0ab986331b9576b</id>
<content type='text'>
There is a race in the way O_DIRECT writes are handled. Assume two
overlapping write requests w1 and w2.

* w1 is issued and is in wb_inode-&gt;wip queue as the response is still
  pending from bricks. Also wb_request_unref in wb_do_winds is not yet
  invoked.

        list_for_each_entry_safe (req, tmp, tasks, winds) {
                list_del_init (&amp;req-&gt;winds);

                if (req-&gt;op_ret == -1) {
                        call_unwind_error_keep_stub (req-&gt;stub, req-&gt;op_ret,
                                                     req-&gt;op_errno);
                } else {
                        call_resume_keep_stub (req-&gt;stub);
                }

                wb_request_unref (req);
        }

* w2 is issued and wb_process_queue is invoked. w2 is not picked up
  for winding as w1 is still in wb_inode-&gt;wip. w2 is added to the todo
  list and wb_writev for w2 returns.

* response to w1 is received and invokes wb_request_unref. Assume
  wb_request_unref in wb_do_winds (see point 1) is not invoked
  yet. Since there is one more refcount, wb_request_unref in
  wb_writev_cbk of w1 doesn't remove w1 from wip.

* wb_process_queue is invoked as part of wb_writev_cbk of w1. But, it
  fails to wind w2 as w1 is still in wip.

* wb_request_unref is invoked on w1 as part of wb_do_winds. w1 is
  removed from all queues, including wip.

* After this point there is no invocation of wb_process_queue unless
  a new request is issued from the application, causing w2 to hang till
  the next request.

This bug is similar to bz 1626780 and bz 1379655.

Change-Id: Iaa47437613591699d4c8ad18bc0b32de6affcc31
Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Fixes: bz#1707198
(cherry picked from commit 6454132342c0b549365d92bcf3572ecd914f7fa8)
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
There is a race in the way O_DIRECT writes are handled. Assume two
overlapping write requests w1 and w2.

* w1 is issued and is in wb_inode-&gt;wip queue as the response is still
  pending from bricks. Also wb_request_unref in wb_do_winds is not yet
  invoked.

        list_for_each_entry_safe (req, tmp, tasks, winds) {
                list_del_init (&amp;req-&gt;winds);

                if (req-&gt;op_ret == -1) {
                        call_unwind_error_keep_stub (req-&gt;stub, req-&gt;op_ret,
                                                     req-&gt;op_errno);
                } else {
                        call_resume_keep_stub (req-&gt;stub);
                }

                wb_request_unref (req);
        }

* w2 is issued and wb_process_queue is invoked. w2 is not picked up
  for winding as w1 is still in wb_inode-&gt;wip. w2 is added to the todo
  list and wb_writev for w2 returns.

* response to w1 is received and invokes wb_request_unref. Assume
  wb_request_unref in wb_do_winds (see point 1) is not invoked
  yet. Since there is one more refcount, wb_request_unref in
  wb_writev_cbk of w1 doesn't remove w1 from wip.

* wb_process_queue is invoked as part of wb_writev_cbk of w1. But, it
  fails to wind w2 as w1 is still in wip.

* wb_request_unref is invoked on w1 as part of wb_do_winds. w1 is
  removed from all queues, including wip.

* After this point there is no invocation of wb_process_queue unless
  a new request is issued from the application, causing w2 to hang till
  the next request.

This bug is similar to bz 1626780 and bz 1379655.

Change-Id: Iaa47437613591699d4c8ad18bc0b32de6affcc31
Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Fixes: bz#1707198
(cherry picked from commit 6454132342c0b549365d92bcf3572ecd914f7fa8)
</pre>
</div>
</content>
</entry>
</feed>
