<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/performance, branch release-3.12</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>glusterfsd: Memleak in glusterfsd process while brick mux is on</title>
<updated>2018-04-06T12:47:34+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawa@redhat.com</email>
</author>
<published>2018-02-10T06:55:15+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=479bea17e75d8e75a8901d01b3fd3627bfd8991c'/>
<id>479bea17e75d8e75a8901d01b3fd3627bfd8991c</id>
<content type='text'>
Problem: At the time of stopping the volume while brick multiplex is
         enabled, memory is not cleaned up from all server-side
         xlators.

Solution: To clean up memory for all server-side xlators, call fini
          in glusterfs_handle_terminate after sending the
          GF_EVENT_CLEANUP notification to the top xlator.
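
For illustration, a minimal C sketch of the shape of the fix; the
xlator type and the graph walk below are simplified stand-ins, not
the actual glusterfsd code:

    enum { GF_EVENT_CLEANUP = 1 };

    /* stand-in for xlator_t; the real graph and notify machinery
       live in libglusterfs */
    struct xlator {
            struct xlator *next;    /* next server-side xlator */
            int  (*notify) (struct xlator *xl, int event);
            void (*fini) (struct xlator *xl);
    };

    static void
    handle_terminate (struct xlator *top)
    {
            /* let the whole graph react to the cleanup event ... */
            top-&gt;notify (top, GF_EVENT_CLEANUP);

            /* ... then release the memory of every server-side
               xlator, which was previously leaked when a
               multiplexed brick was detached */
            for (struct xlator *xl = top; xl; xl = xl-&gt;next)
                    if (xl-&gt;fini)
                            xl-&gt;fini (xl);
    }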

&gt; BUG: 1544090
&gt; Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;
&gt; (cherry picked from commit 7c3cc485054e4ede1efb358552135b432fb7047a)

&gt;Note: All test cases were run in a separate build (https://review.gluster.org/19574)
&gt;      with the same patch after forcefully enabling brick mux; all
&gt;      test cases passed.

BUG: 1549473
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;
Change-Id: Ia10dc7f2605aa50f2b90b3fe4eb380ba9299e2fc
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem: At the time of stopping the volume while brick multiplex is
         enabled, memory is not cleaned up from all server-side
         xlators.

Solution: To clean up memory for all server-side xlators, call fini
          in glusterfs_handle_terminate after sending the
          GF_EVENT_CLEANUP notification to the top xlator.
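
For illustration, a minimal C sketch of the shape of the fix; the
xlator type and the graph walk below are simplified stand-ins, not
the actual glusterfsd code:

    enum { GF_EVENT_CLEANUP = 1 };

    /* stand-in for xlator_t; the real graph and notify machinery
       live in libglusterfs */
    struct xlator {
            struct xlator *next;    /* next server-side xlator */
            int  (*notify) (struct xlator *xl, int event);
            void (*fini) (struct xlator *xl);
    };

    static void
    handle_terminate (struct xlator *top)
    {
            /* let the whole graph react to the cleanup event ... */
            top-&gt;notify (top, GF_EVENT_CLEANUP);

            /* ... then release the memory of every server-side
               xlator, which was previously leaked when a
               multiplexed brick was detached */
            for (struct xlator *xl = top; xl; xl = xl-&gt;next)
                    if (xl-&gt;fini)
                            xl-&gt;fini (xl);
    }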

&gt; BUG: 1544090
&gt; Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;
&gt; (cherry picked from commit 7c3cc485054e4ede1efb358552135b432fb7047a)

&gt;Note: All test cases were run in a separate build (https://review.gluster.org/19574)
&gt;      with the same patch after forcefully enabling brick mux; all
&gt;      test cases passed.

BUG: 1549473
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;
Change-Id: Ia10dc7f2605aa50f2b90b3fe4eb380ba9299e2fc
</pre>
</div>
</content>
</entry>
<entry>
<title>performance/write-behind: fix bug while handling short writes</title>
<updated>2018-01-02T09:51:52+00:00</updated>
<author>
<name>Raghavendra G</name>
<email>rgowdapp@redhat.com</email>
</author>
<published>2017-12-22T06:32:09+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=cc9e239ce4909cbe8880ef083ae6be0b5bfa705c'/>
<id>cc9e239ce4909cbe8880ef083ae6be0b5bfa705c</id>
<content type='text'>
The variable "fulfilled" in wb_fulfill_short_write is not reset to 0
while handling each member of the list.

This has some interesting consequences:

* If we break from the loop while processing the last member of the
  list head-&gt;winds, req is reset to head as the list is a circular
  one. However, head is already fulfilled and can potentially be
  freed. So, we end up adding a freed request to the wb_inode-&gt;todo
  list. This is the RCA for the crash tracked by the bug associated
  with this patch (note that we saw the freed "holder" in the todo
  list).

* If we break from the loop while processing any member but the last
  of the list head-&gt;winds, req is set to the next member in the
  list, skipping the current request even though it is not entirely
  synced. This can lead to data corruption.

The fix is simple: we have to change the code to make sure
"fulfilled" reflects whether the current request is fulfilled and
does not carry over history from previous requests in the list.
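
As a standalone illustration of the bug pattern (the request type
below is a simplified stand-in for the real write-behind request
structures):

    #include &lt;stdio.h&gt;

    struct req { int expected; int synced; };

    static void
    fulfill_short_writes (struct req *reqs, int n)
    {
            int fulfilled = 0;

            for (int i = 0; i &lt; n; i++) {
                    fulfilled = 0;  /* the fix: reset per member;
                                       without this, a fulfilled
                                       previous request masks a
                                       short write here */
                    if (reqs[i].synced &gt;= reqs[i].expected)
                            fulfilled = 1;
                    if (!fulfilled) {
                            printf ("short write at req %d\n", i);
                            break;
                    }
            }
    }

    int
    main (void)
    {
            struct req reqs[] = { {10, 10}, {10, 4}, {10, 10} };
            fulfill_short_writes (reqs, 3);
            return 0;
    }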

&gt;Change-Id: Ia3d6988175a51c9e08efdb521a7b7938b01f93c8
&gt;BUG: 1528558
&gt;Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;

(cherry picked from commit 0bc22bef7f3c24663aadfb3548b348aa121e3047)
Change-Id: Ia3d6988175a51c9e08efdb521a7b7938b01f93c8
BUG: 1529095
Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The variable "fulfilled" in wb_fulfill_short_write is not reset to 0
while handling each member of the list.

This has some interesting consequences:

* If we break from the loop while processing the last member of the
  list head-&gt;winds, req is reset to head as the list is a circular
  one. However, head is already fulfilled and can potentially be
  freed. So, we end up adding a freed request to the wb_inode-&gt;todo
  list. This is the RCA for the crash tracked by the bug associated
  with this patch (note that we saw the freed "holder" in the todo
  list).

* If we break from the loop while processing any member but the last
  of the list head-&gt;winds, req is set to the next member in the
  list, skipping the current request even though it is not entirely
  synced. This can lead to data corruption.

The fix is simple: we have to change the code to make sure
"fulfilled" reflects whether the current request is fulfilled and
does not carry over history from previous requests in the list.
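
As a standalone illustration of the bug pattern (the request type
below is a simplified stand-in for the real write-behind request
structures):

    #include &lt;stdio.h&gt;

    struct req { int expected; int synced; };

    static void
    fulfill_short_writes (struct req *reqs, int n)
    {
            int fulfilled = 0;

            for (int i = 0; i &lt; n; i++) {
                    fulfilled = 0;  /* the fix: reset per member;
                                       without this, a fulfilled
                                       previous request masks a
                                       short write here */
                    if (reqs[i].synced &gt;= reqs[i].expected)
                            fulfilled = 1;
                    if (!fulfilled) {
                            printf ("short write at req %d\n", i);
                            break;
                    }
            }
    }

    int
    main (void)
    {
            struct req reqs[] = { {10, 10}, {10, 4}, {10, 10} };
            fulfill_short_writes (reqs, 3);
            return 0;
    }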

&gt;Change-Id: Ia3d6988175a51c9e08efdb521a7b7938b01f93c8
&gt;BUG: 1528558
&gt;Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;

(cherry picked from commit 0bc22bef7f3c24663aadfb3548b348aa121e3047)
Change-Id: Ia3d6988175a51c9e08efdb521a7b7938b01f93c8
BUG: 1529095
Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>md-cache: avoid checking the xattr value buffer with string functions.</title>
<updated>2017-11-09T09:07:06+00:00</updated>
<author>
<name>Günther Deschner</name>
<email>gd@samba.org</email>
</author>
<published>2017-10-09T16:02:25+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=535d677beabd4d6e06437c386ae80e41ae34f122'/>
<id>535d677beabd4d6e06437c386ae80e41ae34f122</id>
<content type='text'>
xattrs may very well contain binary, non-text data with leading 0
values. Using strcmp to check for empty values is not the appropriate
thing to do: in the best case, it might keep a binary xattr value
starting with 0 from being cached (and hence also from being reported
back via xattr). In the worst case, we might read beyond the end
of a data blob that does not contain any zero byte.

We fix this by checking the length of the data blob and, if the
length is one, checking the first byte against 0.
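
A hedged sketch of the check; the value type here is a stand-in for
the dict value type the real code operates on:

    struct value { char *data; int len; };

    /* strcmp-style tests misread binary values that start with a 0
       byte and can run past the end of a blob that has no
       terminator at all. Check the length first, then the byte: */
    static int
    is_empty_xattr_value (struct value *v)
    {
            return (v-&gt;len == 1 &amp;&amp; v-&gt;data[0] == '\0');
    }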

&gt; Signed-off-by: Guenther Deschner &lt;gd@samba.org&gt;
&gt; Pair-Programmed-With: Michael Adam &lt;obnox@samba.org&gt;
&gt; Change-Id: If723c465a630b8a37b6be58782a2724df7ac6b11
&gt; BUG: 1476324
&gt; Reviewed-on: https://review.gluster.org/17910
&gt; Reviewed-by: Michael Adam &lt;obnox@samba.org&gt;
&gt; Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Poornima G &lt;pgurusid@redhat.com&gt;
&gt; Tested-by: Poornima G &lt;pgurusid@redhat.com&gt;
&gt; CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; (cherry picked from commit ab4ffdac9dec1867f2d9b33242179cf2b347319d)

Change-Id: If723c465a630b8a37b6be58782a2724df7ac6b11
BUG: 1499892
Signed-off-by: Günther Deschner &lt;gd@samba.org&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
xattrs may very well contain binary, non-text data with leading 0
values. Using strcmp to check for empty values is not the appropriate
thing to do: in the best case, it might keep a binary xattr value
starting with 0 from being cached (and hence also from being reported
back via xattr). In the worst case, we might read beyond the end
of a data blob that does not contain any zero byte.

We fix this by checking the length of the data blob and, if the
length is one, checking the first byte against 0.
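
A hedged sketch of the check; the value type here is a stand-in for
the dict value type the real code operates on:

    struct value { char *data; int len; };

    /* strcmp-style tests misread binary values that start with a 0
       byte and can run past the end of a blob that has no
       terminator at all. Check the length first, then the byte: */
    static int
    is_empty_xattr_value (struct value *v)
    {
            return (v-&gt;len == 1 &amp;&amp; v-&gt;data[0] == '\0');
    }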

&gt; Signed-off-by: Guenther Deschner &lt;gd@samba.org&gt;
&gt; Pair-Programmed-With: Michael Adam &lt;obnox@samba.org&gt;
&gt; Change-Id: If723c465a630b8a37b6be58782a2724df7ac6b11
&gt; BUG: 1476324
&gt; Reviewed-on: https://review.gluster.org/17910
&gt; Reviewed-by: Michael Adam &lt;obnox@samba.org&gt;
&gt; Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Poornima G &lt;pgurusid@redhat.com&gt;
&gt; Tested-by: Poornima G &lt;pgurusid@redhat.com&gt;
&gt; CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; (cherry picked from commit ab4ffdac9dec1867f2d9b33242179cf2b347319d)

Change-Id: If723c465a630b8a37b6be58782a2724df7ac6b11
BUG: 1499892
Signed-off-by: Günther Deschner &lt;gd@samba.org&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>md-cache: Use correct xattr keynames for virtual glusterfs ACLs.</title>
<updated>2017-11-06T06:15:30+00:00</updated>
<author>
<name>Günther Deschner</name>
<email>gd@samba.org</email>
</author>
<published>2017-10-09T15:50:19+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=b0790bc6ee41fc8a4a9ee5591169a1d8eca3f20d'/>
<id>b0790bc6ee41fc8a4a9ee5591169a1d8eca3f20d</id>
<content type='text'>
The "glusterfs.posix_acl." prefix does not catch the glusterfs posix acl
xattr keynames which are

	* "glusterfs.posix.acl" and
	* "glusterfs.posix.default_acl"

Using the GF_POSIX_ACL_ACCESS and GF_POSIX_ACL_DEFAULT defines directly
is the savest option.
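
For illustration, a small standalone comparison; the defines are
assumed, per the text above, to expand to the real key names:

    #include &lt;stdio.h&gt;
    #include &lt;string.h&gt;

    #define GF_POSIX_ACL_ACCESS  "glusterfs.posix.acl"
    #define GF_POSIX_ACL_DEFAULT "glusterfs.posix.default_acl"

    int
    main (void)
    {
            const char *key    = GF_POSIX_ACL_ACCESS;
            const char *prefix = "glusterfs.posix_acl.";

            /* the prefix test never matches the actual key names */
            printf ("prefix match: %d\n",
                    strncmp (key, prefix, strlen (prefix)) == 0);

            /* comparing against the defines is unambiguous */
            printf ("exact match:  %d\n",
                    strcmp (key, GF_POSIX_ACL_ACCESS) == 0);
            return 0;
    }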

Guenther

&gt; BUG: 1476295
&gt; Signed-off-by: Guenther Deschner &lt;gd@samba.org&gt;
&gt; Reviewed-on: https://review.gluster.org/17909
&gt; Reviewed-by: Michael Adam &lt;obnox@samba.org&gt;
&gt; Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
&gt; Tested-by: Niels de Vos &lt;ndevos@redhat.com&gt;
&gt; CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; (cherry picked from commit 5fe8555800cbc9818e7c976f63499795a378cd8d)
&gt; Signed-off-by: Günther Deschner &lt;gd@samba.org&gt;

Change-Id: I5aba64b26b6cbec850ea02316dd9f069400e857f
BUG: 1499889
Signed-off-by: Günther Deschner &lt;gd@samba.org&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The "glusterfs.posix_acl." prefix does not catch the glusterfs posix acl
xattr keynames which are

	* "glusterfs.posix.acl" and
	* "glusterfs.posix.default_acl"

Using the GF_POSIX_ACL_ACCESS and GF_POSIX_ACL_DEFAULT defines directly
is the savest option.
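
For illustration, a small standalone comparison; the defines are
assumed, per the text above, to expand to the real key names:

    #include &lt;stdio.h&gt;
    #include &lt;string.h&gt;

    #define GF_POSIX_ACL_ACCESS  "glusterfs.posix.acl"
    #define GF_POSIX_ACL_DEFAULT "glusterfs.posix.default_acl"

    int
    main (void)
    {
            const char *key    = GF_POSIX_ACL_ACCESS;
            const char *prefix = "glusterfs.posix_acl.";

            /* the prefix test never matches the actual key names */
            printf ("prefix match: %d\n",
                    strncmp (key, prefix, strlen (prefix)) == 0);

            /* comparing against the defines is unambiguous */
            printf ("exact match:  %d\n",
                    strcmp (key, GF_POSIX_ACL_ACCESS) == 0);
            return 0;
    }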

Guenther

&gt; BUG: 1476295
&gt; Signed-off-by: Guenther Deschner &lt;gd@samba.org&gt;
&gt; Reviewed-on: https://review.gluster.org/17909
&gt; Reviewed-by: Michael Adam &lt;obnox@samba.org&gt;
&gt; Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
&gt; Tested-by: Niels de Vos &lt;ndevos@redhat.com&gt;
&gt; CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; (cherry picked from commit 5fe8555800cbc9818e7c976f63499795a378cd8d)
&gt; Signed-off-by: Günther Deschner &lt;gd@samba.org&gt;

Change-Id: I5aba64b26b6cbec850ea02316dd9f069400e857f
BUG: 1499889
Signed-off-by: Günther Deschner &lt;gd@samba.org&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>perf/qr: Use a ref-ed data to extract content</title>
<updated>2017-09-07T06:57:43+00:00</updated>
<author>
<name>N Balachandran</name>
<email>nbalacha@redhat.com</email>
</author>
<published>2017-08-30T04:31:54+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=a862ca969cfd4dc9309e8123e6e90c4e47f9b89b'/>
<id>a862ca969cfd4dc9309e8123e6e90c4e47f9b89b</id>
<content type='text'>
qr_content_extract used dict_get to get the value of
the GF_CONTENT_KEY key. dict_get does not ref the data
before returning it, so QR could be acting on freed memory
if another thread deletes the key in the meantime.
This patch also fixes a race in dict_get_with_ref.

Fix: Use dict_get_with_ref to retrieve the file contents.
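
A minimal refcounting model of why this matters (stand-in types; the
real calls are dict_get, dict_get_with_ref and data_unref in
libglusterfs):

    #include &lt;stdlib.h&gt;

    /* stand-in for a refcounted dict value */
    struct data { int refcount; char *bytes; };

    static void
    data_unref_model (struct data *d)
    {
            if (--d-&gt;refcount == 0) {
                    free (d-&gt;bytes);
                    free (d);
            }
    }

    /* a dict_get-style lookup returns the value without taking a
       reference, so a concurrent delete can free it under the
       caller. A dict_get_with_ref-style lookup hands the caller
       its own reference, to be dropped when done: */
    static int
    get_with_ref_model (struct data *slot, struct data **out)
    {
            if (!slot)
                    return -1;
            slot-&gt;refcount++;
            *out = slot;
            return 0;
    }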

&gt; BUG: 1484709
&gt; Signed-off-by: N Balachandran &lt;nbalacha@redhat.com&gt;
&gt; Reviewed-on: https://review.gluster.org/18115
&gt; Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
&gt; Tested-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
&gt; CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;

Change-Id: Ib1a7a70bb92eed7e70747ec530e0b3edc53127ec
BUG: 1486538
(cherry picked from commit 414d3e92fc56f08e320a3aa65b6b18e65b384551)
Signed-off-by: N Balachandran &lt;nbalacha@redhat.com&gt;
Reviewed-on: https://review.gluster.org/18145
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: MOHIT AGRAWAL &lt;moagrawa@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
qr_content_extract used dict_get to get the value of
the GF_CONTENT_KEY key. dict_get does not ref the data
before returning it, so QR could be acting on freed memory
if another thread deletes the key in the meantime.
This patch also fixes a race in dict_get_with_ref.

Fix: Use dict_get_with_ref to retrieve the file contents.
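
A minimal refcounting model of why this matters (stand-in types; the
real calls are dict_get, dict_get_with_ref and data_unref in
libglusterfs):

    #include &lt;stdlib.h&gt;

    /* stand-in for a refcounted dict value */
    struct data { int refcount; char *bytes; };

    static void
    data_unref_model (struct data *d)
    {
            if (--d-&gt;refcount == 0) {
                    free (d-&gt;bytes);
                    free (d);
            }
    }

    /* a dict_get-style lookup returns the value without taking a
       reference, so a concurrent delete can free it under the
       caller. A dict_get_with_ref-style lookup hands the caller
       its own reference, to be dropped when done: */
    static int
    get_with_ref_model (struct data *slot, struct data **out)
    {
            if (!slot)
                    return -1;
            slot-&gt;refcount++;
            *out = slot;
            return 0;
    }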

&gt; BUG: 1484709
&gt; Signed-off-by: N Balachandran &lt;nbalacha@redhat.com&gt;
&gt; Reviewed-on: https://review.gluster.org/18115
&gt; Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
&gt; Tested-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
&gt; CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;

Change-Id: Ib1a7a70bb92eed7e70747ec530e0b3edc53127ec
BUG: 1486538
(cherry picked from commit 414d3e92fc56f08e320a3aa65b6b18e65b384551)
Signed-off-by: N Balachandran &lt;nbalacha@redhat.com&gt;
Reviewed-on: https://review.gluster.org/18145
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: MOHIT AGRAWAL &lt;moagrawa@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>performance/io-cache: update inode contexts of each entry in readdirplus</title>
<updated>2017-07-31T17:30:41+00:00</updated>
<author>
<name>Raghavendra G</name>
<email>raghavendra@gluster.com</email>
</author>
<published>2013-05-17T07:22:32+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=6c10351a99005c358ccca8b6c4ee21e009618dd9'/>
<id>6c10351a99005c358ccca8b6c4ee21e009618dd9</id>
<content type='text'>
io-cache stores its read-cache in the inode context, which is
currently created only in lookup. But with readdirplus and md-cache
absorbing lookups, io-cache need not receive a lookup before a fop
like readv.

&gt;Change-Id: I6eba995b0a90d4d5055a4aef0489707b852da1b8
&gt;BUG: 1474180
&gt;Signed-off-by: Raghavendra G &lt;raghavendra@gluster.com&gt;
&gt;Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
&gt;Reviewed-on: https://review.gluster.org/5029
&gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;

(cherry picked from commit b90e12134af85635199750967c326761d6c06e86)
Change-Id: I6eba995b0a90d4d5055a4aef0489707b852da1b8
BUG: 1475635
Signed-off-by: Raghavendra G &lt;raghavendra@gluster.com&gt;
Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17889
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
io-cache stores its read-cache in the inode context, which is
currently created only in lookup. But with readdirplus and md-cache
absorbing lookups, io-cache need not receive a lookup before a fop
like readv.

&gt;Change-Id: I6eba995b0a90d4d5055a4aef0489707b852da1b8
&gt;BUG: 1474180
&gt;Signed-off-by: Raghavendra G &lt;raghavendra@gluster.com&gt;
&gt;Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
&gt;Reviewed-on: https://review.gluster.org/5029
&gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;

(cherry picked from commit b90e12134af85635199750967c326761d6c06e86)
Change-Id: I6eba995b0a90d4d5055a4aef0489707b852da1b8
BUG: 1475635
Signed-off-by: Raghavendra G &lt;raghavendra@gluster.com&gt;
Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17889
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>libglusterfs: Name threads on creation</title>
<updated>2017-07-19T14:16:19+00:00</updated>
<author>
<name>Raghavendra Talur</name>
<email>rtalur@redhat.com</email>
</author>
<published>2017-07-18T06:06:19+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=33db9aff1deaa028f30516e49fdb1e8d6e31bb73'/>
<id>33db9aff1deaa028f30516e49fdb1e8d6e31bb73</id>
<content type='text'>
Set names on threads at creation for easier
debugging.

Output of top -H -p &lt;PID-OF-GLUSTERFSD&gt;
Before:
19773 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19774 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19775 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19776 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19777 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19778 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19779 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19780 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19781 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19782 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19783 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19784 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19785 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.01 glusterfsd
19786 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.01 glusterfsd
19787 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.01 glusterfsd
19789 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19790 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
25178 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
 5398 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
 7881 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd

After:
19773 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19774 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glustertimer
19775 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19776 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glustermemsweep
19777 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glustersproc0
19778 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glustersproc1
19779 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterepoll0
19780 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusteridxwrker
19781 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusteriotwr0
19782 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterbrssign
19783 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterbrswrker
19784 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterclogecon
19785 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.01 glusterclogd0
19786 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.01 glusterclogd1
19787 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.01 glusterclogd2
19789 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterposixjan
19790 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterposixfsy
25178 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterepoll1
 5398 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterepoll2
 7881 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterposixhc
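
The underlying mechanism, as a minimal runnable example (plain
pthreads; the gluster wrapper that applies these names is not shown):

    #define _GNU_SOURCE
    #include &lt;pthread.h&gt;
    #include &lt;unistd.h&gt;

    static void *
    worker (void *arg)
    {
            (void) arg;
            sleep (60);     /* park so top -H can show the name */
            return NULL;
    }

    int
    main (void)
    {
            pthread_t t;

            pthread_create (&amp;t, NULL, worker, NULL);
            /* Linux caps thread names at 15 chars + NUL, which is
               why the names in the listing above are abbreviated */
            pthread_setname_np (t, "glustertimer");
            pthread_join (t, NULL);
            return 0;
    }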

Change-Id: Id5f333755c1ba168a2ffaa4fce6e71c375e10703
BUG: 1254002
Updates: #271
Signed-off-by: Raghavendra Talur &lt;rtalur@redhat.com&gt;
Reviewed-on: https://review.gluster.org/11926
Reviewed-by: Prashanth Pai &lt;ppai@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Set names on threads at creation for easier
debugging.

Output of top -H -p &lt;PID-OF-GLUSTERFSD&gt;
Before:
19773 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19774 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19775 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19776 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19777 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19778 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19779 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19780 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19781 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19782 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19783 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19784 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19785 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.01 glusterfsd
19786 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.01 glusterfsd
19787 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.01 glusterfsd
19789 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19790 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
25178 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
 5398 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
 7881 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd

After:
19773 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19774 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glustertimer
19775 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19776 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glustermemsweep
19777 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glustersproc0
19778 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glustersproc1
19779 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterepoll0
19780 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusteridxwrker
19781 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusteriotwr0
19782 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterbrssign
19783 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterbrswrker
19784 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterclogecon
19785 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.01 glusterclogd0
19786 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.01 glusterclogd1
19787 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.01 glusterclogd2
19789 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterposixjan
19790 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterposixfsy
25178 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterepoll1
 5398 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterepoll2
 7881 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterposixhc
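
The underlying mechanism, as a minimal runnable example (plain
pthreads; the gluster wrapper that applies these names is not shown):

    #define _GNU_SOURCE
    #include &lt;pthread.h&gt;
    #include &lt;unistd.h&gt;

    static void *
    worker (void *arg)
    {
            (void) arg;
            sleep (60);     /* park so top -H can show the name */
            return NULL;
    }

    int
    main (void)
    {
            pthread_t t;

            pthread_create (&amp;t, NULL, worker, NULL);
            /* Linux caps thread names at 15 chars + NUL, which is
               why the names in the listing above are abbreviated */
            pthread_setname_np (t, "glustertimer");
            pthread_join (t, NULL);
            return 0;
    }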

Change-Id: Id5f333755c1ba168a2ffaa4fce6e71c375e10703
BUG: 1254002
Updates: #271
Signed-off-by: Raghavendra Talur &lt;rtalur@redhat.com&gt;
Reviewed-on: https://review.gluster.org/11926
Reviewed-by: Prashanth Pai &lt;ppai@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>nl-cache: Fix a possible crash and stale cache</title>
<updated>2017-06-13T05:01:17+00:00</updated>
<author>
<name>Poornima G</name>
<email>pgurusid@redhat.com</email>
</author>
<published>2017-05-26T10:15:57+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=7674584fa53944a4e982e217798f31a3d1ef313b'/>
<id>7674584fa53944a4e982e217798f31a3d1ef313b</id>
<content type='text'>
Issue1:
Consider the following sequence of operations:
   ...
   nlc_ctx = nlc_ctx_get (inode i1)
   ....... -&gt; nlc_clear_cache (i1) gets called as a part of nlc_invalidate
                                   or any other callers
              ...
              GF_FREE (ii nlc_ctx)
   LOCK (nlc_ctx-&gt;lock);  -&gt;  This will result in a crash as the ctx
                              got freed in nlc_clear_cache.

Issue2:
   lookup on dir1/file1 results in ENOENT
   add cache to dir1 at time T1
   ....
   CHILD_DOWN at T2
   lookup on dir1/file2 results in ENOENT
   add cache to dir1, but the cache time is still T1
   lookup on dir1/file2 - should have been served from cache,
                          but the cache time is T1 &lt; T2, hence
                          the cache is considered invalid.
So, after CHILD_DOWN the right thing would be to clear the cache
and restart caching on that inode.

Solution:
Do not free nlc_ctx in nlc_clear_cache, but only in inode_forget().
The fixes for issue1 and issue2 are interleaved, hence they are sent
as a single patch.
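
A stand-in sketch of the lifetime rule the fix enforces (simplified;
not the actual nl-cache structures):

    #include &lt;pthread.h&gt;
    #include &lt;stdlib.h&gt;

    struct ctx { pthread_mutex_t lock; /* ... cached entries ... */ };

    static void
    clear_cache (struct ctx *c)
    {
            pthread_mutex_lock (&amp;c-&gt;lock);
            /* drop cached entries and reset timestamps, but do NOT
               free c itself: another thread may already hold the
               pointer and be about to take c-&gt;lock */
            pthread_mutex_unlock (&amp;c-&gt;lock);
    }

    static void
    on_forget (struct ctx *c)
    {
            /* the inode is going away and no concurrent users
               remain, so only here is it safe to free the ctx */
            pthread_mutex_destroy (&amp;c-&gt;lock);
            free (c);
    }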

Change-Id: I83d8ed36c049a93567c6d7e63d045dc14ccbb397
BUG: 1458539
Signed-off-by: Poornima G &lt;pgurusid@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17453
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Issue1:
Consider the following sequence of operations:
   ...
   nlc_ctx = nlc_ctx_get (inode i1)
   ....... -&gt; nlc_clear_cache (i1) gets called as a part of nlc_invalidate
                                   or any other callers
              ...
              GF_FREE (ii nlc_ctx)
   LOCK (nlc_ctx-&gt;lock);  -&gt;  This will result in a crash as the ctx
                              got freed in nlc_clear_cache.

Issue2:
   lookup on dir1/file1 results in ENOENT
   add cache to dir1 at time T1
   ....
   CHILD_DOWN at T2
   lookup on dir1/file2 results in ENOENT
   add cache to dir1, but the cache time is still T1
   lookup on dir1/file2 - should have been served from cache,
                          but the cache time is T1 &lt; T2, hence
                          the cache is considered invalid.
So, after CHILD_DOWN the right thing would be to clear the cache
and restart caching on that inode.

Solution:
Do not free nlc_ctx in nlc_clear_cache, but only in inode_forget().
The fixes for issue1 and issue2 are interleaved, hence they are sent
as a single patch.
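
A stand-in sketch of the lifetime rule the fix enforces (simplified;
not the actual nl-cache structures):

    #include &lt;pthread.h&gt;
    #include &lt;stdlib.h&gt;

    struct ctx { pthread_mutex_t lock; /* ... cached entries ... */ };

    static void
    clear_cache (struct ctx *c)
    {
            pthread_mutex_lock (&amp;c-&gt;lock);
            /* drop cached entries and reset timestamps, but do NOT
               free c itself: another thread may already hold the
               pointer and be about to take c-&gt;lock */
            pthread_mutex_unlock (&amp;c-&gt;lock);
    }

    static void
    on_forget (struct ctx *c)
    {
            /* the inode is going away and no concurrent users
               remain, so only here is it safe to free the ctx */
            pthread_mutex_destroy (&amp;c-&gt;lock);
            free (c);
    }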

Change-Id: I83d8ed36c049a93567c6d7e63d045dc14ccbb397
BUG: 1458539
Signed-off-by: Poornima G &lt;pgurusid@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17453
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>readdir-ahead: Fix duplicate listing and cache size calculation</title>
<updated>2017-06-12T10:18:24+00:00</updated>
<author>
<name>Poornima G</name>
<email>pgurusid@redhat.com</email>
</author>
<published>2017-06-12T05:29:04+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=e97c32ee9913969a726f8a8286cf714f907729d6'/>
<id>e97c32ee9913969a726f8a8286cf714f907729d6</id>
<content type='text'>
Issue:
If an opendir is followed by a closedir without a readdir, the
prefetched entries were freed but the freed size was not accounted
for in priv-&gt;rda_cache_size. Thus the cache limit will be exceeded
if there are multiple opendirs followed by closedirs.

Fix:
Fix the priv-&gt;rda_cache_size calculation. Also remove the
inode_ctx_size accounting: each perf xlator has its own cache limit
that it works with, and the inode_ctx size can change if a forget/
invalidate or any other factor triggers a change in it.
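
A sketch of the accounting invariant (stand-in types): every byte
added to the counter when entries are prefetched must be subtracted
when they are freed, including when a closedir follows an opendir
with no readdir in between:

    #include &lt;stddef.h&gt;

    struct priv { size_t cache_size; size_t cache_limit; };

    static void
    account_alloc (struct priv *p, size_t bytes)
    {
            p-&gt;cache_size += bytes;
    }

    static void
    account_free (struct priv *p, size_t bytes)
    {
            /* before the fix this subtraction was skipped on the
               closedir path, so the counter only ever grew and the
               limit was hit spuriously */
            p-&gt;cache_size -= bytes;
    }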

Change-Id: I9707ec558076ce046e58a55989ec9513c70ea029
BUG: 1431908
Signed-off-by: Poornima G &lt;pgurusid@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17504
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Issue:
If an opendir is followed by a closedir without a readdir, the
prefetched entries were freed but the freed size was not accounted
for in priv-&gt;rda_cache_size. Thus the cache limit will be exceeded
if there are multiple opendirs followed by closedirs.

Fix:
Fix the priv-&gt;rda_cache_size calculation. Also remove the
inode_ctx_size accounting: each perf xlator has its own cache limit
that it works with, and the inode_ctx size can change if a forget/
invalidate or any other factor triggers a change in it.
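
A sketch of the accounting invariant (stand-in types): every byte
added to the counter when entries are prefetched must be subtracted
when they are freed, including when a closedir follows an opendir
with no readdir in between:

    #include &lt;stddef.h&gt;

    struct priv { size_t cache_size; size_t cache_limit; };

    static void
    account_alloc (struct priv *p, size_t bytes)
    {
            p-&gt;cache_size += bytes;
    }

    static void
    account_free (struct priv *p, size_t bytes)
    {
            /* before the fix this subtraction was skipped on the
               closedir path, so the counter only ever grew and the
               limit was hit spuriously */
            p-&gt;cache_size -= bytes;
    }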

Change-Id: I9707ec558076ce046e58a55989ec9513c70ea029
BUG: 1431908
Signed-off-by: Poornima G &lt;pgurusid@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17504
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>md-cache: Fix the dump of stat inode in .meta and statedump</title>
<updated>2017-06-12T03:53:13+00:00</updated>
<author>
<name>Poornima G</name>
<email>pgurusid@redhat.com</email>
</author>
<published>2017-06-08T05:36:32+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=33b426aa9f62f6c8abb2b537b7a784c5b8090fb3'/>
<id>33b426aa9f62f6c8abb2b537b7a784c5b8090fb3</id>
<content type='text'>
Change-Id: If61ed5e4462e98d18a1370734a0bcee1ed94d82d
Signed-off-by: Poornima G &lt;pgurusid@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17491
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Change-Id: If61ed5e4462e98d18a1370734a0bcee1ed94d82d
Signed-off-by: Poornima G &lt;pgurusid@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17491
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
</pre>
</div>
</content>
</entry>
</feed>
