<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/features/shard, branch v4.0dev1</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>features/shard: Change default shard-block-size to 64MB</title>
<updated>2017-09-14T03:23:15+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2017-09-08T12:34:50+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=e4a59b384f5bbaaeb937a53cef64f4e388f85153'/>
<id>e4a59b384f5bbaaeb937a53cef64f4e388f85153</id>
<content type='text'>
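For context: this changes the default value of the features.shard-block-size
volume option to 64MB. Volumes that need a different shard size can still set
the option explicitly, e.g. (a usage sketch; &lt;volname&gt; is a placeholder):

gluster volume set &lt;volname&gt; features.shard-block-size 64MB
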
Change-Id: I55fa87e07136cff10b0d725ee24dd3151016e64e
BUG: 1489823
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Reviewed-on: https://review.gluster.org/18243
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Sunil Kumar Acharya &lt;sheggodu@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>features/shard: Increment counts in locks</title>
<updated>2017-09-06T02:31:29+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2017-09-05T08:00:53+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=e50fc8f4e7eb51386f47bea9e6ca8d8490c09003'/>
<id>e50fc8f4e7eb51386f47bea9e6ca8d8490c09003</id>
<content type='text'>
Problem:
Because create_count/eexist_count are incremented without locks, not all of
the shards may get created: call_count ends up lower than it needs to be.
This can lead to a crash in shard_common_inode_write_do(), because the inode
on which we want to do fd_anonymous() is NULL.

Fix:
Increment the counts while holding frame-&gt;lock.
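
A minimal sketch of the locking pattern (hedged: it assumes the counts live
in shard's per-fop local struct, which this message does not spell out):

/* Updates racing across parallel callbacks must happen under
 * frame-&gt;lock so that no increment is lost. */
LOCK (&amp;frame-&gt;lock);
{
        if (op_ret &lt; 0 &amp;&amp; op_errno == EEXIST)
                local-&gt;eexist_count++;  /* shard already exists */
        else
                local-&gt;create_count++;  /* shard to be created  */
}
UNLOCK (&amp;frame-&gt;lock);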

Change-Id: Ibc87dcb1021e9f4ac2929f662da07aa7662ab0d6
BUG: 1488354
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
Reviewed-on: https://review.gluster.org/18203
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>features/shard: Return aggregated size in stbuf of LINK fop</title>
<updated>2017-09-06T02:31:09+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2017-09-05T16:12:26+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=91430817ce5bcbeabf057e9c978485728a85fb2b'/>
<id>91430817ce5bcbeabf057e9c978485728a85fb2b</id>
<content type='text'>
Change-Id: I42df7679d63fec9b4c03b8dbc66c5625f097fac0
BUG: 1488546
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Reviewed-on: https://review.gluster.org/18209
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
</content>
</entry>
<entry>
<title>features/shard: Remove ctx from LRU in shard_forget</title>
<updated>2017-06-30T05:05:15+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2017-06-28T03:40:53+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=97defef2375b911c7b6a3924c242ba8ef4593686'/>
<id>97defef2375b911c7b6a3924c242ba8ef4593686</id>
<content type='text'>
Problem:
There is a race when the following two commands are executed on the mount in
parallel from two different terminals on a sharded volume,
which leads to use-after-free.

Terminal-1:
while true; do dd if=/dev/zero of=file1 bs=1M count=4; done

Terminal-2:
while true; do cat file1 &gt; /dev/null; done

In the normal case, this is the life-cycle of a shard-inode:
1) The shard is added to the LRU list when it is first looked up
2) For every operation on the shard, it is moved to the top of the LRU list
3) When the shard is unlinked, or when the LRU limit is hit, the shard is
   removed from the LRU list

But we are seeing a race where the inode stays in the shard LRU list even
after it is forgotten, which leads to use-after-free and subsequent memory
corruption.

These are the steps:
1) The shard is added to the LRU list when it is first looked up
2) For every operation on the shard, it is moved to the top of the LRU list

Reader-handler:
1) The reader needs shard-x to be read; truncate has just deleted shard-x.
2) In shard_common_resolve_shards(), it does inode_resolve(), which hits in
   the LRU list, so it is going to call __shard_update_shards_inode_list()
   to move the inode to the top of the LRU list.
Truncate-handler:
3) shard-x gets unlinked from the itable and inode_forget(inode, 0) is
   called to make sure the inode can be purged upon the last unref.
Reader-handler:
4) When __shard_update_shards_inode_list() is called, it finds that the
   inode is not in the LRU list, so it adds the inode back to the LRU list.

Both operations then complete and call inode_unref(shard-x), which leads to
the inode getting freed and forgotten even while it is still on the shard
LRU list. As more inodes are added to the LRU list, the freed memory gets
accessed again (use-after-free), leading to undefined behavior.

Fix:
The inode can be removed from the LRU list even by protocol layers such as
gfapi/gNFS when the LRU limit is reached. So it is better to add a check in
shard_forget() that removes the inode from the shard LRU list if it is
still present there.
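
A hedged sketch of that check (the struct and field names here follow
shard's shard_inode_ctx_t/shard_priv_t, but treat the details as
assumptions rather than as the verbatim patch):

int
shard_forget (xlator_t *this, inode_t *inode)
{
        uint64_t           ctx_uint = 0;
        shard_inode_ctx_t *ctx      = NULL;
        shard_priv_t      *priv     = this-&gt;private;

        inode_ctx_del (inode, this, &amp;ctx_uint);
        if (!ctx_uint)
                return 0;

        ctx = (shard_inode_ctx_t *) (uintptr_t) ctx_uint;

        LOCK (&amp;priv-&gt;lock);
        {
                if (!list_empty (&amp;ctx-&gt;ilist)) {
                        /* Still on the shard LRU list: unlink it so that
                         * no dangling pointer outlives this inode. */
                        list_del_init (&amp;ctx-&gt;ilist);
                        priv-&gt;inode_count--;
                }
        }
        UNLOCK (&amp;priv-&gt;lock);

        GF_FREE (ctx);
        return 0;
}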

BUG: 1466037
Change-Id: Ia79c0c5c9d5febc56c41ddb12b5daf03e5281638
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17644
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
</content>
</entry>
<entry>
<title>features/shard: Handle offset in appending writes</title>
<updated>2017-05-27T16:00:50+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2017-05-24T17:00:29+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=bea02e26a3967a6e679e30fbb77ecfeff1e71f37'/>
<id>bea02e26a3967a6e679e30fbb77ecfeff1e71f37</id>
<content type='text'>
When a file is opened with O_APPEND, all writes are appended at the end of
the file irrespective of the offset given in the write syscall. This needs
to be taken into account both in the shard size-update function and when
choosing which shard to write to.

At the moment shard piggybacks on the queuing done by the write-behind
xlator for ordering of operations. So if write-behind is disabled and two
appending writes arrive in parallel, both of which can grow the file beyond
the shard size, the file will be corrupted.
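
As a hedged illustration of the offset handling (ctx, user_offset and
block_size are stand-ins for shard's actual variables, not the verbatim
patch): with O_APPEND the effective write offset is the current aggregated
file size, and that is what must drive both shard selection and the size
update.

off_t eff_offset  = 0;
int   first_block = 0;

if (fd-&gt;flags &amp; O_APPEND)
        eff_offset = ctx-&gt;stat.ia_size;  /* append lands at EOF  */
else
        eff_offset = user_offset;        /* honour given offset  */

first_block = eff_offset / block_size;   /* shard to write first */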

BUG: 1455301
Change-Id: I9007e6a39098ab0b5d5386367bd07eb5f89cb09e
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17387
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>features/shard: Set size in inode ctx before size update for truncate too</title>
<updated>2017-05-10T12:33:53+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2017-05-05T09:00:49+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=9df83b504a01e86d3b73af6c40df0c94cd2cd97a'/>
<id>9df83b504a01e86d3b73af6c40df0c94cd2cd97a</id>
<content type='text'>
Change-Id: I7e984bb0f50c7d42764c0648e697d94d6c768dc7
BUG: 1448299
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17184
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>features/shard: Initialize local-&gt;fop in readv</title>
<updated>2017-04-11T13:32:19+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2017-04-10T05:34:31+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=594a7c6a187cf780bd666e7343c39a2d92fc67ef'/>
<id>594a7c6a187cf780bd666e7343c39a2d92fc67ef</id>
<content type='text'>
Change-Id: I9008ca9960df4821636501ae84f93a68f370c67f
BUG: 1440051
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17014
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
<entry>
<title>features/shard: Fix vm corruption upon fix-layout</title>
<updated>2017-04-07T18:52:00+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2017-04-06T12:40:41+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=99c8c0b03a3368d81756440ab48091e1f2430a5f'/>
<id>99c8c0b03a3368d81756440ab48091e1f2430a5f</id>
<content type='text'>
Shard's writev implementation, as part of identifying the presence of
participant shards that aren't in memory, first sends an MKNOD on these
shards and, upon an EEXIST error, looks up the shards before proceeding
with the writes.

The VM corruption was caused when the following happened:
1. DHT had n subvolumes initially.
2. Upon add-brick + fix-layout, the layout of .shard changed, although the
   existing shards under it were yet to be migrated to their new hashed
   subvolumes.
3. During this time, there were writes on the VM falling in regions of the
   file whose corresponding shards already existed under .shard.
4. The sharding xlator sent MKNOD on these shards, now creating them on
   their new hashed subvolumes, although shard blocks with valid data
   already existed for this region.
5. All subsequent writes were wound on these newly created copies.

The net outcome is that neither copy of the shard had the correct data,
which left the affected VMs unbootable.

FIX:
For want of better alternatives in DHT, the fix changes shard fops to do a
LOOKUP before the MKNOD and, upon an EEXIST error, to perform another
LOOKUP, as sketched below.
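
In terms of control flow, the intent is roughly (a hedged sketch, not the
verbatim patch):

1. LOOKUP shard-N
2. found?            -&gt; use the returned inode; never MKNOD an existing
                        shard
3. ENOENT?           -&gt; MKNOD shard-N on its new hashed subvolume
4. MKNOD =&gt; EEXIST?  -&gt; LOOKUP shard-N once more and use the inode it
                        returns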

Change-Id: I8a2e97d91ba3275fbc7174a008c7234fa5295d36
BUG: 1440051
RCA'd-by: Raghavendra Gowdappa &lt;rgowdapp@redhat.com&gt;
Reported-by: Mahdi Adnan &lt;mahdi.adnan@outlook.com&gt;
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17010
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>shard: simplify and fix crash when we run out of memory</title>
<updated>2017-04-07T03:48:06+00:00</updated>
<author>
<name>Michael Scherer</name>
<email>misc@redhat.com</email>
</author>
<published>2017-02-23T20:26:33+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=c932c0df4b70abaa85354e56efe6f1ce88ce4a38'/>
<id>c932c0df4b70abaa85354e56efe6f1ce88ce4a38</id>
<content type='text'>
Since mem_get0() can return NULL, dereferencing local (to set
local-&gt;op_ret) is going to crash. Found by Coverity. And since ENOMEM is
the only potential error here, we can also simplify the code by not using
'local' to carry it.
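
A minimal sketch of the resulting pattern (hedged; the error label is a
placeholder, not the verbatim diff):

local = mem_get0 (this-&gt;local_pool);
if (!local) {
        op_errno = ENOMEM;   /* the only failure mode here */
        goto err;            /* err: unwinds with op_ret = -1, op_errno */
}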

Change-Id: I778747b57f520b1a52347c0fc9f27efd7a7c5ca0
BUG: 789278
Signed-off-by: Michael Scherer &lt;misc@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16739
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Michael Scherer &lt;misc@fedoraproject.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
Reviewed-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
</content>
</entry>
<entry>
<title>features/shard: Pass the correct iatt for cache invalidation</title>
<updated>2017-03-30T06:15:51+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2017-03-28T13:56:41+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=dd5ada1f11d76b4c55c7c55d23718617f11a6c12'/>
<id>dd5ada1f11d76b4c55c7c55d23718617f11a6c12</id>
<content type='text'>
This fixes a performance issue in shard which was causing the translator to
trigger an unusually high number of lookups for cache invalidation, even
when there was no modification to the file.

In shard_common_stat_cbk(), it is local-&gt;prebuf that contains the
aggregated size and block count, as opposed to buf, which only holds the
attributes of the physical copy of the base shard. Passing buf for
inode_ctx invalidation would always set refresh to true, since the file
size in the inode ctx contains the aggregated size and would never be the
same as @buf-&gt;ia_size. This caused every write/read to be preceded by a
lookup on the base shard even when the file underwent no modification.
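
Put as code, the one-line idea is roughly the following (hedged; the helper
name follows shard.c's naming and should be treated as an assumption):

/* Invalidate with the aggregated iatt, not the raw base-shard one. */
shard_inode_ctx_invalidate (local-&gt;loc.inode, this, &amp;local-&gt;prebuf);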

Change-Id: Ib0349291d2d01f3782d6d0bdd90c6db5e0609210
BUG: 1436739
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16961
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
</content>
</entry>
</feed>
