<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/cluster/ec/src/ec.c, branch release-3.10</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>cluster/ec: send list-node-uuids request to all subvolumes</title>
<updated>2018-04-05T15:35:51+00:00</updated>
<author>
<name>Xavi Hernandez</name>
<email>xhernandez@redhat.com</email>
</author>
<published>2018-03-28T09:34:49+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=4fa3cc86d0c5160eccb998ba8bc950b924d7d477'/>
<id>4fa3cc86d0c5160eccb998ba8bc950b924d7d477</id>
<content type='text'>
The request for the xattr trusted.glusterfs.list-node-uuids was only
sent to a single subvolume, so null UUIDs were returned for the other
subvolumes, as if they were down.

This fix forces that xattr to be requested from all subvolumes.
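
A rough sketch of the idea (the names below are illustrative, not the
actual ec.c API): when this xattr is requested, the request is wound to
every healthy subvolume instead of a single one.

    #include &lt;string.h&gt;

    /* Hypothetical helper: pick the set of subvolumes (as a bitmask) that
     * a getxattr request should be wound to. None of these names come
     * from the actual ec.c code. */
    static unsigned int
    getxattr_targets(const char *name, unsigned int healthy_mask,
                     unsigned int single_subvol_mask)
    {
        /* list-node-uuids must reach every subvolume; otherwise the ones
         * that were not queried appear to return null UUIDs. */
        if (strcmp(name, "trusted.glusterfs.list-node-uuids") == 0)
            return healthy_mask;

        return single_subvol_mask;
    }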

Backport of:
&gt; BUG: 1561406

Change-Id: If62eb39a6857258923ba625e153d4ad79018ea2f
BUG: 1561732
Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/ec: avoid delays in self-heal</title>
<updated>2018-03-15T07:37:02+00:00</updated>
<author>
<name>Xavi Hernandez</name>
<email>jahernan@redhat.com</email>
</author>
<published>2018-02-21T16:47:37+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=0baf7f7cd47f24021d55da1eaecc19c88c275f3a'/>
<id>0baf7f7cd47f24021d55da1eaecc19c88c275f3a</id>
<content type='text'>
Self-heal creates a thread per brick to sweep the index looking for
files that need to be healed. These threads are started before the
volume comes online, so they do nothing but wait for the next sweep,
which happens once per minute.

When a replace brick command is executed, the new graph is loaded and
all index sweeper threads are started. When all bricks have reported,
a getxattr request is sent to the root directory of the volume. This
causes a heal on it (because the new brick doesn't have good data) and
marks its contents as pending to be healed. That healing is only done
by the index sweeper thread on its next round, one minute later.

This patch solves this problem by waking all index sweeper threads
after a successful check on the root directory.

Additionally, the index sweeper thread scans the index directory
sequentially, but it might happen that, after healing a directory
entry, more index entries are created and then skipped by the current
directory scan. This causes the remaining entries to be processed on
the next round, one minute later. The same can happen on that round as
well, so the heal runs in bursts and takes a long time to finish,
especially on volumes with many directory levels.

This patch solves this problem by immediately restarting the index
sweep if a directory has been healed.
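
A simplified sketch of the restart logic (the real self-heal daemon
walks the on-disk index directory with readdir; the types below are
only illustrative):

    #include &lt;stdbool.h&gt;
    #include &lt;stddef.h&gt;

    /* Hypothetical index entry; only the detail needed for this sketch. */
    struct index_entry {
        bool is_dir;
    };

    static void
    index_sweep(struct index_entry *entries, size_t count,
                bool (*heal_entry)(struct index_entry *entry))
    {
        bool healed_dir;

        do {
            healed_dir = false;
            for (size_t i = 0; i != count; i++) {
                /* Healing a directory may add new index entries behind
                 * the current scan position, so note it and restart the
                 * sweep immediately instead of waiting for the next
                 * round. */
                if (heal_entry(entries + i)) {
                    if (entries[i].is_dir)
                        healed_dir = true;
                }
            }
        } while (healed_dir);
    }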

Backport of:
&gt; BUG: 1547662

Change-Id: I58d9ab6ef17b30f704dc322e1d3d53b904e5f30e
BUG: 1555203
Signed-off-by: Xavi Hernandez &lt;jahernan@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/ec: return all node uuids from all subvolumes</title>
<updated>2017-09-01T07:44:52+00:00</updated>
<author>
<name>Xavier Hernandez</name>
<email>xhernandez@datalab.es</email>
</author>
<published>2017-05-12T07:23:47+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=b7d6c070f161fdd9aa0700d11e624b23cefd36cd'/>
<id>b7d6c070f161fdd9aa0700d11e624b23cefd36cd</id>
<content type='text'>
EC was returning the UUID of the brick with the smallest value. This
had the side effect of not evenly balancing the load between bricks
during rebalance operations.

This patch modifies the common functions that combine multiple subvolume
values into a single result to take into account the subvolume order
and, optionally, other subvolumes that could be damaged.

This makes it easier to add future features where brick order is
important. It also makes it possible to easily identify the originating
brick of each answer, in case some brick has a special meaning in the
future.
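
A rough illustration of the ordering idea (names are illustrative; the
real code combines the per-brick answer dictionaries): the result is
built by walking the subvolumes in order, so every brick contributes
its own UUID instead of the smallest one winning.

    #include &lt;string.h&gt;

    #define UUID_STR_LEN 37   /* 36 characters plus the trailing NUL */

    /* Hypothetical combiner: build the node-uuid answer by walking the
     * bricks in subvolume order, so every brick contributes its own UUID.
     * Assumes "out" is large enough for all UUIDs plus separators. */
    static void
    combine_node_uuids(char uuids[][UUID_STR_LEN], int count,
                       char *out, size_t out_size)
    {
        out[0] = '\0';
        for (int i = 0; i != count; i++) {
            if (i != 0)
                strncat(out, " ", out_size - strlen(out) - 1);
            strncat(out, uuids[i], out_size - strlen(out) - 1);
        }
    }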

&gt;Change-Id: Iee0a4da710b41224a6dc8e13fa8dcddb36c73a2f
&gt;BUG: 1366817
&gt;Signed-off-by: Xavier Hernandez &lt;xhernandez@datalab.es&gt;
&gt;Reviewed-on: https://review.gluster.org/17297
&gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;Reviewed-by: Ashish Pandey &lt;aspandey@redhat.com&gt;
&gt;Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;

BUG: 1487042
Change-Id: Iee0a4da710b41224a6dc8e13fa8dcddb36c73a2f
Signed-off-by: Sunil Kumar Acharya &lt;sheggodu@redhat.com&gt;
Reviewed-on: https://review.gluster.org/18148
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Xavier Hernandez &lt;xhernandez@datalab.es&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>cluster/ec: fix selinux issues with mmap()</title>
<updated>2017-02-06T15:26:56+00:00</updated>
<author>
<name>Xavier Hernandez</name>
<email>xhernandez@datalab.es</email>
</author>
<published>2017-01-13T12:54:35+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=cc0f3623c037529a3c3f3d1c81f2c8d281d64dba'/>
<id>cc0f3623c037529a3c3f3d1c81f2c8d281d64dba</id>
<content type='text'>
EC uses mmap() to create a memory area for the dynamic code. Since
the code is created on the fly and executed when needed, this region
of memory needs to have write and execution privileges.

This combination is not allowed by default by selinux. To solve the
problem, a file is used as backing storage for the dynamic code and it
is mapped into two distinct memory regions, one with write access and
the other with execution access. This is the recommended, more secure
way for a program to create dynamic code, and selinux allows it.

Additionally, selinux requires the backing file to be stored in a
directory labeled with type bin_t in order to map it into an executable
area. To satisfy this condition, GLUSTERFS_LIBEXECDIR is used.

This fix also corrects the error check for mmap(), which was done
incorrectly (it checked against NULL instead of MAP_FAILED), and it
makes sure error codes are correctly propagated rather than silently
ignored.
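
A condensed sketch of that approach (the path is only an example based
on the usual GLUSTERFS_LIBEXECDIR default, and error handling is
simplified):

    #include &lt;fcntl.h&gt;
    #include &lt;sys/mman.h&gt;
    #include &lt;unistd.h&gt;

    /* Sketch of the dual-mapping approach described above. The path is
     * just an example; the real code also cleans up properly on partial
     * failures. */
    static int
    map_dynamic_code(size_t size, void **write_map, void **exec_map)
    {
        int fd = open("/usr/libexec/glusterfs/ec-code-example",
                      O_RDWR | O_CREAT, 0700);
        if (fd == -1)
            return -1;
        if (ftruncate(fd, size) == -1) {
            close(fd);
            return -1;
        }

        /* One writable view to generate the code into... */
        void *wr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        /* ...and a separate executable (not writable) view to run it from. */
        void *ex = mmap(NULL, size, PROT_READ | PROT_EXEC, MAP_SHARED, fd, 0);
        close(fd);

        /* mmap() signals failure with MAP_FAILED, not NULL. */
        if (wr == MAP_FAILED || ex == MAP_FAILED)
            return -1;

        *write_map = wr;
        *exec_map = ex;
        return 0;
    }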

&gt; Change-Id: I71c2f88be4e4d795b6cfff96ab3799c362c54291
&gt; BUG: 1402661
&gt; Signed-off-by: Xavier Hernandez &lt;xhernandez@datalab.es&gt;
&gt; Reviewed-on: https://review.gluster.org/16405
&gt; Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt; CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;

Change-Id: I5c2dd51b1161505316c8f78b73e9a585d0c115d0
BUG: 1418650
Signed-off-by: Xavier Hernandez &lt;xhernandez@datalab.es&gt;
Reviewed-on: https://review.gluster.org/16522
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</content>
</entry>
<entry>
<title>core: run many bricks within one glusterfsd process</title>
<updated>2017-02-02T00:54:58+00:00</updated>
<author>
<name>Jeff Darcy</name>
<email>jdarcy@redhat.com</email>
</author>
<published>2017-01-31T19:49:45+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=83803b4b2d70e9e6e16bb050d7ac8e49ba420893'/>
<id>83803b4b2d70e9e6e16bb050d7ac8e49ba420893</id>
<content type='text'>
This patch adds support for multiple brick translator stacks running in
a single brick server process.  This reduces our per-brick memory usage
by approximately 3x, and our appetite for TCP ports even more.  It also
creates potential to avoid process/thread thrashing, and to improve QoS
by scheduling more carefully across the bricks, but realizing that
potential will require further work.

Multiplexing is controlled by the "cluster.brick-multiplex" global
option.  By default it's off, and bricks are started in separate
processes as before.  If multiplexing is enabled, then *compatible*
bricks (mostly those with the same transport options) will be started in
the same process.
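
A simplified sketch of the compatibility idea (hypothetical fields;
the actual check in glusterd is more involved):

    #include &lt;stdbool.h&gt;
    #include &lt;string.h&gt;

    /* Hypothetical brick description; the actual glusterd check compares
     * many more options than these two fields. */
    struct brick_opts {
        const char *transport_type;   /* e.g. "tcp" or "rdma" */
        bool ssl_enabled;
    };

    /* Only bricks whose transport-level options match may be attached to
     * the same glusterfsd process when cluster.brick-multiplex is on. */
    static bool
    bricks_compatible(struct brick_opts a, struct brick_opts b)
    {
        if (strcmp(a.transport_type, b.transport_type) != 0)
            return false;
        return a.ssl_enabled == b.ssl_enabled;
    }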

Backport of:
&gt; Change-Id: I45059454e51d6f4cbb29a4953359c09a408695cb
&gt; BUG: 1385758
&gt; Reviewed-on: https://review.gluster.org/14763

Change-Id: I4bce9080f6c93d50171823298fdf920258317ee8
BUG: 1418091
Signed-off-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16496
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</content>
</entry>
<entry>
<title>ec: Invalidations in disperse volume should not update the stat</title>
<updated>2017-01-06T05:12:18+00:00</updated>
<author>
<name>Poornima G</name>
<email>pgurusid@redhat.com</email>
</author>
<published>2017-01-05T10:06:02+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=95d07a3d2d68805d93d36a447436e27c48777939'/>
<id>95d07a3d2d68805d93d36a447436e27c48777939</id>
<content type='text'>
Issue:
In a disperse volume the file is spread across bricks, so the stat
from one brick doesn't carry the valid size of the file. Therefore,
an upcall from one brick updating the md-cache results in a wrong
size being cached.

Fix:
If the notification is a cache invalidation, indicate to md-cache
that the attributes are invalid.

BUG: 1410375
Change-Id: Id89d2283478e70b62b435a8891fffc86d2be8cb2
Signed-off-by: Poornima G &lt;pgurusid@redhat.com&gt;
Reviewed-on: http://review.gluster.org/16329
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Xavier Hernandez &lt;xhernandez@datalab.es&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/ec: Fix lk-owner set race in ec_unlock</title>
<updated>2016-12-13T15:24:51+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2016-12-08T09:23:04+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=a0a4163ce6a8dd8bb83b60a4484578fadd02c88f'/>
<id>a0a4163ce6a8dd8bb83b60a4484578fadd02c88f</id>
<content type='text'>
Problem:
Rename takes two locks. When it tries to unlock, it sends an xattrop of
the directory with the new version, and the callbacks of these two
xattrops can be picked up by two separate epoll threads. Both of them
then try to set the lk-owner for the unlock in parallel on the same
frame, so one of these unlocks fails because the lk-owner doesn't match.

Fix:
Specify the lk-owner to be set on the inodelk frame, so that it will
not be overwritten by any other thread/operation.
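
A sketch of the idea with hypothetical types (the real code works with
call_frame_t and gf_lkowner_t): derive the owner deterministically, for
example from the lock's address, so every thread performing the unlock
uses the same value.

    #include &lt;stdint.h&gt;
    #include &lt;string.h&gt;

    /* Hypothetical stand-ins for the real call_frame_t / gf_lkowner_t. */
    struct lk_owner { unsigned char data[8]; };
    struct frame    { struct lk_owner lk_owner; };

    /* Stamp the unlock frame with an owner derived from the lock itself,
     * so whichever epoll thread runs the unlock ends up with the same
     * value and nothing races to overwrite it. */
    static void
    set_unlock_owner(struct frame *frame, void *lock)
    {
        uintptr_t value = (uintptr_t)lock;
        struct lk_owner owner;

        memset(owner.data, 0, sizeof(owner.data));
        memcpy(owner.data, &amp;value, sizeof(value));
        frame-&gt;lk_owner = owner;
    }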

BUG: 1402710
Change-Id: I666ffc931440dc5253d72df666efe0ef1d73f99a
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
Reviewed-on: http://review.gluster.org/16074
Reviewed-by: Xavier Hernandez &lt;xhernandez@datalab.es&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>afr,dht,ec: Replace GF_EVENT_CHILD_MODIFIED with event SOME_DESCENDENT_DOWN/UP</title>
<updated>2016-11-21T09:32:05+00:00</updated>
<author>
<name>Poornima G</name>
<email>pgurusid@redhat.com</email>
</author>
<published>2016-10-28T09:57:15+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=f7ab6c45963fa0da68acedfb14281cd2456abc68'/>
<id>f7ab6c45963fa0da68acedfb14281cd2456abc68</id>
<content type='text'>
Currently these are the events related to child_up/down:
GF_EVENT_CHILD_UP : Issued when any of the protocol clients
connects.
GF_EVENT_CHILD_MODIFIED : Issued by afr/dht/ec
GF_EVENT_CHILD_DOWN : Issued when any of the protocol clients
disconnects.
These events get modified at the dht/afr/ec layers. Here is a
brief summary.

DHT:
- When all the subvolumes have reported once and at least one child
  has come up, GF_EVENT_CHILD_UP is issued
- On a connect, GF_EVENT_CHILD_UP is issued
- On a disconnect, GF_EVENT_CHILD_MODIFIED is issued
- When all the subvolumes are disconnected, GF_EVENT_CHILD_DOWN is issued

AFR:
- When the first subvolume comes up, GF_EVENT_CHILD_UP is issued
- Subsequent subvolumes coming up result in GF_EVENT_CHILD_MODIFIED
- When any of the subvolumes goes down, GF_EVENT_SOME_CHILD_DOWN is issued
- When the last up subvolume goes down, GF_EVENT_CHILD_DOWN is issued

Until patch [1] introduced GF_EVENT_SOME_CHILD_UP,
GF_EVENT_CHILD_MODIFIED was issued by afr/dht when any of the subvolumes
went up or down.

Now, with the md-cache changes, there is a need to differentiate between
child up and child down. Hence, GF_EVENT_SOME_DESCENDENT_DOWN/UP are
introduced and GF_EVENT_CHILD_MODIFIED is removed.

[1] http://review.gluster.org/12573
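
A simplified illustration of the new mapping (hypothetical helper; the
real logic lives in the notify() handlers of afr/dht/ec and tracks
per-child state):

    /* Hypothetical event codes standing in for the GF_EVENT_* values. */
    enum child_event {
        CHILD_UP,
        CHILD_DOWN,
        SOME_DESCENDENT_UP,
        SOME_DESCENDENT_DOWN
    };

    /* Translate a child event before propagating it upwards: only the
     * first child coming up and the last child going down change the
     * overall state; everything else becomes a SOME_DESCENDENT_*
     * notification instead of the old, ambiguous CHILD_MODIFIED. */
    static enum child_event
    translate_child_event(enum child_event event, int up_children)
    {
        if (event == CHILD_UP)
            return (up_children == 1) ? CHILD_UP : SOME_DESCENDENT_UP;
        if (event == CHILD_DOWN)
            return (up_children == 0) ? CHILD_DOWN : SOME_DESCENDENT_DOWN;
        return event;
    }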

Change-Id: I704140b6598f7ec705493251d2dbc4191c965a58
BUG: 1396038
Signed-off-by: Poornima G &lt;pgurusid@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15764
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: N Balachandran &lt;nbalacha@redhat.com&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/ec: Implement heal info with lock</title>
<updated>2016-10-11T09:29:27+00:00</updated>
<author>
<name>Ashish Pandey</name>
<email>aspandey@redhat.com</email>
</author>
<published>2016-09-20T07:02:28+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=0fed7e7f0aad9973900c89434f736797d9ace2bd'/>
<id>0fed7e7f0aad9973900c89434f736797d9ace2bd</id>
<content type='text'>
Problem: Currently the heal info command prints all the
files/directories whose index is present in the .glusterfs/indices
folder. After implementing patch http://review.gluster.org/#/c/13733/,
the index of a file that is going through an update fop will also be
present in .glusterfs/indices even if the fop is successful on all
the bricks. If the heal info command is used at this time, it will
also display this file, which is actually healthy and does not
require any heal.

Solution: Take a lock on the file corresponding to the index entry
and inspect its xattrs to decide whether the file needs heal or not.
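
A rough sketch of the decision (hypothetical types; the real code
inspects the trusted.ec.version / trusted.ec.dirty style xattrs while
holding the lock):

    #include &lt;stdbool.h&gt;
    #include &lt;stdint.h&gt;

    /* Hypothetical per-brick state read while the file is locked. */
    struct brick_state {
        uint64_t version;
        uint64_t dirty;
    };

    /* With the lock held, identical versions and zero dirty counters on
     * all bricks mean the index entry is stale and the file needs no
     * heal. */
    static bool
    needs_heal(const struct brick_state *states, int count)
    {
        for (int i = 0; i != count; i++) {
            if (states[i].dirty != 0)
                return true;
            if (states[i].version != states[0].version)
                return true;
        }
        return false;
    }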

Change-Id: I6361e2813ece369be12d02e74816df4eddb81cfa
BUG: 1366815
Signed-off-by: Ashish Pandey &lt;aspandey@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15543
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Reviewed-by: Xavier Hernandez &lt;xhernandez@datalab.es&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>ec: Implement ipc fop</title>
<updated>2016-09-25T19:30:18+00:00</updated>
<author>
<name>Poornima G</name>
<email>pgurusid@redhat.com</email>
</author>
<published>2016-09-02T07:17:15+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=359b72a57b7c92fc2a11236ac05f5d740db2f540'/>
<id>359b72a57b7c92fc2a11236ac05f5d740db2f540</id>
<content type='text'>
The ipc will be wound to all the bricks, but for it to be successful,
the fop must succeed on at least the minimum number of bricks.
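
A minimal sketch of the success rule (illustrative only; in ec the
minimum is typically the number of data fragments):

    #include &lt;stdbool.h&gt;

    /* Illustrative only: the fop is wound to every brick and the combined
     * result is success when at least "minimum" bricks answered without
     * error. */
    static bool
    ipc_succeeded(const int *op_ret, int bricks, int minimum)
    {
        int good = 0;

        for (int i = 0; i != bricks; i++) {
            if (op_ret[i] == 0)
                good++;
        }
        return good &gt;= minimum;
    }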

Change-Id: I3f8cb6a349e87bafd0773583def9d4e3765aa140
BUG: 1211863
Signed-off-by: Poornima G &lt;pgurusid@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15387
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Ashish Pandey &lt;aspandey@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
</content>
</entry>
</feed>
