<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/cluster/afr, branch v3.12dev</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>cluster/afr: GFID split brain resolution with favorite-child-policy</title>
<updated>2017-04-21T00:38:54+00:00</updated>
<author>
<name>karthik-us</name>
<email>ksubrahm@redhat.com</email>
</author>
<published>2017-03-09T12:38:28+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=799a2ff8299db6d6dc75f1533f4bd5a3bb72164d'/>
<id>799a2ff8299db6d6dc75f1533f4bd5a3bb72164d</id>
<content type='text'>
Problem:
Currently, automatic split-brain resolution with the favorite-child
policy does not resolve GFID split-brains.

Fix:
When there is a GFID split-brain and the favorite-child policy is set to
size/mtime/ctime/majority, decide on the source and the sinks based on
that policy. Delete the entry from the sinks and recreate it from the
source. Mark the appropriate pending attributes to resolve the GFID
split-brain. When the heal takes place, it completes the pending heals
and resets the attributes.
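
Illustration only, not the actual afr implementation: a standalone C
sketch of the majority policy, where the brick whose GFID is shared by
more than half of the replicas becomes the source and every brick with a
different GFID becomes a sink. pick_majority_source() and the sample
GFID strings are hypothetical.

    #include &lt;stdio.h&gt;
    #include &lt;string.h&gt;

    #define NBRICKS  3
    #define GFID_LEN 37 /* 36-char uuid string + NUL */

    /* Return the brick whose GFID occurs on more than half of the
     * bricks, or -1 if there is no majority. */
    static int pick_majority_source(char gfid[][GFID_LEN], int nbricks)
    {
        int i, j, votes;

        for (i = 0; i &lt; nbricks; i++) {
            votes = 0;
            for (j = 0; j &lt; nbricks; j++)
                if (strcmp(gfid[i], gfid[j]) == 0)
                    votes++;
            if (votes * 2 &gt; nbricks)
                return i;
        }
        return -1;
    }

    int main(void)
    {
        /* Two bricks agree on one GFID, the third diverged. */
        char gfid[NBRICKS][GFID_LEN] = {
            "aaaa-0001", "aaaa-0001", "ffff-0002"
        };
        int i, src = pick_majority_source(gfid, NBRICKS);

        if (src &lt; 0) {
            printf("no majority: split-brain stays unresolved\n");
            return 1;
        }
        for (i = 0; i &lt; NBRICKS; i++)
            if (strcmp(gfid[i], gfid[src]) != 0)
                printf("brick %d is a sink: delete the entry and "
                       "recreate it from brick %d\n", i, src);
        return 0;
    }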

Change-Id: Ie30e5373f94ca6f276745d9c3ad662b8acca6946
BUG: 1430719
Signed-off-by: karthik-us &lt;ksubrahm@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16878
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Tested-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>afr: don't do a post-op on a brick if op failed</title>
<updated>2017-04-19T02:29:25+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2017-04-02T12:38:04+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=10dad995c989e9d77c341135d7c48817baba966c'/>
<id>10dad995c989e9d77c341135d7c48817baba966c</id>
<content type='text'>
Problem:
In afr-v2, self-blaming xattrs are not there by design. But if the FOP
failed on a brick due to an error other than ENOTCONN (or even due to
ENOTCONN, but we regained the connection before the post-op was wound),
we wind the post-op on the failed brick too, setting self-blaming
xattrs on that brick. This can lead to undesired results such as healing
files in split-brain.

Fix:
If a FOP failed on a brick on which the pre-op was successful, do not
perform the post-op on it. This also has the desired effect of not
resetting the dirty xattr on that brick: if the FOP failed there, there
is no reason to clear the dirty bit, which serves as an indication of
the failure.
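
A minimal sketch of the per-brick decision this fix introduces, with
hypothetical preop_done/fop_failed arrays standing in for the real
transaction state: the post-op is wound only where the pre-op succeeded
and the FOP did not fail.

    #include &lt;stdbool.h&gt;
    #include &lt;stdio.h&gt;

    #define NBRICKS 3

    int main(void)
    {
        bool preop_done[NBRICKS] = { true, true, true };
        bool fop_failed[NBRICKS] = { false, true, false };
        int i;

        for (i = 0; i &lt; NBRICKS; i++) {
            bool wind_postop = preop_done[i] &amp;&amp; !fop_failed[i];
            printf("brick %d: %s\n", i,
                   wind_postop ? "wind post-op (dirty xattr cleared)"
                               : "skip post-op (dirty xattr stays)");
        }
        return 0;
    }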

Change-Id: I5f1caf4d1b39f36cf8093ccef940118638caa9c4
BUG: 1438255
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16976
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
</content>
</entry>
<entry>
<title>syncop:  don't wake task in synctask_wake unless really needed</title>
<updated>2017-03-28T22:34:48+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2017-03-21T05:32:32+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=0f98f5c8070904810252c6fc1df23747afa4b1d7'/>
<id>0f98f5c8070904810252c6fc1df23747afa4b1d7</id>
<content type='text'>
Problem:

In EC and AFR, we launch synctasks during self-heal.

(i) These tasks usually stackwind a FOP to all its children and call
synctask_yield() which does a swapcontext to synctask_switchto() and puts the
task in syncenv's waitq by calling __wait(task). This happens as long as the
FOP cbks from all children haven't been received.

(ii) For each FOP cbk, we call synctask_wake() which again does a swapcontext
to synctask_switchto() which now puts the task in syncenv's runq by calling
__run(task). When the task runs and the context switches back to the FOP path,
it puts the task in the waitq because we haven't heard from all children as
explained in (i).

Thus we are unnecessarily making swapcontext calls just to toggle the
task back and forth between the waitq and the runq.

Fix:
Store the stackwind count in a new variable 'syncbarrier-&gt;waitfor' before
winding the FOP. In each cbk, when we call synctask_wake(), perform an
actual wake only if the cbk count == stackwind count.
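
A standalone sketch of the barrier idea, with a simplified struct in
place of the real syncbarrier: an actual wake happens only when the cbk
count reaches the stackwind count, so intermediate cbks cause no context
switch.

    #include &lt;stdio.h&gt;

    /* Simplified stand-in for the real syncbarrier. */
    struct barrier {
        int waitfor; /* stackwind count, set before winding */
        int count;   /* cbks received so far */
    };

    static void fop_cbk(struct barrier *b)
    {
        b-&gt;count++;
        if (b-&gt;count == b-&gt;waitfor)
            printf("cbk %d/%d: wake the task\n", b-&gt;count, b-&gt;waitfor);
        else
            printf("cbk %d/%d: no wake, no context switch\n",
                   b-&gt;count, b-&gt;waitfor);
    }

    int main(void)
    {
        struct barrier b = { .waitfor = 3, .count = 0 };
        int i;

        for (i = 0; i &lt; b.waitfor; i++)
            fop_cbk(&amp;b);
        return 0;
    }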

Change-Id: Id62d3b6ffed5a8c50f8b79267fb34e9470ba5ed5
BUG: 1434274
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Signed-off-by: Ashish Pandey &lt;aspandey@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16931
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/afr: Undo pending xattrs only on the up bricks</title>
<updated>2017-03-27T09:52:31+00:00</updated>
<author>
<name>karthik-us</name>
<email>ksubrahm@redhat.com</email>
</author>
<published>2017-03-18T08:14:56+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=f91596e6566c605e70a31a60523d11f78a097c3c'/>
<id>f91596e6566c605e70a31a60523d11f78a097c3c</id>
<content type='text'>
Problem:
While doing a conservative merge, the pending xattr is reset even on a
brick that is down. When that brick comes up, the heal will consider it
the source and remove the entries on the other bricks, which leads to
data loss.

Fix:
Undo the pending xattrs only on the bricks which are up.
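
Schematically, with hypothetical up/pending arrays rather than the real
afr structures, the undo becomes guarded by brick availability:

    #include &lt;stdbool.h&gt;
    #include &lt;stdio.h&gt;

    #define NBRICKS 3

    int main(void)
    {
        bool up[NBRICKS]      = { true, false, true };
        bool pending[NBRICKS] = { true, true,  true };
        int i;

        for (i = 0; i &lt; NBRICKS; i++) {
            if (!up[i]) {
                printf("brick %d: down, pending xattr kept\n", i);
                continue; /* the crucial skip */
            }
            pending[i] = false;
            printf("brick %d: pending xattr undone\n", i);
        }
        return 0;
    }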

Change-Id: I18436fa0bb1faa5f60531b357dea3f6b20446303
BUG: 1433571
Signed-off-by: karthik-us &lt;ksubrahm@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16913
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
</content>
</entry>
<entry>
<title>afr: do not mention split-brain in log message in read_txn</title>
<updated>2017-03-20T13:58:50+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2017-03-19T17:12:33+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=71e023fcaab0058f32fedc7b6b702040fdd85f46'/>
<id>71e023fcaab0058f32fedc7b6b702040fdd85f46</id>
<content type='text'>
I am seeing a lot of messages in QE/customer logs where read_txn
complains that a file is possibly in split-brain because no readable
subvol was found, does an inode refresh, and then logs no split-brain
message after the refresh. This means either that a lookup was not
issued on the inode to populate 'readable', or that one brick is the
source for data and the other for metadata, making readable zero
(because readable = intersection of the data and metadata readables,
since commit 7a1c1e290470149696).

Since we anyway log actual split-brains after the inode refresh, move
this message to the DEBUG log level.
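
A toy illustration of how readable can become zero without a real
split-brain, assuming (for illustration only) one bit per brick:

    #include &lt;stdio.h&gt;

    int main(void)
    {
        /* Brick 0 is the data source, brick 1 the metadata source,
         * so the intersection is empty even though the file is not
         * in split-brain. */
        unsigned data_readable     = 0x1; /* brick 0 */
        unsigned metadata_readable = 0x2; /* brick 1 */
        unsigned readable = data_readable &amp; metadata_readable;

        if (readable == 0)
            printf("no readable subvol: do an inode refresh "
                   "before concluding split-brain\n");
        return 0;
    }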

Change-Id: Idb88b8ea362515279dc9b246f06b6b646c6d8013
BUG: 1433838
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16879
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Tested-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>afr: restore atime/mtime for non-regular files</title>
<updated>2017-03-06T10:01:43+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2017-03-03T19:34:10+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=804a65f07ea8e2093f781807651d0d07513b2627'/>
<id>804a65f07ea8e2093f781807651d0d07513b2627</id>
<content type='text'>
AFR restores atime/mtime only as part of data heal. For non-regular
files (dirs, symlinks, char/block/socket files, etc.) which do not
undergo data heal, atime/mtime is not restored.

This patch restores atime/mtime as part of metadata heal for such
files.
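
A rough sketch of restoring atime/mtime on a non-regular file with
plain POSIX.1-2008 calls (fstatat/utimensat with AT_SYMLINK_NOFOLLOW so
symlinks are handled without being followed); the paths are placeholders
and this is not the afr heal code:

    #include &lt;fcntl.h&gt;
    #include &lt;stdio.h&gt;
    #include &lt;sys/stat.h&gt;

    /* Copy atime/mtime from src to dst without following symlinks. */
    static int copy_times(const char *src, const char *dst)
    {
        struct stat st;
        struct timespec times[2];

        if (fstatat(AT_FDCWD, src, &amp;st, AT_SYMLINK_NOFOLLOW) != 0)
            return -1;
        times[0] = st.st_atim;
        times[1] = st.st_mtim;
        return utimensat(AT_FDCWD, dst, times, AT_SYMLINK_NOFOLLOW);
    }

    int main(void)
    {
        if (copy_times("/tmp/source-entry", "/tmp/sink-entry") != 0)
            perror("copy_times");
        return 0;
    }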

Change-Id: Id8da885fc93fdf65c2f4bae2af3605b146ac1f16
BUG: 1429198
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16844
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Tested-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>cluster/afr: Perform new entry mark before creating new entry</title>
<updated>2017-02-16T15:24:31+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2017-01-23T09:28:45+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=6588204568ab73bf8456ca3b2eccf2ae1182fb95'/>
<id>6588204568ab73bf8456ca3b2eccf2ae1182fb95</id>
<content type='text'>
There is a chance of the source brick going down just after the new
entry is created and before the source brick is marked with the
necessary pending markers. If any I/O happens after this, the new entry
will become the source and a reverse heal will happen. To prevent this,
mark the pending xattrs before creating the new entry.
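
The fix is purely an ordering change. A trivial sketch with hypothetical
helpers shows which step must come first and why the in-between crash
window is safe:

    #include &lt;stdio.h&gt;

    /* Hypothetical helpers standing in for the real xattr/entry FOPs. */
    static void mark_pending(const char *name)
    {
        printf("mark pending xattrs for %s\n", name);
    }

    static void create_entry(const char *name)
    {
        printf("create %s\n", name);
    }

    int main(void)
    {
        const char *name = "dir/newfile";

        mark_pending(name); /* record intent first... */
        create_entry(name); /* ...so a crash in between leaves a mark
                             * that points the heal at the real source,
                             * never at the half-created entry. */
        return 0;
    }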

BUG: 1417466
Change-Id: I233b87e694d32e5d734df5a83b4d2ca711c17503
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16474
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Reviewed-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
</content>
</entry>
<entry>
<title>afr: all children of AFR must be up to resolve s-brain</title>
<updated>2017-02-10T01:37:00+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2017-01-30T04:24:16+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=0e03336a9362e5717e561f76b0c543e5a197b31b'/>
<id>0e03336a9362e5717e561f76b0c543e5a197b31b</id>
<content type='text'>
Problem:
The various split-brain resolution policies (favorite-child-policy based,
CLI based and mount (get/setfattr) based) attempt to resolve split-brain
even when not all bricks of the replica are up. This can be a problem
when, say, in a replica 3, the only good copy is down and the other 2
bricks are up and blame each other (i.e. split-brain). We end up healing
the file in such a case and allowing I/O on it.

Fix:
A decision on whether the file is in split-brain or not must be taken
only if we are able to examine the afr xattrs of *all* bricks of a given
replica.
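
A minimal sketch of the guard this fix adds, with a hypothetical up[]
array standing in for the real child-up state:

    #include &lt;stdbool.h&gt;
    #include &lt;stdio.h&gt;

    #define NBRICKS 3

    int main(void)
    {
        bool up[NBRICKS] = { true, true, false };
        bool all_up = true;
        int i;

        for (i = 0; i &lt; NBRICKS; i++)
            all_up = all_up &amp;&amp; up[i];

        if (!all_up) {
            printf("a brick is down: refusing to judge split-brain\n");
            return 1;
        }
        printf("all bricks up: safe to apply a resolution policy\n");
        return 0;
    }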

Change-Id: Icddb1268b380005799990f5379ef957d84639ef9
BUG: 1417522
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16476
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
</content>
</entry>
<entry>
<title>afr/cluster: Restore data-self-heal-window option</title>
<updated>2017-02-08T07:53:43+00:00</updated>
<author>
<name>Richard Wareing</name>
<email>rwareing@fb.com</email>
</author>
<published>2015-12-12T05:03:40+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=c57808c4e36547233d20e31b54c818c8d77fa646'/>
<id>c57808c4e36547233d20e31b54c818c8d77fa646</id>
<content type='text'>
Summary:
- Fixes a bug where data-self-heal-window was ignored and instead
  hard-coded to 128k
- Cherry-pick of D2752781
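
Shown schematically in C (the function and the configured value are
hypothetical; only the 128k constant comes from the summary above), the
fix is to honour the configured option instead of the constant:

    #include &lt;stdio.h&gt;

    #define HARDCODED_WINDOW (128 * 1024) /* the old, wrong behaviour */

    static int heal_window(int configured)
    {
        /* before the fix: return HARDCODED_WINDOW; */
        return configured; /* after the fix: honour the option */
    }

    int main(void)
    {
        int data_self_heal_window = 4 * 1024; /* hypothetical value */

        printf("window used: %d (hard-coded would have been %d)\n",
               heal_window(data_self_heal_window), HARDCODED_WINDOW);
        return 0;
    }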

Test Plan:
- Prove tests

Reviewed By: sshreyas

Signed-off-by: Shreyas Siravara &lt;sshreyas@fb.com&gt;

Change-Id: Ie38456ce9ad90921f7456fe02aaace88393433a9
BUG: 1404424
Reviewed-on-release-3.8-fb: http://review.gluster.org/16083
Tested-by: Shreyas Siravara &lt;sshreyas@fb.com&gt;
Reviewed-by: Kevin Vigor &lt;kvigor@fb.com&gt;
Reviewed-on: https://review.gluster.org/16123
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>core: run many bricks within one glusterfsd process</title>
<updated>2017-01-31T00:13:58+00:00</updated>
<author>
<name>Jeff Darcy</name>
<email>jdarcy@redhat.com</email>
</author>
<published>2016-12-08T21:24:15+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=1a95fc3036db51b82b6a80952f0908bc2019d24a'/>
<id>1a95fc3036db51b82b6a80952f0908bc2019d24a</id>
<content type='text'>
This patch adds support for multiple brick translator stacks running
in a single brick server process.  This reduces our per-brick memory usage by
approximately 3x, and our appetite for TCP ports even more.  It also creates
potential to avoid process/thread thrashing, and to improve QoS by scheduling
more carefully across the bricks, but realizing that potential will require
further work.

Multiplexing is controlled by the "cluster.brick-multiplex" global option.  By
default it's off, and bricks are started in separate processes as before.  If
multiplexing is enabled, then *compatible* bricks (mostly those with the same
transport options) will be started in the same process.
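
A toy sketch of the compatibility grouping, using only the transport
option as the grouping key; the real compatibility check is broader:

    #include &lt;stdio.h&gt;
    #include &lt;string.h&gt;

    #define NBRICKS 4

    int main(void)
    {
        const char *transport[NBRICKS] = { "tcp", "tcp", "rdma", "tcp" };
        int proc_of[NBRICKS], nprocs = 0;
        int i, j;

        for (i = 0; i &lt; NBRICKS; i++) {
            proc_of[i] = -1;
            for (j = 0; j &lt; i; j++) {
                if (strcmp(transport[i], transport[j]) == 0) {
                    proc_of[i] = proc_of[j]; /* multiplex into it */
                    break;
                }
            }
            if (proc_of[i] &lt; 0)
                proc_of[i] = nprocs++; /* incompatible: new process */
            printf("brick %d (%s) -&gt; process %d\n",
                   i, transport[i], proc_of[i]);
        }
        return 0;
    }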

Change-Id: I45059454e51d6f4cbb29a4953359c09a408695cb
BUG: 1385758
Signed-off-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Reviewed-on: https://review.gluster.org/14763
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
</feed>
