<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/tests/basic/multiplex.t, branch v3.10.7</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>trash: fix problem with trash feature under multiplexing</title>
<updated>2017-02-10T11:44:01+00:00</updated>
<author>
<name>Jeff Darcy</name>
<email>jdarcy@redhat.com</email>
</author>
<published>2017-02-08T15:48:55+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=5c3f113cd8acc341a62245cea60a2249034091c5'/>
<id>5c3f113cd8acc341a62245cea60a2249034091c5</id>
<content type='text'>
With multiplexing, the trash translator gets a reconfigure call before
notify(CHILD_UP).  At that point priv-&gt;trash_itable was not yet
initialized, so the reconfigure call would SEGV.  Moving the itable
allocation to init seems to fix it, so trash can be re-enabled.
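
A minimal C sketch of the ordering problem and the fix (the GlusterFS
translator types are stubbed out here; only the init-before-reconfigure
guarantee matters):

#include &lt;assert.h&gt;
#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;

/* Stand-ins for the real translator types; illustrative only. */
typedef struct { int unused; } inode_table_t;
typedef struct { inode_table_t *trash_itable; } trash_private_t;

static trash_private_t priv;

static int trash_init(void) {
    /* The fix: allocate the itable here, so it exists before any
       reconfigure call can arrive. */
    priv.trash_itable = calloc(1, sizeof(*priv.trash_itable));
    return priv.trash_itable ? 0 : -1;
}

static int trash_reconfigure(void) {
    /* Under multiplexing this runs before notify(CHILD_UP); with the
       old CHILD_UP-time allocation, trash_itable was still NULL here. */
    assert(priv.trash_itable != NULL);
    return 0;
}

int main(void) {
    if (trash_init() != 0)
        return 1;
    trash_reconfigure();  /* safe even before any CHILD_UP */
    puts("reconfigure before CHILD_UP: ok");
    return 0;
}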

Backport of:
&gt; Change-Id: I21ac2d7fc66bac1bc4ec70fbc8bae306d73ac565
&gt; BUG: 1420434
&gt; Reviewed-on: https://review.gluster.org/16567

Change-Id: I43a6de6ac5070848619c5f905f075e4a4099c1bd
BUG: 1420808
Signed-off-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16582
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Anoop C S &lt;anoopcs@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</content>
</entry>
<entry>
<title>tests: fix online_brick_count for multiplexing</title>
<updated>2017-02-08T20:03:37+00:00</updated>
<author>
<name>Jeff Darcy</name>
<email>jdarcy@redhat.com</email>
</author>
<published>2017-02-02T15:22:00+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=0b3255e0ce1d4d407467b34f7d6ad91161b43cfc'/>
<id>0b3255e0ce1d4d407467b34f7d6ad91161b43cfc</id>
<content type='text'>
The number of brick processes no longer matches the number of bricks,
so counting processes doesn't work.  Counting *pidfiles* does.
Ironically, the fix broke multiplex.t, which used this function, so that
test now uses a different function with the old process-counting
behavior.  The online_brick_count and kill_node functions in cluster.rc
also had to be fixed to be consistent with the new reality.
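
A hedged C sketch of the counting change (the real helpers are shell
functions in the test harness; the pidfile directory pattern below is
an assumption for illustration):

#include &lt;glob.h&gt;
#include &lt;stdio.h&gt;

/* Count brick pidfiles rather than brick processes.  With multiplexing
   one glusterfsd can serve many bricks, but there is still one pidfile
   per brick, so pidfiles keep tracking "online bricks" correctly. */
static int count_brick_pidfiles(const char *pattern) {
    glob_t g;
    int n = 0;
    if (glob(pattern, 0, NULL, &amp;g) == 0) {
        n = (int)g.gl_pathc;
        globfree(&amp;g);
    }
    return n;
}

int main(void) {
    /* Assumed run-directory layout, for illustration only. */
    int n = count_brick_pidfiles("/var/run/gluster/vols/*/*.pid");
    printf("online bricks (by pidfile): %d\n", n);
    return 0;
}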

Backport of:
&gt; Change-Id: I4e81a6633b93227e10604f53e18a0b802c75cbcc
&gt; BUG: 1385758
&gt; Reviewed-on: https://review.gluster.org/16527

Change-Id: I70b5cd169eafe3ad5b523bc0a30d21d864b3036a
BUG: 1418091
Signed-off-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16565
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</content>
</entry>
<entry>
<title>core: run many bricks within one glusterfsd process</title>
<updated>2017-02-02T00:54:58+00:00</updated>
<author>
<name>Jeff Darcy</name>
<email>jdarcy@redhat.com</email>
</author>
<published>2017-01-31T19:49:45+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=83803b4b2d70e9e6e16bb050d7ac8e49ba420893'/>
<id>83803b4b2d70e9e6e16bb050d7ac8e49ba420893</id>
<content type='text'>
This patch adds support for multiple brick translator stacks running in
a single brick server process.  This reduces our per-brick memory usage
by approximately 3x, and our appetite for TCP ports even more.  It also
creates the potential to avoid process/thread thrashing and to improve
QoS by scheduling more carefully across the bricks, but realizing that
potential will require further work.

Multiplexing is controlled by the "cluster.brick-multiplex" global
option.  By default it's off, and bricks are started in separate
processes as before.  If multiplexing is enabled, then *compatible*
bricks (mostly those with the same transport options) will be started in
the same process.
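
Enabling it cluster-wide looks like "gluster volume set all
cluster.brick-multiplex on".  A hedged C sketch of the placement rule
above (the real compatibility test in glusterd checks more state;
reducing it to a transport comparison is a deliberate simplification):

#include &lt;stdio.h&gt;
#include &lt;string.h&gt;

/* Simplified brick description; the real glusterd structures carry far
   more state.  This models only the rule quoted above: bricks are
   "compatible", and may share a process, mostly when their transport
   options match. */
struct brick {
    const char *volume;
    const char *transport;  /* e.g. "tcp" or "rdma" */
};

static int bricks_compatible(const struct brick *a, const struct brick *b) {
    return strcmp(a-&gt;transport, b-&gt;transport) == 0;
}

int main(void) {
    struct brick b1 = { "vol0", "tcp" };
    struct brick b2 = { "vol1", "tcp" };
    struct brick b3 = { "vol2", "rdma" };

    /* b1 and b2 could be multiplexed into one glusterfsd; b3 could not. */
    printf("b1/b2 compatible: %d\n", bricks_compatible(&amp;b1, &amp;b2));
    printf("b1/b3 compatible: %d\n", bricks_compatible(&amp;b1, &amp;b3));
    return 0;
}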

Backport of:
&gt; Change-Id: I45059454e51d6f4cbb29a4953359c09a408695cb
&gt; BUG: 1385758
&gt; Reviewed-on: https://review.gluster.org/14763

Change-Id: I4bce9080f6c93d50171823298fdf920258317ee8
BUG: 1418091
Signed-off-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16496
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</content>
</entry>
</feed>
