author     Jeff Darcy <jdarcy@redhat.com>            2016-12-08 16:24:15 -0500
committer  Vijay Bellur <vbellur@redhat.com>         2017-01-30 19:13:58 -0500
commit     1a95fc3036db51b82b6a80952f0908bc2019d24a (patch)
tree       b983ac196a8165d5cb5e860a5ef97d3e9a41b5c9 /xlators/mgmt/glusterd/src/glusterd-brick-ops.c
parent     7f7d7a939e46b330a084d974451eee4757ba61b4 (diff)
core: run many bricks within one glusterfsd process
This patch adds support for multiple brick translator stacks running
in a single brick server process. This reduces our per-brick memory usage by
approximately 3x, and our appetite for TCP ports even more. It also creates
potential to avoid process/thread thrashing, and to improve QoS by scheduling
more carefully across the bricks, but realizing that potential will require
further work.
Multiplexing is controlled by the "cluster.brick-multiplex" global option. By
default it's off, and bricks are started in separate processes as before. If
multiplexing is enabled, then *compatible* bricks (mostly those with the same
transport options) will be started in the same process.
Change-Id: I45059454e51d6f4cbb29a4953359c09a408695cb
BUG: 1385758
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: https://review.gluster.org/14763
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
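
The commit message says multiplexing is controlled by the "cluster.brick-multiplex" global option and is off by default. For reference, and assuming the standard gluster CLI syntax for cluster-wide options (the CLI side is not part of this diff), toggling it would look roughly like:

    # Enable multiplexing cluster-wide; compatible bricks then share one glusterfsd process.
    gluster volume set all cluster.brick-multiplex on

    # Restore the default of one brick process per brick.
    gluster volume set all cluster.brick-multiplex off

Bricks that are already running would typically keep their existing process layout until they are restarted, for example by a volume stop/start.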
Diffstat (limited to 'xlators/mgmt/glusterd/src/glusterd-brick-ops.c')
-rw-r--r--  xlators/mgmt/glusterd/src/glusterd-brick-ops.c  12  +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/xlators/mgmt/glusterd/src/glusterd-brick-ops.c b/xlators/mgmt/glusterd/src/glusterd-brick-ops.c
index 938663ba863..c78fbd8345c 100644
--- a/xlators/mgmt/glusterd/src/glusterd-brick-ops.c
+++ b/xlators/mgmt/glusterd/src/glusterd-brick-ops.c
@@ -2905,18 +2905,24 @@ glusterd_op_remove_brick (dict_t *dict, char **op_errstr)
                         defrag_cmd = GF_DEFRAG_CMD_START_FORCE;
                 if (cmd == GF_OP_CMD_DETACH_START)
                         defrag_cmd = GF_DEFRAG_CMD_START_DETACH_TIER;
+                /*
+                 * We need to set this *before* we issue commands to the
+                 * bricks, or else we might end up setting it after the bricks
+                 * have responded.  If we fail to send the request(s) we'll
+                 * clear it ourselves because nobody else will.
+                 */
+                volinfo->decommission_in_progress = 1;
                 ret = glusterd_handle_defrag_start
                         (volinfo, err_str, sizeof (err_str),
                          defrag_cmd,
                          glusterd_remove_brick_migrate_cbk,
                          GD_OP_REMOVE_BRICK);
-                if (!ret)
-                        volinfo->decommission_in_progress = 1;
-
                 if (ret) {
                         gf_msg (this->name, GF_LOG_ERROR, 0,
                                 GD_MSG_REBALANCE_START_FAIL,
                                 "failed to start the rebalance");
+                        /* TBD: shouldn't we do more than print a message? */
+                        volinfo->decommission_in_progress = 0;
                 }
         } else {
                 if (GLUSTERD_STATUS_STARTED == volinfo->status)
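
The new comment in the hunk explains the ordering requirement: volinfo->decommission_in_progress must be raised before the request goes out to the bricks, otherwise it could end up being set after the bricks have already responded (and after the completion path has already acted on it), leaving it stuck. A minimal, self-contained sketch of that set-before-send pattern follows; it is plain C, not GlusterFS code, and every name in it (job_state, send_request, on_job_done, start_job) is hypothetical. It assumes, as the patch comment implies, that the completion callback is what normally clears the flag.

    /* Sketch of the ordering fixed by the hunk above: raise the flag before
     * sending, and lower it in the caller only if the send itself fails. */
    #include <stdbool.h>
    #include <stdio.h>

    struct job_state {
            bool in_progress;        /* cleared by the completion callback */
    };

    /* Hypothetical completion callback, run when the remote side answers. */
    static void on_job_done (struct job_state *st)
    {
            st->in_progress = false;
    }

    /* Stand-in for issuing the request.  A real implementation would be
     * asynchronous, so on_job_done() could run at any time after this call. */
    static int send_request (struct job_state *st, void (*cb) (struct job_state *))
    {
            cb (st);                 /* simulate a very fast response */
            return 0;
    }

    static int start_job (struct job_state *st)
    {
            /* Raise the flag *before* sending.  If it were raised only after
             * a successful send, the fast callback above would already have
             * run, and the later assignment would leave it stuck at true. */
            st->in_progress = true;

            int ret = send_request (st, on_job_done);
            if (ret != 0)
                    st->in_progress = false;  /* request never left; clear it ourselves */
            return ret;
    }

    int main (void)
    {
            struct job_state st = { false };
            start_job (&st);
            printf ("in_progress = %d\n", st.in_progress);   /* 0: correctly cleared */
            return 0;
    }

With the pre-patch order (set the flag only on success, after the call returns), the simulated fast callback inside send_request() would fire first and the later assignment would leave in_progress stuck at true; that is exactly the window the added comment describes.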