author     Jeff Darcy <jdarcy@redhat.com>      2016-12-08 16:24:15 -0500
committer  Vijay Bellur <vbellur@redhat.com>   2017-01-30 19:13:58 -0500
commit     1a95fc3036db51b82b6a80952f0908bc2019d24a (patch)
tree       b983ac196a8165d5cb5e860a5ef97d3e9a41b5c9 /tests/basic/tier/new-tier-cmds.t
parent     7f7d7a939e46b330a084d974451eee4757ba61b4 (diff)
core: run many bricks within one glusterfsd process
This patch adds support for multiple brick translator stacks running in a single brick server process. This reduces our per-brick memory usage by approximately 3x, and our appetite for TCP ports even more. It also creates potential to avoid process/thread thrashing, and to improve QoS by scheduling more carefully across the bricks, but realizing that potential will require further work.

Multiplexing is controlled by the "cluster.brick-multiplex" global option. By default it's off, and bricks are started in separate processes as before. If multiplexing is enabled, then *compatible* bricks (mostly those with the same transport options) will be started in the same process.

Change-Id: I45059454e51d6f4cbb29a4953359c09a408695cb
BUG: 1385758
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: https://review.gluster.org/14763
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
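For illustration only (not part of this commit), the global option named above would normally be toggled through the CLI. A minimal sketch, assuming a running cluster and an existing volume called "patchy" (the volume name is illustrative):

    # Enable brick multiplexing cluster-wide; "all" is used because this is a
    # global option rather than a per-volume one.
    gluster volume set all cluster.brick-multiplex on

    # Bricks started after this point are placed into a shared glusterfsd
    # process where compatible; restarting a volume picks up the new behaviour.
    gluster volume stop patchy
    gluster volume start patchy

    # Turn it back off to return to one process per brick.
    gluster volume set all cluster.brick-multiplex off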
Diffstat (limited to 'tests/basic/tier/new-tier-cmds.t')
-rw-r--r--  tests/basic/tier/new-tier-cmds.t  19
1 file changed, 19 insertions(+), 0 deletions(-)
diff --git a/tests/basic/tier/new-tier-cmds.t b/tests/basic/tier/new-tier-cmds.t
index afc875710ac..af5cd791b94 100644
--- a/tests/basic/tier/new-tier-cmds.t
+++ b/tests/basic/tier/new-tier-cmds.t
@@ -19,6 +19,14 @@ function create_dist_tier_vol () {
TEST $CLI_1 volume attach-tier $V0 $H1:$B1/${V0}_h1 $H2:$B2/${V0}_h2 $H3:$B3/${V0}_h3
}
+function tier_daemon_status {
+ local _VAR=CLI_$1
+ local xpath_sel='//node[hostname="Tier Daemon"][path="localhost"]/status'
+ ${!_VAR} --xml volume status $V0 \
+ | xmllint --xpath "$xpath_sel" - \
+ | sed -n '/.*<status>\([0-9]*\).*/s//\1/p'
+}
+
cleanup;
#setup cluster and test volume
@@ -54,6 +62,17 @@ EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" tier_status_node_down
TEST $glusterd_2;
EXPECT_WITHIN $PROBE_TIMEOUT 2 check_peers;
+# Make sure we check that the *bricks* are up and not just the node. >:-(
+EXPECT_WITHIN $CHILD_UP_TIMEOUT 1 brick_up_status_1 $V0 $H2 $B2/${V0}
+EXPECT_WITHIN $CHILD_UP_TIMEOUT 1 brick_up_status_1 $V0 $H2 $B2/${V0}_h2
+
+# Parsing normal output doesn't work because of line-wrap issues on our
+# regression machines, and the version of xmllint there doesn't support --xpath
+# so we can't do it that way either. In short, there's no way for us to detect
+# when we can stop waiting, so we just have to wait the maximum time every time
+# and hope any failures will show up later in the script.
+sleep $PROCESS_UP_TIMEOUT
+#XPECT_WITHIN $PROCESS_UP_TIMEOUT 1 tier_daemon_status 2
EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" tier_detach_status
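
For reference, the tier_daemon_status helper added above combines bash indirect expansion (${!_VAR} resolves to the per-node CLI command stored in CLI_1, CLI_2, ...) with an XPath query over the XML status output. A minimal, self-contained sketch of the same xmllint/sed pipeline, run here against inline sample XML rather than real gluster output (the sample document is illustrative, and it assumes an xmllint build with --xpath support, which the comment in the second hunk notes is missing on the regression machines):

    # Sample XML standing in for `gluster --xml volume status` output (illustrative only).
    sample='<cliOutput><node><hostname>Tier Daemon</hostname><path>localhost</path><status>1</status></node></cliOutput>'

    echo "$sample" \
        | xmllint --xpath '//node[hostname="Tier Daemon"][path="localhost"]/status' - \
        | sed -n '/.*<status>\([0-9]*\).*/s//\1/p'
    # Prints "1", the value the (disabled) EXPECT_WITHIN check would compare against.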