author     Susant Palai <spalai@redhat.com>          2017-03-22 17:14:25 +0530
committer  Raghavendra G <rgowdapp@redhat.com>       2017-04-29 14:29:34 +0000
commit     d51288540241d1f7785bb17bdc0702c0879087a9 (patch)
tree       8557ef36e61d0f3ff4bfea48d78e478971c36c57 /xlators/mgmt
parent     8b2ef5076284e44a87698393c8094c925fa863fa (diff)
cluster/dht: Make rebalance throttle option tunable by number
The current rebalance throttle options (lazy/normal/aggressive) may not always be
sufficient for throttling. In a recent test we observed that, on certain setups,
the normal and aggressive modes behaved almost identically, both consuming the full
disk bandwidth. In such cases the admin should be able to tune the throttle down
(or up) depending on the need.
Along with the old throttle configurations, the thread count can now be tuned
directly by a number, e.g. gluster v set vol-name cluster.rebal-throttle 5.
The admin can tune it up or down between 0 and the number of cores available.
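To illustrate the validation rule described above, here is a minimal, self-contained
sketch: accept one of the keywords, or a positive integer no larger than the number
of online cores reported by sysconf(_SC_NPROCESSORS_ONLN). The helper name
throttle_value_is_valid is hypothetical; the actual glusterd change (which uses
gf_string2int) is in validate_defrag_throttle_option in the diff below.

#include <stdio.h>
#include <stdlib.h>
#include <strings.h>
#include <unistd.h>

/* Hypothetical standalone check, not the glusterd function: returns 1 when
 * value is lazy/normal/aggressive or a number in [1, online cores]. */
static int
throttle_value_is_valid (const char *value)
{
        long  cores = sysconf (_SC_NPROCESSORS_ONLN);
        char *end   = NULL;
        long  n     = 0;

        if (!strcasecmp (value, "lazy") || !strcasecmp (value, "normal") ||
            !strcasecmp (value, "aggressive"))
                return 1;

        n = strtol (value, &end, 10);
        if (end == value || *end != '\0')
                return 0;           /* not a keyword and not a number */

        return (n > 0 && n <= cores);
}

int
main (void)
{
        printf ("\"normal\": %d\n", throttle_value_is_valid ("normal"));
        printf ("\"5\":      %d\n", throttle_value_is_valid ("5"));
        printf ("\"9999\":   %d\n", throttle_value_is_valid ("9999"));
        return 0;
}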
Note: For heterogeneous servers, validation will fail on the older server if a
number is given for the throttle configuration.
The message looks something like this:
"volume set: failed: Staging failed on vm2. Error: cluster.rebal-throttle should be {lazy|normal|aggressive}"
Test: Tested manually by logging the active thread count after reconfiguring the throttle option.
testcase: tests/basic/distribute/throttle-rebal.t
Change-Id: I46e3cde546900307831028b344ecf601fd9b02c3
BUG: 1438370
Signed-off-by: Susant Palai <spalai@redhat.com>
Reviewed-on: https://review.gluster.org/16980
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Diffstat (limited to 'xlators/mgmt')
-rw-r--r-- | xlators/mgmt/glusterd/src/glusterd-volume-set.c | 32 |
1 file changed, 28 insertions(+), 4 deletions(-)
diff --git a/xlators/mgmt/glusterd/src/glusterd-volume-set.c b/xlators/mgmt/glusterd/src/glusterd-volume-set.c
index 08557d1bd86..728da74b7a6 100644
--- a/xlators/mgmt/glusterd/src/glusterd-volume-set.c
+++ b/xlators/mgmt/glusterd/src/glusterd-volume-set.c
@@ -546,21 +546,45 @@ static int
 validate_defrag_throttle_option (glusterd_volinfo_t *volinfo, dict_t *dict,
                                  char *key, char *value, char **op_errstr)
 {
-        char                 errstr[2048] = "";
-        int                  ret          = 0;
-        xlator_t            *this         = NULL;
+        char                 errstr[2048]    = "";
+        int                  ret             = 0;
+        xlator_t            *this            = NULL;
+        int                  thread_count    = 0;
+        long int             cores_available = 0;
 
         this = THIS;
         GF_ASSERT (this);
 
+        cores_available = sysconf(_SC_NPROCESSORS_ONLN);
+
+        /* Throttle option should be one of lazy|normal|aggressive or a number
+         * configured by user max up to the number of cores in the machine */
+
         if (!strcasecmp (value, "lazy") ||
             !strcasecmp (value, "normal") ||
             !strcasecmp (value, "aggressive")) {
                 ret = 0;
+        } else if ((gf_string2int (value, &thread_count) == 0)) {
+                if ((thread_count > 0) && (thread_count <= cores_available)) {
+                        ret = 0;
+                } else {
+                        ret = -1;
+                        snprintf (errstr, sizeof (errstr), "%s should be within"
+                                  " range of 0 and maximum number of cores "
+                                  "available (cores available - %ld)", key,
+                                  cores_available);
+
+                        gf_msg (this->name, GF_LOG_ERROR, EINVAL,
+                                GD_MSG_INVALID_ENTRY, "%s", errstr);
+
+                        *op_errstr = gf_strdup (errstr);
+                }
         } else {
                 ret = -1;
                 snprintf (errstr, sizeof (errstr), "%s should be "
-                          "{lazy|normal|aggressive}", key);
+                          "{lazy|normal|aggressive} or a number upto number of"
+                          " cores available (cores availble - %ld)", key,
+                          cores_available);
                 gf_msg (this->name, GF_LOG_ERROR, EINVAL,
                         GD_MSG_INVALID_ENTRY, "%s", errstr);
                 *op_errstr = gf_strdup (errstr);