| author | Avra Sengupta <asengupt@redhat.com> | 2015-05-14 15:00:59 +0530 |
|---|---|---|
| committer | Krishnan Parthasarathi <kparthas@redhat.com> | 2015-06-04 02:37:19 -0700 |
| commit | 402589f58cbb350dfedafa83e133664855ed37b2 (patch) | |
| tree | 0c81042fa7cfc15a636f4fc353c6f385d213e062 /cli/src/cli-cmd-volume.c | |
| parent | c2898f040937492c69a603ab3605cbd441e1e1f3 (diff) | |
glusterd/shared_storage: Provide a volume set option to create and mount the shared storage
Introducing a global volume set option (cluster.enable-shared-storage)
which helps create and set up the shared storage meta volume.

gluster volume set all cluster.enable-shared-storage enable
On enabling this option, the system analyzes the number of peers in the
cluster that are currently connected, and chooses three such peers
(including the node the command is issued from). From these peers a
volume (gluster_shared_storage) is created. Depending on the number of
peers available, the volume is either a replica 3 volume (if there are
3 connected peers) or a replica 2 volume (if there are 2 connected
peers). "/var/run/gluster/ss_brick" serves as the brick path on each
node for the shared storage volume. As part of enabling this option, we
also mount the shared storage at "/var/run/gluster/shared_storage" on
all the nodes in the cluster. If there is only one node in the cluster,
or if only one node is up, the command fails.
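The replica-count decision above is simple enough to state as code. The
following is a minimal sketch, assuming a hypothetical helper name
(shared_storage_replica_count); it is not the actual glusterd
implementation:

```c
#include <stdio.h>

/* Hypothetical helper, not glusterd code: maps the number of currently
 * connected peers (including the local node) to the replica count used
 * for gluster_shared_storage, per the rules described above. */
static int
shared_storage_replica_count (int connected_peers)
{
        if (connected_peers < 2)
                return -1;      /* only one node up: enabling must fail */
        return (connected_peers >= 3) ? 3 : 2;
}

int
main (void)
{
        int peers;

        for (peers = 1; peers <= 4; peers++) {
                int replica = shared_storage_replica_count (peers);
                if (replica < 0)
                        printf ("%d peer(s): enable fails\n", peers);
                else
                        printf ("%d peer(s): replica %d volume\n",
                                peers, replica);
        }
        return 0;
}
```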
Once the volume is created and mounted, maintenance of the volume
(adding bricks, removing bricks, etc.) is the user's responsibility.
On disabling the option, we warn the user and, on affirmation from the
user, stop the shared storage volume and unmount it from all the nodes
in the cluster.
gluster volume set all cluster.enable-shared-storage disable
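The affirmation step is an interactive yes/no prompt on the CLI. Below
is a rough, self-contained sketch of that flow, using a hypothetical
confirm() helper rather than glusterd's actual cli_cmd_get_confirmation:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical yes/no prompt; the real one in the glusterd CLI is
 * cli_cmd_get_confirmation. Returns 1 only on "y" or "yes". */
static int
confirm (const char *question)
{
        char answer[16] = {0};

        printf ("%s (y/n) ", question);
        if (!fgets (answer, sizeof (answer), stdin))
                return 0;
        answer[strcspn (answer, "\n")] = '\0';      /* strip newline */
        return !strcmp (answer, "y") || !strcmp (answer, "yes");
}

int
main (void)
{
        if (!confirm ("Disabling cluster.enable-shared-storage will stop "
                      "gluster_shared_storage and unmount it from all "
                      "nodes. Do you still want to continue?"))
                return 0;       /* user declined; do nothing */

        /* ... proceed to stop and unmount the shared storage volume ... */
        return 0;
}
```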
Change-Id: Idd92d67b93f444244f99ede9f634ef18d2945dbc
BUG: 1222013
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/10793
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Diffstat (limited to 'cli/src/cli-cmd-volume.c')
-rw-r--r-- | cli/src/cli-cmd-volume.c | 51 |
1 file changed, 42 insertions, 9 deletions
```diff
diff --git a/cli/src/cli-cmd-volume.c b/cli/src/cli-cmd-volume.c
index 3ce88394925..3793863890d 100644
--- a/cli/src/cli-cmd-volume.c
+++ b/cli/src/cli-cmd-volume.c
@@ -281,22 +281,28 @@ cli_cmd_volume_delete_cbk (struct cli_state *state, struct cli_cmd_word *word,
                 goto out;
         }
 
-        answer = cli_cmd_get_confirmation (state, question);
-
-        if (GF_ANSWER_NO == answer) {
-                ret = 0;
-                goto out;
-        }
-
         volname = (char *)words[2];
 
         ret = dict_set_str (dict, "volname", volname);
-
         if (ret) {
                 gf_log (THIS->name, GF_LOG_WARNING, "dict set failed");
                 goto out;
         }
 
+        if (!strcmp (volname, GLUSTER_SHARED_STORAGE)) {
+                question = "Deleting the shared storage volume"
+                           "(gluster_shared_storage), will affect features "
+                           "like snapshot scheduler, geo-replication "
+                           "and NFS-Ganesha. Do you still want to "
+                           "continue?";
+        }
+
+        answer = cli_cmd_get_confirmation (state, question);
+        if (GF_ANSWER_NO == answer) {
+                ret = 0;
+                goto out;
+        }
+
         CLI_LOCAL_INIT (local, words, frame, dict);
 
         if (proc->fn) {
@@ -468,6 +474,14 @@ cli_cmd_volume_stop_cbk (struct cli_state *state, struct cli_cmd_word *word,
                 goto out;
         }
 
+        if (!strcmp (volname, GLUSTER_SHARED_STORAGE)) {
+                question = "Stopping the shared storage volume"
+                           "(gluster_shared_storage), will affect features "
+                           "like snapshot scheduler, geo-replication "
+                           "and NFS-Ganesha. Do you still want to "
+                           "continue?";
+        }
+
         if (wordcount == 4) {
                 if (!strcmp("force", words[3])) {
                         flags |= GF_CLI_FLAG_OP_FORCE;
@@ -478,6 +492,7 @@ cli_cmd_volume_stop_cbk (struct cli_state *state, struct cli_cmd_word *word,
                         goto out;
                 }
         }
+
         ret = dict_set_int32 (dict, "flags", flags);
         if (ret) {
                 gf_log (THIS->name, GF_LOG_ERROR,
@@ -727,7 +742,8 @@ cli_cmd_volume_set_cbk (struct cli_state *state, struct cli_cmd_word *word,
         if (!frame)
                 goto out;
 
-        ret = cli_cmd_volume_set_parse (words, wordcount, &options, &op_errstr);
+        ret = cli_cmd_volume_set_parse (state, words, wordcount,
+                                        &options, &op_errstr);
         if (ret) {
                 if (op_errstr) {
                         cli_err ("%s", op_errstr);
@@ -1607,6 +1623,7 @@ cli_cmd_volume_remove_brick_cbk (struct cli_state *state,
         int                     parse_error = 0;
         int                     need_question = 0;
         cli_local_t             *local = NULL;
+        char                    *volname = NULL;
 
         const char *question = "Removing brick(s) can result in data loss. "
                                "Do you want to Continue?";
@@ -1623,6 +1640,22 @@ cli_cmd_volume_remove_brick_cbk (struct cli_state *state,
                 goto out;
         }
 
+        ret = dict_get_str (options, "volname", &volname);
+        if (ret || !volname) {
+                gf_log ("cli", GF_LOG_ERROR, "Failed to fetch volname");
+                ret = -1;
+                goto out;
+        }
+
+        if (!strcmp (volname, GLUSTER_SHARED_STORAGE)) {
+                question = "Removing brick from the shared storage volume"
+                           "(gluster_shared_storage), will affect features "
+                           "like snapshot scheduler, geo-replication "
+                           "and NFS-Ganesha. Do you still want to "
+                           "continue?";
+                need_question = _gf_true;
+        }
+
         if (!(state->mode & GLUSTER_MODE_SCRIPT) && need_question) {
                 /* we need to ask question only in case of 'commit or force' */
                 answer = cli_cmd_get_confirmation (state, question);
```
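All three callbacks in the diff follow the same pattern: if the target
volume is the shared storage volume, swap in a feature-specific warning
before asking for confirmation. Below is a stripped-down sketch of that
guard, made self-contained for illustration (GLUSTER_SHARED_STORAGE is
redefined locally here; in the tree it comes from the glusterfs
headers):

```c
#include <stdio.h>
#include <string.h>

/* Redefined locally so the sketch compiles standalone; the real macro
 * lives in the glusterfs headers. */
#define GLUSTER_SHARED_STORAGE "gluster_shared_storage"

/* Pick the confirmation question for a destructive volume operation:
 * a generic warning by default, a stronger one for shared storage. */
static const char *
question_for (const char *volname)
{
        if (!strcmp (volname, GLUSTER_SHARED_STORAGE))
                return "This operation on the shared storage volume "
                       "(gluster_shared_storage) will affect features like "
                       "snapshot scheduler, geo-replication and NFS-Ganesha. "
                       "Do you still want to continue?";
        return "This operation can result in data loss. "
               "Do you want to continue?";
}

int
main (void)
{
        printf ("%s\n", question_for (GLUSTER_SHARED_STORAGE));
        printf ("%s\n", question_for ("testvol"));
        return 0;
}
```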