author     Aravinda VK <avishwan@redhat.com>   2016-05-31 13:39:05 +0530
committer  Jeff Darcy <jdarcy@redhat.com>      2016-07-12 09:07:29 -0700
commit     19adaad015a8e13206f656eaee135881a2da58eb (patch)
tree       cbb86821ae58b28d915596617edd47900e7c8477
parent     d94bf608b16b82f2c8f8588a96459cb746773b32 (diff)
extras/cliutils: Utils for creating CLI tools for Gluster
Refer README.md for documentation.

BUG: 1342356
Change-Id: Ic88504177137136bbb4b8b2c304ecc4af9bcfe30
Signed-off-by: Aravinda VK <avishwan@redhat.com>
Reviewed-on: http://review.gluster.org/14627
Reviewed-by: Prashanth Pai <ppai@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
-rw-r--r--  configure.ac                   1
-rw-r--r--  extras/Makefile.am             2
-rw-r--r--  extras/cliutils/Makefile.am    4
-rw-r--r--  extras/cliutils/README.md    233
-rw-r--r--  extras/cliutils/__init__.py   29
-rw-r--r--  extras/cliutils/cliutils.py  212
-rw-r--r--  glusterfs.spec.in              7
7 files changed, 487 insertions, 1 deletions
diff --git a/configure.ac b/configure.ac
index 8433840bce0..70c53b08a8c 100644
--- a/configure.ac
+++ b/configure.ac
@@ -212,6 +212,7 @@ AC_CONFIG_FILES([Makefile
doc/Makefile
extras/Makefile
extras/glusterd.vol
+ extras/cliutils/Makefile
extras/init.d/Makefile
extras/init.d/glusterd.plist
extras/init.d/glusterd-Debian
diff --git a/extras/Makefile.am b/extras/Makefile.am
index 91c25c65fe6..091d7a9df36 100644
--- a/extras/Makefile.am
+++ b/extras/Makefile.am
@@ -5,7 +5,7 @@ EditorModedir = $(docdir)
EditorMode_DATA = glusterfs-mode.el glusterfs.vim
SUBDIRS = init.d systemd benchmarking hook-scripts $(OCF_SUBDIR) LinuxRPM \
- $(GEOREP_EXTRAS_SUBDIR) ganesha snap_scheduler firewalld
+ $(GEOREP_EXTRAS_SUBDIR) ganesha snap_scheduler firewalld cliutils
confdir = $(sysconfdir)/glusterfs
conf_DATA = glusterfs-logrotate gluster-rsyslog-7.2.conf gluster-rsyslog-5.8.conf \
diff --git a/extras/cliutils/Makefile.am b/extras/cliutils/Makefile.am
new file mode 100644
index 00000000000..7039703e275
--- /dev/null
+++ b/extras/cliutils/Makefile.am
@@ -0,0 +1,4 @@
+EXTRA_DIST= cliutils.py __init__.py
+
+cliutilsdir = @BUILD_PYTHON_SITE_PACKAGES@/gluster/cliutils
+cliutils_PYTHON = cliutils.py __init__.py
diff --git a/extras/cliutils/README.md b/extras/cliutils/README.md
new file mode 100644
index 00000000000..ccb60802c3d
--- /dev/null
+++ b/extras/cliutils/README.md
@@ -0,0 +1,233 @@
+# cliutils: Utilities for creating cluster-aware CLI tools for Gluster
+cliutils is a Python library that provides a wrapper around the `gluster
+system:: execute` command to extend the functionality of Gluster.
+
+Example use cases:
+- Start a service on all peer nodes of the cluster
+- Collect the status of a service from all peer nodes
+- Collect the config values from each peer node and display the latest
+  config based on version.
+- Copy a file present in GLUSTERD_WORKDIR from one peer node to all
+  other peer nodes. (Geo-replication `create push-pem` uses this to
+  distribute the SSH public keys from all master nodes to all slave
+  nodes.)
+- Generate pem keys on all peer nodes and collect all the public keys
+  in one place. (Geo-replication `gsec_create` does this.)
+- Provide config-sync CLIs for new features like `gluster-eventsapi`,
+  `gluster-restapi`, `gluster-mountbroker` etc.
+
+## Introduction
+
+If an executable file whose name starts with `peer_` is present in the
+`$GLUSTER_LIBEXEC` directory on all peer nodes, it can be executed from
+any one peer node by running the `gluster system:: execute` command.
+
+- This command does not copy any executables to peer nodes; the script
+  must already exist on all peer nodes to use this infrastructure. An
+  error is raised if the script is missing on any peer node.
+- The filename should start with `peer_` and the file should exist in
+  the `$GLUSTER_LIBEXEC` directory.
+- This command cannot be called from outside the cluster.
+
+To understand the functionality, create an executable file `peer_hello`
+under the `$GLUSTER_LIBEXEC` directory and copy it to all peer nodes.
+
+ #!/usr/bin/env bash
+ echo "Hello from $(gluster system:: uuid get)"
+
+Now run the following command from any one gluster node,
+
+ gluster system:: execute hello
+
+**Note:** Gluster will not copy the executable script to all nodes; copy
+  the `peer_hello` script to all peer nodes yourself before using the
+  `gluster system:: execute` infrastructure.
+
+This runs the `peer_hello` executable on all peer nodes and shows the
+output from each node (the example below shows the output from a
+two-node cluster):
+
+ Hello from UUID: e7a3c5c8-e7ad-47ad-aa9c-c13907c4da84
+ Hello from UUID: c680fc0a-01f9-4c93-a062-df91cc02e40f
+
+## cliutils
+A Python wrapper around the `gluster system:: execute` command was
+created to address the following issues:
+
+- If a node in the cluster is down, `system:: execute` just skips it
+  and runs only on the nodes that are up.
+- `system:: execute` commands are not user friendly.
+- It captures only stdout, so handling errors is tricky.
+
+**Advantages of cliutils:**
+
+- A single executable file acts both as the node component and as the
+  user-facing CLI.
+- The `execute_in_peers` utility function merges the `gluster system::
+  execute` output with `gluster peer status` to identify offline nodes.
+- Easy CLI argument handling.
+- If a node component returns a non-zero value, `gluster system::
+  execute` fails to aggregate the output from the other nodes. The
+  `node_output_ok` and `node_output_notok` utility functions exit with
+  zero for both success and error, but print JSON with `ok: true` or
+  `ok: false` respectively (see the sample after this list).
+- Easy to iterate over the node outputs.
+- Better error handling: the Geo-replication CLIs `gluster system::
+  execute mountbroker`, `gluster system:: execute gsec_create` and
+  `gluster system:: add_secret_pub` suffer from poor error handling;
+  these tools do not notify the user about failures during execution or
+  about nodes that are down during execution.
+
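+For illustration, each node component prints a single line of JSON which
+`gluster system:: execute` aggregates. A sample is shown below (field
+names as emitted by `node_output_ok`/`node_output_notok` in
+`cliutils.py`; the UUIDs are reused from the hello example above and the
+error message is illustrative):
+
+    {"ok": true, "nodeid": "e7a3c5c8-e7ad-47ad-aa9c-c13907c4da84", "output": "Hello"}
+    {"ok": false, "nodeid": "c680fc0a-01f9-4c93-a062-df91cc02e40f", "error": "service is not running"}
+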
+### Hello World
+Create a file at `$LIBEXEC/glusterfs/peer_message.py` with the following
+content.
+
+ #!/usr/bin/env python
+ from gluster.cliutils import Cmd, runcli, execute_in_peers, node_output_ok
+
+ class NodeHello(Cmd):
+ name = "node-hello"
+
+ def run(self, args):
+ node_output_ok("Hello")
+
+ class Hello(Cmd):
+ name = "hello"
+
+ def run(self, args):
+ out = execute_in_peers("node-hello")
+ for row in out:
+ print ("{0} from {1}".format(row.output, row.hostname))
+
+ if __name__ == "__main__":
+ runcli()
+
+When we run `python peer_message.py`, it provides two subcommands,
+"node-hello" and "hello". This file should be copied to the
+`$LIBEXEC/glusterfs` directory on all peer nodes. The user calls the
+subcommand "hello" from any one peer node, which internally calls
+`gluster system:: execute message.py node-hello` (this runs on all peer
+nodes and collects the outputs).
+
+In the node component, do not print the output directly; use the
+`node_output_ok` or `node_output_notok` functions. `node_output_ok`
+additionally collects the node UUID and prints the result in JSON
+format. The `execute_in_peers` function collects this output and merges
+it with the peers list, so node information is not lost when a node is
+offline.
+
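+A minimal sketch of handling the merged output, assuming the same
+`peer_message.py` example (the attribute names come from the
+`NodeOutput` class in `cliutils.py`, and `execute_in_peers` raises
+`GlusterCmdException` if the `gluster` command itself fails):
+
+    from gluster.cliutils import (execute_in_peers, output_error,
+                                  GlusterCmdException)
+
+    try:
+        out = execute_in_peers("node-hello")
+    except GlusterCmdException as err:
+        output_error("Failed to execute in peers: {0}".format(err))
+
+    for row in out:
+        if not row.node_up:
+            print ("{0} is down".format(row.hostname))
+        elif not row.ok:
+            print ("{0} failed: {1}".format(row.hostname, row.error))
+        else:
+            print ("{0} from {1}".format(row.output, row.hostname))
+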
+As you may have noticed, the `args` method is optional; if a subcommand
+takes no arguments there is no need to define it (a sketch that uses
+`args` follows the command-name example below). When we run the file, we
+have two subcommands. For example,
+
+ python peer_message.py hello
+ python peer_message.py node-hello
+
+The first subcommand calls the second subcommand on all peer nodes.
+Basically, `execute_in_peers(NAME, ARGS)` is converted into
+
+    CMD_NAME = FILENAME without "peer_"
+    gluster system:: execute <CMD_NAME> <SUBCOMMAND> <ARGS>
+
+In our example,
+
+ filename = "peer_message.py"
+ cmd_name = "message.py"
+ gluster system:: execute ${cmd_name} node-hello
+
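+If a subcommand needs arguments, add an optional `args(self, parser)`
+method; `runcli()` passes the argparse sub-parser for that subcommand to
+it (see `cliutils.py`). A minimal sketch, using hypothetical
+`greet`/`node-greet` subcommands in the same `peer_message.py` (remember
+that node subcommands support only positional arguments):
+
+    class NodeGreet(Cmd):
+        name = "node-greet"
+
+        def args(self, parser):
+            # Parsed again on each peer node when gluster runs
+            # "message.py node-greet <username>"
+            parser.add_argument("username")
+
+        def run(self, args):
+            node_output_ok("Hello {0}".format(args.username))
+
+    class Greet(Cmd):
+        name = "greet"
+
+        def args(self, parser):
+            parser.add_argument("username")
+
+        def run(self, args):
+            # Arguments are forwarded to the node subcommand as strings
+            out = execute_in_peers("node-greet", [args.username])
+            for row in out:
+                print ("{0} from {1}".format(row.output, row.hostname))
+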
+Now create a symlink in the `/usr/bin` or `/usr/sbin` directory,
+depending on the use case (an optional step, for usability).
+
+ ln -s /usr/libexec/glusterfs/peer_message.py /usr/bin/gluster-message
+
+Users can now run `gluster-message` instead of calling
+`/usr/libexec/glusterfs/peer_message.py`
+
+ gluster-message hello
+
+### Showing CLI output as Table
+
+The following example uses the prettytable library, which can be
+installed using `pip install prettytable` or `dnf install python-prettytable`.
+
+ #!/usr/bin/env python
+ from prettytable import PrettyTable
+ from gluster.cliutils import Cmd, runcli, execute_in_peers, node_output_ok
+
+ class NodeHello(Cmd):
+ name = "node-hello"
+
+ def run(self, args):
+ node_output_ok("Hello")
+
+ class Hello(Cmd):
+ name = "hello"
+
+ def run(self, args):
+ out = execute_in_peers("node-hello")
+ # Initialize the CLI table
+ table = PrettyTable(["ID", "NODE", "NODE STATUS", "MESSAGE"])
+ table.align["NODE STATUS"] = "r"
+ for row in out:
+ table.add_row([row.nodeid,
+ row.hostname,
+ "UP" if row.node_up else "DOWN",
+ row.output if row.ok else row.error])
+
+            print(table)
+
+ if __name__ == "__main__":
+ runcli()
+
+
+Example output,
+
+ +--------------------------------------+-----------+-------------+---------+
+ | ID | NODE | NODE STATUS | MESSAGE |
+ +--------------------------------------+-----------+-------------+---------+
+ | e7a3c5c8-e7ad-47ad-aa9c-c13907c4da84 | localhost | UP | Hello |
+ | bb57a4c4-86eb-4af5-865d-932148c2759b | vm2 | UP | Hello |
+ | f69b918f-1ffa-4fe5-b554-ee10f051294e | vm3 | DOWN | N/A |
+ +--------------------------------------+-----------+-------------+---------+
+
+## How to package in Gluster
+If the project is created in `$GLUSTER_SRC/tools/message`:
+
+Add "message" to the SUBDIRS list in `$GLUSTER_SRC/tools/Makefile.am`,
+
+and then create a `Makefile.am` in the `$GLUSTER_SRC/tools/message`
+directory with the following content.
+
+ EXTRA_DIST = peer_message.py
+
+ peertoolsdir = $(libexecdir)/glusterfs/
+ peertools_SCRIPTS = peer_message.py
+
+ install-exec-hook:
+ $(mkdir_p) $(DESTDIR)$(bindir)
+ rm -f $(DESTDIR)$(bindir)/gluster-message
+ ln -s $(libexecdir)/glusterfs/peer_message.py \
+ $(DESTDIR)$(bindir)/gluster-message
+
+ uninstall-hook:
+ rm -f $(DESTDIR)$(bindir)/gluster-message
+
+That's all. If packaging is required, add the following files to
+`glusterfs.spec.in` (under the `%files` section).
+
+ %{_libexecdir}/glusterfs/peer_message.py*
+ %{_bindir}/gluster-message
+
+## Who is using cliutils
+- gluster-mountbroker http://review.gluster.org/14544
+- gluster-eventsapi http://review.gluster.org/14248
+- gluster-georep-sshkey http://review.gluster.org/14732
+- gluster-restapi https://github.com/aravindavk/glusterfs-restapi
+
+## Limitations/TODOs
+- It is not yet possible to create a CLI without any subcommand, for
+  example `gluster-message` without any arguments.
+- Node subcommands are not hidden from `--help` (`gluster-message
+  --help` shows all subcommands, including node subcommands).
+- Only positional arguments are supported for node subcommands;
+  optional arguments can be used for other commands.
+- API documentation
diff --git a/extras/cliutils/__init__.py b/extras/cliutils/__init__.py
new file mode 100644
index 00000000000..4bb8395bb46
--- /dev/null
+++ b/extras/cliutils/__init__.py
@@ -0,0 +1,29 @@
+# -*- coding: utf-8 -*-
+# Reexporting the utility funcs and classes
+from cliutils import (runcli,
+ sync_file_to_peers,
+ execute_in_peers,
+ execute,
+ node_output_ok,
+ node_output_notok,
+ output_error,
+ oknotok,
+ yesno,
+ get_node_uuid,
+ Cmd,
+ GlusterCmdException)
+
+
+# This will be useful when `from cliutils import *`
+__all__ = ["runcli",
+ "sync_file_to_peers",
+ "execute_in_peers",
+ "execute",
+ "node_output_ok",
+ "node_output_notok",
+ "output_error",
+ "oknotok",
+ "yesno",
+ "get_node_uuid",
+ "Cmd",
+ "GlusterCmdException"]
diff --git a/extras/cliutils/cliutils.py b/extras/cliutils/cliutils.py
new file mode 100644
index 00000000000..4e035d7ff5c
--- /dev/null
+++ b/extras/cliutils/cliutils.py
@@ -0,0 +1,212 @@
+# -*- coding: utf-8 -*-
+from __future__ import print_function
+from argparse import ArgumentParser, RawDescriptionHelpFormatter
+import inspect
+import subprocess
+import os
+import xml.etree.cElementTree as etree
+import json
+import sys
+
+MY_UUID = None
+parser = ArgumentParser(formatter_class=RawDescriptionHelpFormatter,
+ description=__doc__)
+subparsers = parser.add_subparsers(dest="mode")
+
+subcommands = {}
+cache_data = {}
+ParseError = etree.ParseError if hasattr(etree, 'ParseError') else SyntaxError
+
+
+class GlusterCmdException(Exception):
+ pass
+
+
+def get_node_uuid():
+    # Caches the node UUID in a global variable; the
+    # "gluster system:: uuid get" command is executed only
+    # the first time this function is called
+ global MY_UUID
+ if MY_UUID is not None:
+ return MY_UUID
+
+ cmd = ["gluster", "system::", "uuid", "get", "--xml"]
+ rc, out, err = execute(cmd)
+
+ if rc != 0:
+ return None
+
+ tree = etree.fromstring(out)
+ uuid_el = tree.find("uuidGenerate/uuid")
+ MY_UUID = uuid_el.text
+ return MY_UUID
+
+
+def yesno(flag):
+ return "Yes" if flag else "No"
+
+
+def oknotok(flag):
+ return "OK" if flag else "NOT OK"
+
+
+def output_error(message):
+ print (message, file=sys.stderr)
+ sys.exit(1)
+
+
+def node_output_ok(message=""):
+ # Prints Success JSON output and exits with returncode zero
+ out = {"ok": True, "nodeid": get_node_uuid(), "output": message}
+ print (json.dumps(out))
+ sys.exit(0)
+
+
+def node_output_notok(message):
+ # Prints Error JSON output and exits with returncode zero
+ out = {"ok": False, "nodeid": get_node_uuid(), "error": message}
+ print (json.dumps(out))
+ sys.exit(0)
+
+
+def execute(cmd):
+ p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+ out, err = p.communicate()
+ return p.returncode, out, err
+
+
+def get_pool_list():
+ cmd = ["gluster", "--mode=script", "pool", "list", "--xml"]
+ rc, out, err = execute(cmd)
+ if rc != 0:
+ output_error("Failed to get Pool Info: {0}".format(err))
+
+ tree = etree.fromstring(out)
+
+ pool = []
+ try:
+ for p in tree.findall('peerStatus/peer'):
+ pool.append({"nodeid": p.find("uuid").text,
+ "hostname": p.find("hostname").text,
+ "connected": (True if p.find("connected").text == "1"
+ else False)})
+ except (ParseError, AttributeError, ValueError) as e:
+ output_error("Failed to parse Pool Info: {0}".format(e))
+
+ return pool
+
+
+class NodeOutput(object):
+ def __init__(self, **kwargs):
+ self.nodeid = kwargs.get("nodeid", "")
+ self.hostname = kwargs.get("hostname", "")
+ self.node_up = kwargs.get("node_up", False)
+ self.ok = kwargs.get("ok", False)
+ self.output = kwargs.get("output", "N/A")
+ self.error = kwargs.get("error", "N/A")
+
+
+def execute_in_peers(name, args=[]):
+    # Get the file name of the caller. If the file name is peer_example.py
+    # then the Gluster peer command will be "gluster system:: execute
+    # example.py", i.e. the command name is the file name without peer_
+ frame = inspect.stack()[1]
+ module = inspect.getmodule(frame[0])
+ actual_file = module.__file__
+ # If file is symlink then find actual file
+ if os.path.islink(actual_file):
+ actual_file = os.readlink(actual_file)
+
+ # Get the name of file without peer_
+ cmd_name = os.path.basename(actual_file).replace("peer_", "")
+ cmd = ["gluster", "system::", "execute", cmd_name, name] + args
+ rc, out, err = execute(cmd)
+ if rc != 0:
+ raise GlusterCmdException((rc, out, err, " ".join(cmd)))
+
+ out = out.strip().splitlines()
+
+ # JSON decode each line and construct one object with node id as key
+ all_nodes_data = {}
+ for node_data in out:
+ data = json.loads(node_data)
+ all_nodes_data[data["nodeid"]] = {
+ "nodeid": data.get("nodeid"),
+ "ok": data.get("ok"),
+ "output": data.get("output", ""),
+ "error": data.get("error", "")}
+
+ # gluster pool list
+ pool_list = get_pool_list()
+
+ data_out = []
+ # Iterate pool_list and merge all_nodes_data collected above
+ # If a peer node is down then set node_up = False
+ for p in pool_list:
+ p_data = all_nodes_data.get(p.get("nodeid"), None)
+ row_data = NodeOutput(node_up=False,
+ hostname=p.get("hostname"),
+ nodeid=p.get("nodeid"),
+ ok=False)
+
+ if p_data is not None:
+ # Node is UP
+ row_data.node_up = True
+ row_data.ok = p_data.get("ok")
+ row_data.output = p_data.get("output")
+ row_data.error = p_data.get("error")
+
+ data_out.append(row_data)
+
+ return data_out
+
+
+def sync_file_to_peers(fname):
+ # Copy file from current node to all peer nodes, fname
+ # is path after GLUSTERD_WORKDIR
+ cmd = ["gluster", "system::", "copy", "file", fname]
+ rc, out, err = execute(cmd)
+ if rc != 0:
+ raise GlusterCmdException((rc, out, err))
+
+
+class Cmd(object):
+ name = ""
+
+ def run(self, args):
+        # Required method. Raises NotImplementedError if the derived class
+        # has not implemented this method
+ raise NotImplementedError("\"run(self, args)\" method is "
+ "not implemented by \"{0}\"".format(
+ self.__class__.__name__))
+
+
+def runcli():
+ # Get list of Classes derived from class "Cmd" and create
+ # a subcommand as specified in the Class name. Call the args
+ # method by passing subcommand parser, Derived class can add
+ # arguments to the subcommand parser.
+ for c in Cmd.__subclasses__():
+ cls = c()
+ if getattr(cls, "name", "") == "":
+ raise NotImplementedError("\"name\" is not added "
+ "to \"{0}\"".format(
+ cls.__class__.__name__))
+
+ p = subparsers.add_parser(cls.name)
+ args_func = getattr(cls, "args", None)
+ if args_func is not None:
+ args_func(p)
+
+ # A dict to save subcommands, key is name of the subcommand
+ subcommands[cls.name] = cls
+
+ # Get all parsed arguments
+ args = parser.parse_args()
+
+ # Get the subcommand to execute
+ cls = subcommands.get(args.mode, None)
+
+ # Run
+ if cls is not None:
+ cls.run(args)
diff --git a/glusterfs.spec.in b/glusterfs.spec.in
index cb04f431783..34e0ba95e2d 100644
--- a/glusterfs.spec.in
+++ b/glusterfs.spec.in
@@ -1046,6 +1046,7 @@ exit 0
# introducing glusterfs module in site packages.
# so that all other gluster submodules can reside in the same namespace.
%{python_sitelib}/gluster/__init__.*
+%{python_sitelib}/gluster/cliutils
%if ( 0%{!?_without_rdma:1} )
%files rdma
@@ -1186,6 +1187,9 @@ exit 0
# Extra utility script
%{_datadir}/glusterfs/scripts/stop-all-gluster-processes.sh
+# CLI utils
+%{_libexecdir}/glusterfs/cliutils
+
# Incrementalapi
%{_libexecdir}/glusterfs/glusterfind
%{_bindir}/glusterfind
@@ -1199,6 +1203,9 @@ exit 0
%{_sbindir}/gf_recon
%changelog
+* Mon Jul 11 2016 Aravinda VK <avishwan@redhat.com>
+- Added Python subpackage "cliutils" under gluster
+
* Tue May 31 2016 Kaleb S. KEITHLEY <kkeithle@redhat.com>
- broken brp-python-bytecompile in RHEL7 results in installed
but unpackaged files.