path: root/glfs-operations.h
* gluster-block: add delete rpc (Prasanna Kumar Kalever, 2017-01-30, 1 file, -1/+1)

    Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
* gluster-block: listen on unix and inet (Prasanna Kumar Kalever, 2017-01-30, 1 file, -2/+4)
    From now on we basically have two RPC connections:

    1. Between the gluster-block CLI and the local gluster-blockd.
       This is a UNIX/local netid connection, listening on the
       /var/run/gluster-blockd.socket file. The CLI always sends/receives
       commands to/from the local gluster-blockd via this local RPC.

    2. Between gluster-blockd's, i.e. the local (to the CLI) gluster-blockd
       and the gluster-blockd's running on the remote block hosts.
       This is the TCP connection; the RPC requests listen on 24006.

    Also, from now on gluster-blockd is multi-threaded (two threads as of
    now). Let's consider a Create request to understand what each thread
    does.

    Thread 1 (the CLI thread):
    * Listens on the local RPC.
    * Generates the GBID (a UUID) and creates an entry named GBID in the
      given volume with the requested size.
    * Sends the configuration requests to the remote hosts and waits for
      the replies (hint: after this point, read Thread 2 and come back).
    * Returns to the CLI.

    Thread 2 (the server thread):
    * Listens on 24006.
    * On receiving an event, reads the request structure.
    * Executes the required "targetcli bla bla bla" command locally.
    * Fills the command exit code and the output into the RPC reply
      structure and sends the reply.

    Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
* gluster-block: Improve error logging (Prasanna Kumar Kalever, 2016-12-24, 1 file, -5/+5)

    This also includes a few other cosmetic changes.

    Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
* gluster-block: Initial Commit (Prasanna Kumar Kalever, 2016-12-23, 1 file, +38)
    gluster block storage CLI.

    As of now, gluster-block is capable of creating tcmu-based gluster
    block devices across multiple nodes. All you need is a gluster volume
    (on one set of nodes) and tcmu-runner
    (https://github.com/open-iscsi/tcmu-runner) running on the same set of
    nodes (as gluster) or a different one. From another (or the same) node
    where gluster-block is installed, you can create iSCSI-based gluster
    block devices.

    What it can do?
    ---------------
    1. Create a file (named by a UUID) in the gluster volume.
    2. Create the iSCSI LUN and export the target via tcmu-runner on
       multiple nodes (--block-host IP1,IP2 ...).
    3. List the available LUNs across multiple nodes.
    4. Get info about a LUN across multiple nodes.
    5. Delete a given LUN across all given nodes.

    $ gluster-block --help
    gluster-block (Version 0.1)
     -c, --create      <name>          Create the gluster block
     -v, --volume      <vol>           gluster volume name
     -h, --host        <gluster-node>  node addr from gluster pool
     -s, --size        <size>          block storage size in KiB|MiB|GiB|TiB..
     -l, --list                        List available gluster blocks
     -i, --info        <name>          Details about gluster block
     -m, --modify      <RESIZE|AUTH>   Modify the metadata
     -d, --delete      <name>          Delete the gluster block
    [-b, --block-host <IP1,IP2,IP3...>] block servers, clubbed with any option

    Typically gluster-block, the gluster volume, and tcmu-runner can
    coexist on a single node (or set of nodes), or can be split across
    different sets of nodes.

    Install:
    --------
    $ make -j install
    (hopefully that should set you up.)

    Points to remember:
    -------------------
    1. Set up the gluster volume.
    2. Run the tcmu-runner service.

    Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>