| Commit message | Author | Age | Files | Lines |
Remove all generated files and have them generated when needed. This
builds a libgbrpcxdr.la archive from the .o files, which gets linked
into the libgbrpc.la archive. 'rpcgen' generates .c code that triggers
warnings with various compilers. This is not something that can easily
be fixed, so add rpc-pragmas.h (like GlusterFS does) to prevent these
warnings.
There are some functions used by gluster-blockd.c that are not part of
the generated header and were manually added to block.h. Because
block.h gets regenerated now, these functions have been moved to a new
file, block_svc.h.
Note that generated and compiled files land in $(top_builddir). This
directory does not need to be the same as $(top_srcdir).
Change-Id: I0e764d159d6d785699537eed4e24b16883218038
Fixes: #2
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Until now we had a simple Makefile for checking dependencies and
building. Using libtool gives more control over dependency checks and
more flexibility.
This patch also introduces an RPM build target.
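For readers new to autotools, the libtool-based setup described above boils down to a configure.ac along the following lines. This is a minimal hypothetical sketch, not the project's actual configure.ac, which carries many more checks (rpcgen, systemd, rpm packaging, and so on):

```
# Minimal hypothetical configure.ac sketch (not the real one).
AC_INIT([gluster-block], [0.1])
AM_INIT_AUTOMAKE([foreign])
LT_INIT                      # enable libtool, so .la archives can be built
AC_PROG_CC
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
```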
Compiling:
$ ./autogen.sh
$ ./configure
$ make -j
$ make install
Building RPMS:
$ make rpms
Running:
$ systemctl start gluster-blockd.service
Using CLI:
$ gluster-block help
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
This patch deprecates the SSH way of communicating between server
nodes/pods.
Reason: SSH is hard to accept in the container world (Kubernetes).
Another option, kubeExec, seems a bit awkward; to have a uniform way
of communication in both the container and non-container worlds, we
prefer RPC.
From now on we communicate via RPC, using the static port 24009.
Hence, we have two components:
server component -> gluster-blockd (daemon)
client component -> gluster-block (cli)
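For readers unfamiliar with ONC RPC: the wire protocol between the two components is defined in an rpcgen interface (.x) file from which client stubs and server skeletons are generated. A hypothetical, simplified fragment might look like the following; the struct, procedure names, and program number here are placeholders, not the ones in the real block.x:

```
/* Hypothetical, simplified rpcgen (.x) fragment -- the real block.x
 * in gluster-block defines different structs and program numbers. */
struct blk_create_req {
    string volume<255>;     /* gluster volume backing the block device */
    string block_name<255>; /* name of the block file on the volume */
    unsigned hyper size;    /* requested size in bytes */
};

program GLUSTER_BLOCK_PROG {
    version GLUSTER_BLOCK_VERS {
        int BLOCK_CREATE(blk_create_req) = 1;
    } = 1;
} = 0x20000001;             /* placeholder program number */
```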
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
gluster block storage CLI.
As of now, gluster-block is capable of creating tcmu-based gluster
block devices across multiple nodes.
All you need is a gluster volume (on one set of nodes) and tcmu-runner
(https://github.com/open-iscsi/tcmu-runner) running on the same set of
nodes as gluster or on a different one.
From another (or the same) node where gluster-block is installed, you
can create iSCSI-based gluster block devices.
What can it do?
---------------
1. create a file (named with a UUID) in the gluster volume.
2. create the iSCSI LUN and export the target via tcmu-runner on
multiple nodes (--block-host IP1,IP2 ...)
3. list the available LUNs across multiple nodes.
4. get info about a LUN across multiple nodes.
5. delete a given LUN across all given nodes.
$ gluster-block --help
gluster-block (Version 0.1)
-c, --create <name> Create the gluster block
-v, --volume <vol> gluster volume name
-h, --host <gluster-node> node addr from gluster pool
-s, --size <size> block storage size in KiB|MiB|GiB|TiB..
-l, --list List available gluster blocks
-i, --info <name> Details about gluster block
-m, --modify <RESIZE|AUTH> Modify the metadata
-d, --delete <name> Delete the gluster block
[-b, --block-host <IP1,IP2,IP3...>] block servers, clubbed with any option
Typically gluster-block, the gluster volume, and tcmu-runner can
coexist on a single node or set of nodes, or can be split across
different sets of nodes.
Install:
-------
$ make -j install (that should be all you need.)
Points to remember:
------------------
1. set up a gluster volume
2. run the tcmu-runner service
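Based on the help output above, a typical session might look like the following. This is an illustrative transcript only; the IP addresses and the volume/block names are made up:

```
# Create a 1GiB block device backed by volume 'sample-vol',
# exported from two block hosts:
$ gluster-block -c sample-block -v sample-vol -h 192.168.1.11 \
                -s 1GiB -b 192.168.1.11,192.168.1.12

# List the blocks and inspect the new one across the same hosts:
$ gluster-block -l -b 192.168.1.11,192.168.1.12
$ gluster-block -i sample-block -b 192.168.1.11,192.168.1.12
```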
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>