This patch also introduces transaction locking and starts maintaining
per-block metadata journaling.
Every request follows a transaction: at the start of any transaction
we take a blocking lock on the "/block-meta/meta.lock" file, and at the end
we unlock it.
While the transaction is in progress we journal the series of
operations being performed, which can be used later for reference and
for rolling back.
A sample journal file looks like:
$ cat /mnt/block-meta/LUN1
GBID: xyz-abc
SIZE : 5GiB
HA: 3
ENTRYCREATE: INPROGRESS
ENTRYCREATE: SUCCESS
NODE1: INPROGRESS
NODE2: INPROGRESS
NODE3: INPROGRESS
NODE2: SUCCESS
NODE3: FAIL
NODE1: SUCCESS
NODE4: INPROGRESS
NODE4: SUCCESS
NODE3: CLEANUPSUCCESS
<EOF>
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
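To make the flow concrete, below is a minimal sketch of the transaction idea in C,
assuming the volume is FUSE-mounted under /mnt and using plain POSIX fcntl() locking
and O_APPEND writes; the file names follow the sample above, but the real
gluster-block code paths and APIs may differ.
/* Minimal sketch of the per-block transaction idea: take a blocking
 * lock on meta.lock, append journal entries for each step, unlock.
 * Assumes the volume is FUSE-mounted at /mnt; the entry names follow
 * the sample journal above. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int journal(const char *path, const char *entry)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0)
        return -1;
    if (write(fd, entry, strlen(entry)) < 0) {
        close(fd);
        return -1;
    }
    close(fd);
    return 0;
}

int main(void)
{
    struct flock lk = { .l_type = F_WRLCK, .l_whence = SEEK_SET };
    int lock_fd = open("/mnt/block-meta/meta.lock", O_RDWR | O_CREAT, 0644);
    if (lock_fd < 0)
        return 1;

    /* start of the transaction: blocking lock on the meta.lock file */
    fcntl(lock_fd, F_SETLKW, &lk);

    /* journal each step of the transaction as it progresses */
    journal("/mnt/block-meta/LUN1", "ENTRYCREATE: INPROGRESS\n");
    /* ... create the entry in the volume, configure the nodes ... */
    journal("/mnt/block-meta/LUN1", "ENTRYCREATE: SUCCESS\n");

    /* end of the transaction: unlock */
    lk.l_type = F_UNLCK;
    fcntl(lock_fd, F_SETLKW, &lk);
    close(lock_fd);
    return 0;
}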
From now on we basically have 2 RPC connections:
1. Between the gluster-block CLI and the local gluster-blockd.
This is a UNIX/local netid connection, listening on the
/var/run/gluster-blockd.socket file.
The CLI always sends/receives commands to/from the local
gluster-blockd via this local RPC.
2. Between gluster-blockd's, i.e. the gluster-blockd local to the CLI and the
gluster-blockd's running on the remote hosts (blockhost).
This is a TCP connection; the RPC requests are listened for on port 24006.
Also, from now on gluster-blockd is multi-threaded (as of now, 2 threads).
Let's consider the Create request to understand what each thread handles.
Thread 1 (THE CLI THREAD)
* Listens on the local RPC
* Generates the GBID (UUID) and creates the entry named GBID in the
given volume with the requested size.
* Sends the configuration requests to the remote hosts and
waits for the replies
(Hint: after this point read Thread 2 and come back)
* Returns to the CLI.
Thread 2 (THE SERVER THREAD)
* Listens on port 24006
* On receiving an event, reads the request structure.
* Executes the required "targetcli ..." command locally
* Fills the command exit code and the output in the RPC reply structure
and sends the reply
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
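Below is a rough illustrative sketch of that two-thread layout in C, using plain
sockets and pthreads; the socket path and port come from the message above, while
the function names and accept loops are just an assumed shape, and the real daemon
runs its RPC programs on top of these transports.
/* Illustrative sketch of the two-thread layout: one thread serving the
 * local CLI over a UNIX socket, one serving remote gluster-blockd peers
 * over TCP port 24006.  Only the listener structure is shown. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <pthread.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static void *cli_thread(void *arg)            /* THE CLI THREAD */
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);

    strncpy(addr.sun_path, "/var/run/gluster-blockd.socket",
            sizeof(addr.sun_path) - 1);
    unlink(addr.sun_path);
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(fd, 10);
    for (;;) {
        int c = accept(fd, NULL, NULL);
        /* read the CLI request, drive create/list/info/delete, reply */
        close(c);
    }
    return NULL;
}

static void *server_thread(void *arg)         /* THE SERVER THREAD */
{
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port = htons(24006),
                                .sin_addr.s_addr = htonl(INADDR_ANY) };
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    bind(fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(fd, 10);
    for (;;) {
        int c = accept(fd, NULL, NULL);
        /* decode the request, run the targetcli command locally, send
         * the exit code and output back in the reply */
        close(c);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, cli_thread, NULL);
    pthread_create(&t2, NULL, server_thread, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}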
starting gluster-blockd:
$ make install
$ systemctl daemon-reload
$ systemctl start gluster-blockd.service
checking status:
$ systemctl status gluster-blockd.service
● gluster-blockd.service - Gluster block storage utility
Loaded: loaded (gluster-blockd.service; disabled; vendor preset: disabled)
Active: active (running) since Mon 01-16 17:53:23 IST; 3min 42s ago
Main PID: 27552 (gluster-blockd)
Tasks: 1 (limit: 512)
CGroup: /system.slice/gluster-blockd.service
└─27552 /usr/local/sbin/gluster-blockd
Jan 16 17:53:23 local systemd[1]: Started Gluster block storage utility.
gluster-blockd.service in turn brings up the below services, in that order:
1. rpcbind.service
2. target.service
3. tcmu-runner.service
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
This patch deprecates the ssh way of communicating between server nodes/pods.
Reason: the ssh way is hard to get accepted in the container world (Kube).
Another option, the kubeExec way, seems a bit awkward; to have a uniform
way of communication in the container and non-container worlds, we prefer RPC.
From now on we communicate via RPC, using a static port 24009.
Hence, we have two components:
server component -> gluster-blockd (daemon)
client component -> gluster-block (CLI)
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
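For illustration only, a small C sketch of the client side of this change: dialing
the remote gluster-blockd on the static port instead of spawning ssh. The helper
name and the test address are hypothetical, and the actual RPC exchange over the
socket is not shown.
/* Sketch of the client-side transport after this change: connect to the
 * remote gluster-blockd on its static port (24009 in this commit) rather
 * than running commands over ssh. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

static int dial_blockd(const char *host)
{
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port = htons(24009) };
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (inet_pton(AF_INET, host, &addr.sin_addr) != 1)
        return -1;
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }
    return fd;   /* the caller runs the RPC request/reply over this fd */
}

int main(void)
{
    int fd = dial_blockd("192.0.2.10");   /* hypothetical block host */
    if (fd >= 0)
        close(fd);
    return 0;
}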
1. create backstores with a unique wwn across the multipath nodes
(see the sketch below)
2. remove the node identity from the iqn naming to keep it unique across the
multipath nodes
3. save the target configuration after deleting the backstores and LUNs
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
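A minimal sketch of points 1 and 2, assuming libuuid is used to produce the
node-independent identity; the IQN prefix shown is illustrative, not necessarily
the project's exact naming.
/* Generate a node-independent identity for a backstore/target: a fresh
 * UUID (unique across the multipath nodes) used both as the wwn and in
 * the IQN, with no hostname baked into the name.  The IQN prefix below
 * is illustrative only.  Build with -luuid. */
#include <stdio.h>
#include <uuid/uuid.h>

int main(void)
{
    uuid_t uu;
    char wwn[37];          /* 36 characters + NUL */
    char iqn[128];

    uuid_generate(uu);
    uuid_unparse(uu, wwn);
    snprintf(iqn, sizeof(iqn),
             "iqn.2016-12.org.example.gluster-block:%s", wwn);

    printf("wwn: %s\niqn: %s\n", wwn, iqn);
    return 0;
}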
This also includes a few other cosmetic changes.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
gluster block storage CLI.
As of now, gluster-block is capable of creating tcmu-based gluster block
devices across multiple nodes.
All you need is a gluster volume (on one set of nodes) and tcmu-runner
(https://github.com/open-iscsi/tcmu-runner) running on the same (as gluster)
or a different set of nodes.
From another (or the same) node where gluster-block is installed you
can create iSCSI-based gluster block devices.
What can it do?
---------------
1. create a file (named with a uuid) in the gluster volume.
2. create the iSCSI LUN and export the target via tcmu-runner on
multiple nodes (--block-host IP1,IP2 ...)
3. list the available LUNs across multiple nodes.
4. get info about a LUN across multiple nodes.
5. delete a given LUN across all the given nodes.
$ gluster-block --help
gluster-block (Version 0.1)
-c, --create <name> Create the gluster block
-v, --volume <vol> gluster volume name
-h, --host <gluster-node> node addr from gluster pool
-s, --size <size> block storage size in KiB|MiB|GiB|TiB..
-l, --list List available gluster blocks
-i, --info <name> Details about gluster block
-m, --modify <RESIZE|AUTH> Modify the metadata
-d, --delete <name> Delete the gluster block
[-b, --block-host <IP1,IP2,IP3...>] block servers, clubbed with any option
Typically gluster-block, the gluster volume and tcmu-runner can coexist on a
single node/set of nodes, or can be split across different sets of nodes.
Install:
-------
$ make -j install (hopefully that should do it for you.)
Points to remember:
------------------
1. setup a gluster volume
2. run the tcmu-runner service
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
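As a rough illustration of the CLI surface above, here is a minimal getopt_long()
sketch that matches the listed options; it is not the project's actual parser,
just an assumed shape derived from the help text.
/* Illustrative option parsing matching the help text above; the real
 * gluster-block parser may differ. */
#include <getopt.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    static struct option opts[] = {
        { "create",     required_argument, 0, 'c' },
        { "volume",     required_argument, 0, 'v' },
        { "host",       required_argument, 0, 'h' },
        { "size",       required_argument, 0, 's' },
        { "list",       no_argument,       0, 'l' },
        { "info",       required_argument, 0, 'i' },
        { "modify",     required_argument, 0, 'm' },
        { "delete",     required_argument, 0, 'd' },
        { "block-host", required_argument, 0, 'b' },
        { 0, 0, 0, 0 }
    };
    int opt;

    while ((opt = getopt_long(argc, argv, "c:v:h:s:li:m:d:b:",
                              opts, NULL)) != -1) {
        switch (opt) {
        case 'c': printf("create %s\n", optarg);        break;
        case 'v': printf("volume %s\n", optarg);        break;
        case 'h': printf("host %s\n", optarg);          break;
        case 's': printf("size %s\n", optarg);          break;
        case 'l': printf("list\n");                     break;
        case 'i': printf("info %s\n", optarg);          break;
        case 'm': printf("modify %s\n", optarg);        break;
        case 'd': printf("delete %s\n", optarg);        break;
        case 'b': printf("block-host(s) %s\n", optarg); break;
        default:  return 1;
        }
    }
    return 0;
}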