| author | Ashish Pandey <aspandey@redhat.com> | 2019-01-30 14:24:14 +0530 |
|---|---|---|
| committer | Shyamsundar Ranganathan <srangana@redhat.com> | 2019-03-13 01:48:26 +0000 |
| commit | 3bcf210a5a9b922ea78b31ef0de8eaf97ff4dcb4 (patch) | |
| tree | 85c2d03d35b61254603001eab4cc495004179f1b /extras/thin-arbiter/gluster-ta-volume.service | |
| parent | bda2feeaf2917996c59c0c2188bfa1a17d91895f (diff) | |
rpm: add thin-arbiter package (tag: v6.0rc1)
Discussion on thin arbiter volume -
https://github.com/gluster/glusterfs/issues/352#issuecomment-350981148
The main idea of this RPM package is to deploy thin-arbiter
without glusterd and the other commands on a node; all that is
needed on the tie-breaker node is to run a single glusterfs
command. Also note that no other glusterfs installation needs
thin-arbiter.so.
Make sure the RPM contains a sample volfile that works by default,
and a script to configure that volfile, along with the translator image.
Change-Id: Ibace758373d8a991b6a19b2ecc60c93b2f8fc489
updates: bz#1672818
Signed-off-by: Amar Tumballi <amarts@redhat.com>
Signed-off-by: Ashish Pandey <aspandey@redhat.com>
(cherry picked from commit ca9bef7f1538beb570fcb190ff94f86f0b8ba38a)
Diffstat (limited to 'extras/thin-arbiter/gluster-ta-volume.service')
| -rw-r--r-- | extras/thin-arbiter/gluster-ta-volume.service | 13 |
|---|---|---|

1 file changed, 0 insertions(+), 13 deletions(-)
```diff
diff --git a/extras/thin-arbiter/gluster-ta-volume.service b/extras/thin-arbiter/gluster-ta-volume.service
deleted file mode 100644
index 19be1757555..00000000000
--- a/extras/thin-arbiter/gluster-ta-volume.service
+++ /dev/null
@@ -1,13 +0,0 @@
-[Unit]
-Description = Thin-arbiter process to maintain quorum for replica volume
-After = network.target
-
-[Service]
-Environment = "LOG_LEVEL=WARNING"
-ExecStart = /usr/local/sbin/glusterfsd -N --volfile-id ta-vol -f /var/lib/glusterd/thin-arbiter/thin-arbiter.vol --brick-port 24007 --xlator-option ta-vol-server.transport.socket.listen-port=24007
-Restart = always
-KillMode=process
-SuccessExitStatus=15
-
-[Install]
-WantedBy = multi-user.target
```
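The deleted unit file above spells out the "single glusterfs command" the commit message refers to. A minimal sketch of what the tie-breaker node would run manually (the binary path, volfile path, and `ta-vol` volfile ID are taken from the unit's `ExecStart` line; installed locations may differ between distributions and builds):

```shell
# Run the thin-arbiter process in the foreground (-N), serving the
# sample volfile and listening on port 24007, as in the unit file:
/usr/local/sbin/glusterfsd -N \
    --volfile-id ta-vol \
    -f /var/lib/glusterd/thin-arbiter/thin-arbiter.vol \
    --brick-port 24007 \
    --xlator-option ta-vol-server.transport.socket.listen-port=24007
```

With the systemd unit installed (as the RPM now arranges), the equivalent managed form would be `systemctl enable --now gluster-ta-volume.service`.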