The peer list and the peerinfo objects are now protected using RCU.
Design patterns described in Paul McKenney's RCU dissertation [1]
(sections 5 and 6) have been used to convert existing non-RCU protected
code to RCU protected code.
Currently, we only target guaranteeing the existence of the peerinfo
objects, i.e., we only protect against deletion, not against all
updates. We chose this because protecting all updates is a much more
complex task.
The steps used to accomplish this are as follows (a simplified sketch
of the resulting pattern appears after the list):
1. Remove all long-lived direct references to peerinfo objects (apart
from the peerinfo list). This includes references in glusterd_peerctx_t
(RPC), glusterd_friend_sm_event_t (friend state machine) and others.
This way, nothing holds a reference to a deleted peerinfo object.
2. Replace the direct references with indirect references, i.e., use
the peer uuid and peer hostname as indirect references to the peerinfo
object. Any reader or updater now uses the indirect references to get to
the actual peerinfo object, using glusterd_peerinfo_find. Cases where a
peerinfo cannot be found are handled gracefully.
3. Readers get and use a peerinfo object only within an RCU read-side
critical section. This prevents the object from being deleted/freed
while in actual use.
4. The deletion of a peerinfo object is done in an ordered manner
(glusterd_peerinfo_destroy). The object is first removed from the
peerinfo list using an atomic list remove, but the list head is not
reset, to allow existing list readers to complete correctly. We wait for
readers to complete before resetting the list head. This removes the
object from the list completely. After this, no new readers can get a
reference to the object, and it can be freed.
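In liburcu terms, the read and delete sides of this pattern look
roughly like the sketch below. This is a simplified illustration with
stand-in struct and function names, not the actual glusterd code, and
it assumes each thread has called rcu_register_thread() beforehand:

    #include <stdlib.h>
    #include <string.h>
    #include <urcu.h>           /* rcu_read_lock(), call_rcu() */
    #include <urcu/compiler.h>  /* caa_container_of() */
    #include <urcu/rculist.h>   /* cds_list_*_rcu helpers */

    struct peerinfo {
        char uuid[37];              /* indirect reference used by readers */
        struct cds_list_head list;  /* links into the global peer list */
        struct rcu_head rcu;        /* lets call_rcu() defer the free */
    };

    static CDS_LIST_HEAD(peers);    /* the RCU-protected peer list */

    /* Reader: use a peerinfo only inside a read-side critical section. */
    static int peer_is_known(const char *uuid)
    {
        struct peerinfo *p;
        int found = 0;

        rcu_read_lock();            /* no peerinfo can be freed past here */
        cds_list_for_each_entry_rcu(p, &peers, list) {
            if (strcmp(p->uuid, uuid) == 0) {
                found = 1;
                break;
            }
        }
        rcu_read_unlock();
        return found;
    }

    static void peer_free(struct rcu_head *head)
    {
        free(caa_container_of(head, struct peerinfo, rcu));
    }

    /* Deleter: unlink first, free only after a grace period. */
    static void peer_destroy(struct peerinfo *p)
    {
        cds_list_del_rcu(&p->list);   /* new readers can no longer find it */
        call_rcu(&p->rcu, peer_free); /* existing readers drain, then free */
    }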
This change was developed on the git branch at [2]. This commit is a
combination of the following commits on the development branch.
d7999b9 Protect the glusterd_conf_t->peers_list with RCU.
0da85c4 Synchronize before INITing peerinfo list head after removing
from list.
32ec28a Add missing rcu_read_unlock
8fed0b8 Correctly exit read critical section once peer is found.
63db857 Free peerctx only on rpc destruction
56eff26 Cleanup style issues
e5f38b0 Indirection for events and friend_sm
3c84ac4 In __glusterd_probe_cbk goto unlock only if peer already
exists
141d855 Address review comments on 9695/1
aaeefed Protection during peer updates
6eda33d Revert "Synchronize before INITing peerinfo list head after
removing from list."
f69db96 Remove unneeded line
b43d2ec Address review comments on 9695/4
7781921 Address review comments on 9695/5
eb6467b Add some missing semi-colons
328a47f Remove synchronize_rcu from
glusterd_friend_sm_transition_state
186e429 Run part of glusterd_friend_remove in critical section
55c0a2e Fix gluster (peer status/ pool list) with no peers
93f8dcf Use call_rcu to free peerinfo
c36178c Introduce composite struct, gd_rcu_head
[1]: http://www.rdrop.com/~paulmck/RCU/RCUdissertation.2004.07.14e1.pdf
[2]: https://github.com/kshlm/glusterfs/tree/urcu
Change-Id: Ic1480e59c86d41d25a6a3d159aa3e11fbb3cbc7b
BUG: 1191030
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/9695
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
This patch adds a new option, "no-verify", to the geo-rep create
command. With the no-verify option, the following checks are skipped
before session creation:
* whether SSH port 22 is open on the slave
* whether passwordless SSH login is set up properly
* whether the slave volume is created and empty
* whether the slave has enough memory
This option is needed by ovirt-engine, since the tasks done by push-pem
are taken care of by ovirt-engine, which also performs the above
checks. Thus creation of passwordless SSH can be avoided when
geo-replication is managed through oVirt.
Usage: volume geo-replication [<VOLNAME>] [<SLAVE-URL>]
{ create [[no-verify]|[push-pem]] [force]|
start [force]|stop [force]|pause [force]|
resume [force]|config|status [detail]|
delete } [options...]
Change-Id: I975265f27d6434be5409438257d09cd4190c9159
BUG: 1198615
Signed-off-by: darshan n <dnarayan@redhat.com>
Reviewed-on: http://review.gluster.org/9799
Reviewed-by: Aravinda VK <avishwan@redhat.com>
Reviewed-by: Kotresh HR <khiremat@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
This is required to determine if the gluster-nfs daemon needs to be
restarted. With http://review.gluster.org/9835, the gluster-nfs volfile
wouldn't be created if all volumes had nfs disabled before they were
started even once. With the existing code, we wouldn't be able to
determine if gluster-nfs needs to be restarted or reconfigured without
the gluster-nfs volfile. This fix ensures that we generate the
gluster-nfs volfile even if the daemon wouldn't be started, to honour
the above requirement.
Change-Id: I86c6707870d838b03dd4d14b91b984cb43c33006
BUG: 1199944
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/9851
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Kaushal M <kaushal@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Append a GMT timestamp to the snapshot name by default.
If the no-timestamp flag is given during snapshot creation,
the timestamp is not appended to the snapshot name.
The initial consumer of this feature is Samba's Shadow Copy
feature, which allows Windows users to get previous
revisions of a file. For this feature to work, snapshot
names under the .snaps folder (USS) should have a timestamp
appended in the following format:
@GMT-YYYY.MM.DD-hh.mm.ss
PS: https://www.samba.org/samba/docs/man/manpages/vfs_shadow_copy2.8.html
This format is configurable through the Samba conf file. Due to a
limitation in Windows directory access, the exact format
cannot be used by USS. Therefore we have modified the
format to:
_GMT-YYYY.MM.DD-hh.mm.ss
The snapshot scheduling feature also requires a timestamp
appended to the snapshot name, so the timestamp is appended
during snapshot creation itself instead of making the change
in the snapview server.
More info:
https://www.mail-archive.com/gluster-users@gluster.org/msg18895.html
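For illustration, a suffix in this format can be produced with
strftime() as in the standalone sketch below (not the actual glusterd
snapshot code):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        char suffix[64], snapname[128];
        time_t now = time(NULL);
        struct tm tm_gmt;

        gmtime_r(&now, &tm_gmt);    /* GMT, as Samba expects */
        strftime(suffix, sizeof(suffix), "_GMT-%Y.%m.%d-%H.%M.%S", &tm_gmt);
        snprintf(snapname, sizeof(snapname), "%s%s", "snap1", suffix);
        printf("%s\n", snapname);   /* e.g. snap1_GMT-2015.03.10-12.30.45 */
        return 0;
    }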
Change-Id: Idac24670948cf4c0fbe916ea6690e49cbc832d07
BUG: 1189473
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/9597
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
- Include xattrop64-watchlist for the index xlator for disperse volumes.
- Change the existing functions to also consider disperse volumes
when sending commands to disperse xlators in the self-heal daemon.
Change-Id: Iae75a5d3dd5642454a2ebf5840feba35780d8adb
BUG: 1177601
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/9793
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
This patch introduces the changes required in the ec xlator to handle
index/full heal.
Index healer threads:
The ec xlator starts an index healer thread per local brick. This
thread wakes up every minute to check whether there are any files to be
healed, based on the indices kept in the index directory. The index
healer thread also wakes up whenever a child_up event arrives, crawls
the indices, and triggers heals. When the self-heal daemon is disabled
on the volume, the healer thread waits until it is enabled again before
performing heals.
Full healer threads:
The ec xlator starts a full healer thread for the local subvolume
provided by glusterd, which performs a full crawl of the directory
hierarchy to carry out heals. Once the crawl completes, the thread
exits if no more full heals have been issued.
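The index healer loop can be pictured roughly as below. This is a
generic pthread sketch with made-up names (heal_index(), shd_enabled),
not the actual ec xlator code:

    #include <pthread.h>
    #include <time.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    /* signalled on child_up and when the shd is re-enabled */
    static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
    static int shd_enabled = 1;

    static void heal_index(void) { /* crawl indices, trigger heals */ }

    static void *index_healer(void *arg)
    {
        struct timespec ts;

        for (;;) {
            pthread_mutex_lock(&lock);
            /* wake up every minute, or earlier on a child_up signal */
            clock_gettime(CLOCK_REALTIME, &ts);
            ts.tv_sec += 60;
            pthread_cond_timedwait(&cond, &lock, &ts);
            while (!shd_enabled)    /* disabled: wait until enabled again */
                pthread_cond_wait(&cond, &lock);
            pthread_mutex_unlock(&lock);
            heal_index();
        }
        return NULL;
    }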
Changed xl-op prefix GF_AFR_OP to GF_SHD_OP to make it more generic.
Change-Id: Idf9b2735d779a6253717be064173dfde6f8f824b
BUG: 1177601
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/9787
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: Ic4da2a467a95af7108ed67954f44341131b41c7b
BUG: 1199944
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/9835
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Also deleted the default values for disperse-self-heal-daemon and locks.trace.
Change-Id: Icc927d176aa10f06b40c114aa296b02dbad3a8ff
BUG: 1187858
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/9516
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Provide a way of disabling reads when quorum is not met.
Change-Id: Ic4f57c2b87a0b8514600759de3a7a47e217fe3b5
BUG: 1187885
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/9543
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Change-Id: I66edc1b98b70a494e069df95a6f347634c8f862d
BUG: 1198076
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/9791
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
This patch replaces usage of the libglusterfs list data structures and
API in glusterd with the list data structures and API from liburcu. The
liburcu data structures and APIs are a drop-in replacement for
libglusterfs lists.
All usages have been changed to keep the code consistent and free from
confusion.
NOTE: glusterd_conf_t->xprt_list still uses the libglusterfs data
structures and API, as it holds rpc_transport_t objects, which are not a
part of glusterd and are not being changed in this patch.
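The conversion is largely mechanical, since the liburcu list API
mirrors the libglusterfs one name-for-name. A sketch with a made-up
entry type:

    #include <urcu/list.h>      /* liburcu doubly-linked list */

    struct entry {
        int id;
        struct cds_list_head list;  /* was: struct list_head */
    };

    static CDS_LIST_HEAD(entries);  /* replaces a list_head set up
                                       with INIT_LIST_HEAD */

    static void example(struct entry *e)
    {
        struct entry *pos;

        cds_list_add_tail(&e->list, &entries);  /* was: list_add_tail */
        cds_list_for_each_entry(pos, &entries, list) {
            /* was: list_for_each_entry */
        }
        cds_list_del(&e->list);                 /* was: list_del */
    }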
This change was developed on the git branch at [1]. This commit is a
combination of the following commits on the development branch.
6dac576 Replace libglusterfs lists with liburcu lists
a51b5ab Fix compilation issues
d98a06f Fix merge issues
a5d918e Remove merge remnant
1cca113 More style cleanup
1917be3 Address review comments on 9624/1
8d10f13 Use cds_lists for glusterd_svc_t
524ad5d Add rculist header in glusterd-conn-helper.c
646f294 glusterd: add list_add_order API honouring rcu
[1]: https://github.com/kshlm/glusterfs/tree/urcu
Change-Id: Ic613c5b6e496a677b9d3de15fc042a0492109fb0
BUG: 1191030
Signed-off-by: Kaushal M <kaushal@redhat.com>
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/9624
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
|
This commit does the following:
1. Adds several new functions for generation of brick xlator units
in volgen. Each such function takes care of generating only
one xlator in volgen.
2. A new table, server_graph_table, links all individual graph generation
functions together. The order of the xlator generator functions in the
table determines the topology of the brick graph (see the sketch after
this list).
3. server_graph_builder() invokes individual graph generators by walking
through server_graph_table. Addition of debug xlators into the brick
graph is also handled by this walk. As a result, a lot of cruft that
is present in the existing implementation of this function gets
cleaned up.
4. get_server_xlator() now makes use of server_graph_table to determine
whether an xlator key corresponds to a server xlator or not.
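Conceptually, the table-driven builder looks like the sketch below
(illustrative types and names only; the real generators in
glusterd-volgen.c take volgen-specific arguments):

    #include <stddef.h>

    /* hypothetical generator: adds one xlator to the brick graph */
    typedef int (*xlator_gen_t)(void *graph, void *volinfo);

    static int add_posix(void *g, void *v)    { return 0; }
    static int add_locks(void *g, void *v)    { return 0; }
    static int add_io_stats(void *g, void *v) { return 0; }

    /* order in the table == topology of the generated brick graph */
    static xlator_gen_t server_graph_table[] = {
        add_posix,
        add_locks,
        add_io_stats,
    };

    static int server_graph_builder(void *graph, void *volinfo)
    {
        size_t i;

        for (i = 0; i < sizeof(server_graph_table) /
                        sizeof(server_graph_table[0]); i++) {
            int ret = server_graph_table[i](graph, volinfo);
            if (ret)
                return ret;
        }
        return 0;
    }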
Change-Id: I46bb6e331544150302eb5b33c4007917aff2586d
BUG: 1188196
Signed-off-by: Vijay Bellur <vbellur@redhat.com>
Reviewed-on: http://review.gluster.org/9751
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
In gd_restore_snap_volume(), irrespective of the ret value
we were doing a list_add_tail on the new_volinfo variable.
In some cases this variable might be NULL, and hence
cause a crash.
Change-Id: I158c5ae655ea27c9a91be3cb9d95a80a3dd05559
BUG: 1194538
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/9719
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
This patch adds liburcu related checks to the build system and updates
the spec file to require 'userspace-rcu'.
liburcu >= 0.7 is required to build GlusterFS, but 0.8 and above is
preferred. For cases when liburcu 0.7.x is the available version, some
function definitions (currently just one) from liburcu-0.8.6 have been
made available in /contrib/userspace-rcu/.
This change was developed on the git branch at [1]. This commit is a
combination of the following commits on the development branch.
a5cd6bd Add userspace-rcu checks to configure.ac
fe5ced3 Add URCU libs to glusterd libtool flags
1e43302 Add local definition of cds_list_add_tail_rcu for
liburcu-0.7
98da755 Move local definition of cds_list_add_tail_rcu into contrib
8c44dfd Update spec file to include userspace-rcu
0466e33 Rename rculist-additional.h to rculist-extra.h
947c7b3 Add rculist-extra.h to dist
19f32ad Address review comments 9605/1
[1]: https://github.com/kshlm/glusterfs/tree/urcu
Change-Id: Ifbb617d0dacce8fa01214f894badb9d8cdcaf56f
BUG: 1191030
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/9605
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
This patch ports nfs, shd, quotad & snapd with the approach suggested in
http://www.gluster.org/pipermail/gluster-devel/2014-December/043180.html
Change-Id: I4ea5b38793f87fc85cc9d2cf873727351dedffd2
BUG: 1191486
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/9428
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
For tcp,rdma type volumes, there will be two ports, one for tcp
and one for rdma, but the volume status command displays only the tcp
port. This change adds an extra column for the rdma port and renames
the existing port column to tcp port.
Eg:
>gluster volume status patchy
>For tcp,rdma type volume
Status of volume: patchy
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick brickname 49152 49153 Y 14158
>For rdma type volume
Status of volume: patchy
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick brickname 0 49153 Y 14158
>For tcp type volume
Status of volume: patchy
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick brickname 49152 0 Y 14158
>gluster volume status patchy detail
Status of volume: xcube2
------------------------------------------------------------------------------
Brick : Brick brickname
TCP Port : 49152
RDMA Port : 49153
Online : Y
Pid : 14158
File System : ext4
Device :
/dev/mapper/luks-2099dd4a-0050-4cae-ad7b-c6a0498c4e88
Mount Options : rw,seclabel,relatime,data=ordered
Inode Size : 256
Disk Space Free : 31.1GB
Total Disk Space : 47.9GB
Inode Count : 3203072
Free Inodes : 2926789
>gluster volume status xcube --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
<opRet>0</opRet>
<opErrno>0</opErrno>
<opErrstr>(null)</opErrstr>
<volStatus>
<volumes>
<volume>
<volName>xcube</volName>
<nodeCount>2</nodeCount>
<node>
<hostname>hostname</hostname>
<path>/home/brick1</path>
<peerid>2d7bcb95-3d26-4d4f-b3c6-e2ee01b71662</peerid>
<status>1</status>
<port>49152</port>
<ports>
<tcp>49152</tcp>
<rdma>N/A</rdma>
</ports>
<pid>5657</pid>
</node>
<node>
<hostname>NFS Server</hostname>
<path>localhost</path>
<peerid>2d7bcb95-3d26-4d4f-b3c6-e2ee01b71662</peerid>
<status>1</status>
<port>2049</port>
<ports>
<tcp>2049</tcp>
<rdma>N/A</rdma>
</ports>
<pid>5665</pid>
</node>
<tasks/>
</volume>
</volumes>
</volStatus>
</cliOutput>
Change-Id: I81aab226edbd400d29cd3f510af4f344dd99ba51
BUG: 1164079
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/9191
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Change-Id: Ifdde4f8256fa463472959078f53d033e5c55207e
BUG: 1139682
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/9665
Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
The glusterd process, when trying to initialize the default volfile,
always throws an error if there is no rdma device. Change the
log levels and log messages to be more appropriate.
Change-Id: I75b919581c6738446dd2d5bddb7b7658a91efcf4
BUG: 1188232
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/9559
Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
Replace brick:
If geo-replication was configured on a volume, replace brick
used to fail. This patch allows replace brick to go through
if all geo-rep sessions corresponding to that volume are stopped.
Remove brick:
There was no check for geo-replication for remove brick. Enforce
'remove brick commit' to fail if a geo-rep session corresponding
to the volume is running. Allow 'remove brick commit' only if all of
the geo-rep sessions corresponding to that volume are stopped.
Code is re-organized for better readability.
Change-Id: I02282c2764d8b81e319489c977847e6e437511a4
BUG: 1179638
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/9402
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Aravinda VK <avishwan@redhat.com>
Reviewed-by: ajeet jha <ajha@redhat.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Add the ability to configure the number of event threads
for various gluster services.
Currently, with the multi-threaded epoll patch, it is possible
to have more than one thread waiting on socket activity and
processing it. This thread count is currently static, which this
commit makes dynamic.
The current services which use the IO path, i.e., brick processes and
any client process (nfs, FUSE, gfapi, heal, rebalance, etc.), gain two
settable parameters to control the number of threads that process
events. These settings are:
- client.event-threads <n>
- server.event-threads <n>
The client setting affects the client graph consumers, and the
server setting affects the brick processes. These are processed
and inited/reconfigured using the client/server protocol xlators.
Other services (say glusterd) would need to extend similar
configuration settings to take advantage of multi-threaded event
processing.
At present, this commit does not enable multi-threaded event processing
for glusterd, as glusterd does not stand to gain from it (as I
understand it).
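The settings are applied like any other option, e.g. 'gluster volume
set <VOLNAME> client.event-threads 4'. The underlying idea -- several
threads pulling events off one epoll instance -- can be sketched
generically (plain Linux epoll, not the libglusterfs event layer):

    #include <pthread.h>
    #include <sys/epoll.h>
    #include <unistd.h>

    static int epfd;    /* one epoll instance shared by all threads */

    static void *event_worker(void *arg)
    {
        struct epoll_event ev;

        for (;;) {
            /* several threads block here; the kernel hands each ready
             * event to one of them, so events are handled in parallel */
            if (epoll_wait(epfd, &ev, 1, -1) == 1) {
                /* dispatch: read/write the ready socket, etc. */
            }
        }
        return NULL;
    }

    int main(void)
    {
        int nthreads = 4;   /* the count event-threads would control */
        pthread_t tid;

        epfd = epoll_create1(0);
        for (int i = 0; i < nthreads; i++)
            pthread_create(&tid, NULL, event_worker, NULL);
        pause();
        return 0;
    }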
Change-Id: Id8422fc57a9f95a135158eb6477ccf9d3c9ea4d9
BUG: 1104462
Signed-off-by: Shyam <srangana@redhat.com>
Reviewed-on: http://review.gluster.org/9488
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
A new column, "SLAVE USER", is introduced in the status output; the
slave user is not "root" in a non-root geo-replication setup.
Added an additional tag, <slave_user>, in the XML output.
BUG: 1180459
Change-Id: Ia48a5a8eb892ce883b9ec114be7bb2d46eff8535
Signed-off-by: Aravinda VK <avishwan@redhat.com>
Reviewed-on: http://review.gluster.org/9409
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kotresh HR <khiremat@redhat.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Tested-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Vijay Bellur <vbellur@redhat.com>
In n-way replication, where n >= 3, fail snapshot creation
even if one brick is down.
Also check for glusterd quorum, irrespective of the force option.
Modified testcase tests/bugs/snapshot/bug-1090042.t because
it tested the successful creation of a snapshot with the force
command.
Change-Id: I72666f8f1484bd1766b9d6799c20766e4547f6c5
BUG: 1184344
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/9470
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
* Bring in an option to disable memory accounting for a glusterfs
process. This reverses the changes done by the commit
7fba3a88f1ced610eca0c23516a1e720d75160cd.
* Change the key from "memory-accounting" to "no-memory-accounting", as
by default all glusterfs processes now enable memory accounting. So to
disable memory accounting for a process, the "no-mem-accounting"
argument has to be passed.
Change-Id: I39c7cefb0fe764ea3e48f4e73e1305b084c5f497
BUG: 1184366
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/9469
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Remove the quota-deem-statfs and quota-timeout options when
quota is disabled.
Problem: if quota is disabled, all the options associated with
quota are removed, except quota-deem-statfs and quota-timeout.
When gluster volume info is issued, the user can see that quota
is disabled whereas the quota-deem-statfs and quota-timeout values
still exist.
Solution: remove the quota-deem-statfs and quota-timeout options when
quota is disabled.
NOTE: if features.quota-deem-statfs is turned on, it takes quota limits
into consideration while estimating fs size.
Change-Id: I8cca6a8f47d2355799228643aedc8fc03896cfad
BUG: 1151933
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/8924
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
In case a new node is added to the cluster after a snapshot was
taken, the geo-rep files are not synced to that node. This
leads to the failure of snapshot restore. Hence, we ignore the
missing geo-rep files on the new node and proceed with the
snapshot restore. Once the restore is successful, the missing
geo-rep files can be generated with "gluster volume geo-rep
<master-vol> <slave-vol> create push-pem force"
Change-Id: I1c364f8aefdd6c99b0b861b6d0cb33709ec39da2
BUG: 1181418
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/9489
Reviewed-by: Sachin Pandit <spandit@redhat.com>
Reviewed-by: Aravinda VK <avishwan@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
If restore commit is successful on the originator and
a few nodes, but fails on some other node, restore cleanup
should reinstate the volume and the snapshot in question
to their state before the command was run.
Change-Id: I7bb0becc7f052f55bc818018bc84770944e76c80
BUG: 1181418
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/9441
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Current Behaviour:
1. Geo-replication gsec_create creates a common_secret.pem.pub file
containing the public keys of all the nodes of the master cluster
in the location /var/lib/glusterd/.
2. Geo-replication create push-pem copies the common_secret.pem.pub
to the same location on all the slave nodes with the same name.
Problem:
Wrong public keys might get copied onto slave nodes when multiple
geo-replication sessions are created simultaneously.
E.g., a geo-rep session is established from Node1 (vol1:Master) to
Node2 (vol2:Slave), and one more geo-rep session where
Node2 (vol3) becomes master to Node3 (vol4), as below:
Session1: Node1 (vol1) ---> Node2 (vol2)
Session2: Node2 (vol3) ---> Node3 (vol4)
If the steps followed to create both geo-replication sessions are as
follows, wrong public keys get copied onto Node3 from Node2:
1. gsec_create is done on Node1 (vol1) - Session1
2. gsec_create is done on Node2 (vol3) - Session2
3. create push-pem is done on Node1 - Session1.
This overwrites the common_secret.pem.pub on Node2
created by gsec_create in the second step.
4. create push-pem on Node2 (vol3) copies the overwritten
common_secret.pem.pub keys to Node3. - Session2
Consequence:
Session2 fails to start with Permission denied because of the wrong
public keys.
Solution:
On geo-rep create push-pem, don't copy the common_secret.pem.pub
file with the same name onto all slave nodes. Instead, prefix the
master and slave volume names to the filename.
NOTE: This changes the manual steps to be followed to set up
non-root geo-replication (mountbroker). To copy the SSH public
keys, two extra arguments need to be passed:
set_geo_rep_pem_keys.sh <mountbroker_user> <master vol name> \
<slave vol name>
Path to set_geo_rep_pem_keys.sh:
Source Installation:
/usr/local/libexec/glusterfs/set_geo_rep_pem_keys.sh
RPM Installation:
/usr/libexec/glusterfs/set_geo_rep_pem_keys.sh
Change-Id: If38cd4e6f58d674d5fe2d93da15803c73b660c33
BUG: 1183229
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/9460
Reviewed-by: Aravinda VK <avishwan@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Tested-by: Venky Shankar <vshankar@redhat.com>
If a volume uses quota, the volume delete operation should unmount the
auxiliary quota mount using glusterd_remove_auxiliary_mount(). This
may fail with EBADF if the mount is already gone. In that situation,
ignore the error so that volume delete succeeds.
This fixes a spurious failure on NetBSD in tests/basic/quota.t 74-75.
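The shape of the fix, as a generic Linux-flavoured sketch (the real
code path goes through glusterd_remove_auxiliary_mount()):

    #include <errno.h>
    #include <stdio.h>
    #include <sys/mount.h>

    static int remove_aux_mount(const char *mountdir)
    {
        if (umount(mountdir) == -1 && errno != EBADF) {
            fprintf(stderr, "umount of %s failed (errno %d)\n",
                    mountdir, errno);
            return -1;  /* a real failure, report it */
        }
        /* EBADF means the mount is already gone; treat it as success
         * so that volume delete can proceed */
        return 0;
    }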
BUG: 1129939
Change-Id: I69325f71fc2c8af254db46f696c8669a4e6bd7e4
Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-on: http://review.gluster.org/9468
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Found and fixed a bug where replica 2 volume creation prompts
saying the bricks are on the same host even when they
are on different hosts.
Change-Id: Ie55addae55c55e32ad2b5339530ab71f0e3711ab
BUG: 1091935
Signed-off-by: Raghavendra Talur <rtalur@redhat.com>
Reviewed-on: http://review.gluster.org/9373
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Previously glusterd did not perform quorum validation in the syncop
framework. So when quorum was lost, a few operations (e.g. add-brick,
remove-brick, volume set) based on the syncop framework passed
successfully without a quorum validation check.
With this change, quorum validation is done in the syncop framework,
and it blocks all operations (except volume set <quorum options> and
"volume reset all" commands) when quorum is lost.
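The quorum test itself is simple arithmetic, as sketched below
(glusterd also honours a configurable server quorum ratio, which this
sketch omits):

    /* Quorum: more than half of the peers being considered must be up.
     * E.g. quorum_met(2, 3) == 1 and quorum_met(1, 2) == 0. */
    static int quorum_met(int active_count, int total_count)
    {
        return active_count >= (total_count / 2 + 1);
    }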
Change-Id: I4c2ef16728d55c98a228bb86795023d9c1f4e9fb
BUG: 1177132
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/9349
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Problem: glusterd was crashing with SIGABRT if the rpc connection
failed in debug mode.
Reason: iov was being passed to assert() before the rpc status was
checked in the rpc callback function (rpc calls the callback with the
status set to -1 and iov set to NULL if the connection failed).
Fix: iov is now checked only after the rpc status has been verified,
and proper error messages are added.
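The corrected callback shape, with simplified stand-in types
(illustrative only; the real glusterd callbacks receive the rpc-clnt
request structure):

    #include <assert.h>
    #include <stdio.h>
    #include <sys/uio.h>    /* struct iovec */

    struct rpc_req_s {      /* stand-in for the real rpc request */
        int rpc_status;     /* set to -1 by the rpc layer on failure */
    };

    static int my_rpc_cbk(struct rpc_req_s *req, struct iovec *iov,
                          int count)
    {
        int ret = -1;

        if (req->rpc_status == -1) {
            /* connection failed: iov is NULL here, so bail out
             * instead of letting assert() abort the daemon */
            fprintf(stderr, "RPC failed, not decoding response\n");
            goto out;
        }

        assert(iov != NULL);    /* only reached on success */
        /* ... decode the response from iov[0..count-1] ... */
        ret = 0;
    out:
        return ret;
    }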
Change-Id: I35c05c438444d0454aadac4e45524565a7be68a8
BUG: 1181543
Signed-off-by: Anand <anekkunt@redhat.com>
Reviewed-on: http://review.gluster.org/9449
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
For volumes with replicate and disperse xlators, the self-heal daemon
should do the healing. This patch provides enable/disable functionality
for the xlators to be part of the self-heal daemon. Replicate already
had this functionality with 'gluster volume set
cluster.self-heal-daemon on/off', but this patch makes it uniform for
both volume types. Internally it still does a 'volume set' based on the
volume type.
Change-Id: Ie0f3799b74c2afef9ac658ef3d50dce3e8072b29
BUG: 1177601
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/9358
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
"features.uss" with a non-boolean value gets set in the volume option
table, because of which subsequent volume set operations fail, since
features.uss does not contain a valid boolean value.
The fix is to not allow a non-boolean value to be set in the volume
option table. The "features.uss" option now has a validation function,
"validate_uss", which validates the input value given by the user.
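The kind of check a validator like validate_uss performs can be
sketched as below; libglusterfs has a gf_string2boolean helper for
this, and the self-contained stand-in here assumes the usual boolean
spellings:

    #include <strings.h>

    /* return 0 and set *b if value is a valid boolean string, else -1 */
    static int string2boolean(const char *value, int *b)
    {
        if (!strcasecmp(value, "on") || !strcasecmp(value, "yes") ||
            !strcasecmp(value, "true") || !strcasecmp(value, "1")) {
            *b = 1;
            return 0;
        }
        if (!strcasecmp(value, "off") || !strcasecmp(value, "no") ||
            !strcasecmp(value, "false") || !strcasecmp(value, "0")) {
            *b = 0;
            return 0;
        }
        return -1;  /* reject: never store this in the option table */
    }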
Change-Id: I4a212f876627a4979715183b0d488fd69095f193
BUG: 1179175
Signed-off-by: ggarg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/9395
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
On changelog_register, clean up .processing, .history/.processing,
.current and .history/.current from the working directory.
Moved glusterd_recursive_rmdir and glusterd_for_each_entry to a common
place (libglusterfs) and renamed them recursive_rmdir and
GF_FOR_EACH_ENTRY_IN_DIR respectively.
BUG: 1162057
Change-Id: I1f98468a344cead039026762a805437b2f9e507b
Signed-off-by: Aravinda VK <avishwan@redhat.com>
Reviewed-on: http://review.gluster.org/9082
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Tested-by: Venky Shankar <vshankar@redhat.com>
Initialising the transport's list, meant to hold clients connected to
it, on the first connection event is prone to races, especially with
the introduction of the multi-threaded event layer.
BUG: 1181203
Change-Id: I6a20686a2012c1f49a279cc9cd55a03b8c7615fc
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/9413
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
Apart from snapshot, for all other transactions quorum should be
calculated on the global peer list.
Change-Id: I30bacdb6521b0c6fd762be84d3b7aa40d00aacc4
BUG: 1177132
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/9422
Reviewed-by: Kaushal M <kaushal@redhat.com>
Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
glusterfs socket files should not reside outside of the gluster folder.
Change-Id: I5d7b43b11c8c78a32df8aaf38917b80e4e33c9d0
BUG: 1180972
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/9423
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Previously, enabling SSL authentication/encryption but not authorization
required explicitly setting ssl-allow=*. Now that same behavior is the
default (i.e. when ssl-allow is not set).
Also, there's no reason that a name used for *login* auth (typically a
UUID for internal purposes or a human name when using SSL) should
validate as an RFC-compliant host name or IP address. Therefore the
validation only occurs when the auth type is "addr" (not "login" or
anything else).
Change-Id: I01485ff4f0ab37de4b182858235a5fb0cf4c3c7d
BUG: 1179208
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: http://review.gluster.org/9397
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Use list_for_each_entry_safe() instead of
list_for_each_entry() for cleanup of the local
xaction_peers list.
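Why the _safe variant matters when entries are freed during the walk,
sketched with liburcu's equivalent cds_list API so the example is
self-contained (the entry type is made up):

    #include <stdlib.h>
    #include <urcu/list.h>

    struct peer_entry {
        struct cds_list_head list;
    };

    static void cleanup(struct cds_list_head *xaction_peers)
    {
        struct peer_entry *entry, *tmp;

        /* the non-safe iterator reads entry->list.next after the loop
         * body runs, i.e. after free(); the _safe variant caches the
         * next node in 'tmp' first, so freeing 'entry' is fine */
        cds_list_for_each_entry_safe(entry, tmp, xaction_peers, list) {
            cds_list_del(&entry->list);
            free(entry);
        }
    }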
Change-Id: I6d70c04dfb90cbbcd8d9fc4155b8e5e7d7612460
BUG: 1173414
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/9416
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Refactor glusterd-utils.c to create
glusterd-snapshot-utils.c consisting of all snapshot
utility functions.
Change-Id: Id9823a2aec9b115f9c040c9940f288d4fe753d9b
BUG: 1176770
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/9391
Reviewed-by: Kaushal M <kaushal@redhat.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Due to the recent change introduced by commit
da9deb54df91dedc51ebe165f3a0be646455cb5b, the cluster quorum count
calculation now depends on whether the peer list is the set of all
peers, the global transaction peer list, or the local transaction
peer list.
Change-Id: I9f63af9a0cb3cfd6369b050247d0ef3ac93d760f
BUG: 1173414
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/9350
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Change-Id: I1bf26c9d294e95f7b82cfc7a96f9d5575f5e0362
BUG: 1176770
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/9313
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
... instead of consulting the on-disk data directory. There is no
reason why the on-disk representation is more accurate than the
in-memory one. In fact, it is the other way around when a node is
reconciling volume/cluster configuration with the rest of the cluster.
Change-Id: I786823efdf1d0f6b9e6fcdb72d51e5227c399ce1
BUG: 1176770
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/9292
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
CID: 1260432
Change-Id: I6845bc4c231b53428419a5a2ad0c78ea9da31058
BUG: 1093692
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/9338
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Change-Id: I24ed0f866d53e91a8323c043a38f73207cbfd7d2
BUG: 1168189
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/9351
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
... and unlink the 'right' socket file
Change-Id: Id12ee8c622914555b7933104e13b43b3b31b5d19
BUG: 1176770
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/9315
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Kaushal M <kaushal@redhat.com>
In the current implementation, the xaction_peers list is maintained in
a global variable (glusterd_priv_t) for syncop/mgmt_v3. This means the
consistency and atomicity of the peerinfo list across transactions is
not guaranteed when multiple syncop/mgmt_v3 transactions are going
through.
We ran into a problem in mgmt_v3-locks.t, which was failing spuriously:
two volume set operations (on two different volumes) were going through
simultaneously, and both transactions were manipulating the same
xaction_peers structure, which led to a corrupted list. Because of
this, in some cases the unlock request to a peer was never triggered
and we ended up with stale locks.
The solution is to maintain a per-transaction local xaction_peers list
for every syncop.
Please note I've identified this problem in the op-sm area as well, and
a separate patch will be attempted to fix it.
Finally thanks to Krishnan Parthasarathi and Kaushal M for your constant help to
get to the root cause.
Change-Id: Ib1eaac9e5c8fc319f4e7f8d2ad965bc1357a7c63
BUG: 1173414
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/9269
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
The mgmt_v3 handler functions already send the ret code as
part of the *send_resp calls, and further propagating the
ret code to the calling functions will lead to double deletion
of the req object. Hence returning success from the mgmt_v3
handler functions.
Change-Id: I1090e49c54a786daae5fd97b5c1fbcb5d819acba
BUG: 1138577
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/8620
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Instead of relying on brickinfo->status, check if the brick
process is running before copying the brick port number.
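Checking that the brick process is actually alive can be sketched as
below, assuming the usual pidfile convention (a hypothetical helper;
glusterd has its own utility for this):

    #include <signal.h>
    #include <stdio.h>
    #include <sys/types.h>

    /* return 1 if the process recorded in the pidfile is running */
    static int brick_is_running(const char *pidfile)
    {
        FILE *fp = fopen(pidfile, "r");
        long pid = 0;
        int running = 0;

        if (!fp)
            return 0;
        if (fscanf(fp, "%ld", &pid) == 1 && pid > 0)
            /* kill with signal 0 probes existence without signalling */
            running = (kill((pid_t)pid, 0) == 0);
        fclose(fp);
        return running;
    }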
Change-Id: I246465fa4cf4911da63a1c26bbb51cc4ed4630ac
BUG: 1175700
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/9297
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Change-Id: Ie5eaa2beb4446640b22873f91e17da90d1cd8fad
BUG: 1174625
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/9280
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>