1. The unnecessary self probe is removed.
2. After every probe, a peer_count check is added to give the test time to
finish the handshake.
Change-Id: Iab52548f8b781e7968250cd98fdbeaf02472970d
BUG: 1368953
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/15231
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
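As a rough sketch of the pattern this fix describes: EXPECT_WITHIN and
peer_count are helpers from the Gluster test framework, while $CLI_1, $H2 and
$PROBE_TIMEOUT are assumed placeholder names, not copied from the actual test.
# Illustrative sketch only, not the patched test itself.
TEST $CLI_1 peer probe $H2
# Wait for the handshake to finish instead of checking immediately.
EXPECT_WITHIN $PROBE_TIMEOUT 1 peer_count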
Problem:
Say we have a 10-node Gluster volume, mounted using node 1 (N1) as the
volfile server and the rest as backup volfile servers:
$ mount -t glusterfs -obackup-volfile-servers=<N2>:<N3>:...:<N10> <N1>:/vol /mnt
If N1 goes down we can still access the same mount point, but if bricks
are added to or removed from the volume whose volfile server (N1 in our
case) is down, that information is not passed to the client: the
connection between glusterfs and glusterd (of N1) is lost, so we cannot
store files on the newly added bricks until N1 comes back.
Solution:
If N1 goes down, iterate through the nodes in the backup-volfile-servers
list and try to establish a connection between glusterfs and glusterd,
so we do not have to wait for N1 to come back before storing files on
bricks that were successfully added while N1 was down.
Change-Id: I653c9f081a84667630608091bc243ffc3859d5cd
BUG: 1289916
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Reviewed-on: http://review.gluster.org/13002
Tested-by: Prasanna Kumar Kalever <pkalever@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Poornima G <pgurusid@redhat.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
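A minimal walk-through of the scenario, assuming placeholder host names
(N1..N3), a volume named vol, and a placeholder brick path; only the mount
option and the add-brick syntax are standard CLI.
# Mount through N1, with N2 and N3 as backup volfile servers.
$ mount -t glusterfs -o backup-volfile-servers=N2:N3 N1:/vol /mnt
# While N1 is down, expand the volume from a surviving node (run on N2):
$ gluster volume add-brick vol N2:/bricks/vol/brick4
# With this change the client retries the backup volfile servers, so it
# picks up the new graph and can write to the added brick before N1 returns.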
Problem:
1) Glusterd does not remember the arbiter information of a replica
volume in its store. When glusterd goes down and comes back up, arbiter
volumes turn into plain replica volumes.
2) Glusterd does not import/export arbiter information to/from the other peers.
3) Volume info does not show any arbiter count in its output.
Fix:
1) Persist arbiter information in the glusterd store.
2) Import/export the volume's arbiter information.
3) Change the volume info output to show the arbiter count.
Change-Id: I2db81e73d2694b01f7d07b08a17b41ad5a55c361
BUG: 1276675
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/12475
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
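For context, a hedged example of exercising the persisted information; host
names, brick paths, and the volume name are placeholders, and the grep is only
a rough way to look for the arbiter details that 'volume info' now prints.
# Create a 2+1 volume; the third brick acts as the arbiter.
$ gluster volume create testvol replica 3 arbiter 1 \
      h1:/bricks/b1 h2:/bricks/b2 h3:/bricks/arb
# The arbiter details should survive a glusterd restart and show up here.
$ gluster volume info testvol | grep -i arbiter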
Change-Id: Iba44be565c895e26b19b5ff85a886873f6b53e5c
BUG: 1177601
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/9616
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Problem: heald.t uses EXPECT to check whether the shd process is up, but
since shd is spawned with NO_WAIT, the end of the volume start
transaction does not guarantee that the process is up by that time.
Solution: Use EXPECT_WITHIN instead of EXPECT.
Change-Id: Ic81725aa7e7cde9c0c873837fcc4a73d8318dfa0
BUG: 1163543
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/9575
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
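Roughly, the change has the following shape; EXPECT and EXPECT_WITHIN are the
test-framework primitives named above, while the status helper and the timeout
variable are assumed names used only for illustration.
# Before: a one-shot check that can race with the NO_WAIT spawn of shd.
# EXPECT "Y" shd_up_status
# After: retry until the daemon is up or the timeout expires.
EXPECT_WITHIN $PROCESS_UP_TIMEOUT "Y" shd_up_status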
For volumes with replicate or disperse xlators, the self-heal daemon
should do the healing. This patch provides enable/disable functionality
controlling whether these xlators are part of the self-heal daemon.
Replicate already had this via 'gluster volume set cluster.self-heal-daemon on/off';
this patch makes it uniform for both volume types. Internally it still
does a 'volume set' based on the volume type.
Change-Id: Ie0f3799b74c2afef9ac658ef3d50dce3e8072b29
BUG: 1177601
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/9358
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
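A hedged sketch of the resulting uniformity; the volume names are placeholders,
the replicate option is the one quoted above, and the disperse option name is
an assumption about the counterpart this patch wires in.
# Replicate volumes already had this toggle:
$ gluster volume set repvol cluster.self-heal-daemon off
# The analogous toggle for disperse volumes (option name assumed):
$ gluster volume set ecvol cluster.disperse-self-heal-daemon off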