Change-Id: I4074e7cce8f6782860f849780ab6d0458e92a2ce
Signed-off-by: Jeff Darcy <jdarcy@fb.com>
Reviewed-on: https://review.gluster.org/17708
Smoke: Gluster Build System <jenkins@build.gluster.org>
Tested-by: Jeff Darcy <jeff@pl.atyp.us>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
Summary:
The Halo prove tests were racy in two ways. First, they raced against
the self-heal daemon (e.g. write to the volume with two bricks up and
then assert that only two bricks hold the data file; but shd will,
correctly, copy the file to the third brick sooner or later). Fixed by
disabling shd in such tests.
Second, the tests rely on the initial pings completing and setting the
halo state as expected, but never verify that this has happened. If
writing begins before the initial pings complete, all bricks may still
be up and receive the data. Fixed by adding an explicit check for the
halo child states (see the test sketch after this commit message).
Test Plan:
prove tests/basic/halo*.t
(Prior to this changeset, these would fail within ~10 iterations on my
devserver and almost always on CentOS regression; now they run
overnight without failure on my devserver.)
Change-Id: If6823540dd4e23a19cc495d5d0e8b0c6fde9a3bd
Signed-off-by: Kevin Vigor <kvigor@fb.com>
Reviewed-on: http://review.gluster.org/16325
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Shreyas Siravara <sshreyas@fb.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
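
A minimal sketch of the kind of test changes described above, written in
the style of the GlusterFS prove-test framework (include.rc / volume.rc)
and assuming a replica-3 volume $V0 mounted at $M0. The volume option and
the TEST/EXPECT_WITHIN macros are standard; count_up_halo_children is a
hypothetical helper standing in for however the real test inspects the
halo child states:

    # Keep shd from copying the file to the third brick behind the test's back.
    TEST $CLI volume set $V0 cluster.self-heal-daemon off

    # Do not start writing until the halo logic has settled; the helper name
    # below is hypothetical and would poll whatever state the mount exposes
    # (e.g. via a statedump) for the number of "up" children.
    EXPECT_WITHIN $PROCESS_UP_TIMEOUT "2" count_up_halo_children $M0

    # Only now is it safe to write and assert that exactly two bricks
    # received the data.
    TEST dd if=/dev/zero of=$M0/testfile bs=1M count=1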
Summary:
- SHD is now excluded from the max-replicas policy. We would otherwise
need an SHD-specific tunable to make the tests pass reliably, and it is
arguably more intuitive to have SHD excluded anyway (i.e. SHD can always
see every brick). See the test sketch after this commit message.
- Updated the halo-failover-enabled test; it is a bit clearer now and
works reliably. halo.t is fixed as a result of fixing the SHD
max-replicas bug.
Test Plan: - Run prove tests -> https://phabricator.fb.com/P19872728
Reviewers: dph, sshreyas
Reviewed By: sshreyas
FB-commit-id: e425e6651cd02691d36427831b6b8ca206d0f78f
Change-Id: I57855ef99628146c32de59af475b096bd91d6012
Signed-off-by: Kevin Vigor <kvigor@fb.com>
Reviewed-on: http://review.gluster.org/16305
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shreyas Siravara <sshreyas@fb.com>
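
A rough illustration of the SHD exemption, again as a prove-test sketch
rather than the committed test; the cluster.halo-* option names and the
get_pending_heal_count helper are the framework/CLI names as best I
recall them, not taken from this patch:

    # Cap regular clients at two replicas, but leave the self-heal daemon on.
    TEST $CLI volume set $V0 cluster.halo-enabled True
    TEST $CLI volume set $V0 cluster.halo-max-replicas 2
    TEST $CLI volume set $V0 cluster.self-heal-daemon on

    # The write lands on at most two bricks...
    TEST dd if=/dev/zero of=$M0/f1 bs=64k count=1

    # ...but because SHD is excluded from the max-replicas policy it can
    # reach every brick, so the heal backlog should eventually drain to zero.
    EXPECT_WITHIN $HEAL_TIMEOUT "0" get_pending_heal_count $V0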
Summary:
- Adds a "halo-failover-enabled" option to enable/disable failing over to a brick outside of the defined halo in order to satisfy min-replicas.
- There are use cases where failing over to an out-of-region brick is undesirable. In such cases we will more than likely opt to keep more replicas within the region, so that the loss of a single replica there does not cost us quorum.
- Also fixed a quorum accounting problem: the volume now correctly goes read-only when we lose a brick and cannot swap another one in (because fail-over is not enabled, or otherwise). A usage sketch follows this commit message.
Test Plan:
- run prove -v tests/basic/halo.t
- run prove -v tests/basic/halo-disable.t
- run prove -v tests/basic/halo-failover-enabled.t
- run prove -v tests/basic/halo-failover-disabled.t
Reviewers: dph, cjh, jackl, mmckeen
Reviewed By: mmckeen
Conflicts:
xlators/cluster/afr/src/afr.h
xlators/mount/fuse/utils/mount.glusterfs.in
Change-Id: Ia3ebf83f34b53118ca4491a3c4b66a178cc9795e
Signed-off-by: Kevin Vigor <kvigor@fb.com>
Reviewed-on: http://review.gluster.org/16275
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Shreyas Siravara <sshreyas@fb.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
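
A usage sketch for the new option, assuming it is exposed to the CLI as
cluster.halo-failover-enabled (the exact key is inferred from the option
name in this commit message, not verified against the patch):

    # Keep writes inside the defined halo: do not pull in an out-of-region
    # brick just to satisfy min-replicas.
    TEST $CLI volume set $V0 cluster.halo-min-replicas 2
    TEST $CLI volume set $V0 cluster.halo-failover-enabled off

    # With fail-over disabled, losing enough in-halo bricks to drop below
    # min-replicas/quorum is expected to make the volume go read-only
    # instead of swapping in a far-away brick.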