Problem:
We currently don't have a roll-back/undoing of post-ops if quorum is not
met. Though the FOP is still unwound with failure, the xattrs remain on
the disk. Due to these partial post-ops and partial heals (healing only
when 2 bricks are up), we can end up in split-brain purely from the afr
xattrs' point of view, i.e. each brick is blamed by at least one of the
others. These scenarios are hit when there are frequent
connects/disconnects of the client/shd (self-heal daemon) to the bricks
while I/O or heal is in progress.
Fix:
Instead of undoing the post-op, pick a source based on the xattr values.
If 2 bricks blame one, the blamed one must be treated as the sink.
If there is no majority, all are sources. Once we pick a source,
self-heal will then do the heal instead of erroring out due to
split-brain, as sketched below.
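A minimal sketch of that source-picking idea (illustrative C only, with
made-up names and a fixed replica-3 layout; not the actual afr code):

    /* blame[i][j] is non-zero when brick i's changelog xattrs blame
     * brick j. A brick blamed by a majority of its peers is a sink;
     * if no brick has a majority of accusers, all bricks are sources. */
    #include <stdio.h>

    #define BRICKS 3

    static void
    pick_sources (int blame[BRICKS][BRICKS], int is_source[BRICKS])
    {
        int accusers[BRICKS] = {0};
        int i, j, majority_found = 0;

        /* count how many other bricks blame each brick */
        for (i = 0; i < BRICKS; i++)
            for (j = 0; j < BRICKS; j++)
                if (i != j && blame[i][j])
                    accusers[j]++;

        for (j = 0; j < BRICKS; j++)
            if (accusers[j] >= 2)
                majority_found = 1;

        for (j = 0; j < BRICKS; j++)
            is_source[j] = majority_found ? (accusers[j] < 2) : 1;
    }

    int
    main (void)
    {
        /* Example: bricks 0 and 1 both blame brick 2, so brick 2
         * becomes the sink and bricks 0 and 1 are the sources. */
        int blame[BRICKS][BRICKS] = {
            {0, 0, 1},
            {0, 0, 1},
            {0, 0, 0},
        };
        int is_source[BRICKS];
        int j;

        pick_sources (blame, is_source);
        for (j = 0; j < BRICKS; j++)
            printf ("brick %d: %s\n", j,
                    is_source[j] ? "source" : "sink");
        return 0;
    }
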
Change-Id: I3d0224b883eb0945785ade0e9697a1c828aec0ae
BUG: 1542380
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
(cherry picked from commit 0e6e8216823c2d9dafb81aae0f6ee3497c23d140)

Problem: add-brick command to increase replica count in an arbiter
volume succeeds, causing undesirable effects like the 4th brick being
loaded with the arbiter xlator, the 3rd one losing the arbiter xlator
(when the brick process is restarted), arbitration logic in afr going
for a toss etc.
Fix: An arbiter configuration should always be a replica 3 volume (of
which the 3rd brick is the arbiter). Hence disallow increasing the
replica count for arbiter volume configurations.
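A rough sketch of such a stage-op check (the structure below is a
simplified stand-in, not glusterd's real volinfo):

    #include <stdio.h>

    struct volinfo {
        int replica_count;
        int arbiter_count;   /* non-zero for arbiter configurations */
    };

    /* Return 0 if the requested add-brick is allowed, -1 otherwise. */
    static int
    validate_add_brick (const struct volinfo *vol, int new_replica_count)
    {
        if (vol->arbiter_count && new_replica_count > vol->replica_count) {
            fprintf (stderr, "Increasing replica count is not "
                     "supported for arbiter volumes\n");
            return -1;
        }
        return 0;
    }

    int
    main (void)
    {
        struct volinfo vol = { .replica_count = 3, .arbiter_count = 1 };

        /* add-brick asking for replica 4 on an arbiter volume fails */
        return validate_add_brick (&vol, 4) ? 1 : 0;
    }
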
Change-Id: I9fe4edac880d0f711e6d44324ad5562974e53e51
BUG: 1429200
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: https://review.gluster.org/16845
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

This patch adds support for multiple brick translator stacks running
in a single brick server process. This reduces our per-brick memory usage by
approximately 3x, and our appetite for TCP ports even more. It also creates
potential to avoid process/thread thrashing, and to improve QoS by scheduling
more carefully across the bricks, but realizing that potential will require
further work.
Multiplexing is controlled by the "cluster.brick-multiplex" global option. By
default it's off, and bricks are started in separate processes as before. If
multiplexing is enabled, then *compatible* bricks (mostly those with the same
transport options) will be started in the same process.
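Roughly, the compatibility decision can be pictured as below (an
illustrative sketch with hypothetical types; the real checks live in
glusterd and cover more than this):

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    struct brick {
        const char *volname;
        const char *transport;   /* e.g. "tcp" or "rdma" */
        bool        ssl;
    };

    /* "Compatible" in this simplified model means the same transport
     * options: same transport type and same SSL setting. */
    static bool
    bricks_compatible (const struct brick *a, const struct brick *b)
    {
        return strcmp (a->transport, b->transport) == 0 && a->ssl == b->ssl;
    }

    int
    main (void)
    {
        struct brick running  = { "vol0", "tcp", false };
        struct brick incoming = { "vol1", "tcp", false };
        bool multiplex = true;   /* cluster.brick-multiplex */

        if (multiplex && bricks_compatible (&running, &incoming))
            printf ("attach brick to the existing process\n");
        else
            printf ("spawn a separate brick process\n");
        return 0;
    }
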
Change-Id: I45059454e51d6f4cbb29a4953359c09a408695cb
BUG: 1385758
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: https://review.gluster.org/14763
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>

Problem:
1. Have a replica 2 volume with bricks b1 and b2
2. Before setting the layout, b1 goes down
3. Set the layout and write some data, which gets populated on b2
4. b2 goes down, then b1 comes up
5. Add another brick b3, and heal will take place from b1 to b3, both of
which basically have no data
6. Write some data. Both b1 and b3 will mark b2 for pending writes
7. b1 goes down, and b2 comes up
8. b2 gets healed from b1. During heal it removes the data which is already
in b2, considering it as stale data. This leads to data loss.
Solution:
1. In glusterd stage-op, while adding bricks, check whether the replica
count is being increased
2. If yes, then check whether any of the bricks are down at that time
3. If yes, then fail the add-brick to avoid such data loss
4. Else continue the normal operation.
This check will work even when we convert a plain distribute volume to
replicate; a rough sketch of the check follows.
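Roughly, the check amounts to the following (illustrative C with made-up
types, not the actual glusterd stage-op code):

    #include <stdbool.h>
    #include <stdio.h>

    struct brick {
        const char *path;
        bool        online;
    };

    /* Reject add-brick when the replica count grows while any existing
     * brick is down; return 0 when the operation may proceed. */
    static int
    stage_add_brick (const struct brick *bricks, int nbricks,
                     int old_replica, int new_replica)
    {
        int i;

        if (new_replica <= old_replica)
            return 0;   /* replica count is not being increased */

        for (i = 0; i < nbricks; i++) {
            if (!bricks[i].online) {
                fprintf (stderr, "add-brick failed: brick %s is down; "
                         "all bricks must be up to increase the "
                         "replica count\n", bricks[i].path);
                return -1;
            }
        }
        return 0;
    }

    int
    main (void)
    {
        struct brick bricks[] = {
            { "host1:/b1", true  },
            { "host2:/b2", false },   /* b2 is down */
        };

        /* replica 2 -> replica 3 with a brick down must fail */
        return stage_add_brick (bricks, 2, 2, 3) ? 1 : 0;
    }
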
Test:
1. Create a replica 2 volume
2. Kill one brick from the volume
3. Try adding a brick to the volume
4. It should fail with an 'all bricks are not up' error
5. Create a distribute volume and kill one of the bricks
6. Try to convert it to a replicate volume by adding bricks.
7. This should also fail.
Change-Id: I9c8d2ab104263e4206814c94c19212ab914ed07c
BUG: 1406411
Signed-off-by: karthik-us <ksubrahm@redhat.com>
Reviewed-on: http://review.gluster.org/16330
Tested-by: Ravishankar N <ravishankar@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: N Balachandran <nbalacha@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

1. Provide a command to convert replica 2 volumes to arbiter volumes.
Existing self-heal logic will automatically heal the file hierarchy into
the arbiter brick, the progress of which can be monitored using the
heal info command.
Syntax: gluster volume add-brick <VOLNAME> replica 3 arbiter 1
<HOST:arbiter-brick-path>
2. Add checks when removing bricks from arbiter volumes (see the sketch
after this list):
- When converting from arbiter to a replica 2 volume, allow only the
arbiter brick to be removed.
- When converting from arbiter to a plain distribute volume, allow the
removal only if the arbiter is among the bricks being removed.
3. Some clean-up:
- Use GD_MSG_DICT_GET_SUCCESS instead of GD_MSG_DICT_GET_FAILED to
log messages that are not failures.
- Remove unused variable `brick_list`
- Move 'brickinfo->group' related functions to glusterd-utils.
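The remove-brick checks in point 2 can be pictured roughly as below (a
hypothetical helper with made-up parameters, not the real glusterd
validation):

    #include <stdbool.h>
    #include <stdio.h>

    /* new_replica_count: 2 when converting arbiter -> replica 2,
     * 1 when converting arbiter -> plain distribute. */
    static int
    validate_arbiter_remove_brick (bool arbiter_in_removed_set,
                                   int removed_per_subvol,
                                   int new_replica_count)
    {
        if (new_replica_count == 2) {
            /* only the arbiter brick may be removed */
            if (removed_per_subvol != 1 || !arbiter_in_removed_set) {
                fprintf (stderr, "only the arbiter brick can be removed\n");
                return -1;
            }
        } else if (new_replica_count == 1) {
            /* the arbiter must be among the removed bricks */
            if (!arbiter_in_removed_set) {
                fprintf (stderr, "the arbiter brick must be removed "
                         "as well\n");
                return -1;
            }
        }
        return 0;
    }

    int
    main (void)
    {
        /* removing a data brick (not the arbiter) while converting to
         * replica 2 is rejected */
        return validate_arbiter_remove_brick (false, 1, 2) ? 1 : 0;
    }
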
Change-Id: Ic87b8c7e4d7d3ab03f93e7b9f372b314d80947ce
BUG: 1318289
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/14126
Smoke: Gluster Build System <jenkins@build.gluster.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>