| Commit message | Author | Age | Files | Lines |
Introduce cluster.favorite-child-policy which, when enabled with
[ctime|mtime|size|majority], automatically heals files that are in
split-brain.
The majority policy will not pick a source if there is no majority.
The other three policies pick the first brick with a valid reply and
non-zero ctime/mtime/size as source.
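A usage sketch (volume name and chosen policy are illustrative; assumes the
option is set through the usual volume-set interface):
# prefer the replica with the latest mtime as the split-brain source
gluster volume set <VOLNAME> cluster.favorite-child-policy mtime
Setting the policy back to 'none' presumably disables automatic resolution.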
Change-Id: I3c099a0404082213860f74f2c9b4d207cfaedb76
BUG: 1328224
Original-author: Richard Wareing <rwareing@fb.com>
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/14026
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anuradha Talur <atalur@redhat.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Problem:
Parallel rmdir operations on the same directory result in ENOTCONN messages
even though there was no network disconnect.
During the blocking entry lock in rmdir, AFR takes two sets of locks on all its
children: one on (parent dir, name of the dir to be deleted), the other a full
lock on the dir being deleted. We proceed to the pre-op stage even if only a
single lock (but not all the needed locks) was obtained, only to fail it with
ENOTCONN because afr_locked_nodes_get() returns zero nodes in
afr_changelog_pre_op().
Fix:
After we get replies for all blocking lock requests, if we don't have
the minimum number of locks to carry out the FOP, unlock and fail the
FOP. The op_errno will be that of the last failed reply we got, i.e.
whatever is set in afr_lock_cbk().
Change-Id: Ibef25e65b468ebb5ea6ae1f5121a5f1201072293
BUG: 1336381
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/14358
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Race is explained at
https://bugzilla.redhat.com/show_bug.cgi?id=1337405#c0
This patch also performs the self-heal with the shd pid, and performs
the healing with this->itable's inode rather than the main itable.
BUG: 1337405
Change-Id: Id657a6623b71998b027b1dff6af5bbdf8cab09c9
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/14422
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
Problem:
If a named fresh lookup is done on a loc and the fop fails on one of the bricks
(or is not sent to one of the bricks), but that brick is up by the time the
response reaches afr, 'can_interpret' is set to false in afr_lookup_done().
The inode-ctx for that inode is then not set, which can lead to EIO during a
transaction because the transaction depends on the 'readable' array being
available by that point.
Fix:
Refresh the inode for inode-write fops so that the ctx gets set if it was not
already set at the time of the named fresh lookup, or if the file is in
split-brain, in which case one more refresh is performed before failing the
fop to check whether the file is still in split-brain.
BUG: 1336612
Change-Id: I5c50b62c8de06129b8516039f7c252e5008c47a5
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/14368
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
In the case of 3-way replication with quorum enabled along with sharding,
if one brick is brought down and brought back up, fops sometimes fail with
EROFS because the mknod of the shard file fails on the two good nodes with
EEXIST. So even when quorum is not met, it makes sense to unwind with the
errno returned by the lower xlators as much as possible.
Change-Id: Iabd91cd7c270f5dfe6cbd18c50e59c299a331552
BUG: 1336612
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/14369
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
In afr_changelog_post_op_now(), if there was any error,
meaning op_ret < 0, post-op was not being done even when
the errors were symmetric and there were no "failed
subvols".
Fix:
When the errors are symmetric, perform post-op.
How the bug was found:
In a 1x3 volume with shard and write-behind enabled, when writes were done to
a file with one brick down, the trusted.afr.dirty xattr's value for the .shard
directory would keep increasing as post-op was not done even though pre-op was.
This incorrectly showed .shard to be in split-brain.
RCA:
When write-behind is on, multiple writes being sent on offsets lying in the
same shard mean that the same shard file may be created more than once, with
the second attempt failing with op_ret < 0 and op_errno = EEXIST.
As op_ret was negative, afr wouldn't do the post-op, so the trusted.afr.dirty
xattr was never decremented, thus showing the .shard directory to be in
split-brain.
Change-Id: I711bdeaa1397244e6a7790e96f0c84501798fc59
BUG: 1335652
Signed-off-by: Anuradha Talur <atalur@redhat.com>
Reviewed-on: http://review.gluster.org/14310
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Problem:
Spurious entries are reported in heal info when the mount is on the
second/third brick of the replica pair, because the local child is given
preference when selecting the source. The code is supposed to flag the file
as needing heal only if (source < 0) (the failure code path), but it is
written as if any non-zero value is a failure.
Fix:
Treat a positive source as the success case.
BUG: 1335429
Change-Id: I1be7f9defef2ae03be7eec8d7d49bf34adeca82c
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/14302
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-by: Anuradha Talur <atalur@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Multi-threaded healing doesn't create the synctask with the shd pid; this
leads to healing problems when quota is exceeded.
BUG: 1332994
Change-Id: I80f57c1923756f3298730b8820498127024e1209
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/14211
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Problem:
Afr does post-ops after a write, but the stat buffer it unwinds is from the
time of the write, so if the NFS client caches it, the client sees a different
ctime when it stats the file after the post-op is done. From the NFS client's
perspective the file has changed. Tar, which depends on this being correct,
keeps giving the 'file changed as we read it' warning.
If afr instead chose to unwind after the post-op, eager-lock and
delayed-post-op would have to be disabled, which would lead to bad
performance for all write use cases.
Fix:
Don't let client cache stat after write.
Change-Id: Ic6062acc6e5cdd97a9c83c56bd529ec83cee8a23
BUG: 1302948
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Signed-off-by: Anuradha Talur <atalur@redhat.com>
Reviewed-on: http://review.gluster.org/13785
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Implements a new indices type, ENTRY_CHANGES, where other
xlators can add/delete names.
Change-Id: I01c5568997085e11d22ba36a4376c70b78fb3827
BUG: 1269461
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/12482
Tested-by: Krutika Dhananjay <kdhananj@redhat.com>
Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Change-Id: I52da41dff5619492b656c2217f4716a6cdadebe0
BUG: 1269461
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/12442
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Do not allow directory creation without gfids, since operations on such
directories fail anyway after they are created. So it is better to fail
the mkdir up front.
BUG: 1317361
Change-Id: I8f8e3b38bbded1960b7215bac0432500f7e78038
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/13690
Smoke: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
BUG: 1329501
Change-Id: Id402c20f2fa19b22bc402295e03e7a0ea96b0c40
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/14048
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Problem: During mount, afr waits for a response from all its children before
notifying the parent xlator. In a 1x2 replica volume, if one of the nodes is
down, the mount will hang for more than a minute until child-down is received
from the client xlator for that node.
Fix:
When parent-up is received by afr, start a 10-second timer. In the timer
callback, if we have received a successful child-up from at least one brick,
propagate the event to the parent xlator.
Change-Id: I31e57c8802c1a03a4a5d581ee4ab82f3a9c8799d
BUG: 1054694
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/11113
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Thanks to Olia-Kremmyda for finding the bug on github review,
https://github.com/gluster/glusterfs/commit/b8106d1127f034ffa88b5dd322c23a10e023b9b6
Change-Id: Ib8640ed0c331a635971d5d12052f0959c24f76a2
BUG: 1329773
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/14052
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
Problem:
The locking scheme in afr-v1 locked the directory/file completely during
self-heal. Newer locking schemes don't require full directory/file locking,
but afr-v2 still has compatibility code to work well with older clients: in
entry self-heal it takes a lock on a special 256-character name which can't
be created on the fs, and for data self-heal there used to be a lock on
(LLONG_MAX-2, 1). The old locking scheme requires heal info to take sh-domain
locks before examining heal state. If it doesn't take sh-domain locks, there
is a possibility of heal-info hanging till self-heal completes because of the
compatibility locks. But the problem with heal-info taking sh-domain locks is
that if two heal-info instances, or shd and heal-info, try to inspect heal
state in parallel using trylocks on the sh-domain, both of them may assume a
heal is in progress. This was leading to spurious entries being shown in
heal-info.
Fix:
As long as the afr-v1 way of locking exists, we can't fix this problem with
simple solutions. But if we know that the cluster is running the newer locking
scheme, we can give accurate information in heal-info. So introduce a new
option called 'locking-scheme' which, when set to 'granular', gives correct
information in heal-info. In addition, the extra network hops for taking
compatibility locks and sh-domain locks in heal info are no longer necessary,
which improves performance.
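A sketch of switching a volume to the new scheme (volume name illustrative;
assumes the option is exposed through the usual volume-set interface):
# use granular locking so heal info no longer needs compat/sh-domain locks
gluster volume set <VOLNAME> cluster.locking-scheme granular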
BUG: 1322850
Change-Id: Ia563c5f096b5922009ff0ec1c42d969d55d827a3
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/13873
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Ashish Pandey <aspandey@redhat.com>
Reviewed-by: Anuradha Talur <atalur@redhat.com>
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
Problem:
When there are two sources and one sink and two self-heal daemons try to
acquire locks at the same time, there is a chance that each gets a lock on
one source and the sink, leading to a partial heal. This then needs one more
heal from the remaining source to the sink for the complete self-heal, which
is not optimal.
Fix:
Upgrade non-blocking locks to blocking locks on all the subvolumes if the
number of locks acquired is a majority and there were EAGAINs.
BUG: 1318751
Change-Id: Iae10b8d3402756c4164b98cc49876056ff7a61e5
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/13766
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Problem:
In the afr-v1 pre-op, xattrop increments the xattr on self first and then
increments the value on the rest. In the post-op, the xattr value is
decremented first on the rest and last on self. So for an operation to be
witnessed, i.e. for a fop to be seen by the brick, it is important to have at
least 1 pending op, because the fop won't arrive without the pre-op
completing. The other possibility is that the fop completes but the brick
dies during the post-op, after decrementing the pending counts on the others
and just before decrementing its own pending count.
Fix:
Fix witness detection code in afr_self_heal_find_direction()
BUG: 1322253
Change-Id: Ia7e76482c0a46e775e269bb96ec1b9490a3ac18f
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/13811
Smoke: Gluster Build System <jenkins@build.gluster.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Problem: The throughput for a 'dd' workload was much lower for the arbiter
configuration when compared to a normal replica-3 volume. There were two
issues:
i) arbiter_writev was using the request dict as the response dict while
unwinding, leading to incorrect GLUSTERFS_WRITE_IS_APPEND and
GLUSTERFS_OPEN_FD_COUNT values (=4) and hence to immediate post-ops, because
is_afr_delayed_changelog_post_op_needed() failed due to the
afr_are_multiple_fds_opened() check.
ii) The arbiter code in afr was setting local->transaction.{start and len} = 0
to take full-file locks. This meant that even for simultaneous but
non-overlapping writevs, afr_transaction_eager_lock_init() was not happening
because afr_locals_overlap() always stays true. Consequently
is_afr_delayed_changelog_post_op_needed() failed due to
local->delayed_post_op not being set.
Fix:
i) Send appropriate response dict values in arbiter_writev.
ii) Modify the flock params instead of local->transaction.{start and len} to
take full-file locks in the transaction.
Also changed _fill_writev_xdata() in posix to fill rsp_xdata for whatever key
is requested.
Change-Id: I1c5fc5e98aba49ade540bb441a022e65b753432a
BUG: 1324004
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reported-by: Robert Rauch <robert.rauch@gns-systems.de>
Reported-by: Russel Purinton <russell.purinton@gmail.com>
Reviewed-on: http://review.gluster.org/13906
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
BUG: 1221737
Change-Id: I0ed71a72f0e33bd733723e00a01cf28378c5534e
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/13755
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Problem:
All inodes that are looked up are always forgotten without fail in afr,
removing the benefit of them being in the lru list. The same code can cause
crashes if the top xlator does inode_forget(0) between afr's inode_lookup
and inode_forget.
Fix:
Don't use lookup/forget in afr. There is no benefit at the moment in keeping
this code, and it is impossible to prevent top xlators from doing
inode_forget(0). Found similar instances in ec and removed them, even though
those code paths are only executed in the heal daemon.
BUG: 1321554
Change-Id: Ia4cb236178f7f129cc898d53f0bbd26f494a2a8d
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/13834
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anuradha Talur <atalur@redhat.com>
Extended the CLI to include support for split-brain resolution based on
mtime. The command syntax is:
$:gluster volume heal <VOLNAME> split-brain latest-mtime <FILE>
where <FILE> can be either the full file name as seen from the root of the
volume or the gfid-string representation of the file.
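For example, a concrete invocation might look like this (volume and file
names are illustrative):
gluster volume heal testvol split-brain latest-mtime /dir1/file1.txt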
Change-Id: I7a16f72ff1a4495aa69f43f22758a9404e958b4f
BUG: 1321322
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/13828
Smoke: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
When an entry is being created, the inode is yet to be linked, so args must
be filled with the gfid and ia_type for it to give a consistent iatt.
Also handle DHT sending fops on an inode that is not yet linked.
BUG: 1302948
Change-Id: I6969cacb437cad02f66716f3bf8ec004ffe7c691
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/13827
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anuradha Talur <atalur@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
This patch is the second part of the change to prevent data loss
in a replicate volume when doing an add-brick operation.
Problem: After doing add-brick, there is a chance
that self-heal might happen from the newly added
brick rather than the source brick, leading to data loss.
Solution: Mark pending changelogs on afr children for
the new afr-child so that heal is performed in the
correct direction.
Change-Id: I11871e55eef3593aec874f92214a2d97da229b17
BUG: 1276203
Signed-off-by: Anuradha Talur <atalur@redhat.com>
Reviewed-on: http://review.gluster.org/12454
Smoke: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
It is better to choose the local brick as the source if possible, to prevent
over-the-wire reads and thus save bandwidth. Also changed the code to not
attempt data heal if the arbiter is selected as the 'source'.
Change-Id: I9a328d0198422280b13a30ab99545370a301dfea
BUG: 1314150
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/13585
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
Tested-by: Krutika Dhananjay <kdhananj@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
1. In afr_getxattr_cbk, consider the errno value before blindly
launching an inode refresh and a subsequent retry on other children.
2. We want to accuse small files only when we know for sure that there is no
IO happening on that inode. Otherwise, the ia_sizes obtained in the
post-inode-refresh replies may mismatch due to a race between
inode-refresh and ongoing writes, causing spurious heal launches.
Change-Id: Ife180f4fa5e584808c1077aacdc2423897675d33
BUG: 1309462
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/13595
Smoke: Gluster Build System <jenkins@build.gluster.com>
Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Problem:
If afr_lookup_done() or afr_read_subvol_select_by_policy() chooses the
arbiter brick to serve the stat() data, file size will be reported as
zero from the mount, despite other data bricks being available. This can
break programs like tar which use the stat info to decide how much to read.
Fix:
In the inode-context, mark arbiter as a non-readable subvol for both
data and metadata.
It is to be noted that by making this fix, we are *not* going to serve
metadata FOPs from the arbiter brick anymore, despite the brick storing the
metadata. It makes sense to do this because the ever-increasing overloaded
FOPs (getxattr returning stat data, etc.) and compound FOPs in gluster would
otherwise make it difficult to add checks in the code to handle corner cases.
Change-Id: Ic60b25d77fd05e0897481b7fcb3716d4f2101001
BUG: 1310171
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reported-by: Mat Clayton <mat@mixcloud.com>
Reviewed-on: http://review.gluster.org/13539
Reviewed-by: Anuradha Talur <atalur@redhat.com>
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Problem:
Afr takes a dict_ref of the xattr_req that comes to it and deletes the
"gfid-req" key. Dht uses the same dict to send the lookup to its other
subvolumes. So in the case of directories and more than one dht subvolume,
the second through last subvolumes won't get a lookup request with
"gfid-req", and gfid reset never happens on the directories in a
distributed-replicate volume for those subvolumes.
Fix:
Make a copy of the lookup xattr request.
Also fixed replies_wipe possibly resetting the gfid to the NULL gfid.
BUG: 1312816
Change-Id: Ic16260e5a4664837d069c1dc05b9e96ca05bda88
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/13545
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
If a heal is needed after inode refresh (lookup, read_txn), launch it in
the background instead of blocking the fop (that triggered refresh) until the
heal happens.
afr_replies_interpret() is modified such that the heal is
launched only if at least one sink brick is up.
The maximum number of heals that can happen in parallel is configurable via
the 'background-self-heal-count' volume option. Any heal beyond that is put
in a wait queue whose length is configurable via the 'heal-wait-queue-length'
volume option. If the wait queue is also full, further heals are ignored.
Default values: background-self-heal-count=8, heal-wait-queue-length=128
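A tuning sketch (volume name and values illustrative; assumes both options
are settable via the usual volume-set interface):
# allow more parallel background heals and a longer wait queue
gluster volume set <VOLNAME> cluster.background-self-heal-count 16
gluster volume set <VOLNAME> cluster.heal-wait-queue-length 256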
Change-Id: I1d4a52814cdfd43d90591b6d2ad7b6219937ce70
BUG: 1297172
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/13207
Smoke: Gluster Build System <jenkins@build.gluster.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
cli/src/cli-cmd-parser.c (chenk)
cli/src/cli-xml-output.c (spandit)
cli/src/cli.c (chenk)
libglusterfs/src/common-utils.c (vmallika)
libglusterfs/src/gfdb/gfdb_sqlite3.c (jfernand +1)
rpc/rpc-transport/socket/src/socket.c (?)
xlators/cluster/afr/src/afr-transaction.c (?)
xlators/cluster/dht/src/dht-common.h (srangana +2)
xlators/cluster/dht/src/dht-selfheal.c (srangana +2)
xlators/debug/io-stats/src/io-stats.c (R. Wareing)
xlators/features/barrier/src/barrier.c (vshastry)
xlators/features/bit-rot/src/bitd/bit-rot-scrub.h (vshankar +1)
xlators/features/shard/src/shard.c (kdhananj +1)
xlators/mgmt/glusterd/src/glusterd-ganesha.c (skoduri)
xlators/mgmt/glusterd/src/glusterd-handler.c (atinmu)
xlators/mgmt/glusterd/src/glusterd-op-sm.h (atinmu)
xlators/mgmt/glusterd/src/glusterd-snapshot.c (spandit)
xlators/mgmt/glusterd/src/glusterd-syncop.c (atinmu)
xlators/mgmt/glusterd/src/glusterd-volgen.c (atinmu)
xlators/protocol/client/src/client-messages.h (mselvaga +1)
xlators/storage/bd/src/bd-helper.c (M. Mohan Kumar)
xlators/storage/bd/src/bd.c (M. Mohan Kumar)
xlators/storage/posix/src/posix.c (nbalacha +1)
Change-Id: I85934fbcaf485932136ef3acd206f6ebecde61dd
BUG: 1293133
Signed-off-by: Kaleb S KEITHLEY <kkeithle@redhat.com>
Reviewed-on: http://review.gluster.org/13031
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Problem:
If index heal is launched when some of the bricks are down, glustershd of that
node sends a -1 op_ret to glusterd which eventually propagates it to the CLI.
Also, glusterd sometimes sends an err_str and sometimes not (depending on the
failure happening in the brick-op phase or commit-op phase). So the message that
gets displayed varies in each case:
"Launching heal operation to perform index self heal on volume testvol has been
unsuccessful"
(OR)
"Commit failed on <host>. Please check log file for details."
Fix:
1. Modify afr_xl_op() to return -1 even if index healing fails on at least
one brick.
2. Ignore glusterd's error string in gf_cli_heal_volume_cbk and print a more
meaningful message.
The patch also fixes a bug in glusterfs_handle_translator_op() where, if we
encountered an error in the notify of one xlator, we broke out of the loop
instead of sending the notify to the other xlators.
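For reference, the index heal whose result these messages report is launched
with a command like the following (volume name illustrative):
gluster volume heal testvol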
Change-Id: I957f6c4b4d0a45453ffd5488e425cab5a3e0acca
BUG: 1302291
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/13303
Reviewed-by: Anuradha Talur <atalur@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
seek() is like a read(), so the same semantics are copied.
Change-Id: I100b741d9bfacb799df318bb081f2497c0664927
BUG: 1220173
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/11483
Smoke: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Now heal-info does an open() on the file being examined, so that the client
at some point sees the open-fd count being > 1 and releases the eager-lock,
and heal-info doesn't remain blocked until the IO completes.
Change-Id: Icc478098e2bc7234408728b54d8185102b3540dc
BUG: 1297695
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/13326
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Change-Id: I48d9e5313bd3ccf9fe26c90a7051a8a174d75c49
BUG: 1296818
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/13195
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
posix_mkdir does not send response xdata. So even though the replies are
valid, the response xdata dict is NULL. Check that the dict is non-NULL in
afr_replies_interpret() before doing dict_get().
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Change-Id: If543d68d8bfd2433519105839d5be106076cc276
BUG: 1294053
Reviewed-on: http://review.gluster.org/13185
Tested-by: Ravishankar N <ravishankar@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
1. ...but still do the other parts of data self-heal, like restoring the
timestamps and undoing pending xattrs.
2. Perform undo_pending inside inodelks.
3. If the arbiter is the only sink, do these other parts of data self-heal
inside a single lock-unlock sequence.
Change-Id: I64c9d5b594375f852bfb73dee02c66a9a67a7176
BUG: 1286017
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/12777
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Commit 2b7226f9d3470d8fe4c98c1fddb06e7f641e364d did not check for the
validity of a dict before doing a dict_get. Fix that.
Change-Id: Ie21f4da19256b17196f242cd8fd5bb76b0a69c1e
BUG: 1294053
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/13077
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Revisiting http://review.gluster.org/#/c/11814/, which unintentionally
introduced warnings from libtool about the xlator .so names.
According to [1], the -module option must appear in the Makefile.am
file(s); if -module is defined in a macro, e.g. in configure(.ac),
then libtool will not recognize that this is a module and will emit a
warning.
[1]
http://www.gnu.org/software/automake/manual/automake.html#Libtool-Modules
Change-Id: Ifa5f9327d18d139597791c305aa10cc4410fb078
BUG: 1248669
Signed-off-by: Kaleb S KEITHLEY <kkeithle@redhat.com>
Reviewed-on: http://review.gluster.org/13003
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: soumya k <skoduri@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Since commit 6e635284a4411b816d4d860a28262c9e6dc4bd6a
(glusterfs-3.7.7), the afr pending xattrs are stored in the volfile and used
by afr when it initializes. If a cluster is upgraded, prevent afr from loading
until the op-version has been bumped up to 3.7.7 and the volfiles have been
regenerated using a volume set command.
Without this fix, AFR will crash when initializing.
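As a sketch, the upgrade steps after all nodes are on 3.7.7 might look like
this (the op-version number and the second command are assumptions for
illustration; any volume-set regenerates the volfiles):
# bump the cluster op-version so the new volfile option can be emitted
gluster volume set all cluster.op-version 30707
# trigger volfile regeneration with the afr pending-xattr option included
gluster volume set <VOLNAME> cluster.self-heal-daemon on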
Change-Id: I14249dedb3f2f77cd754d78d8a9a70fdc5fc8c10
BUG: 1293293
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/13038
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
If an object (file) is marked bad by bitrot, do not consider the brick
on which the object is present as a potential read subvolume for AFR
irrespective of the pending xattr values.
Also do not consider the brick containing the bad object while
performing afr_accuse_smallfiles(). Otherwise if the bad object's size
is bigger, we may end up considering that as the source.
Change-Id: I4abc68e51e5c43c5adfa56e1c00b46db22c88cf7
BUG: 1290965
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/12955
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
For fd-based operations (fgetxattr, readv, etc.), if an inode refresh is
required, do it using fstat instead of lookup. This is because the file might
have been deleted by another client before the refresh, but POSIX mandates
that FOPs using already-open fds must still succeed.
Change-Id: Id5f71c3af4892b648eb747f363dffe6208e7ac09
BUG: 1285230
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/12894
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
When the disk associated with a brick returns EIO during lookup, chances are
that name heal would return an EIO because one of the syncop_XXX() operations
as part of it returned an EIO. This is inherently treated by afr_lookup_selfheal_wrap()
as a split-brain and the lookup is aborted prematurely with EIO even if it
succeeded on the other replica(s).
Change-Id: Ib9b7f2974bff8e206897bb4f689f0482264c61e5
BUG: 1291701
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/12973
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Problem:
When the AFR xlator initialises, it uses the names of the client xlators
below it for storing the pending changelogs (xattrs). This can be a problem
when some other xlator is loaded between AFR and the client. Though that is a
trivial 'traverse-graph-till-the-client-and-use-the-name' fix in AFR's
init(), there are other issues, such as when there is no client xlator at
all, say when AFR is moved to the server side.
Fix:
The client xlator names are currently unique and stored as
brickinfo->brick_ids. So persist these ids as comma-separated values in
AFR's volume_options and use them as xattr values during init().
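A minimal sketch of what the generated replicate section of the volfile might
then contain (volume/brick names are illustrative and the exact option key is
an assumption based on this change):
volume testvol-replicate-0
    type cluster/replicate
    # brick ids persisted by glusterd; AFR uses them as pending-xattr names
    option afr-pending-xattr testvol-client-0,testvol-client-1
    subvolumes testvol-client-0 testvol-client-1
end-volume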
Change-Id: Ie761ffeb3373a4c4d85ad05c84a768c4188aa90d
BUG: 1285152
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/12738
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Change-Id: Ida863844e14309b6526c1b8434273fbf05c410d2
BUG: 1250803
Signed-off-by: Anuradha Talur <atalur@redhat.com>
Reviewed-on: http://review.gluster.org/12658
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Things done:
1) During lookup and inode refresh as part of read_txn, a request is sent to
detect whether heal is required or not.
2) If heal is required, be conservative in setting the readdirp entry inodes
to NULL; otherwise don't be.
3) The self-heal daemon now crawls both the indices/xattrop and indices/dirty
directories while healing.
Change-Id: Ic4a4da63fb7e0726eab5f341a200859b29cf7eb7
BUG: 1250803
Signed-off-by: Anuradha Talur <atalur@redhat.com>
Reviewed-on: http://review.gluster.org/12507
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
The size-mismatch check should account for the arbiter brick having a
zero-size file, to prevent data self-heal from spuriously triggering or
assuming a need for self-heal.
Change-Id: I179775d604236b9c8abfa360657abbb36abae829
BUG: 1285634
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/12755
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Problem:
In the create fop, afr doesn't remember the flags. When afr has to fix up the
fd that needs to be opened on the other bricks because a brick was down at
the time of the create, the flags with which it needs to send the open are
not present.
Fix:
Remember the flags in the fd_ctx.
Thanks to Nitya for showing us the problem in re-open with the flags.
Change-Id: I8ce1eb50c35fc0722cfc25cb4b6d234ef56180e5
BUG: 1285173
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/12739
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
In glusterfs 3.4 and older, AFR did not take locks in the self-heal domain
during data self-heal. So this compat lock in the data domain was added to
prevent older clients from trying to heal a file while an existing self-heal
by a newer client was in progress. But the side effect was that all appending
writes (which take full locks in the data domain) from mounts would be
stalled until the self-heal was complete.
Since glusterfs 3.4 is not supported anymore, remove the compat lock.
Change-Id: I31c8e4d7f3364f769a14eec295154e3c40d9f78e
BUG: 1283032
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/12602
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Change-Id: I8ae7af266d3e00460f0cfdc9389a926e5f2fee36
BUG: 1282761
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/12598
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
As part of the CHILD_MODIFIED event, DHT forgets the current layout and
performs a fresh lookup. However, this is not required when a replica pair
goes offline, as the xattrs can be read from the other replica pairs. Hence
set a different event to handle a replica pair going down.
Change-Id: I5ede2a6398e63f34f89f9d3c9bc30598974402e3
BUG: 1281230
Signed-off-by: Sakshi Bansal <sabansal@redhat.com>
Reviewed-on: http://review.gluster.org/12573
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-by: Susant Palai <spalai@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>