| Commit message | Author | Age | Files | Lines |
| |
If a single brick is added to the volume and the
newly added brick is the first to respond to a
dht_revalidate call, its stbuf will not be merged
into local->stbuf as the brick does not yet have
a layout. The is_permission_different check therefore
fails to detect that an attr heal is required as it
only considers the stbuf values from existing bricks.
To fix this, merge all stbuf values into local->stbuf
and use local->prebuf to store the correct directory
attributes.
Change-Id: Ic9e8b04a1ab9ed1248b6b056e3450bbafe32e1bc
fixes: bz#1693057
Signed-off-by: N Balachandran <nbalacha@redhat.com>
| |
Problem:
In an arbiter volume configuration, SHD will not send any writes onto the arbiter
brick even if there is a data pending marker for the arbiter brick. If we have an
arbiter setup on the geo-rep master and there are data pending markers for the files
on the arbiter brick, SHD will not mark any data changelog during healing. While syncing
the data from master to slave, if the arbiter brick is considered ACTIVE, there is a
chance that the slave will miss some data. If the arbiter brick is newly added or
replaced, there is a chance of the slave missing all the data during sync.
Fix:
If there is a data pending marker for the arbiter brick, send a truncate on the arbiter
brick during heal, so that it records the truncate as the data transaction in the changelog.
Change-Id: I3242ba6cea6da495c418ef860d9c3359c5459dec
fixes: bz#1687746
Signed-off-by: karthik-us <ksubrahm@redhat.com>
| |
This reverts commit 421b7071f5acee064faf02dc91bcc83efbaa6523.
With this commit, the way glusterd calculates the checksum changes.
In a heterogeneous cluster, upgraded and non-upgraded nodes follow
different mechanisms to compute the checksum. Although the contents
of the files are the same, we see checksum mismatch errors and the
peers end up in the rejected state.
reverted patch: https://review.gluster.org/#/c/glusterfs/+/22149/
updates: bz#1672249
Change-Id: Ie12e1ac983d62594b161844b2967d8a3fbfedba6
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
| |
Problem: When the quorum count option is updated, the change is not reflected in
the nfs-server.vol file. This is because in get_checksum_for_file(), when the
last chunk read from the file is smaller than the buffer size, the read buffer
still holds stale data from the previous read along with the correct data.
Solution: Pass the number of bytes actually read, instead of the fixed buffer
size, when calculating the checksum.
Change-Id: I4b641607c8a262961b3f3da0028a54e08c3f8589
fixes: bz#1672249
Signed-off-by: Varsha Rao <varao@redhat.com>
| |
Fixes: bz#1673265
Change-Id: I2b9be45f199f6436b858536c6f49be85902217f0
Signed-off-by: Nigel Babu <nigelb@redhat.com>
| |
The default value of shard-block-size was changed from 4MB
to 64MB sometime back. The script "fallocate"s a 6MB file
and expects it to have 1 shard under .shard. This worked when
the shard-block-size was 4MB. With the default value now at 64MB,
file "file1" won't have any shards under .shard and the stat on the
1st shard's path fails with ENOENT.
Changed the script to explicitly set shard-block-size to 4MB.
Change-Id: I7f1785922287d16d74c95fa57cbbe12e6e66e4f7
fixes: bz#1662635
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
(cherry picked from commit 32f55a2708903145a51de5705ca8e60ff6dd6f9e)
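In .t terms, the change described boils down to one volume-set line before the fallocate; a minimal sketch, assuming the usual test-harness variables ($CLI, $V0, $M0) as placeholders:
TEST $CLI volume set $V0 features.shard-block-size 4MB
TEST fallocate -l 6M $M0/file1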
| |
Backport of https://review.gluster.org/#/c/glusterfs/+/21297/
Problem:
If the parent dir is in split-brain or has dirty xattrs set, and the file
is missing its gfid on one of the bricks, then name heal won't assign the
gfid.
Fix:
Use the brick we select the gfid from as the 'source'.
Note: Problem was found while trying to debug a split-brain issue on
Cynthia Zhou's setup.
fixes: bz#1655561
Change-Id: Id088d4f0fb017aa35122de426654194e581ed742
Reported-by: Cynthia Zhou <cynthia.zhou@nokia-sbell.com>
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
| |
With commit febf5ed4848, during the volume create op,
we set volinfo->caps to 0 only if any of the bricks
belong to the same node and brickinfo->vg[0] is null.
Previously, we used to set volinfo->caps to 0 when
either a brick doesn't belong to the same node or brickinfo->vg[0]
is null.
With this patch, we again set volinfo->caps to 0 when either a brick
doesn't belong to the same node or brickinfo->vg[0] is null
(as we did before commit febf5ed4848).
> BUG: bz#1635820
> Change-Id: I00a97415786b775fb088ac45566ad52b402f1a49
> Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
fixes: bz#1643052
Change-Id: I00a97415786b775fb088ac45566ad52b402f1a49
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
| |
Patch https://review.gluster.org/#/c/glusterfs/+/19135/ has
optimised glusterd test cases by clubbing similar test
cases into a single test case.
The test case
https://review.gluster.org/#/c/glusterfs/+/19135/15/tests/bugs/glusterd/bug-1293414-import-brickinfo-uuid.t
has been deleted and added as a part of
tests/bugs/glusterd/optimized-basic-testcases-in-cluster.t.
In the original test case, we create a volume with two bricks,
each on a separate node (N1 & N2). From another node in the cluster (N3),
we try to detach a node which is hosting bricks. It fails.
In the new test, we created a volume with a single brick on N1,
and from another node in the cluster we tried to detach N1. We
expected the peer detach to fail, but it succeeded even though
the node is hosting all the bricks of the volume.
Now, the new test case is changed to cover the original test case scenario.
Please refer to https://bugzilla.redhat.com/show_bug.cgi?id=1642597#c1 to
understand why the new test case is not failing in centos-regression.
> BUG: bz#1642597
> Change-Id: Ifda12b5677143095f263fbb97a6808573f513234
> Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
(cherry picked from commit 0ca6773eaf5aeb507ebc72d2c2f61902eeff414c)
fixes: bz#1643075
Change-Id: Ifda12b5677143095f263fbb97a6808573f513234
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
| |
Problem:
https://review.gluster.org/#/c/glusterfs/+/21427/ seems to be failing
this .t spuriously. On checking one of the failure logs, I see:
22:05:44 Launching heal operation to perform index self heal on volume patchy has been unsuccessful:
22:05:44 Self-heal daemon is not running. Check self-heal daemon log file.
22:05:44 not ok 20 , LINENUM:38
In glusterd log:
[2018-10-18 22:05:44.298832] E [MSGID: 106301] [glusterd-syncop.c:1352:gd_stage_op_phase] 0-management: Staging of operation 'Volume Heal' failed on localhost : Self-heal daemon is not running. Check self-heal daemon log file
But the tests which precede this one check, via a statedump, whether the shd is
connected to the bricks, and they have succeeded and the shd had even started
healing. From glustershd.log:
[2018-10-18 22:05:40.975268] I [MSGID: 108026] [afr-self-heal-common.c:1732:afr_log_selfheal] 0-patchy-replicate-0: Completed data selfheal on 3b83d2dd-4cf2-4ea3-a33e-4275be40f440. sources=[0] 1 sinks=2
So the only reason I can see for launching heal via the cli failing is a race where
the shd has been spawned but glusterd has not yet updated in memory that it is up,
and hence it fails the CLI.
Fix:
Check for shd up status before launching heal via CLI
Change-Id: Ic88abf14ad3d51c89cb438db601fae4df179e8f4
fixes: bz#1641761
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
(cherry picked from commit 3dea105556130abd4da0fd3f8f2c523ac52398d1)
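In .t terms, the fix amounts to waiting for the self-heal daemon before invoking heal; a minimal sketch, assuming the usual volume.rc helper names (glustershd_up_status, $PROCESS_UP_TIMEOUT, $CLI, $V0), which may differ from what the patch actually uses:
EXPECT_WITHIN $PROCESS_UP_TIMEOUT "Y" glustershd_up_status
TEST $CLI volume heal $V0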
| |
Backport of https://review.gluster.org/#/c/glusterfs/+/21380/
Problem:
In an arbiter volume, if there is a pending data heal of a file only on
arbiter brick, self-heal takes inodelks twice due to a code-bug but unlocks
it only once, leaving behind a stale lock on the brick. This causes
the next write to the file to hang.
Fix:
Fix the code-bug to take the lock only once. This bug was introduced in master
with commit eb472d82a083883335bc494b87ea175ac43471ff.
Thanks to Pranith Kumar K <pkarampu@redhat.com> for finding the RCA.
fixes: bz#1637953
Change-Id: I15ad969e10a6a3c4bd255e2948b6be6dcddc61e1
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
| |
Backport of https://review.gluster.org/#/c/glusterfs/+/21135/
Problem:
When a directory has dirty xattrs due to failed post-ops or when
replace/reset brick is performed, AFR does a conservative merge as
expected, but heal-info reports it as split-brain because there are no
clear sources.
Fix:
Modify the pending flag to contain information about pending heals and
split-brains. For directories, if the split-brain flag is not set, just show
them as needing heal and not as being in split-brain.
Change-Id: I09ef821f6887c87d315ae99e6b1de05103cd9383
fixes: bz#1633634
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
| |
Problem:
When metadata-self-heal is triggered on the mount, it blocks
lookup until the metadata-self-heal completes. That can lead
to hangs when a lot of clients are accessing a directory which
needs metadata heal and all of them trigger heals, waiting
for the other clients to complete the heal.
Fix:
Trigger the metadata heal that could block lookup only when the heal
is needed but the pending xattrs are not set. This is the only
case where, without heals, different clients may serve different
metadata, which should be avoided.
Updates bz#1625575
Change-Id: I6089e9fda0770a83fb287941b229c882711f4e66
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
| |
It wouldn't make sense to allow the io-stats file to be written to
*any* directory. While the formatting makes sure we append the
io-stats name to the file, so the chance of overwriting an existing file is slim,
it makes sense in any case to restrict dumping to one directory.
Below are the sample commands, and files created for the corresponding
values:
$ setfattr -n trusted.io-stats-dump -v file-for-dump $M0
In this case, the file would be in /var/run/gluster/file-for-dump
$ setfattr -n trusted.io-stats-dump -v /dir1/dir2/file-for-dump $M0
In this case, the dump file is in /var/run/gluster/dir1-dir2-file-for-dump
Note that the value passed for this virtual xattr would be treated as a
file, and even if the value has '/' in it, it would be changed to '-'
for sanity.
Fixes: bz#1625106
Change-Id: Id9ae6a40a190b8937c51662e6e1c2a0f6c86a0e0
Signed-off-by: Amar Tumballi <amarts@redhat.com>
| |
The value of trusted.pgfid.xx was always set to 1
in posix_mknod. This is incorrect if posix_mknod
calls posix_create_link_if_gfid_exists.
Change-Id: Ibe87ca6f155846b9a7c7abbfb1eb8b6a99a5eb68
fixes: bz#1623317
Signed-off-by: N Balachandran <nbalacha@redhat.com>
| |
commit 20fa80057eb430fd72b4fa31b9b65598b8ec1265 introduced a regression
wherein if a file is present on only 1 brick of a replica *and* doesn't
have a gfid associated with it, it doesn't get healed upon the next
lookup from the client. Fix it.
Change-Id: I7d1111dcb45b1b8b8340a7d02558f05df70aa599
fixes: bz#1597117
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
(cherry picked from commit eb472d82a083883335bc494b87ea175ac43471ff)
| |
1) The snprintf into linkname_expected should happen with PATH_MAX.
2) The comparison should compare linkname_actual with the complete
linkname_expected string.
fixes: bz#1595524
Change-Id: Ic3b3c362dc6c69c046b9a13e031989be47ecff14
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
(cherry picked from commit 3099d3e6ba81d3e1abf37385b13aabf5837b9c5e)
| |
Problem:
In the .t, when the only good brick was brought down, writes on the fd were
still succeeding on the bad bricks. The inflight split-brain check was
marking the write as failure but since the write succeeded on all the
bad bricks, afr_txn_nothing_failed() was set to true and we were
unwinding writev with success to DHT and then catching the failure in
post-op in the background.
Fix:
Don't wind the FOP phase if the write_subvol (which is populated with readable
subvols obtained in pre-op cbk) does not have at least 1 good brick which was up
when the transaction started.
Note: This fix is not related to brick multiplexing. I ran the .t
10 times with this fix and brick-mux enabled without any failures.
Change-Id: I915c9c366aa32cd342b1565827ca2d83cb02ae85
updates: bz#1581548
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
(cherry picked from commit 985a1d15db910e012ddc1dcdc2e333cc28a9968b)
| |
brick"
Updates: bz#1582286
This reverts commit 0043c63f70776444f69667a4ef9596217ecb42b7.
Change-Id: Iab3b4f4a54e122c589e515add93c6effc966b3e0
| |
Details in the bug
Updates: bz#1581745
Change-Id: Id984e10b60daf274d5510e3ccbf7abf0cb19f368
Signed-off-by: ShyamsundarR <srangana@redhat.com>
(cherry picked from commit 92839c5a4a6c87973e4f6f2f96b359e9c2a0f5c0)
| |
Change-Id: I3a8d452d00560dac5e0b7ff0b1835d1f20a59f91
updates: bz#1580540
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
(cherry picked from commit c2cf3f686f3ea0efd936d2eafc404fc9d2e0acc7)
| |
This reverts commit d01f7244e9d9f7e3ef84e0ba7b48ef1b1b09d809.
This is being reverted as the API signatures should adapt to a
statx like structure, and also all APIs that need to return
pre/post attrs are not complete.
As a result, instead of fixing up part of the APIs and then
refixing the same in a later release, this set of
fixes is being removed from the branch.
Additionally fixed up posix-entry-ops.c which was using the
new syncop signature
Updates: bz#1575386
Change-Id: I35222dadc4a2e97010bc1e6b97b6f83583c311f6
| |
cut -d$'\n' is not separating the xattrs shown as part of getfattr output.
Hence use awk to get the nth line of getfattr output for nth iteration
in the for loop.
Change-Id: I1a96cd3f72f4f407f9a783375f78d9a69d5d3885
fixes: bz#1574606
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
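A minimal sketch of the awk-based selection described above (the file name, getfattr flags and loop bound are placeholders, not the actual test's values):
for i in $(seq 1 "$count"); do
    xattr=$(getfattr -d -m . -e hex "$file" 2>/dev/null | awk -v n="$i" 'NR == n')
    echo "$xattr"
done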
| |
force-migration config for remove-brick operation.
The cli will take input from the user before starting the "remove-brick
start" operation. The confirmation message looks like the following:
<Running remove-brick with cluster.force-migration enabled can result
in data corruption. It is safer to disable this option so that files
that receive writes during migration are not migrated. Files that are
not migrated can then be manually copied after the remove-brick commit
operation. Do you want to continue with your current
cluster.force-migration settings? (y/n)>
The question for COMMIT_FORCE is also changed.
Fixes: bz#1572586
Change-Id: Ifdb6b108a646f50339dd196d6e65962864635139
Signed-off-by: Susant Palai <spalai@redhat.com>
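For illustration, the confirmation above is shown when starting the operation from the CLI; the volume and brick names below are placeholders:
$ gluster volume remove-brick myvol server1:/bricks/brick1 start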
| |
see https://review.gluster.org/#/c/19788/
use print fn from __future__
Change-Id: If5075d8d9ca9641058fbc71df8a52aa35804cda4
updates: #411
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
| |
Problem: Sometimes the brick process crashes at the time
of stopping a brick while brick mux is enabled.
Solution: The brick process was crashing because the rpc connection
was not being cleaned up properly while brick mux is enabled. With this patch,
after sending the GF_EVENT_CLEANUP notification to the xlator (server),
we wait for all rpc client connections of that specific xlator to be destroyed.
Once the rpc connections for all clients associated with that brick are destroyed
in server_rpc_notify, xlator_mem_cleanup is called for the brick xlator as well as
all child xlators. To avoid races at the time of cleanup, two new flags are
introduced in each xlator: cleanup_starting and call_cleanup.
BUG: 1544090
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Note: All test cases were run in a separate build (https://review.gluster.org/#/c/19700/)
with the same patch and brick mux forcefully enabled; all test cases
passed.
Change-Id: Ic4ab9c128df282d146cf1135640281fcb31997bf
updates: bz#1544090
| |
1. If pre-op fails on all bricks, set lock->release to true in
afr_handle_lock_acquire_failure() so that the GF_ASSERT in afr_unlock() does not
crash.
2. Added a missing 'return' after handling pre-op failure in
afr_transaction_perform_fop(), fixing a use-after-free issue.
Change-Id: If0627a9124cb5d6405037cab3f17f8325eed2d83
fixes: bz#1561129
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
| |
Note 1) we're not supposed to be using #!/usr/bin/env python, see
https://fedoraproject.org/wiki/Packaging:Guidelines?rd=Packaging/Guidelines#Shebang_lines
Note 2) we're also not supposed to be using "!/usr/bin/python,
see https://fedoraproject.org/wiki/Changes/Avoid_usr_bin_python_in_RPM_Build#Quick_Opt-Out
The previous patch (https://review.gluster.org/19767) tried to do too
much in one patch, so it was abandoned.
This patch does two things:
1) minor cleanup of configure(.ac) to explicitly use python2
2) change all the shebang lines to #!/usr/bin/python2 and add them
where they were missing based on warnings emitted during rpmbuild.
In a follow-up patch python2 will eventually be changed to python3.
Before that python2-isms (e.g. print, string.join(), etc.) need to be
converted to python3. Some of those can be rewritten in version agnostic
python. E.g. print statements become print() with "from __future_ import
print_function". The python 2to3 utility will be used for some of those.
Also Aravinda has given guidance in the comments to the first patch for
changes.
updates: #411
Change-Id: I471730962b2526022115a1fc33629fb078b74338
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
| |
Change-Id: I4648816af908539efdc2528608aa2ebf7f0d0e2f
fixes: bz#1559004
BUG: 1559004
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
| |
glusterd maintains a boolean flag 'port_registered' which is used to determine
if a brick has completed its portmap sign-in process. This flag is (re)set in
the pmap_signin and pmap_signout events. In case of brick multiplexing this flag is
the identifier to determine if the very first brick with which the process is
spawned up has completed its sign-in process. However, in case of a glusterd
restart when a brick is already identified as running, glusterd does a
pmap_registry_bind to ensure its portmap table is updated, but this flag isn't
set. This is fine in the non-brick-multiplex case, but it causes an issue if
the very first brick which came up as part of the process is replaced: then
the subsequent brick attach will fail. One way to validate this
is to create and start a volume, remove the first brick and then
add-brick a new one. The add-brick operation will take a very long time, and
after that the volume status will show the status of all bricks apart from
the new brick as down.
The solution is to set brickinfo->port_registered to true for all the
running bricks when brick multiplexing is enabled.
Change-Id: Ib0662d99d0fa66b1538947fd96b43f1cbc04e4ff
Fixes: bz#1560957
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
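A rough reproduction of the symptom described above, with brick multiplexing enabled (host, volume and brick paths are placeholders, the systemctl call assumes a systemd setup, and the exact sequence in the BZ may differ):
$ gluster volume set all cluster.brick-multiplex on
$ gluster volume create testvol node1:/bricks/b1 node1:/bricks/b2 force
$ gluster volume start testvol
$ systemctl restart glusterd            # running bricks are re-bound via pmap_registry_bind
$ gluster volume remove-brick testvol node1:/bricks/b1 force
$ gluster volume add-brick testvol node1:/bricks/b3 force
$ gluster volume status testvol         # add-brick is slow; the new brick shows as down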
| |
This reverts commit a60fc2ddc03134fb23c5ed5c0bcb195e1649416b.
This commit was causing multiple tests to time out when brick
multiplexing is enabled. With further debugging, it's found that even
though the volume stop transaction is converted into mgmt_v3 to allow
the remote nodes to follow the synctask framework to process the command,
there are other callers of glusterd_brick_stop () which are not synctask
based.
Change-Id: I7aee687abc6bfeaa70c7447031f55ed4ccd64693
updates: bz#1545048
| |
Problem: There's a race between the last glusterfs_handle_terminate()
response sent to glusterd and the kill that happens immediately if the
terminated brick is the last brick.
Solution: When it is the last brick for the brick process, instead of glusterfsd
killing itself, glusterd will kill the process in case of brick multiplexing.
The gf_attach utility is also changed accordingly.
Change-Id: I386c19ca592536daa71294a13d9fc89a26d7e8c0
fixes: bz#1545048
BUG: 1545048
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
| |
Problem:
1) Afr's eager-lock only works for data transactions.
2) When there are conflicting writes, a write with a conflicting region initiates
unlock of the eager-lock, leading to extra pre-ops and post-ops on the file. When
eager-lock goes off, it leads to extra fsyncs for random-write workloads in afr.
Solution (that is modeled after EC):
In EC, when there is a conflicting write, it waits for the current write to
complete before it winds the conflicted write. This leads to better utilization
of network and disk, because we will not be doing extra xattrops and FSYNCs and
inodelk/unlock. Moved fd based counters to inode based counters.
I tried to model the solution based on EC's locking, but it is not similar to
AFR because we had to keep backward compatibility.
Lifecycle of lock:
==================
The first transaction is added to the inode->owners list and an inodelk will be sent on
the wire. All the next transactions will be put in the inode->waiters list until
the first transaction completes its inodelk and [f]xattrop completely. Once the
[f]xattrop also completes, all the requests in the inode->waiters list are
checked for conflicts with any of the existing locks which are in the
inode->owners list; if there is no conflict they are added to the inode->owners list
and resumed to do their transaction. When these transactions complete the fop phase they are
moved to the inode->post_op list and the transactions that were paused
because of conflicts are resumed. Post-op and unlock will not be issued on the wire until
it is the last transaction on that inode. The last transaction, when it has to
perform post-op, can choose to sleep for the delayed-post-op-secs value. During that
time, if any other transaction comes, it will wake up the sleeping transaction
and take over the ownership of the lock, and the cycle continues. If the
delayed-post-op-secs expire, then the timer thread will wake up the sleeping
transaction, which will set lock->release to true and start doing post-op and
then unlock. During this time, if any other transactions come, they will be put
in the inode->frozen list. Once the previous unlock completes, it moves the frozen
list to the waiters list, moves the first element from this waiters list to the
owners list and attempts the lock, and the cycle continues. This is the general
idea. There is logic at the time of delaying, and at the time of a new
transaction or in the flush fop, to wake up existing sleeping transactions or
to choose whether to delay a transaction etc., which is subject to change based
on future enhancements.
Fixes: #418
BUG: 1549606
Change-Id: I88b570bbcf332a27c82d2767dfa82472f60055dc
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
| |
Self-heal creates a thread per brick to sweep the index looking for
files that need to be healed. These threads are started before the
volume comes online, so nothing is done but waiting for the next
sweep. This happens once per minute.
When a replace brick command is executed, the new graph is loaded and
all index sweeper threads started. When all bricks have reported, a
getxattr request is sent to the root directory of the volume. This
causes a heal on it (because the new brick doesn't have good data),
and marks its contents as pending to be healed. This is done by the
index sweeper thread on the next round, one minute later.
This patch solves this problem by waking all index sweeper threads
after a successful check on the root directory.
Additionally, the index sweep thread scans the index directory
sequentially, but it might happen that after healing a directory entry
more index entries are created but skipped by the current directory
scan. This causes the remaining entries to be processed on the next
round, one minute later. The same can happen in the next round, so
the heal runs in bursts and takes a long time to finish, especially
on volumes with many directory levels.
This patch solves this problem by immediately restarting the index
sweep if a directory has been healed.
Change-Id: I58d9ab6ef17b30f704dc322e1d3d53b904e5f30e
BUG: 1547662
Signed-off-by: Xavi Hernandez <jahernan@redhat.com>
| |
This test does:
1. mount a volume
2. kill a brick in the volume
3. mkdir (/somedir)
In my local tests and in [1], I see that mkdir in step 3 fails because
there is no dht-layout on root directory.
The reason, I think, is that by the time the first lookup on "/" hit dht, a brick
had been killed as per step 2. This means the layout was not healed for "/" and,
since this is a new volume, no layout is present on it. Note that the
first lookup done on "/" by fuse-bridge is not synchronized with
the parent process of the daemonized glusterfs mount completing. IOW, by the
time the glusterfs cmd has returned there is no guarantee that the lookup on "/"
is complete. So, if step 2 races ahead of fuse_first_lookup on "/", we
end up with an invalid dht-layout on "/" resulting in failures.
Doing an operation like ls makes sure that the lookup on "/" is completed
before we kill a brick.
Change-Id: Ie0c4e442c4c629fad6f7ae850437e3d63fe4bea9
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
BUG: 1543279
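In .t terms, the fix amounts to forcing the root lookup to complete before the brick is killed; a minimal sketch, assuming the usual harness variables and helpers ($H0, $V0, $M0, $B0, kill_brick) as placeholders:
TEST glusterfs --volfile-server=$H0 --volfile-id=$V0 $M0
# the ls forces the first lookup on "/" to complete before the brick goes down
TEST ls $M0
TEST kill_brick $V0 $H0 $B0/${V0}1
TEST mkdir $M0/somedir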
| |
Because bug-924726.t depends on netstat, the test failed before. This got resolved
by adding the respective check to run-tests.sh.
The respective test is enabled again.
Change-Id: I70c9bff03379ed9ee8cd95842c3501dfb50b8e86
BUG: 1312830
Signed-off-by: Sven Fischer <sven@fischer-abc.de>
| |
Change-Id: Ib74354f57a18569762ad45a51f182822a2537421
BUG: 1468483
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
| |
For as long as a shard's inode is in priv->lru_list, it should have a non-zero
ref-count. This patch achieves it by taking a ref on the inode when it
is added to lru list. When it's time for the inode to be evicted
from the lru list, a corresponding unref is done.
Change-Id: I289ffb41e7be5df7489c989bc1bbf53377433c86
BUG: 1468483
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
| |
This uses the 'timeout' command with a 300-second default. Right now,
there is just 1 test which takes more than that on a properly
set-up machine.
Ideally, the best case is to set the default to something like 30 seconds,
and if a test is supposed to take more than that, the owner should knowingly
add a timeout line to the test. That way, it makes test writers
think about a time limit too.
Change-Id: I747005ce1f208aeb2ecbf899e8feea487ecd21a0
Signed-off-by: Amar Tumballi <amarts@redhat.com>
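For illustration, the mechanism boils down to wrapping a test run in coreutils 'timeout'; the test path and prove flags below are placeholders for what run-tests.sh actually does:
$ timeout 300 prove -vf tests/bugs/some-test.t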
| |
In bug-1482023-snpashot-issue-with-other-processes-accessing-mounted-path.t,
check for the peer count after starting the glusterd instance on node 2.
Change-Id: I3f92013719d94b6d92fb5db25efef1fb4b41d510
BUG: 1540607
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
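A minimal .t-style sketch of the check described, assuming the usual cluster.rc helpers (start_glusterd, peer_count, $PROBE_TIMEOUT), which may differ from the actual patch:
TEST start_glusterd 2
EXPECT_WITHIN $PROBE_TIMEOUT 1 peer_count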
| |
In nfs-ganesha, wcc data containing pre/post attributes is returned
in the read/write rpc reply. Right now nfs-ganesha gets those attributes by
issuing two getattr calls around the real read/write.
But gluster already returns pre/post attributes from glusterfsd;
those attributes are just skipped in syncop/gfapi. If gfapi returns them,
the upper user (nfs-ganesha) can use them directly without any
duplicate getattr.
Updates: #389
Change-Id: I7b643ae4241cfe2aeb17063de00192d81674024a
Signed-off-by: Kinglong Mee <mijinlong@open-fs.com>
| |
To reduce the overall time taken by every regression job for all glusterd test cases,
avoid some duplicate tests by clubbing similar test cases into one.
The real time taken for all glusterd regression jobs without this patch is 1959 seconds;
with this patch it is 1059 seconds.
See the document below for reference.
https://docs.google.com/document/d/1u8o4-wocrsuPDI8BwuBU6yi_x4xA_pf2qSrFY6WEQpo/edit?usp=sharing
Change-Id: Ib14c61ace97e62c3abce47230dd40598640fe9cb
BUG: 1530905
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
| |
In one particular case in commit cb0339f, after removing the old snap
the new snap version was not being written, and
this resulted in one of the tests failing spuriously.
Change-Id: I3e83435fb62d6bba3bbe227e40decc6ce37ea77b
BUG: 1540607
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
| |
Problem:
We currently don't have a roll-back/undoing of post-ops if quorum is not
met. Though the FOP is still unwound with failure, the xattrs remain on
the disk. Due to these partial post-ops and partial heals (healing only when
2 bricks are up), we can end up in split-brain purely from the afr
xattrs point of view, i.e. each brick is blamed by at least one of the
others. These scenarios are hit when there is frequent
connect/disconnect of the client/shd to the bricks while I/O or heal
are in progress.
Fix:
Instead of undoing the post-op, pick a source based on the xattr values.
If 2 bricks blame one, the blamed one must be treated as sink.
If there is no majority, all are sources. Once we pick a source,
self-heal will then do the heal instead of erroring out due to
split-brain.
Change-Id: I3d0224b883eb0945785ade0e9697a1c828aec0ae
BUG: 1539358
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
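To make the source-picking rule concrete: the "blame" lives in the trusted.afr.* pending xattrs on each brick, which can be inspected as below (brick paths, volume name and client indices are placeholders for a 1x3 volume):
$ getfattr -d -m trusted.afr -e hex /bricks/b0/file
$ getfattr -d -m trusted.afr -e hex /bricks/b1/file
$ getfattr -d -m trusted.afr -e hex /bricks/b2/file
If b0 and b1 both show non-zero pending data counts in trusted.afr.<vol>-client-2 while b2 blames nobody, brick b2 is picked as the sink and b0/b1 as sources; if no brick is blamed by a majority, all bricks are treated as sources.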
| |
Change-Id: Ifbf5e628ccb9a0ecb285f5884a41e70d935316bd
Signed-off-by: Csaba Henk <csaba@redhat.com>
| |
Currently, the list of xattrs that md-cache can cache is hard coded
in the md-cache.c file; this necessitates a code change and rebuild
every time a new xattr needs to be added to the md-cache xattr cache
list.
With this patch, the user is able to configure a comma-separated
list of xattrs to be cached by md-cache.
Updates #297
Change-Id: Ie35ed607d17182d53f6bb6e6c6563ac52bc3132e
Signed-off-by: Poornima G <pgurusid@redhat.com>
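Hypothetical usage of the new behaviour (the option name 'xattr-cache-list' and the xattr values below are assumptions for illustration; see the patch for the exact option name):
$ gluster volume set myvol xattr-cache-list "security.ima,user.mykey"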
| |
afr relies on pending changelog xattrs to identify sources and sinks, and the
setting of these xattrs happens in the post-op. So if the post-op fails, we need to
unwind the write txn with a failure.
Change-Id: I0f019ac03890108324ee7672883d774918b20be1
BUG: 1506140
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
| |
Currently, md-cache sends a list of xattrs it is interested in receiving
invalidations for, but it cannot specify any wildcard in the xattr names,
e.g. user.* - invalidate on updating any xattr with the user. prefix.
This patch enables upcall to honor wildcards in the xattr key names.
Updates: #297
Change-Id: I98caf0ed72f11ef10770bf2067d4428880e0a03a
Signed-off-by: Poornima G <pgurusid@redhat.com>
| |
..in order for self-heal of symlinks to work properly (see BZ for
details).
Change-Id: I9a011d00b07a690446f7fd3589e96f840e8b7501
BUG: 1529488
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
| |
so that glusterd is also aware that shd is up and running.
While not reproducible locally, on the jenkins slaves, 'gluster vol heal patchy'
fails with "Self-heal daemon is not running. Check self-heal daemon log file.",
while infact the afr_child_up_status_in_shd() checks before that passed. In the
shd log also, I see the shd being up and connected to at least one brick before
the heal is launched.
Change-Id: Id3801fa4ab56a70b1f0bd6a7e240f69bea74a5fc
BUG: 1515163
Signed-off-by: Ravishankar N <ravishankar@redhat.com>