Commit message log

Problem:
When name-self-heal is triggered on the mount, it blocks
lookup until the name-self-heal completes. But that can lead
to hangs when a lot of clients access a directory which
needs name heal and all of them trigger heals, each waiting
for the other clients to complete their heal.
Fix:
When a name heal is needed but a quorum number of names have
the file and pending xattrs exist on the parent, it is better
to delegate the heal to SHD, which will complete it as part of
the entry heal of the parent directory. We could do the same
when a quorum number of names is not present, but we don't have
any known use-case where that is a frequent occurrence, so that
part is not being changed at the moment. When there is a gfid
mismatch or a missing gfid it is important to complete the heal
so that the next rename doesn't assume everything is fine and
perform a rename anyway.
fixes bz#1625575
Change-Id: I8b002c85dffc6eb6f2833e742684a233daefeb2c
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
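To make the decision above concrete, here is a minimal C sketch (invented names and structures, not the actual afr code) of when a lookup must still do the blocking name heal versus when it can leave the work to SHD:

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical summary of what the lookup replies told us. */
    struct name_state {
        int replica_count;       /* bricks in the replica set            */
        int names_present;       /* bricks where the name resolved       */
        bool gfid_mismatch;      /* replies returned different gfids     */
        bool gfid_missing;       /* some reply had no gfid at all        */
        bool parent_has_pending; /* pending xattrs already on the parent */
    };

    /* Return true if lookup must do the (blocking) name heal itself,
     * false if it can be delegated to SHD's entry heal of the parent. */
    static bool
    name_heal_must_block(const struct name_state *s)
    {
        int quorum = s->replica_count / 2 + 1;

        /* gfid problems must be fixed now, or a later rename may act
         * on stale state. */
        if (s->gfid_mismatch || s->gfid_missing)
            return true;

        /* Quorum of bricks have the name and the parent already carries
         * pending xattrs: SHD will repair it, don't block the lookup. */
        if (s->names_present >= quorum && s->parent_has_pending)
            return false;

        return true;
    }

    int
    main(void)
    {
        struct name_state s = { .replica_count = 3, .names_present = 2,
                                .parent_has_pending = true };
        printf("block lookup for name heal: %s\n",
               name_heal_must_block(&s) ? "yes" : "no");
        return 0;
    }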
Problem:
When metadata-self-heal is triggered on the mount, it blocks
lookup until the metadata-self-heal completes. But that can lead
to hangs when a lot of clients access a directory which
needs metadata heal and all of them trigger heals, each waiting
for the other clients to complete their heal.
Fix:
Trigger the metadata heal that can block lookup only when the heal
is needed but the pending xattrs are not set. This is the only
case where, without a heal, different clients may serve different
metadata, which should be avoided.
Updates bz#1625575
Change-Id: I6089e9fda0770a83fb287941b229c882711f4e66
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Problem:
In a disk-full scenario, we take a failure path in afr_transaction_perform_fop()
and go to the unlock phase. But we change the lk-owner before that, causing the
unlock to fail. When the mount issues another fop that takes locks on that file,
it hangs.
Fix:
Change the lk-owner only when we are about to perform the fop phase.
Also fix the same issue for arbiters when afr_txn_arbitrate_fop() fails the fop.
Also removed the DISK_SPACE_CHECK_AND_GOTO in posix_xattrop; otherwise a truncate
to zero would fail the pre-op phase with ENOSPC when the user is actually trying
to free up space.
Change-Id: Ic4c8a596b4cdf4a7fc189bf00b561113cf114353
fixes: bz#1603056
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
(cherry picked from commit ec0d7d77de3e4bd485a4fa2e53c9137e25c71ce7)
Problem:
When call_count is decremented by one thread, another thread can
proceed with the operation, leading to undefined behavior for the
thread that executes statements after decrementing the call count.
Fix:
Perform the necessary operations before decrementing the call count.
fixes bz#1599629
Change-Id: Icc90cd92ac16e5fbdfe534d9f0a61312943393fe
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
(cherry picked from commit 03f1f5bdc46076178f1afdf8e2a76c5b973fe11f)
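A hedged illustration of the ordering fix in plain C11 atomics (afr's real call_count lives in its frame/local structures, and the aggregation below would also need its own serialization in multi-threaded code): everything the completing thread still needs must happen before the decrement, because whoever drops the count to zero may immediately unwind and free the shared state.

    #include <stdatomic.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct shared_op {
        atomic_int call_count;   /* outstanding replies               */
        int op_errno;            /* aggregated result, set by replies */
    };

    /* Called once per reply, possibly from different threads. */
    static void
    reply_done(struct shared_op *op, int this_errno)
    {
        /* Decrementing first and writing op_errno afterwards would be
         * wrong: another thread seeing the count hit zero may already
         * be unwinding and freeing 'op'.  So do all the work first ... */
        if (this_errno)
            op->op_errno = this_errno;

        /* ... and only then publish "I am done". */
        if (atomic_fetch_sub(&op->call_count, 1) == 1) {
            /* last reply: safe to unwind and free */
            printf("unwinding with errno %d\n", op->op_errno);
            free(op);
        }
    }

    int
    main(void)
    {
        struct shared_op *op = calloc(1, sizeof(*op));
        atomic_init(&op->call_count, 2);
        reply_done(op, 0);
        reply_done(op, 5);   /* EIO-like failure from the second brick */
        return 0;
    }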
commit 20fa80057eb430fd72b4fa31b9b65598b8ec1265 introduced a regression
wherein if a file is present in only one brick of the replica *and* doesn't
have a gfid associated with it, it doesn't get healed upon the next
lookup from the client. Fix it.
Change-Id: I7d1111dcb45b1b8b8340a7d02558f05df70aa599
fixes: bz#1597117
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
(cherry picked from commit eb472d82a083883335bc494b87ea175ac43471ff)
Problem:
In the new eager-lock implementation the lk-owner is assigned after the
'local' is added to the eager-lock list, so there exists a possibility
of the lock being sent even before the lk-owner is assigned.
Fix:
Make sure to assign the lk-owner before adding the local to the eager-lock list.
fixes bz#1598193
Change-Id: I26d1b7bcf3e8b22531f1dc0b952cae2d92889ef2
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
(cherry picked from commit c6f93e422855f656d3a86461a8458f37ad0103eb)
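A small generic sketch of the same ordering rule (the names are made up, not afr's): initialize everything the other side may read, including the lock owner, before the object becomes reachable through the shared list.

    #include <pthread.h>
    #include <stdio.h>

    struct txn {
        char lk_owner[16];   /* must be set before anyone can see this txn */
        struct txn *next;
    };

    static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct txn *eager_lock_list;  /* consumers may wind locks from here */

    static void
    enqueue_txn(struct txn *t, const char *owner)
    {
        /* Assign the lk-owner first, while the txn is still private ... */
        snprintf(t->lk_owner, sizeof(t->lk_owner), "%s", owner);

        /* ... and only then make it visible on the shared list, so a lock
         * can never be sent on the wire with an unset owner. */
        pthread_mutex_lock(&list_lock);
        t->next = eager_lock_list;
        eager_lock_list = t;
        pthread_mutex_unlock(&list_lock);
    }

    int
    main(void)
    {
        struct txn t = { { 0 }, NULL };
        enqueue_txn(&t, "frame-1234");
        printf("queued txn with owner %s\n", eager_lock_list->lk_owner);
        return 0;
    }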
Problem:
If inode refresh fails on all children of afr due to ENOENT (say the file
was migrated by dht), it resets the readables to zero. Any in-flight txn
which comes on the inode later then fails with EIO because no readable
children are present for the inode.
Fix:
Don't update the readables when inode refresh fails on *all* children of
afr. That way any in-flight txn will either proceed with its own inode
refresh if needed and fail with the right errno, or use the old value
of the readables and continue with the txn.
Also, add quorum checks to the beginning of afr_transaction(). Otherwise we
seem to be winding the lock and checking for quorum only in the pre-op phase.
Note: This should ideally fix BZ 1329505 since the stop-gap fix for
it has been reverted at https://review.gluster.org/#/c/20028.
Change-Id: Ia638c092d8d12dc27afb3cdad133394845061319
updates: bz#1597116
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
(cherry picked from commit 0f13eed0c1fa74cefed486538b02e0c8a8708456)
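A schematic of the readables rule in the fix, with invented names: overwrite the cached readables only when at least one child actually answered the refresh.

    #include <stdio.h>

    #define CHILDREN 3

    struct inode_ctx {
        unsigned char readable[CHILDREN];   /* cached data-readable flags */
    };

    /* success[i] is non-zero if the refresh got a reply from child i. */
    static int
    refresh_done(struct inode_ctx *ctx, const unsigned char success[CHILDREN],
                 const unsigned char readable_now[CHILDREN], int op_errno)
    {
        int replies = 0;

        for (int i = 0; i < CHILDREN; i++)
            replies += !!success[i];

        if (replies == 0)
            /* Every child failed (e.g. ENOENT after a DHT migration):
             * keep the old readables so in-flight transactions are not
             * turned into spurious EIO; just report the refresh error. */
            return -op_errno;

        for (int i = 0; i < CHILDREN; i++)
            ctx->readable[i] = readable_now[i];
        return 0;
    }

    int
    main(void)
    {
        struct inode_ctx ctx = { .readable = { 1, 1, 1 } };
        unsigned char none[CHILDREN] = { 0 };
        int ret = refresh_done(&ctx, none, none, 2 /* stand-in for ENOENT */);
        printf("ret=%d readable[0]=%d\n", ret, ctx.readable[0]);
        return 0;
    }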
Problem:
In the .t, when the only good brick was brought down, writes on the fd were
still succeeding on the bad bricks. The in-flight split-brain check was
marking the write as a failure, but since the write succeeded on all the
bad bricks, afr_txn_nothing_failed() was true and we were
unwinding writev with success to DHT and then catching the failure in
the post-op in the background.
Fix:
Don't wind the FOP phase if the write_subvol (which is populated with the
readable subvols obtained in the pre-op cbk) does not have at least one good
brick that was up when the transaction started.
Note: This fix is not related to brick multiplexing. I ran the .t
10 times with this fix and brick-mux enabled without any failures.
Change-Id: I915c9c366aa32cd342b1565827ca2d83cb02ae85
updates: bz#1581548
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
(cherry picked from commit 985a1d15db910e012ddc1dcdc2e333cc28a9968b)
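A bitmask sketch of the added check (hypothetical names, not the real afr structures): the FOP phase is wound only if at least one subvolume is both readable after the pre-op and was up when the transaction started.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* One bit per brick: bit i set means brick i qualifies. */
    static bool
    can_wind_fop(uint32_t write_subvol, uint32_t up_at_txn_start)
    {
        /* write_subvol: readable subvols learnt in the pre-op callback.
         * up_at_txn_start: bricks that were up when the txn began.
         * Need at least one brick in both sets, else fail the write
         * instead of "succeeding" only on bad copies. */
        return (write_subvol & up_at_txn_start) != 0;
    }

    int
    main(void)
    {
        uint32_t write_subvol = 0x6;      /* bricks 1 and 2 readable */
        uint32_t up_at_start  = 0x1;      /* only brick 0 was up     */
        printf("wind fop: %s\n",
               can_wind_fop(write_subvol, up_at_start) ? "yes" : "no");
        return 0;
    }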
This reverts commit d01f7244e9d9f7e3ef84e0ba7b48ef1b1b09d809.
This is being reverted as the API signatures should adapt to a
statx-like structure, and also not all APIs that need to return
pre/post attrs are complete.
As a result, instead of fixing up part of the APIs and then
re-fixing them in a later release, this set of
fixes is being removed from the branch.
Additionally fixed up posix-entry-ops.c, which was using the
new syncop signature.
Updates: bz#1575386
Change-Id: I35222dadc4a2e97010bc1e6b97b6f83583c311f6
Change-Id: Ied047dd5ee44e9d5a5d3db214826f7df30332ef9
updates: #350
BUG: 1319992
Signed-off-by: Poornima G <pgurusid@redhat.com>
Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
Updates #352
Change-Id: I1bbb3c652ba33cec6aa37f3700370674077fb17d
Signed-off-by: karthik-us <ksubrahm@redhat.com>
1. Create thin arbiter index file during mount.
2. Set pending marker in thin arbiter id file in case of failure.
Change-Id: I269eb8d069f0323f1fc616175e5e5eb7b91d5f82
updates: #352
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Problem:
Consider 2 bricks, brick-A within halo-max-latency and brick-B beyond
halo-max-latency, with both halo-min and halo-max replicas set to '1'.
In this case brick-A comes online first and its ping-latency gets updated.
When brick-B comes online we have 2 up bricks, so the code tries to find the
brick with the worst latency and mark it down. Since brick-B has just come
online it still has '0' latency, so brick-A used to be the one marked offline
and brick-B would eventually be the one left online even when brick-A is
better suited.
Fix:
Consider the latency of a just-up child as HALO_MAX_LATENCY so that, until its
ping-latency is known, the worst child found is the just-up brick. Also keep
the ping-latency at -1 until child-up during initialization.
BUG: 1567881
fixes bz#1567881
Change-Id: I148262fe505468190f0eb99225d0f6d57cdb6f04
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
In Halo replication, there are pending heals more often than not.
It makes sense to give users the capability to configure it as low
as 5 seconds.
BUG: 1569489
fixes bz#1569489
Change-Id: I451c1975827f66398b903f659c981ef3121d5376
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
xlator_notify() doesn't pass on the extra arguments that come into the
calling function, so the XLATOR_NOTIFY macro should be used instead
to pass the extra arguments through to the function.
BUG: 1567881
fixes bz#1567881
Change-Id: Ic15b6c446638cbacf3149693147a754219037c47
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
1. If pre-op fails on all bricks, set lock->release to true in
afr_handle_lock_acquire_failure so that the GF_ASSERT in afr_unlock() does not
crash.
2. Added a missing 'return' after handling pre-op failure in
afr_transaction_perform_fop(), fixing a use-after-free issue.
Change-Id: If0627a9124cb5d6405037cab3f17f8325eed2d83
fixes: bz#1561129
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Problem:
We seem to be winding the FOP even if the pre-op did not succeed on a quorum
of bricks, and then failing the FOP with EROFS since the fop did not meet
quorum. This essentially masks the actual error that caused the pre-op to
fail. (See the BZ.)
Fix:
Skip FOP phase if pre-op quorum is not met and go to post-op.
Fixes: 1561129
Change-Id: Ie58a41e8fa1ad79aa06093706e96db8eef61b6d9
fixes: bz#1561129
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
On the shd, we shouldn't treat any brick as down based
on latency; otherwise self-heal will never happen.
fixes: bz#1562717
Change-Id: Ica07fcc4fae91a6bfd9c9a670e2be464704d94b7
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Updates: #363
This new value (3) will try to wind read requests to the child of AFR
having the least number of pending requests in its queue.
Change-Id: If6bda2aac9bf7aec3fc39622f78659313c4b6508
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
BUG: 1557932
Change-Id: I3783e41b3812267bc10c0d05d062a31396ce135b
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Problem:
1) Afr's eager-lock only works for data transactions.
2) When there are conflicting writes, a write with a conflicting region
initiates an unlock of the eager-lock, leading to extra pre-ops and post-ops
on the file. When eager-lock goes off, it leads to extra fsyncs for a
random-write workload in afr.
Solution (modeled after EC):
In EC, when there is a conflicting write, it waits for the current write to
complete before it winds the conflicting write. This leads to better
utilization of network and disk, because we will not be doing extra xattrops,
FSYNCs and inodelk/unlock. Moved fd-based counters to inode-based counters.
I tried to model the solution on EC's locking, but it is not identical to
AFR's because we had to keep backward compatibility.
Lifecycle of lock:
==================
The first transaction is added to the inode->owners list and an inodelk is
sent on the wire. All subsequent transactions are put in the inode->waiters
list until the first transaction completes the inodelk and the [f]xattrop.
Once the [f]xattrop also completes, all the requests in the inode->waiters
list are checked for conflicts with the existing locks in the inode->owners
list; those without conflicts are added to the inode->owners list and resume
their transaction. When these transactions complete the fop phase they are
moved to the inode->post_op list and resume the transactions that were paused
because of conflicts. Post-op and unlock are not issued on the wire until
the transaction is the last one on that inode. The last transaction, when it
has to perform the post-op, can choose to sleep for the delayed-post-op-secs
value. During that time, if any other transaction comes, it wakes up the
sleeping transaction, takes over the ownership of the lock, and the cycle
continues. If delayed-post-op-secs expires, the timer thread wakes up the
sleeping transaction, which sets lock->release to true and starts doing the
post-op and then the unlock. During this time, if any other transactions come,
they are put in the inode->frozen list. Once the previous unlock completes,
the frozen list is moved to the waiters list, the first element of the
waiters list is moved to the owners list, the lock is attempted, and the
cycle continues. This is the general idea. There is logic at the time of
delaying, and at the time of a new transaction or in the flush fop, to wake
up existing sleeping transactions or to choose whether to delay a transaction,
etc., which is subject to change based on future enhancements.
Fixes: #418
BUG: 1549606
Change-Id: I88b570bbcf332a27c82d2767dfa82472f60055dc
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
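The lifecycle above boils down to moves between per-inode lists. The toy model below (invented types, counters instead of real lists, not the afr implementation) only shows where a newly arriving transaction is parked depending on the state of the eager lock:

    #include <stdbool.h>
    #include <stdio.h>

    enum lock_state { LOCK_FREE, LOCK_ACQUIRING, LOCK_HELD, LOCK_RELEASING };

    /* Toy per-inode lock context: counters stand in for the real lists. */
    struct inode_lock {
        enum lock_state state;
        int owners;   /* transactions doing their fop under the lock      */
        int waiters;  /* parked until the first inodelk+xattrop completes */
        int frozen;   /* parked until the in-progress unlock finishes     */
    };

    /* Where does a newly arriving transaction go? */
    static const char *
    admit_txn(struct inode_lock *l, bool conflicts_with_owner)
    {
        switch (l->state) {
        case LOCK_FREE:
            l->state = LOCK_ACQUIRING;
            l->owners++;
            return "owners (sends inodelk on the wire)";
        case LOCK_ACQUIRING:
            l->waiters++;
            return "waiters (until inodelk + pre-op xattrop finish)";
        case LOCK_HELD:
            if (conflicts_with_owner) {
                l->waiters++;
                return "waiters (conflicting range, resumed later)";
            }
            l->owners++;
            return "owners (piggybacks on the held lock)";
        case LOCK_RELEASING:
            l->frozen++;
            return "frozen (moved to waiters after the unlock returns)";
        }
        return "?";
    }

    int
    main(void)
    {
        struct inode_lock l = { .state = LOCK_FREE };
        printf("txn1 -> %s\n", admit_txn(&l, false));
        l.state = LOCK_HELD;
        printf("txn2 -> %s\n", admit_txn(&l, false));
        printf("txn3 -> %s\n", admit_txn(&l, true));
        l.state = LOCK_RELEASING;
        printf("txn4 -> %s\n", admit_txn(&l, false));
        return 0;
    }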
Removed
1) afr-v1 self-heal locks related code, which is not used anymore.
2) Some data types in the transaction code that are not needed.
3) Lock tracing available in afr, which was never used since gluster's
network tracing does the job.
4) __changelog_enabled, afr_lock_server_count etc. functions, since the
changelog is always enabled and afr is always used with locks.
5) The transaction.fop/done/resume variables, since they always call the
same functions and there is no need to have them.
BUG: 1549606
Change-Id: I370c146fec2892d40e674d232a5d7256e003c7f1
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
We are not seeing much improvement with this change, so we are removing the
feature so that it doesn't need to be maintained anymore.
Fixes: #414
Change-Id: Ic7969b151544daf2547bd262a9fa03f575626411
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Change-Id: I713401feb96393f668efb074f2d5b870d19e6fda
BUG: 1548361
Signed-off-by: karthik-us <ksubrahm@redhat.com>
At the time of the pre-op, pre_op_xdata is populated with the xattrs we get
from the disk, and at the time of the post-op it gets overwritten without
unreffing the previously stored value, leading to a leak.
This is a regression we missed in
https://review.gluster.org/#/q/ba149bac92d169ae2256dbc75202dc9e5d06538e
BUG: 1550078
Change-Id: I0456f9ad6f77ce6248b747964a037193af3a3da7
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
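A generic refcounting sketch of the leak and of the fix (the ref/unref helpers are stand-ins, not gluster's dict API): release the reference held in the pointer before overwriting it.

    #include <stdio.h>
    #include <stdlib.h>

    struct blob {
        int refcount;
    };

    static struct blob *
    blob_ref(struct blob *b) { b->refcount++; return b; }

    static void
    blob_unref(struct blob *b)
    {
        if (--b->refcount == 0)
            free(b);
    }

    struct local {
        struct blob *pre_op_xdata;   /* xattrs captured at pre-op time */
    };

    /* Called again at post-op time with fresh xattrs. */
    static void
    store_xdata(struct local *local, struct blob *xdata)
    {
        /* The leak was assigning over pre_op_xdata without releasing the
         * reference taken earlier: unref the old value first. */
        if (local->pre_op_xdata)
            blob_unref(local->pre_op_xdata);
        local->pre_op_xdata = blob_ref(xdata);
    }

    int
    main(void)
    {
        struct local local = { 0 };
        struct blob *pre = calloc(1, sizeof(*pre));
        struct blob *post = calloc(1, sizeof(*post));
        pre->refcount = post->refcount = 1;

        store_xdata(&local, pre);    /* pre-op  */
        blob_unref(pre);
        store_xdata(&local, post);   /* post-op: old value released, no leak */
        blob_unref(post);
        blob_unref(local.pre_op_xdata);
        printf("done\n");
        return 0;
    }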
The following release-3.8-fb branch patch is upstreamed:
> features/namespace: Add namespace xlator and link into brick graph
> Commit ID: dbd30776f26e
> https://review.gluster.org/#/c/18041/
> By Michael Goulet <mgoulet@fb.com>
Changes in this patch:
Removes extra config.h and namespace.h file in namespace.c
Adds default_getspec_cbk to libglusterfs.sym
Rename dict_for_each to dict_foreach_inline
Remove fd.h header file stack.h
Add test case for truncate, open and symlink
This patch is required to forward port io-threads namespace patch.
Updates: #401
Change-Id: Ib88c95b89eecee9b8957df8a4c8712c899c761d1
Signed-off-by: Varsha Rao <varao@redhat.com>
Added a volume option 'fips-mode-rchecksum' tied to op version 4.
If not set, rchecksum fop will use MD5 instead of SHA256.
updates: #230
Change-Id: Id8ea1303777e6450852c0bc25503cda341a6aec2
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
In nfs-ganesha, wcc data containing pre/post attributes is returned
in the read/write rpc reply. nfs-ganesha currently gets those attributes
with two getattr calls around the real read/write.
But gluster already returns pre/post attributes from glusterfsd;
those attributes are skipped in syncop/gfapi. If gfapi returned them,
the upper user (nfs-ganesha) could use them directly without any
duplicate getattr.
Updates: #389
Change-Id: I7b643ae4241cfe2aeb17063de00192d81674024a
Signed-off-by: Kinglong Mee <mijinlong@open-fs.com>
The following release-3.8-fb branch patch is upstreamed:
> io-stats: Expose io-thread queue depths
> Commit ID: 69509ee7d2
> https://review.gluster.org/#/c/18143/
> By Shreyas Siravara <sshreyas@fb.com>
Changes in this patch:
- Replace iot_pri_t with gf_fop_pri_t
- Replace IOT_PRI_{HI, LO, NORMAL, MAX, LEAST} with
GF_FOP_PRI_{HI, LO, NORMAL, MAX, LEAST}
- Use dict_unref() instead of dict_destroy()
This patch is required to forward port io-threads namespace patch.
Updates: #401
Change-Id: I1b47a63185a441a30fbc423ca1015df7b36c2518
Signed-off-by: Varsha Rao <varao@redhat.com>
The child_up array was initialized with all elements being -1 to
allow afr_notify() to differentiate down bricks from bricks that
haven't reported yet. With the current implementation this is not needed
anymore, and it was causing unexpected results when other parts of
the code assumed that child_up[i] != 0 meant the brick was up.
Change-Id: I2a9d712ee64c512f24bd5cd3a48dcb37e3139472
BUG: 1541038
Signed-off-by: Xavier Hernandez <jahernan@redhat.com>
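A tiny illustration, with made-up names, of why the -1 sentinel clashed with code that tests child_up[i] != 0: a brick that simply has not reported yet already looks "up" to such a check.

    #include <stdio.h>

    #define CHILDREN 3

    int
    main(void)
    {
        /* Old scheme: -1 = not reported yet, 0 = down, 1 = up. */
        signed char child_up[CHILDREN] = { -1, -1, -1 };

        int looks_up = 0;
        for (int i = 0; i < CHILDREN; i++)
            if (child_up[i] != 0)   /* the check used elsewhere in the code */
                looks_up++;

        /* All three count as "up" even though none has reported yet.
         * Initializing to 0 instead makes the same check read correctly. */
        printf("bricks that look up: %d\n", looks_up);
        return 0;
    }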
Problem:
We currently don't have a roll-back/undoing of post-ops if quorum is not
met. Though the FOP is still unwound with a failure, the xattrs remain on
the disk. Due to these partial post-ops and partial heals (healing only when
2 bricks are up), we can end up in split-brain purely from the afr
xattrs' point of view, i.e. each brick is blamed by at least one of the
others. These scenarios are hit when there is frequent
connect/disconnect of the client/shd to the bricks while I/O or heal
is in progress.
Fix:
Instead of undoing the post-op, pick a source based on the xattr values.
If 2 bricks blame one, the blamed one must be treated as a sink.
If there is no majority, all are sources. Once we pick a source,
self-heal will then do the heal instead of erroring out due to
split-brain.
Change-Id: I3d0224b883eb0945785ade0e9697a1c828aec0ae
BUG: 1539358
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
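A rough sketch of the source-selection rule described in the fix, over a hypothetical blame matrix (this is not the self-heal code itself): a brick blamed by the other two becomes a sink; with no majority, everything stays a source.

    #include <stdio.h>

    #define BRICKS 3

    /* blame[i][j] != 0 means brick i's xattrs blame brick j. */
    static void
    pick_sinks(const int blame[BRICKS][BRICKS], int is_sink[BRICKS])
    {
        for (int j = 0; j < BRICKS; j++) {
            int blamed_by = 0;
            for (int i = 0; i < BRICKS; i++)
                if (i != j && blame[i][j])
                    blamed_by++;
            /* Majority (2 of 3) blames j => treat j as a sink.
             * Otherwise j remains a source. */
            is_sink[j] = (blamed_by >= 2);
        }
    }

    int
    main(void)
    {
        /* Brick 0 and brick 1 both blame brick 2. */
        int blame[BRICKS][BRICKS] = {
            { 0, 0, 1 },
            { 0, 0, 1 },
            { 0, 0, 0 },
        };
        int is_sink[BRICKS];

        pick_sinks(blame, is_sink);
        for (int j = 0; j < BRICKS; j++)
            printf("brick %d: %s\n", j, is_sink[j] ? "sink" : "source");
        return 0;
    }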
If the post-op phase of the txn did not meet quorum checks, use that errno to
unwind the FOP rather than blindly setting ENOTCONN.
Change-Id: I0cb0c8771ec75a45f9a25ad4cd8601103deddf0c
BUG: 1506140
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
afr relies on pending changelog xattrs to identify the source and sinks, and
the setting of these xattrs happens in the post-op. So if the post-op fails,
we need to unwind the write txn with a failure.
Change-Id: I0f019ac03890108324ee7672883d774918b20be1
BUG: 1506140
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Problem:
In replica 3 volumes there is a possibility of ending up in a split-brain
scenario when multiple clients write data to non-overlapping regions of the
same file in parallel.
Scenario:
- Initially all the copies are good and all the clients get the value
of the data readables as all good.
- Client C0 performs write W1 which fails on brick B0 and succeeds on
the other two bricks.
- C1 performs write W2 which fails on B1 and succeeds on the other two bricks.
- C2 performs write W3 which fails on B2 and succeeds on the other two bricks.
- All 3 writes above happen in parallel and fall on different ranges,
so afr takes granular locks and all the writes are performed in parallel.
Since each client had its data-readables as good, it does not see the
file going into split-brain in the in_flight_split_brain check, and hence
performs the post-op marking the pending xattrs. Now all the bricks
are blamed by each other, ending up in split-brain.
Fix:
Add an option to take either a full lock or a range lock on files while
doing data transactions, to prevent the possibility of ending up in
split-brain. With this change, by default files take a full
lock while doing IO. If you want to use the old range-lock behaviour,
set the value of "cluster.full-lock" to "no".
Change-Id: I7893fa33005328ed63daa2f7c35eeed7c5218962
BUG: 1535438
Signed-off-by: karthik-us <ksubrahm@redhat.com>
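The full-lock versus range-lock distinction maps onto ordinary POSIX record locks, where a length of 0 covers the whole file. A hedged fcntl() illustration follows (afr actually uses its own inodelk protocol rather than fcntl, but the shape of the two modes is the same):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    static int
    take_write_lock(int fd, int full_lock, off_t offset, off_t len)
    {
        struct flock fl = {
            .l_type   = F_WRLCK,
            .l_whence = SEEK_SET,
            /* full lock: start 0, len 0 => the entire file.
             * range lock: just the region this write touches. */
            .l_start  = full_lock ? 0 : offset,
            .l_len    = full_lock ? 0 : len,
        };
        return fcntl(fd, F_SETLKW, &fl);
    }

    int
    main(void)
    {
        int fd = open("/tmp/full-lock-demo", O_RDWR | O_CREAT, 0600);
        if (fd < 0)
            return 1;
        /* behaviour corresponding to cluster.full-lock=yes */
        if (take_write_lock(fd, 1, 4096, 128))
            perror("lock");
        close(fd);
        unlink("/tmp/full-lock-demo");
        return 0;
    }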
updates #220
Change-Id: I6e25dbb69b2c7021e00073e8f025d212db7de0be
Signed-off-by: Amar Tumballi <amarts@redhat.com>
Problem:
Setting the write_subvol value to read_subvol in case of a metadata
transaction during pre-op (commit 19f9bcff4aada589d4321356c2670ed283f02c03)
might lead to the original problem of the arbiter becoming source.
Scenario:
1) All bricks are up and good
2) 2 writes w1 and w2 are in progress in parallel
3) ctx->read_subvol is good for all the subvolumes
4) w1 succeeds on brick0 and fails on brick1, with the post-op yet to be
done on the disk
5) a read/lookup comes on the same file and refreshes read_subvols back
to all good
6) a metadata transaction happens which assigns ctx->write_subvol
from ctx->read_subvol, which is all good
7) w2 succeeds on brick1 and fails on brick0, and this updates the
bricks in reverse order, leading to the arbiter becoming source
Fix:
Instead of setting ctx->write_subvol to ctx->read_subvol in the
pre-op stage, if there is a metadata transaction, check in the
function __afr_set_in_flight_sb_status() whether it is a data or metadata
transaction. Use the value of ctx->write_subvol for data
transactions and the ctx->read_subvol value for other transactions.
With this patch we assign the value of ctx->write_subvol in
afr_transaction_perform_fop() with the on-disk value, instead of
assigning it in afr_changelog_pre_op() with the in-memory value.
Change-Id: Id2025a7e965f0578af35b1abaac793b019c43cc4
BUG: 1482064
Signed-off-by: karthik-us <ksubrahm@redhat.com>
v1 of the patch started off adding new fields to iatt that can be
filled up using statx, but the discussions were more around introducing
masks to check the validity of the different fields from a RIO perspective.
To that extent, I have dropped the statx call in this version and
introduced a 64-bit mask for the existing fields. The masks I have defined
are similar to the statx() flags' masks.
I have *not* changed iatt_to_stat() to use the macros IATT_TYPE_VALID,
IATT_GFID_VALID etc. before blindly copying from struct iatt to struct stat.
Also fixed warnings in xlators because of the atime/mtime/ctime seconds
field change from uint32_t to int64_t.
Change-Id: I4ac614f1e8d5c8246fc99d5bc2d2a23e7941512b
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
In a replicated volume it was possible to set the quorum-count to any value
in the range [1 - 2147483647]. This patch adds validation so that
only a quorum-count of at most replica_count can be
set on a volume.
Change-Id: I13952f3c6cf498c9f2b91161503fc0fba9d94898
BUG: 1529515
Signed-off-by: karthik-us <ksubrahm@redhat.com>
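A minimal sketch of the added validation (a made-up helper, not the glusterd option-handling code): reject any quorum-count outside [1, replica_count].

    #include <stdio.h>

    /* Returns 0 if the requested cluster.quorum-count is acceptable. */
    static int
    validate_quorum_count(int quorum_count, int replica_count)
    {
        if (quorum_count < 1 || quorum_count > replica_count) {
            fprintf(stderr,
                    "quorum-count %d is out of range [1 - %d]\n",
                    quorum_count, replica_count);
            return -1;
        }
        return 0;
    }

    int
    main(void)
    {
        printf("%d\n", validate_quorum_count(2, 3));    /* accepted */
        printf("%d\n", validate_quorum_count(40, 3));   /* rejected */
        return 0;
    }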
rchecksum uses MD5, which is not FIPS compliant. Hence
use SHA256 for the same.
Updates: #230
Change-Id: I7fad016fcc2a9900395d0da919cf5ba996ec5278
Signed-off-by: Kotresh HR <khiremat@redhat.com>
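For illustration only, one FIPS-friendly way to produce a SHA256 digest of a buffer with OpenSSL's EVP interface (a standalone sketch, not posix's rchecksum implementation; link with -lcrypto):

    #include <openssl/evp.h>
    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
        const char *buf = "block contents to checksum";
        unsigned char md[EVP_MAX_MD_SIZE];
        unsigned int md_len = 0;

        /* EVP_sha256() instead of EVP_md5(): MD5 is not FIPS compliant. */
        if (!EVP_Digest(buf, strlen(buf), md, &md_len, EVP_sha256(), NULL)) {
            fprintf(stderr, "digest failed\n");
            return 1;
        }

        for (unsigned int i = 0; i < md_len; i++)
            printf("%02x", md[i]);
        printf("\n");
        return 0;
    }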
This patch creates a new way of defining message id's that is easier
and less error prone because it doesn't require so many manual changes
each time a new component is defined or a new message created.
Change-Id: I71ba8af9ac068f5add7e74f316a2478bc991c67b
Signed-off-by: Xavier Hernandez <jahernan@redhat.com>
This patch takes care of volume options exposed via the CLI.
Updates #302
Change-Id: I6fd1645604928f6b9700e2425af4147cc6446a3a
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
1. afr_discover_do: COPY_PASTE_ERROR
2. afr_fav_child_reset_sink_xattrs_cbk: REVERSE_INULL
3. afr_fop_lock_proceed: UNUSED_VALUE
4. afr_local_init: CHECKED_RETURN
5. afr_set_split_brain_choice: REVERSE_INULL
6. __afr_inode_write_finalize: FORWARD_NULL
7. afr_refresh_heal_done: REVERSE_INULL
8. afr_xl_op: UNUSED_VALUE
9. afr_changelog_populate_xdata: DEADCODE
10. set_afr_pending_xattrs_option: RESOURCE_LEAK
Note:
RESOURCE_LEAK complaints about afr_fgetxattr_pathinfo_cbk,
afr_getxattr_list_node_uuids_cbk and afr_getxattr_pathinfo_cbk seem to
be false alarms.
Change-Id: Ia4ca1478b5e2922084732d14c1e7b1b03ad5ac45
BUG: 789278
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Problem:
In an arbiter volume, lookup was being served from one of the sink
bricks (source brick was down). shard uses the iatt values from lookup cbk
to calculate the size and block count, which in this case were incorrect
values. shard_local_t->last_block was thus initialised to -1, resulting
in an infinite while loop in shard_common_resolve_shards().
Fix:
Use client-quorum logic to allow or fail the lookups from afr if there
are no readable subvolumes. So in replica-3 or arbiter volumes, if there is
no good copy or if quorum is not met, fail the lookup with ENOTCONN.
With this fix, we are also removing support for the quorum-reads xlator
option. So if quorum is not met, neither read nor write txns are allowed
and we fail the fop with ENOTCONN.
Change-Id: Ic65c00c24f77ece007328b421494eee62a505fa0
BUG: 1467250
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Problem:
When eager-lock is on and two writes happen in parallel on an FD,
we were observing the following behaviour:
- The first write fails on one data brick
- Since the post-op has not yet happened, the inode refresh will get
both the data bricks as readable and set that in the inode context
- The in-flight split-brain check sees both the data bricks as readable
and allows the second write
- The second write fails on the other data brick
- Now the post-op happens and marks both the data bricks as bad and
the arbiter becomes the source for healing
Fix:
Add one more variable called write_subvol in the inode context; it
holds the in-memory representation of the writable subvols. Inode
refresh will not update this value, and its lifetime is pre-op through
unlock in the afr transaction. Initially the pre-op will set this
value to the same as read_subvol in the inode context, and then the
in-flight split-brain check will use this value instead of read_subvol.
After all the checks we will update this value and set
read_subvol to the same, to avoid having an incorrect value in it.
Change-Id: I2ef6904524ab91af861d59690974bbc529ab1af3
BUG: 1482064
Signed-off-by: karthik-us <ksubrahm@redhat.com>
Coverity ID: 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417,
418, 419, 423, 424, 425, 426, 427, 428, 429, 436, 437, 438, 439,
440, 441, 442, 443
Issue: Event include_recursion
Removed redundant, recursive includes from the files.
Change-Id: I920776b1fa089a2d4917ca722d0075a9239911a7
BUG: 789278
Signed-off-by: Girjesh Rajoria <grajoria@redhat.com>
Problem:
After setting the split-brain-choice option to analyze the file to resolve
the split-brain using the command
"setfattr -n replica.split-brain-choice -v "choiceX" <path-to-file>",
the file should be accessible from the mount for a default timeout of
5 minutes. But the timeout was not honored and the file could be accessed
even after the timeout.
Fix:
Call inode_invalidate() in afr_set_split_brain_choice_cbk() so that
it will trigger the cache invalidation after resetting the timer and the
split-brain choice. Subsequent calls to access the file will then fail
with EIO.
Change-Id: I698cb833676b22ff3e4c6daf8b883a0958f51a64
BUG: 1503519
Signed-off-by: karthik-us <ksubrahm@redhat.com>
Problem:
Append on a file in split-brain succeeds. Open is intercepted by open-behind;
when a write comes on the file, open-behind does open+write. The open succeeds
because afr doesn't fail it. Then the write succeeds because write-behind
intercepts it. Flush is also intercepted by write-behind, so the application
never gets to know that the write failed.
Fix:
Fail open on split-brain, so that when open-behind does open+write the open
fails, which leads to a write failure. The application will know about this
failure.
Change-Id: I4bff1c747c97bb2925d6987f4ced5f1ce75dbc15
BUG: 1294051
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Problem:
If a brick crashes after an entry (file or dir) is created but before its
gfid is assigned, the good bricks will have pending entry-heal xattrs
but the heal won't complete, because afr_selfheal_recreate_entry() tries
to create the entry again and fails with EEXIST.
Fix:
We could have fixed posix_mknod/mkdir etc. to assign the gfid if the file
already exists, but the right thing to do seems to be to trigger a lookup
on the bad brick and let it heal the gfid, instead of winding an
mknod/mkdir in the first place.
Change-Id: I82f76665a7541f1893ef8d847b78af6466aff1ff
BUG: 1493415
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Issue: Event value_overwrite: Overwriting previous write to "ret"
with value "-1".
Fix: An "if" condition is added to check the value of "ret".
Change-Id: I7b6bd4f20f73fa85eb8a5169644e275c7b56af51
BUG: 789278
Signed-off-by: Subha sree Mohankumar <smohanku@redhat.com>
With this change, enabling choose-local (which means its state transitions
from "off" to "on") will take effect after the first
gfid-lookup on "/" following the volume-set.
Change-Id: Ibab292ba705d993b475cd0303fb3318211fb2500
BUG: 1480525
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>