* Release notes for 3.12.12 [tag: v3.12.12] (Jiffin Tony Thottan, 2018-07-12; 1 file, -0/+27)
  Change-Id: I77e8ef525ef5f816450280325a473c2edf2720d7  BUG: 1594909
  Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
* afr: don't update readables if inode refresh failed on all children (Ravishankar N, 2018-07-11; 4 files, -21/+52)
  Backport of https://review.gluster.org/#/c/20029/. 3.12 still supports quorum-reads, hence afr_inode_refresh_done() is modified to support that.
  If inode refresh fails on all children of afr due to ENOENT (say, the file was migrated by dht), it resets the readables to zero. Any in-flight txn that later comes on the inode then fails with EIO because no readable children are present for the inode.
  Fix: Don't update readables when inode refresh fails on *all* children of afr. That way any in-flight txn will either proceed with its own inode refresh if needed and fail with the right errno, or use the old value of readables and continue with the txn. Also, add quorum checks to the beginning of afr_transaction(); otherwise we wind the lock and check for quorum only in the pre-op phase.
  Note: This should ideally fix BZ 1329505, since the stop-gap fix for it has been reverted at https://review.gluster.org/#/c/20028.
  Change-Id: I82990769f01be918a073fec83fc67ba4b3be24b1  BUG: 1599247
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* afr: heal gfids when file is not present on all bricks (Ravishankar N, 2018-07-11; 6 files, -12/+176)
  Backport of https://review.gluster.org/#/c/20271/ (only change is in the .t). Commit 20fa80057eb430fd72b4fa31b9b65598b8ec1265 introduced a regression wherein if a file is present in only 1 brick of replica *and* doesn't have a gfid associated with it, it doesn't get healed upon the next lookup from the client. Fix it.
  Change-Id: I7d1111dcb45b1b8b8340a7d02558f05df70aa599  BUG: 1598121  fixes: bz#1598121
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  (cherry picked from commit eb472d82a083883335bc494b87ea175ac43471ff)
* afr: fix bug-1363721.t failure (Ravishankar N, 2018-07-09; 4 files, -3/+78)
  Backport of https://review.gluster.org/#/c/20036/
  Note: We need to update the inode context's write_subvol even in case of compound fops. This is not there in master and 4.1 since compound FOPS was removed in it.
  Problem: In the .t, when the only good brick was brought down, writes on the fd were still succeeding on the bad bricks. The inflight split-brain check was marking the write as failure, but since the write succeeded on all the bad bricks, afr_txn_nothing_failed() was set to true and we were unwinding writev with success to DHT and then catching the failure in post-op in the background.
  Fix: Don't wind the FOP phase if the write_subvol (which is populated with readable subvols obtained in pre-op cbk) does not have at least 1 good brick which was up when the transaction started.
  Change-Id: I4a1fef4569609c31cffeaef591a64c10870e8d0b  BUG: 1598720
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* afr: add quorum checks in pre-op (Ravishankar N, 2018-07-06; 1 file, -29/+31)
  Backport of https://review.gluster.org/#/c/19781/
  Problem: We seem to be winding the FOP if pre-op did not succeed on quorum bricks and then failing the FOP with EROFS since the fop did not meet quorum. This essentially masks the actual error due to which pre-op failed. (See BZ.)
  Fix: Skip the FOP phase if pre-op quorum is not met and go to post-op.
  Change-Id: Ie58a41e8fa1ad79aa06093706e96db8eef61b6d9  BUG: 1597154
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
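  A minimal Python sketch of the control flow described in this fix, assuming a simple-majority quorum; it is an illustration only, not the afr implementation (which also honours quorum-type and arbiter rules):

      def quorum_met(successes, total_bricks):
          # simple-majority stand-in for afr's quorum check
          return successes >= total_bricks // 2 + 1

      def run_transaction(pre_op_successes, total_bricks, pre_op_errno):
          if not quorum_met(pre_op_successes, total_bricks):
              # skip the FOP phase and go straight to post-op, so the FOP is
              # failed with the errno from pre-op instead of a blanket EROFS
              return ("post-op", pre_op_errno)
          return ("wind FOP, then post-op", 0)

      print(run_transaction(1, 3, 28))   # the pre-op errno (28, ENOSPC) is reported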
* afr: capture the correct errno in post-op quorum check (Ravishankar N, 2018-07-05; 1 file, -8/+8)
  If the post-op phase of the txn did not meet quorum checks, use that errno to unwind the FOP rather than blindly setting ENOTCONN.
  Change-Id: I0cb0c8771ec75a45f9a25ad4cd8601103deddf0c  BUG: 1597120
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  (cherry picked from commit 440a048f24b006c80af3d7bcd0a1f13fe3459d87)
* cluster/dht: act as passthrough for renames on single child DHT (Raghavendra G, 2018-07-05; 1 file, -7/+15)
  The various synchronization present in dht_rename while handling directories and files is necessary only if we have more than one child.
  Change-Id: Ie21ad419125504ca2f391b1ae2e5c1d166fee247  fixes: bz#1563513
  Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
* glusterfsd: Do not process GLUSTERD_BRICK_XLATOR_OP if graph is not ready (Ravishankar N, 2018-07-04; 2 files, -1/+9)
  Backport of https://review.gluster.org/#/c/20435/
  Problem: If glustershd gets restarted by glusterd due to a node reboot, a volume start force, or anything else that changes the shd graph (add/remove brick), and index heal is launched via the CLI, there is a chance that shd receives this IPC before the graph is fully active. When it then accesses glusterfsd_ctx->active, it crashes.
  Fix: Since glusterd does not really wait for the daemons it spawned to be fully initialized and can send the request as soon as rpc initialization has succeeded, we just handle it at shd. If glusterfs_graph_activate() is not yet done in shd but glusterd sends GD_OP_HEAL_VOLUME to shd, we fail the request.
  Change-Id: If6cc07bc5455c4ba03458a36c28b63664496b17d  BUG: 1597230  fixes: bz#1597230
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* afr: add quorum checks in post-op (Ravishankar N, 2018-07-04; 2 files, -1/+30)
  afr relies on pending changelog xattrs to identify sources and sinks, and the setting of these xattrs happens in post-op. So if post-op fails, we need to unwind the write txn with a failure.
  Change-Id: I0f019ac03890108324ee7672883d774918b20be1  BUG: 1597120
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  (cherry picked from commit a40a87ec3b226ae86a6ed8f4af25b45965a20cad)
* glusterd: gluster v status is showing wrong status for glustershd (Sanju Rakonde, 2018-07-04; 1 file, -3/+10)
  When we restart the bricks, connect and disconnect events happen for glustershd. glusterd uses two threads to handle disconnect and connect events from glustershd, so when we restart the bricks both threads compete for the big lock. We want the disconnect event to finish before the connect event, but if the connect thread gets the big lock first it sets svc->online to true, and then the disconnect thread sets svc->online to false. So glustershd ends up disconnected from glusterd and the wrong status is shown.
  After killing shd, glusterd sleeps for 1 second. To avoid the problem, glusterd releases the lock before the sleep and re-acquires it after the sleep, so the disconnect thread gets a chance to handle glusterd_svc_common_rpc_notify before the other thread completes the connect event.
  Change-Id: Ie82e823fdfc936feb7c0ae10599297b050ee9986  fixes: bz#1582443
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* afr: don't treat all cases of all bricks being blamed as split-brain (Ravishankar N, 2018-07-04; 5 files, -9/+165)
  Problem: We currently don't have a roll-back/undoing of post-ops if quorum is not met. Though the FOP is still unwound with failure, the xattrs remain on the disk. Due to these partial post-ops and partial heals (healing only when 2 bricks are up), we can end up in split-brain purely from the afr xattrs point of view, i.e. each brick is blamed by at least one of the others. These scenarios are hit when there is frequent connect/disconnect of the client/shd to the bricks while I/O or heal is in progress.
  Fix: Instead of undoing the post-op, pick a source based on the xattr values. If 2 bricks blame one, the blamed one must be treated as a sink. If there is no majority, all are sources. Once we pick a source, self-heal will then do the heal instead of erroring out due to split-brain.
  Change-Id: I3d0224b883eb0945785ade0e9697a1c828aec0ae  BUG: 1597123
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  (cherry picked from commit 0e6e8216823c2d9dafb81aae0f6ee3497c23d140)
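  A simplified Python sketch of the source-picking idea described in the fix (an illustration, not the afr code): a brick blamed by a majority of its peers becomes a sink; with no majority, every brick is a source.

      def pick_sources(blames, num_bricks):
          """blames: dict mapping a brick index to the set of peers it blames
          via pending changelog xattrs (illustrative representation)."""
          blamed = {b: 0 for b in range(num_bricks)}
          for accuser, accused in blames.items():
              for b in accused:
                  if b != accuser:
                      blamed[b] += 1
          majority = num_bricks // 2 + 1
          sinks = {b for b, n in blamed.items() if n >= majority}
          if not sinks:
              return set(range(num_bricks)), set()   # no majority: all are sources
          return set(range(num_bricks)) - sinks, sinks

      # replica 3: bricks 0 and 1 both blame brick 2, so brick 2 is the sink to heal
      print(pick_sources({0: {2}, 1: {2}, 2: {0}}, 3))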
* storage/posix: Fix posix_symlinks_match() (Pranith Kumar K, 2018-07-04; 2 files, -8/+23)
  1) snprintf into linkname_expected should happen with PATH_MAX
  2) comparison should happen with linkname_actual with the complete string linkname_expected
  fixes: bz#1595528  Change-Id: Ic3b3c362dc6c69c046b9a13e031989be47ecff14  BUG: 1595528
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* cluster/dht: Remove EIO from dht_inode_missing (N Balachandran, 2018-07-04; 2 files, -4/+2)
  Removed EIO from the list of errnos that triggered a migrate check task.
  Change-Id: I7f89c7a16056421588f1af2377cebe6affddcb47  BUG: 1579673
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
  (cherry picked from commit c925962b91c67c8cd2391df7dd0251e0cbf66648)
* doc: Updated release notes for 3.12.11 [tag: v3.12.11] (ShyamsundarR, 2018-06-25; 1 file, -0/+27)
  fixes: bz#1594863  Change-Id: Ieeb16d0e04fdd52421041ac14a95430f6c6947f1
  Signed-off-by: ShyamsundarR <srangana@redhat.com>
* glusterfs: access trusted peer group via remote-host command (Mohit Agrawal, 2018-06-25; 1 file, -5/+0)
  Problem: In an SSL environment, a user is able to access a volume via the remote-host command without the node being added to the trusted pool.
  Solution: Change the list of rpc programs in glusterd.c at initialization time while SSL is enabled.
  Change-Id: I705253e032239e92ecad1c6a9b7e423a022132b5  BUG: 1593526  fixes: bz#1593526
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
  (upstream Change-Id: I987e433b639e68ad17b77b6452df1e22dbe0f199, cherry picked from commit 234d611160840899bcfd5ab1c17a6253673d38ed)
* storage/posix: Handle ENOSPC correctly in zero_fill (Pranith Kumar K, 2018-06-25; 3 files, -1/+121)
  Change-Id: Icc521d86cc510f88b67d334b346095713899087a  BUG: 1591187  fixes: bz#1591187
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* gcron: create the lockfile if it is missing (Niels de Vos, 2018-06-25; 1 file, -1/+1)
  The lockfile for the job may not exist yet. If that is the case, it should be created upon the first time it is accessed.
  Change-Id: I4da2b3ecdb79cc63ed82cc7bfa026c8f08d4d043  Fixes: bz#1559829
  Signed-off-by: Niels de Vos <ndevos@redhat.com>
  (cherry picked from commit 7005b1a336e483ec150c2f924a618dcfe197db0b)
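  A minimal Python sketch of the idea, creating the lockfile on first access; the path and exact flags here are assumptions for illustration, not necessarily the actual gcron change:

      import os
      import fcntl

      LOCK_FILE = "/tmp/gcron-example.lock"   # stand-in for the real gcron lockfile path

      # O_CREAT makes the first access create the file instead of failing
      fd = os.open(LOCK_FILE, os.O_CREAT | os.O_RDWR, 0o644)
      try:
          fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
          # ... run the scheduled job while holding the lock ...
      finally:
          os.close(fd)   # closing the descriptor also releases the flock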
* gcron: catch OSError as well as IOError (Niels de Vos, 2018-06-18; 1 file, -2/+2)
  In case os.open() fails because the file does not exist, an OSError is raised. To prevent the script from aborting uncleanly, catch the OSError in addition to the IOError.
  Change-Id: I48e5b23e17d63639cc33db51b4229249a9887880  Fixes: bz#1559829
  Signed-off-by: Niels de Vos <ndevos@redhat.com>
  (cherry picked from commit 26b52694feb04c98e6c9436bcd4e23e1687f0237)
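  A short sketch of the exception-handling pattern being described (gcron targeted Python 2, where IOError and OSError are distinct types and os.open() raises OSError); the helper below is hypothetical, not the gcron diff:

      import errno
      import os

      def open_lockfile(path):
          try:
              return os.open(path, os.O_RDONLY)
          except (IOError, OSError) as exc:    # OSError covers the os.open() failure
              if exc.errno == errno.ENOENT:
                  return None                  # missing file handled, no ugly traceback
              raise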
* Release notes for 3.12.10 [tag: v3.12.10] (Jiffin Tony Thottan, 2018-06-14; 1 file, -0/+28)
  Change-Id: Iad1e53b5617b8506d0617a5d303bf3604198bbde  BUG: 1571326
  Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
* protocol/server: Fix xdata leak in seek fop (Pranith Kumar K, 2018-06-12; 1 file, -2/+1)
  Change-Id: I6125283ed22c04564f0b77bb7a50579a83e02eb0  fixes: bz#1590133
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  (cherry picked from commit fd5b48ea0afd907deb08604415bee14ab65f378b)
* geo-rep/scheduler: Fix crash (Kotresh HR, 2018-06-11; 1 file, -35/+35)
  Fix a crash where session_name is referenced before assignment. This is a corner case where the geo-rep session exists but the status output doesn't show any rows, which might happen when glusterd is down or when the system is in an inconsistent state w.r.t. glusterd.
  Backport of https://review.gluster.org/19991/
  fixes: bz#1577871  Change-Id: Iec1557e01b35068041b4b3c1aacee2bfa0e05873
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
  (cherry picked from commit 829f32c61c364323bab494cf9dab880aad4be463)
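  A hypothetical Python sketch of the failure pattern the commit describes, not the scheduler code itself: a variable assigned only inside a loop is used afterwards, so an empty status output raises UnboundLocalError unless the variable is initialized first.

      def find_session(status_rows, volume):
          session_name = None                  # initialize so empty output is safe
          for row in status_rows:
              if row.get("volume") == volume:
                  session_name = row["session"]
                  break
          if session_name is None:
              raise SystemExit("no geo-rep status rows; is glusterd running?")
          return session_name

      print(find_session([{"volume": "gv0", "session": "gv0->slave"}], "gv0"))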
* glusterd/geo-rep: Fix glusterd crash (Kotresh HR, 2018-06-11; 1 file, -1/+1)
  Using strdup instead of gf_strdup crashes during free if a mempool is being used: gf_free checks the magic number in the header, which is not set up when plain strdup is used.
  Backport of https://review.gluster.org/19993/
  fixes: bz#1577868  Change-Id: Iab36496554b838a036af9d863e3f5fd07fd9780e
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
  (cherry picked from commit 57632e3c1a33187d1d23f101f83cd8759142acac)
* geo-rep: Fix upgrade issue (Kotresh HR, 2018-06-11; 1 file, -2/+3)
  Cause and analysis: The last synced changelog for entry operations is marked in the current version to avoid re-processing of already processed entry operations in a batch during a crash/restart of geo-rep. This was not present in previous versions. The marker is maintained in the dictionary with the key 'last_synced_entry' and the dictionary is persisted into the status file. So upgrading to the current version, in which the marker is present, was failing with KeyError.
  Solution: Load the dictionary with default keys first, which contains all the keys including the latest ones, and then load the values from the status file, instead of doing it the other way around.
  Backport of https://review.gluster.org/19969/
  fixes: bz#1577862  Change-Id: Ic654e6f9a3c97f616761f1362f890352a2186fb4
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
  (cherry picked from commit 23c1385b5f6f6103e820d15ecfe1df31940fdb45)
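  A small Python sketch of the load order described in the solution; apart from 'last_synced_entry' the key names and the JSON persistence format are assumptions for illustration, not the gsyncd status-file code:

      import json

      DEFAULTS = {
          "last_synced": 0,
          "last_synced_entry": 0,   # key introduced in the newer version
          "checkpoint_time": 0,     # hypothetical additional key
      }

      def load_status(path):
          data = dict(DEFAULTS)              # start with the full set of current keys
          try:
              with open(path) as f:
                  data.update(json.load(f))  # overlay values from the old status file
          except (IOError, OSError, ValueError):
              pass                           # missing or unreadable file keeps defaults
          return data                        # old status files no longer cause KeyError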
* geo-rep: Fix syncing of symlink (Kotresh HR, 2018-06-11; 1 file, -1/+1)
  Problem: If a symlink is created on the master pointing to the current directory (e.g. symlink -> ".") with a non-root uid or gid, the geo-rep worker crashes with ENOTSUP.
  Cause: Geo-rep creates the symlink on the slave and fixes the uid and gid using chown. os.chown dereferences the symlink, which here points to ".gfid", and that is not supported. Note that geo-rep operates on the aux-gfid mount (e.g. "/mnt/.gfid/<gfid-of-symlink-file>").
  Solution: The uid or gid change is actually meant for the symlink file itself, so use os.lchown, i.e. don't dereference.
  Backport of https://review.gluster.org/20017 (upstream BUG: 1567209)
  BUG: 1577845  fixes: bz#1577845  Change-Id: I63575fc589d71f987bef1d350c030987738c78ad
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
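  A minimal Python illustration of the difference (the path and ownership values below are placeholders, not the geo-rep code): os.chown() follows the link, while os.lchown() changes the link itself.

      import os

      link = "/tmp/example-link"       # stand-in for the symlink under the aux-gfid mount
      if not os.path.islink(link):
          os.symlink(".", link)        # symlink pointing at the current directory

      # os.chown(link, uid, gid) would dereference the symlink and operate on its
      # target; os.lchown changes the ownership of the symlink itself.
      os.lchown(link, os.getuid(), os.getgid())   # geo-rep would use the master's uid/gid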
* use awk to get a specific line from the output instead of cut (Raghavendra Bhat, 2018-06-11; 1 file, -7/+1)
  cut -d$'\n' does not separate the xattrs shown as part of the getfattr output. Hence use awk to get the nth line of the getfattr output for the nth iteration of the for loop.
  Change-Id: I1a96cd3f72f4f407f9a783375f78d9a69d5d3885  fixes: bz#1580519
  Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
  (cherry picked from commit bef654f48c14bfd7ce20702edff41052f6f54bdc)
* gfapi: various broken symbol versions (Kaleb S. KEITHLEY, 2018-06-11; 2 files, -1/+5)
  Lots of breakage in symbol versions: symbols added in 4.1 incorrectly, and symbols added in 4.1 but labeled 4.0.0. This was not noticed until someone tried to build 3.13.2 on FreeBSD 11.1; despite the fact that we build on FreeBSD 10.3 (IIRC), the 3.13 errors somehow aren't a build error there.
  Note: in rereading the Ulrich Drepper write-up I noticed that when a symbol version is changed, you are supposed to leave the old symbol in its original section in addition to adding it to its new section. Adding back those symbols to their original sections.
  Reported-by: Roman Serbski <mefystofel@gmail.com>
  Change-Id: I9a883546d08e0847f7228d8ea5943bc54275b319  BUG: 1577164
  Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
* core: FreeBSD has pthread_set_name_np() (versus pthread_setname_np()) (Kaleb S. KEITHLEY, 2018-06-11; 1 file, -2/+8)
  And has had it since at least FreeBSD 9.0.
  Reported-by: Roman Serbski <mefystofel@gmail.com>
  Change-Id: I52cfde7f2f7a82d0e66465ac392ed7e201e1653b  BUG: 1576816
  Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
* test: Marking test case bug-1309462.t as bad (ShyamsundarR, 2018-05-23; 1 file, -1/+2)
  Details in the bug.
  Updates: bz#1581746  Change-Id: Id984e10b60daf274d5510e3ccbf7abf0cb19f368
  Signed-off-by: ShyamsundarR <srangana@redhat.com>
  (cherry picked from commit 92839c5a4a6c87973e4f6f2f96b359e9c2a0f5c0)
* cluster/dht: Fix dht_rename lock order (N Balachandran, 2018-05-09; 1 file, -18/+47)
  Fixed dht_order_rename_lock to use the same inodelk ordering as that of the dht selfheal locks (dictionary order of lock subvolumes).
  Change-Id: Ia3f8353b33ea2fd3bc1ba7e8e777dda6c1d33e0d  BUG: 1570475
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
* doc: Added release notes for 3.12.9 release [tag: v3.12.9] (ShyamsundarR, 2018-04-24; 1 file, -0/+32)
  fixes: bz#1571192  Change-Id: I7b5f2b3394c67ebeb0f20ff15ad877cef577a9cb
  Signed-off-by: ShyamsundarR <srangana@redhat.com>
* libglusterfs: fix comparison of a NULL dict with a non-NULL dict (Xavi Hernandez, 2018-04-24; 1 file, -8/+8)
  Function are_dicts_equal() had a bug when the first argument was NULL and the second one wasn't NULL. In this case it incorrectly returned that the dicts were different when they could be equal.
  Backport of upstream BUG: 1566732
  BUG: 1569407  Change-Id: I0fc245c2e7d1395865a76405dbd05e5d34db3273
  Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
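  The intended symmetric behaviour can be sketched in Python (illustration only; the real code operates on gluster's dict_t and its match functions): a missing dict should compare equal to another missing or empty dict regardless of argument order.

      def dicts_equal(d1, d2):
          if d1 is None and d2 is None:
              return True
          if d1 is None:
              d1, d2 = d2, d1        # normalize so that only d2 can be None
          if d2 is None:
              return len(d1) == 0    # a None dict counts as equal to an empty dict
          return d1 == d2

      print(dicts_equal(None, {}), dicts_equal({}, None), dicts_equal(None, {"a": 1}))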
* server/auth: add option for strict authentication (Mohammed Rafi KC, 2018-04-22; 6 files, -12/+81)
  When this option is enabled, we check for a matching username and password; if none is found, the connection is rejected. This also does a checksum validation of the volfile. The option is invalid when SSL/TLS is in use, at which point the SSL/TLS certificate user name is used to validate and hence authorize the right user. This expects TLS allow rules to be set up correctly rather than the default *.
  This option is not settable; as a result it cannot be enabled for volumes using the CLI. It is used with the shared storage volume, to restrict access to the same in non-SSL/TLS environments to the gluster peers only.
  Tested:
  - ./tests/bugs/protocol/bug-1321578.t
  - ./tests/features/ssl-authz.t
  - Ran tests on volumes with and without strict auth checking (as the brick volfile needed to be edited to test, or rather to enable the option)
  - Ran tests on volumes to ensure existing mounts are disconnected when we enable strict checking
  Change-Id: I2ac4f0cfa5b59cc789cc5a265358389b04556b59  fixes: bz#1570430
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Signed-off-by: ShyamsundarR <srangana@redhat.com>
* shared storage: Prevent mounting shared storage from non-trusted client (Mohammed Rafi KC, 2018-04-22; 1 file, -0/+21)
  The gluster shared storage volume is used for internal storage by various features including ganesha, geo-rep and snapshot. This volume should not be exposed to the client, as it is a special volume for internal use. This fix won't generate a non-trusted volfile for the shared storage volume.
  Change-Id: I8ffe30ae99ec05196d75466210b84db311611a4c  updates: bz#1570430
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
* cluster/dht: Handle file migrations when brick down (N Balachandran, 2018-04-18; 1 file, -5/+51)
  The decision as to which node would migrate a file was based on the gfid of the file. Files were divided among the nodes for the replica/disperse set. However, if a brick was down when rebalance started, the nodeuuids would be saved as NULL and a set of files would not be migrated. Now, if the nodeuuid is NULL, the first non-null entry in the set is the node responsible for migrating the file.
  fixes: bz#1566820  Change-Id: Id1a6e847b0191b6a40707bea789a2a35ea3d9f68
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
  (cherry picked from commit 1f0765242a689980265c472646c64473a92d94c0, upstream Change-Id: I72554c107792c7d534e0f25640654b6f8417d373)
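  A rough Python sketch of the selection rule in the fix; the hash is only a stand-in for dht's gfid-based choice and the code is illustrative, not the rebalance implementation:

      import binascii

      def node_for_file(gfid, nodeuuids):
          """nodeuuids: one UUID per brick of the replica/disperse set, or None
          for a brick that was down when rebalance started."""
          idx = binascii.crc32(gfid.encode()) % len(nodeuuids)
          if nodeuuids[idx] is not None:
              return nodeuuids[idx]          # normal case: the gfid picks the node
          # fallback: the first non-null entry is responsible for this file
          return next((u for u in nodeuuids if u is not None), None)

      print(node_for_file("a1b2c3d4", [None, "uuid-node-2", "uuid-node-3"]))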
* cluster/dht: Wind open to all subvols (N Balachandran, 2018-04-18; 1 file, -10/+5)
  dht_opendir should wind the open to all subvols whether or not local->subvols is set. This is because dht_readdirp winds the calls to all subvols.
  Change-Id: I67a96b06dad14a08967c3721301e88555aa01017  updates: bz#1566820
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
  (cherry picked from commit c4251edec654b4e0127577e004923d9729bc323d)
* cluster/afr: Fixing the flaws in arbiter becoming source patch (Ravishankar N, 2018-04-18; 7 files, -179/+276)
  Backport of https://review.gluster.org/19045
  Problem: Setting the write_subvol value to read_subvol in case of a metadata transaction during pre-op (commit 19f9bcff4aada589d4321356c2670ed283f02c03) might lead to the original problem of the arbiter becoming source.
  Scenario:
  1) All bricks are up and good
  2) 2 writes w1 and w2 are in progress in parallel
  3) ctx->read_subvol is good for all the subvolumes
  4) w1 succeeds on brick0 and fails on brick1, yet to do post-op on the disk
  5) read/lookup comes on the same file and refreshes read_subvols back to all good
  6) metadata transaction happens which makes ctx->write_subvol be assigned with ctx->read_subvol, which is all good
  7) w2 succeeds on brick1 and fails on brick0, and this updates the bricks in reverse order, leading to the arbiter becoming source
  Fix: Instead of setting ctx->write_subvol to ctx->read_subvol in the pre-op stage, if there is a metadata transaction, check in the function __afr_set_in_flight_sb_status() whether it is a data or metadata transaction. Use the value of ctx->write_subvol for data transactions and the ctx->read_subvol value for other transactions. With this patch we assign the value of ctx->write_subvol in afr_transaction_perform_fop() with the on-disk value, instead of assigning it in afr_changelog_pre_op() with the in-memory value.
  Change-Id: Id2025a7e965f0578af35b1abaac793b019c43cc4  BUG: 1566131
  Signed-off-by: karthik-us <ksubrahm@redhat.com>
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* cluster/afr: Fix for arbiter becoming source (karthik-us, 2018-04-18; 4 files, -6/+102)
  Backport of https://review.gluster.org/#/c/18049/
  Problem: When eager-lock is on and two writes happen in parallel on an FD, we were observing the following behaviour:
  - First write fails on one data brick
  - Since the post-op has not yet happened, the inode refresh will get both the data bricks as readable and set that in the inode context
  - The in-flight split-brain check sees both data bricks as readable and allows the second write
  - Second write fails on the other data brick
  - Now the post-op happens and marks both data bricks as bad, and the arbiter will become source for healing
  Fix: Add one more variable called write_subvol in the inode context; it holds the in-memory representation of the writable subvols. Inode refresh will not update this value, and its lifetime is pre-op through unlock in the afr transaction. Initially the pre-op sets this value the same as read_subvol in the inode context, and then the in-flight split-brain check uses this value instead of read_subvol. After all the checks we update this value and set read_subvol the same as it, to avoid having an incorrect value there.
  Change-Id: I2ef6904524ab91af861d59690974bbc529ab1af3  BUG: 1566131
  Signed-off-by: karthik-us <ksubrahm@redhat.com>
* Release notes for 3.12.8 [tag: v3.12.8] (Jiffin Tony Thottan, 2018-04-12; 1 file, -0/+15)
  Change-Id: If49d876df9b96acbc20c6378cea4ba0fed386b9f  BUG: 1564465
  Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
* features/index: Choose different base file on EMLINK error (Pranith Kumar K, 2018-04-12; 2 files, -18/+61)
  Change-Id: I4648816af908539efdc2528608aa2ebf7f0d0e2f  fixes: bz#1565655
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  (cherry picked from commit bb12f2109a01856e8184e13cf984210d20155b13)
* timer: Fix possible race during cleanup (Soumya Koduri, 2018-04-10; 1 file, -3/+12)
  As mentioned in bug 1509189, there is a possible race between gf_timer_cancel(), gf_timer_proc() and gf_timer_registry_destroy() leading to a use-after-free.
  Problem:
  1) gf_timer_proc() is called, locks reg, and gets an event. It unlocks reg and calls the callback.
  2) Meanwhile gf_timer_registry_destroy() is called, removes reg from ctx, and joins on gf_timer_proc().
  3) gf_timer_call_cancel() is called on the event being processed. It cannot find reg (since it has been removed from ctx), so it frees the event.
  4) The callback returns into gf_timer_proc(), which tries to free the event, but it is already free, so double free.
  Solution: Bail out in gf_timer_cancel() when the registry is not found. The logic behind this is that gf_timer_cancel() is called only on an existing event, which means there was a valid registry earlier while creating that event, and the only reason we cannot find that registry now is that it must have been set to NULL when context cleanup started. Since gf_timer_proc() takes care of releasing all the remaining events active on that registry, it seems safe to bail out in gf_timer_cancel().
  master: https://review.gluster.org/18652 (master BZ: 1509189)
  Change-Id: Ia9b088533141c3bb335eff2fe06b52d1575bb34f  BUG: 1565590
  Reported-by: Daniel Gryniewicz <dang@redhat.com>
  Signed-off-by: Soumya Koduri <skoduri@redhat.com>
  Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
* cluster/dht: Skipped files are not treated as errors (N Balachandran, 2018-04-06; 1 file, -5/+9)
  For skipped files, use a return value of 1 to prevent error messages being logged.
  Change-Id: I18de31ac1a64d4460e88dea7826c3ba03c895861  BUG: 1555161 (upstream BUG: 1553598)
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
* cluster/afr: Prevent ping-event handling on shd (Pranith Kumar K, 2018-04-06; 1 file, -0/+2)
  On shd, we shouldn't treat any brick as down based on latency, otherwise self-heal will never happen.
  fixes: 1562723  BUG: 1562723  Change-Id: Ica07fcc4fae91a6bfd9c9a670e2be464704d94b7
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* cluster/ec: send list-node-uuids request to all subvolumes (Xavi Hernandez, 2018-04-06; 2 files, -1/+2)
  The xattr trusted.glusterfs.list-node-uuids was only sent to a single subvolume. This was returning null uuids from the other subvolumes as if they were down. This fix forces that xattr to be requested from all subvolumes.
  Backport of upstream BUG: 1561406
  Change-Id: If62eb39a6857258923ba625e153d4ad79018ea2f  BUG: 1561731
  Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
* cluster/ec: Change default read policy to gfid-hash (Ashish Pandey, 2018-04-06; 2 files, -5/+4)
  Problem: Whenever we read data from a file over NFS, NFS reads more data than requested and caches it. Based on the stat information it checks whether the cached/pre-read data is still valid. Consider a 4+2 EC volume with all the bricks on different nodes. In EC, with the round-robin read policy, reads are sent to different sets of data bricks. This balances the read fops across all the bricks and avoids heating up (overloading) the same set of bricks. Due to small differences in clock speed, it is possible to get minor differences in atime, mtime or ctime from different bricks. That might cause a different stat to be returned to NFS, based on which NFS will discard cached/pre-read data which has actually not changed and could have been used.
  Solution: Change the read policy for EC to gfid-hash. That forces all reads for a file to go to the same set of bricks.
  Change-Id: I825441cc519e94bf3dc3aa0bd4cb7c6ae6392c84  BUG: 1558352 (upstream BUG: 1554743)
  Signed-off-by: Ashish Pandey <aspandey@redhat.com>
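  A tiny Python sketch of what a gfid-hash style read policy means, as opposed to round-robin; the hash used here is only a stand-in for EC's internal choice, not the actual implementation:

      import hashlib

      def read_brick(gfid, num_data_bricks):
          # the same gfid always maps to the same brick, so repeated reads of a
          # file hit the same set of bricks and NFS sees a consistent stat
          h = int(hashlib.sha1(gfid.encode()).hexdigest(), 16)
          return h % num_data_bricks

      gfid = "3f8a9c2e-example-gfid"
      print(read_brick(gfid, 4), read_brick(gfid, 4))   # identical results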
* cluster/ec: avoid delays in self-heal (Xavi Hernandez, 2018-04-06; 5 files, -48/+134)
  Self-heal creates a thread per brick to sweep the index looking for files that need to be healed. These threads are started before the volume comes online, so nothing is done but waiting for the next sweep. This happens once per minute.
  When a replace-brick command is executed, the new graph is loaded and all index sweeper threads started. When all bricks have reported, a getxattr request is sent to the root directory of the volume. This causes a heal on it (because the new brick doesn't have good data) and marks its contents as pending to be healed. This is done by the index sweeper thread on the next round, one minute later. This patch solves the problem by waking all index sweeper threads after a successful check on the root directory.
  Additionally, the index sweep thread scans the index directory sequentially, but it might happen that after healing a directory entry more index entries are created but skipped by the current directory scan. This causes the remaining entries to be processed on the next round, one minute later. The same can happen in the next round, so the heal runs in bursts and takes a long time to finish, especially on volumes with many directory levels. This patch solves the problem by immediately restarting the index sweep if a directory has been healed.
  Backport of upstream BUG: 1547662
  Change-Id: I58d9ab6ef17b30f704dc322e1d3d53b904e5f30e  BUG: 1555201
  Signed-off-by: Xavi Hernandez <jahernan@redhat.com>
* extras/hooks: Fix S10selinux-label-brick.sh hook script (Milan Zink, 2018-04-06; 1 file, -28/+29)
  * script was failing due to a syntax error
  * shellcheck issues fixed
  * improved performance: semanage & restorecon are run on the unique path
  Upstream reference: Change-Id I58b357d9fd37586004a2a518f7a5d1c5c9ddd7e3, BUG 1533342, Signed-off-by: Milan Zink <zeten30@gmail.com>
  Change-Id: I58b357d9fd37586004a2a518f7a5d1c5c9ddd7e3  BUG: 1546627
  Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
* glusterfsd: Memleak in glusterfsd process while brick mux is on (Mohit Agrawal, 2018-04-06; 24 files, -133/+274)
  Problem: At the time of stopping the volume while brick multiplex is enabled, memory is not cleaned up from all server-side xlators.
  Solution: To clean up memory for all server-side xlators, call fini in glusterfs_handle_terminate after sending the GF_EVENT_CLEANUP notification to the top xlator.
  Note: All test cases were run in a separate build (https://review.gluster.org/19574) with the same patch after forcefully enabling brick mux; all test cases passed.
  BUG: 1549473 (upstream BUG: 1544090)
  Change-Id: Ia10dc7f2605aa50f2b90b3fe4eb380ba9299e2fc
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
  (cherry picked from commit 7c3cc485054e4ede1efb358552135b432fb7047a)
* glusterd: import volumes in separate synctask (Atin Mukherjee, 2018-04-06; 6 files, -69/+343)
  With brick multiplexing, to attach a brick to an existing brick process the prerequisite is for the compatible brick to finish its initialization and portmap sign-in, and hence the thread might have to go to sleep and context-switch the synctask to allow the brick process to communicate with glusterd. In the normal code path this works fine, as glusterd_restart_bricks() is launched through a separate synctask.
  In case there's a mismatch of the volume when glusterd restarts, glusterd_import_friend_volume is invoked and then it tries to call glusterd_start_bricks() from the main thread, which may eventually land in the same situation. Since this is not done through a separate synctask, the first brick will never get its turn to finish all of its handshaking, and as a consequence all the bricks will fail to get attached to it.
  Solution: Execute import volume and glusterd restart bricks in a separate synctask. Importing snaps also had to be done through a synctask, as there is a dependency on the parent volume being available for the snap import functionality to work.
  mainline patches: https://review.gluster.org/#/c/19357/ and https://review.gluster.org/#/c/19536/
  Change-Id: I290b244d456afcc9b913ab30be4af040d340428c  BUG: 1543708
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
* cluster/dht: ENOSPC will not fail rebalance (N Balachandran, 2018-04-02; 1 file, -9/+3)
  ENOSPC returned by a file migration is no longer considered a rebalance failure.
  Change-Id: I21cf3a8acdc827bc478e138d6cb5db649d53a28c  BUG: 1555161
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
* Release notes for 3.12.7 [tag: v3.12.7] (Jiffin Tony Thottan, 2018-03-08; 2 files, -1/+15)
  Change-Id: I282af0c03e5da6d5a98780004720e14435b18082  BUG: 1551444
  Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>