* rpc: Cleanup SSL specific data at the time of freeing rpc object (l17zhou, 2020-07-13, 2 files, -5/+40)
  Problem: SSL-specific data is not freed when the rpc object is cleaned up,
  so it becomes a memory leak.
  Solution: To avoid the leak, clean up the SSL-specific data at the time the
  rpc object is cleaned up.
  > Credits: l17zhou <cynthia.zhou@nokia-sbell.com.cn>
  > Fixes: bz#1768407
  > Change-Id: I37f598673ae2d7a33c75f39eb8843ccc6dffaaf0
  > (cherry picked from commit 54ed71dba174385ab0d8fa415e09262f6250430c)
  Change-Id: I37f598673ae2d7a33c75f39eb8843ccc6dffaaf0
  Fixes: #1016
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
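  The entry above only names the leak; below is a minimal sketch of the
  teardown pattern using plain OpenSSL calls and hypothetical field names
  (conn_priv, ssl, ssl_ctx), not the actual GlusterFS rpc structures.

      /* Release per-connection SSL state when the owning object is freed. */
      #include <openssl/ssl.h>

      struct conn_priv {
          SSL     *ssl;       /* per-connection SSL handle */
          SSL_CTX *ssl_ctx;   /* context owned by this connection, if any */
      };

      static void conn_priv_destroy(struct conn_priv *priv)
      {
          if (!priv)
              return;
          if (priv->ssl) {
              SSL_shutdown(priv->ssl); /* best-effort close_notify */
              SSL_free(priv->ssl);     /* also frees BIOs set with SSL_set_bio() */
              priv->ssl = NULL;
          }
          if (priv->ssl_ctx) {
              SSL_CTX_free(priv->ssl_ctx);
              priv->ssl_ctx = NULL;
          }
      }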
* socket/ssl: fix crl handling (Milind Changire, 2020-07-10, 4 files, -21/+107)
  Problem: Just setting the path to the CRL directory in socket_init()
  wasn't working.
  Solution: Need to use special API to retrieve and set X509_VERIFY_PARAM and
  set the CRL checking flags explicitly. Also, setting the CRL checking flags
  is a big pain, since the connection is declared as failed if any CRL isn't
  found in the designated file or directory. A comment has been added to the
  code appropriately.
  > Change-Id: I8a8ed2ddaf4b5eb974387d2f7b1a85c1ca39fe79
  > fixes: bz#1687326
  > Signed-off-by: Milind Changire <mchangir@redhat.com>
  > (Cherry pick from commit 06fa261207f0f0625c52fa977b96e5875e9a91e0)
  > (Reviewed on upstream link https://review.gluster.org/#/c/glusterfs/+/22334)
  Change-Id: I8a8ed2ddaf4b5eb974387d2f7b1a85c1ca39fe79
  Fixes: #1362
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
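  A hedged sketch of what "use the X509_VERIFY_PARAM API" looks like with
  plain OpenSSL (the enable_crl_checking() helper and its arguments are
  assumptions, not the GlusterFS socket code):

      #include <openssl/ssl.h>
      #include <openssl/x509_vfy.h>

      static int enable_crl_checking(SSL_CTX *ctx, const char *crl_dir)
      {
          /* Make the hashed CRL directory visible to the verifier. */
          if (SSL_CTX_load_verify_locations(ctx, NULL, crl_dir) != 1)
              return -1;

          /* Retrieve the context's verify parameters and turn on CRL checks.
           * With CRL_CHECK_ALL, verification fails whenever a CRL for any CA
           * in the chain cannot be found, which is the "big pain" the commit
           * message warns about. */
          X509_VERIFY_PARAM *param = SSL_CTX_get0_param(ctx);
          X509_VERIFY_PARAM_set_flags(param, X509_V_FLAG_CRL_CHECK |
                                             X509_V_FLAG_CRL_CHECK_ALL);
          return 0;
      }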
* locks/fencing: Address hang while lock preemption (Susant Palai, 2020-07-08, 3 files, -20/+29)
  The fop_wind_count can go negative on the unwind path of the IO when
  fencing is enabled, leading to a hang. Also changed the code so that
  fop_wind_count needs to be maintained only until fencing is enabled on the
  file.
  > updates: bz#1717824
  > Change-Id: Icd04b42bc16cd3d50eaa581ee57233910194f480
  > signed-off-by: Susant Palai <spalai@redhat.com>
  (backport of https://review.gluster.org/#/c/glusterfs/+/23088/)
  fixes: bz#1740494
  Change-Id: Icd04b42bc16cd3d50eaa581ee57233910194f480
  Signed-off-by: Susant Palai <spalai@redhat.com>
* features/shard: Fix crash during shards cleanup in error cases (Krutika Dhananjay, 2020-07-08, 1 file, -2/+9)
  A crash is seen during a reattempt to clean up shards in background upon
  remount. And this happens even on remount (which means a remount is no
  workaround for the crash).
  In such a situation, the in-memory base inode object will not be existent
  (new process, non-existent base shard). So local->resolver_base_inode will
  be NULL.
  In the event of an error (in this case, of space running out), the process
  would crash at the time of logging the error in the following line -

      gf_msg(this->name, GF_LOG_ERROR, local->op_errno, SHARD_MSG_FOP_FAILED,
             "failed to delete shards of %s",
             uuid_utoa(local->resolver_base_inode->gfid));

  Fixed that by using local->base_gfid as the source of gfid when
  local->resolver_base_inode is NULL.
  Change-Id: I0b49f2b58becd0d8874b3d4b14ff8d92a89d02d5
  Fixes: #1127
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  (cherry picked from commit cc43ac8651de9aa508b01cb259b43c02d89b2afc)
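  One way the described fix could look at that call site (a sketch built only
  from the fields named above, not the actual patch):

      /* Fall back to the gfid cached in the local when the base inode was
       * never resolved in this process (fresh mount, background cleanup). */
      gf_msg(this->name, GF_LOG_ERROR, local->op_errno, SHARD_MSG_FOP_FAILED,
             "failed to delete shards of %s",
             uuid_utoa(local->resolver_base_inode
                           ? local->resolver_base_inode->gfid
                           : local->base_gfid));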
* cli: duplicate defns of cli_default_conn_timeout and cli_ten_minutes_timeout (Kaleb S. KEITHLEY, 2020-07-08, 2 files, -2/+5)
  Winter is coming. So is gcc-10. Compiling with a gcc-10-20191219 snapshot
  reveals dupe defns of cli_default_conn_timeout and cli_ten_minutes_timeout
  in .../cli/src/cli.[ch] due to a missing extern decl.
  There are many changes coming in gcc-10, described in
  https://gcc.gnu.org/gcc-10/changes.html
  Compiling cli.c with gcc-9 we see:

      ...
      .quad .LC88
      .comm cli_ten_minutes_timeout,4,4
      .comm cli_default_conn_timeout,4,4
      .text
      .Letext0:
      ...

  and with gcc-10:

      ...
      .quad .LC88
      .globl cli_ten_minutes_timeout
      .bss
      .align 4
      .type cli_ten_minutes_timeout, @object
      .size cli_ten_minutes_timeout, 4
      cli_ten_minutes_timeout:
      .zero 4
      .globl cli_default_conn_timeout
      .align 4
      .type cli_default_conn_timeout, @object
      .size cli_default_conn_timeout, 4
      cli_default_conn_timeout:
      .zero 4
      .text
      .Letext0:
      ...

  which is reflected in the .o file as (gcc-9):

      ...
      0000000000000004 C cli_ten_minutes_timeout
      0000000000000004 C cli_default_conn_timeout
      ...

  and (gcc-10):

      ...
      0000000000000020 B cli_ten_minutes_timeout
      0000000000000024 B cli_default_conn_timeout
      ...

  See nm(1) and ld(1) for a description of C (common) and B (BSS) and how
  they are treated by the linker.
  Note: there is still a small chance that gcc-10 will land in Fedora-32,
  despite 31 Dec. 2019 having been the deadline for that to happen.
  Change-Id: I54ea485736a4910254eeb21222ad263721cdef3c
  Fixes: #1349
  Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
  (cherry picked from commit f66bd85af09397300ad434655fc68861f48c2e3c)
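  The fix pattern itself, as a minimal sketch (the symbol names come from the
  commit above; the split across cli.h/cli.c and the initial values are
  illustrative assumptions):

      /* cli.h - declare only; a bare "int cli_default_conn_timeout;" here
       * becomes a tentative definition in every includer, which gcc-10's
       * default -fno-common turns into duplicate-symbol link errors. */
      extern int cli_default_conn_timeout;
      extern int cli_ten_minutes_timeout;

      /* cli.c - define the objects in exactly one translation unit. */
      int cli_default_conn_timeout = 120;
      int cli_ten_minutes_timeout = 600;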
* afr: event gen changes (Ravishankar N, 2020-07-08, 4 files, -82/+29)
  The general idea of the changes is to prevent resetting event generation to
  zero in the inode ctx, since event gen is something that should follow
  'causal order'.
  Change #1: For a read txn, in inode refresh cbk, if event_generation is
  found zero, we are failing the read fop. This is not needed because a
  change in event gen is only a marker for the next inode refresh to happen
  and should not be taken into account by the current read txn.
  Change #2: The event gen being zero above can happen if there is a racing
  lookup, which resets event gen (in afr_lookup_done) if there are non-zero
  afr xattrs. The resetting is done only to trigger an inode refresh and a
  possible client side heal on the next lookup. That can be achieved by
  setting the need_refresh flag in the inode ctx. So replaced all occurrences
  of resetting event gen to zero with a call to afr_inode_need_refresh_set().
  Change #3: In both lookup and discover path, we are doing an inode refresh
  which is not required since all 3 essentially do the same thing: update the
  inode ctx with the good/bad copies from the brick replies. Inode refresh
  also triggers background heals, but I think it is okay to do it when we
  call refresh during the read and write txns and not in the lookup path.
  The .t tests which relied on inode refresh in the lookup path to trigger
  heals are now changed to do a read txn so that inode refresh and the heal
  happen.
  Change-Id: Iebf39a9be6ffd7ffd6e4046c96b0fa78ade6c5ec
  Fixes: #1179
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  Reported-by: Erik Jacobson <erik.jacobson at hpe.com>
  (cherry picked from commit f0fcd909ad4535b60c9208d4804ebe6afe421a09)
* cluster/afr: Prioritize ENOSPC over other errors (karthik-us, 2020-07-08, 4 files, -48/+86)
  Problem: In a replicate/arbiter volume, if file creations or writes fail on
  a quorum number of bricks, and on one brick the failure is due to ENOSPC
  while on another brick it fails for a different reason, the fop may fail
  with errors other than ENOSPC in some cases.
  Fix: Prioritize ENOSPC over other lesser-priority errors, and do not set
  op_errno in posix_gfid_set if op_ret is 0, to avoid receiving any error_no
  which can be misinterpreted by __afr_dir_write_finalize().
  Also removing the function afr_has_arbiter_fop_cbk_quorum(), which might
  consider a successful reply from a single brick as quorum success in some
  cases, whereas we always need the fop to be successful on a quorum number
  of bricks in an arbiter configuration.
  Change-Id: I106e267f8b9451f681022f1cccb410d9bc824c08
  Fixes: #1254
  Signed-off-by: karthik-us <ksubrahm@redhat.com>
  (cherry picked from commit fa63b45ca5edf172b1b89b28b5db3c5129cc57b6)
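  A minimal sketch of the idea, not the AFR implementation (the helper names
  and the choice of which errnos outrank which are assumptions here):

      #include <errno.h>

      /* Higher number = more important for the application to see. */
      static int errno_priority(int err)
      {
          switch (err) {
          case ENOSPC:
              return 3;            /* the user must learn the volume is full */
          case EDQUOT:
              return 2;            /* quota exceeded is similarly actionable */
          default:
              return err ? 1 : 0;  /* any other failure beats "no error" */
          }
      }

      /* Fold a new reply's errno into the one collected so far. */
      static int pick_op_errno(int current, int incoming)
      {
          return errno_priority(incoming) > errno_priority(current)
                     ? incoming : current;
      }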
* afr: more quorum checks in lookup and new entry marking (Ravishankar N, 2020-07-08, 4 files, -13/+25)
  Problem: See github issue for details.
  Fix:
  - In lookup, if the entry exists in 2 out of 3 bricks, don't fail the
    lookup with ENOENT just because there is an entrylk on the parent.
    Consider quorum before deciding.
  - If an entry FOP does not succeed on a quorum no. of bricks, do not
    perform new entry mark.
  Fixes: #1303
  Change-Id: I56df8c89ad53b29fa450c7930a7b7ccec9f4a6c5
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  (cherry picked from commit c4a6748f25d2c1ab3ebcf89952278ebf94c8d371)
* fuse: degrade logging of write failure to fuse device (Csaba Henk, 2020-07-08, 2 files, -7/+80)
  Problem: FUSE uses failures of communicating with /dev/fuse with various
  errnos to indicate in-kernel conditions to userspace. Some of these
  shouldn't be handled as an application error. Also the standard POSIX errno
  description should not be shown as they are misleading in this context.
  Solution: When writing to the fuse device, the caller of the respective
  convenience routine can mask those errnos which don't qualify to be an
  error for the application in that context, so then those shall be reported
  at DEBUG level.
  The possible non-standard errnos are reported with their POSIX name instead
  of their description to avoid confusion. (Eg. for ENOENT we don't log "no
  such file or directory", we log indeed literal "ENOENT".)
  Change-Id: I510158843e4b1d482bdc496c2e97b1860dc1ba93
  > updates: bz#1193929
  updates: #1000
  Signed-off-by: Csaba Henk <csaba@redhat.com>
  (cherry picked from commit 1166df1920dd9b2bd5fce53ab49d27117db40238)
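  A hedged sketch of that shape (generic C, not the GlusterFS fuse bridge;
  the mask encoding and helper names are assumptions):

      #include <errno.h>
      #include <stdio.h>
      #include <string.h>

      /* Map the handful of "expected" errnos to their POSIX names. */
      static const char *errno_name(int err)
      {
          switch (err) {
          case ENOENT:  return "ENOENT";
          case ENOTDIR: return "ENOTDIR";
          case ENODEV:  return "ENODEV";
          case EPERM:   return "EPERM";
          default:      return NULL;   /* fall back to strerror() */
          }
      }

      /* 'expected' is a caller-built bitmask, e.g. (1UL << ENOENT). */
      static void log_fuse_write_failure(int err, unsigned long expected)
      {
          int debug_only = err > 0 && err < (int)(8 * sizeof(expected)) &&
                           (expected & (1UL << err));
          const char *name = errno_name(err);

          fprintf(stderr, "[%s] write to /dev/fuse failed: %s\n",
                  debug_only ? "DEBUG" : "ERROR",
                  name ? name : strerror(err));
      }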
* cluster/ec: Return correct error code and log message (Ashish Pandey, 2020-07-08, 1 file, -2/+9)
  If readdir was sent with an FD on which opendir had failed, this FD will be
  useless and we return it with an error. For now, we are returning EINVAL
  without logging any message in the log file.
  Return a correct error code and also log a message to make this easier to
  debug.
  fixes: #1220
  Change-Id: Iaf035254b9c5aa52fa43ace72d328be622b06169
  (cherry picked from commit af70cb5eedd80207cd184e69f2a4fb252b72d070)
* md-cache: fix several NULL dereferences (Xavi Hernandez, 2020-07-08, 1 file, -66/+129)
  This patch includes the following CID from Coverity Scan:
  * 1425196
  * 1425197
  * 1425198
  * 1425199
  * 1525200
  Change-Id: Iddcfea449d3dd56d4dfcc39f4c3c608518e611e4
  Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
  Updates: #1060
* tests/bug-844688.t: test bug-844688.t is failing on master (Mohammed Rafi KC, 2020-06-16, 1 file, -11/+32)
  Test case bug-844688.t is failing quite frequently on master. This test
  checks for the existence of call_stack and frame creation time. But there
  is a chance that at a point in time the stack count might become zero. So
  doing the check inside EXPECT_WITHIN makes more sense.
  Change-Id: Id2ede7f6fdcb5f016f52c5c0557ce6ac510d4e96
  Fixes: #1307
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  (cherry picked from commit 08a9f198d576bbae3596536bbd2c4d34dadd1a93)
* tests: skip tests on absence of reflink in xfs (Pranith Kumar K, 2020-05-26, 3 files, -10/+12)
  Fixes: #1223
  Change-Id: I36cb72d920ffd77405051546615c5262c392daef
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  (cherry picked from commit b85f01abab658d1d704cd6caf84dd64eddafbff7)
* Adding release notes for release-6.9 [tag: v6.9] (Hari Gowtham, 2020-04-22, 1 file, -0/+32)
  Change-Id: I21d153ed3d2991b7dc2bffea605b5abdca87b748
  fixes: #1175
  Signed-off-by: Hari Gowtham <hgowtham@redhat.com>
* features/utime: Don't access frame after stack-wind (Pranith Kumar K, 2020-04-22, 2 files, -15/+52)
  Problem: frame is accessed after stack-wind. This can lead to a crash if
  the cbk frees the frame.
  Fix: Use a new frame for the wind instead.
  Updates: #832
  Change-Id: I64754609f1114b0bbd4d1336fa81a56f2cca6e03
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
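  A self-contained illustration of the bug pattern and fix (hypothetical
  frame/wind/callback names, not the utime xlator code):

      #include <stdlib.h>

      struct frame { int op; struct frame *parent; };
      typedef void (*cbk_t)(struct frame *f);

      /* The callback may run synchronously and free the frame before
       * wind() even returns. */
      static void wind(struct frame *f, cbk_t cbk) { cbk(f); }
      static void cbk_that_frees(struct frame *f) { free(f); }

      static void caller(struct frame *frame)
      {
          /* BAD:  wind(frame, cbk_that_frees); frame->op = 1;  use-after-free */

          /* GOOD: hand a fresh frame to the wound call, keep using ours. */
          struct frame *new_frame = calloc(1, sizeof(*new_frame));
          if (!new_frame)
              return;
          new_frame->parent = frame;
          wind(new_frame, cbk_that_frees);
          frame->op = 1;   /* still valid: our frame was never handed off */
      }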
* utime: resolve an issue of permission denied logs (Amar Tumballi, 2020-04-22, 2 files, -1/+12)
  In cases where uid is not set to 0, there are possible errors from the acl
  xlator. So, set `uid = 0;` with a pid indicating this is set from UTIME
  activity.
  The message "E [MSGID: 148002] [utime.c:146:gf_utime_set_mdata_setxattr_cbk]
  0-dev_SNIP_data-utime: dict set of key for set-ctime-mdata failed
  [Permission denied]" repeated 2 times between [2019-12-19 21:27:55.042634]
  and [2019-12-19 21:27:55.047887]
  Change-Id: Ieadf329835a40a13ac0bf908dac776e66954466c
  Fixes: #832
  Signed-off-by: Amar Tumballi <amar@kadalu.io>
  (cherry picked from commit eb916c057036db8289b41265797e5dce066d1512)
* mount/fuse: Wait for 'mount' child to exit before dying (Pranith Kumar K, 2020-04-22, 2 files, -0/+28)
  Problem: tests/bugs/protocol/bug-1433815-auth-allow.t fails sometimes
  because of a stale mount. This stale mount comes into the picture when the
  parent process dies without waiting for the child process which mounts the
  fuse fs to die.
  Fix: Wait for the mounting child process to die before dying.
  Fixes: #1152
  Change-Id: I8baee8720e88614fdb762ea822d5877973eef8dc
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
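  The shape of the fix, as a hedged standalone sketch (plain fork/waitpid,
  not the actual mount helper):

      #include <errno.h>
      #include <stdlib.h>
      #include <sys/types.h>
      #include <sys/wait.h>
      #include <unistd.h>

      int main(void)
      {
          pid_t child = fork();
          if (child == -1)
              return EXIT_FAILURE;

          if (child == 0) {
              /* child: this is where the external mount step would run */
              _exit(0);
          }

          /* parent: instead of exiting immediately, reap the mounting child
           * so the mount is fully established (or cleaned up) first. */
          int status = 0;
          while (waitpid(child, &status, 0) == -1 && errno == EINTR)
              ;   /* retry if interrupted by a signal */

          return (WIFEXITED(status) && WEXITSTATUS(status) == 0)
                     ? EXIT_SUCCESS : EXIT_FAILURE;
      }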
* md-cache: avoid clearing cache when not necessary (Xavi Hernandez, 2020-04-21, 1 file, -72/+93)
  mdc_inode_xatt_set() blindly cleared current cache when dict was not NULL,
  even if there was no xattr requested.
  This patch fixes this by only calling mdc_inode_xatt_set() when we have
  explicitly requested something to cache.
  Change-Id: Idc91a4693f1ff39f7059acde26682ccc361b947d
  Fixes: #1140
  Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
* write-behind: fix data corruption (Xavi Hernandez, 2020-04-20, 3 files, -2/+309)
  There was a bug in write-behind that allowed a previously completed write
  to overwrite the overlapping region of data from a future write.
  Suppose we want to send three writes (W1, W2 and W3). W1 and W2 are
  sequential, and W3 writes at the same offset of W2:

      W2.offset = W3.offset = W1.offset + W1.size

  Both W1 and W2 are sent in parallel. W3 is only sent after W2 completes.
  So W3 should *always* overwrite the overlapping part of W2.
  Suppose write-behind processes the requests from 2 concurrent threads; in
  time order the processing looks like this:

      <received W1>
      <received W2>
      wb_enqueue_tempted(W1)          /* W1 is assigned gen X */
      wb_enqueue_tempted(W2)          /* W2 is assigned gen X */
      wb_process_queue()
        __wb_preprocess_winds()
          /* W1 and W2 are sequential and all other requisites are met to
           * merge both requests. */
          __wb_collapse_small_writes(W1, W2)
          __wb_fulfill_request(W2)
        __wb_pick_unwinds() -> W2
          /* In this case, since the request is already fulfilled,
           * wb_inode->gen is not updated. */
      wb_do_unwinds()
        STACK_UNWIND(W2)
      /* The application has received the result of W2, so it can send W3. */
      <received W3>
      wb_enqueue_tempted(W3)          /* W3 is assigned gen X */
      wb_process_queue()
        /* Here we have W1 (which contains the conflicting W2) and W3 with
         * the same gen, so they are interpreted as concurrent writes that
         * do not conflict. */
        __wb_pick_winds() -> W3
      wb_do_winds()
        STACK_WIND(W3)
      wb_process_queue()
        /* Eventually W1 will be ready to be sent */
        __wb_pick_winds() -> W1
        __wb_pick_unwinds() -> W1
          /* Here wb_inode->gen is incremented. */
      wb_do_unwinds()
        STACK_UNWIND(W1)
      wb_do_winds()
        STACK_WIND(W1)

  So, as we can see, W3 is sent before W1, which shouldn't happen. The
  problem is that wb_inode->gen is only incremented for requests that have
  not been fulfilled but, after a merge, the request is marked as fulfilled
  even though it has not been sent to the brick. This allows future requests
  to be assigned to the same generation, which could be internally reordered.
  Solution: Increment wb_inode->gen before any unwind, even if it's for a
  fulfilled request.
  Special thanks to Stefan Ring for writing a reproducer that has been
  crucial to identify the issue.
  Change-Id: Id4ab0f294a09aca9a863ecaeef8856474662ab45
  Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
  Fixes: #884
* snap_scheduler: python3 compatibility and new test case (Sunny Kumar, 2020-04-20, 2 files, -1/+50)
  Problem: "snap_scheduler.py init" command failing with the below traceback:

      [root@dhcp43-104 ~]# snap_scheduler.py init
      Traceback (most recent call last):
        File "/usr/sbin/snap_scheduler.py", line 941, in <module>
          sys.exit(main(sys.argv[1:]))
        File "/usr/sbin/snap_scheduler.py", line 851, in main
          initLogger()
        File "/usr/sbin/snap_scheduler.py", line 153, in initLogger
          logfile = os.path.join(process.stdout.read()[:-1], SCRIPT_NAME + ".log")
        File "/usr/lib64/python3.6/posixpath.py", line 94, in join
          genericpath._check_arg_types('join', a, *p)
        File "/usr/lib64/python3.6/genericpath.py", line 151, in _check_arg_types
          raise TypeError("Can't mix strings and bytes in path components") from None
      TypeError: Can't mix strings and bytes in path components

  Solution: Added the 'universal_newlines' flag to Popen to support backward
  compatibility.
  Added a basic test for the snapshot scheduler.
  Backport of:
  > Upstream Patch: https://review.gluster.org/#/c/glusterfs/+/24257/
  > Change-Id: I78e8fabd866fd96638747ecd21d292f5ca074a4e
  > Fixes: #1134
  > Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
  > (cherry picked from commit a7d7ec066e56ac03bf252c26beb20fdc2c3b6772)
  Change-Id: I78e8fabd866fd96638747ecd21d292f5ca074a4e
  Fixes: #1134
  Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
* afr: mark pending xattrs as a part of metadata heal (Ravishankar N, 2020-04-20, 2 files, -1/+120)
  ...if pending xattrs are zero for all children.
  Problem: If there are no pending xattrs and a metadata heal needs to be
  performed, it can be possible that we end up with xattrs inadvertently
  deleted from all bricks, as explained in the BZ.
  Fix: After picking one among the sources as the good copy, mark pending
  xattrs on all sources to blame the sinks. Now even if this metadata heal
  fails midway, a subsequent heal will still choose one of the valid sources
  that it picked previously.
  Updates: #1067
  Change-Id: If1b050b70b0ad911e162c04db4d89b263e2b8d7b
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  (cherry picked from commit 2d5ba449e9200b16184b1e7fc84cabd015f1f779)
* cluster/afr: fix race when bricks come up (Xavi Hernandez, 2020-04-20, 3 files, -6/+9)
  There was a problem when self-heal was sending lookups at the same time
  that one of the bricks was coming up. In this case there was a chance that
  the number of 'up' bricks changed in the middle of sending the requests to
  subvolumes, which caused a discrepancy between the expected number of
  replies and the actual number of sent requests.
  This discrepancy caused AFR to continue executing requests before all
  requests were complete. Eventually, the frame of the pending request was
  destroyed when the operation terminated, causing a use-after-free issue
  when the answer was finally received.
  In theory the same thing could happen in the reverse way, i.e. AFR tries to
  wait for more replies than sent requests, causing a hang.
  Backport of:
  > Change-Id: I7ed6108554ca379d532efb1a29b2de8085410b70
  > Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
  > Fixes: bz#1808875
  Change-Id: I7ed6108554ca379d532efb1a29b2de8085410b70
  Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
  Fixes: bz#1809439
* open-behind: fix missing fd reference (Xavi Hernandez, 2020-04-20, 1 file, -11/+16)
  Open-behind was not keeping any reference on fds pending to be opened. This
  made it possible that a concurrent close and an entry fop (unlink, rename,
  ...) caused destruction of the fd while it was still being used.
  Change-Id: Ie9e992902cf2cd7be4af1f8b4e57af9bd6afd8e9
  Fixes: #1028
  Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
* gfapi: Suspend synctasks instead of blocking them (Soumya Koduri, 2020-04-16, 3 files, -2/+50)
  There are certain conditions which block the current execution thread (like
  waiting on a mutex lock or condition variable or I/O response). In such
  cases, if it is a synctask thread, we should suspend the task instead of
  blocking it (like done in SYNCOP using synctask_yield).
  This is to avoid a deadlock like the one mentioned below:
  1) synctaskA sets fs->migration_in_progress to 1 and does I/O (LOOKUP)
  2) Other synctask threads wait for fs->migration_in_progress to be reset
     to 0 by synctaskA and hence are blocked
  3) but synctaskA cannot resume as all synctask threads are blocked on (2).
  Note: this same approach is already used by a few other components like
  syncbarrier etc.
  Change-Id: If90f870d663bb242c702a5b86ac52eeda67c6f0d
  Fixes: #1146
  Signed-off-by: Soumya Koduri <skoduri@redhat.com>
  (cherry picked from commit 55914f968d907ed747774da15285b42653afda61)
* glusterd: Brick process fails to come up with brickmux on (Vishal Pandey, 2020-03-04, 2 files, -14/+75)
  Issue:
  1- In a cluster of 3 nodes N1, N2, N3, create 3 volumes vol1, vol2, vol3
     with 3 bricks (one from each node)
  2- Set cluster.brick-multiplex on
  3- Start all 3 volumes
  4- Check if all bricks on a node are running on the same port
  5- Kill N1
  6- Set performance.readdir-ahead for volumes vol1, vol2, vol3
  7- Bring N1 up and check volume status
  8- All brick processes are not running on N1.
  Root cause: Since there is a diff in volfile versions on N1 as compared to
  N2 and N3, glusterd_import_friend_volume() is called.
  glusterd_import_friend_volume() copies the new_volinfo and deletes
  old_volinfo and then calls glusterd_start_bricks().
  glusterd_start_bricks() looks for the volfiles and sends an rpc request to
  glusterfs_handle_attach(). Now, since the volinfo has been deleted by
  glusterd_delete_stale_volume() from the priv->volumes list before
  glusterd_start_bricks(), and glusterd_create_volfiles_and_notify_services()
  and glusterd_list_add_order() are called after glusterd_start_bricks(), the
  attach RPC request gets an empty volfile path and that causes the brick to
  crash.
  Fix: Call glusterd_list_add_order() and
  glusterd_create_volfiles_and_notify_services() before the
  glusterd_start_bricks() call is made in glusterd_import_friend_volume().
  > Change-Id: Idfe0e8710f7eb77ca3ddfa1cabeb45b2987f41aa
  > Bug: bz#1773856
  > Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Change-Id: Idfe0e8710f7eb77ca3ddfa1cabeb45b2987f41aa
  Fixes: bz#1808966
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* Adding release notes for release-6.8 [tag: v6.8] (Hari Gowtham, 2020-03-02, 1 file, -0/+39)
  Change-Id: I7cb7d0f863c4ef32ab8d4e0db43e2f8135a54e4f
  fixes: bz#1806846
  Signed-off-by: Hari Gowtham <hgowtham@redhat.com>
* events: fix IPv6 memory corruption (Xavi Hernandez, 2020-02-28, 1 file, -41/+15)
  When an event was generated and the target host was resolved to an IPv6
  address, there was a memory overflow when that address was copied to a
  fixed IPv4 structure (IPv6 addresses are longer than IPv4 ones).
  This fix correctly handles IPv4 and IPv6 addresses returned by
  getaddrinfo().
  Backport of:
  > Change-Id: I5864a0c6e6f1b405bd85988529570140cf23b250
  > Fixes: bz#1790870
  > Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
  Change-Id: I5864a0c6e6f1b405bd85988529570140cf23b250
  Fixes: bz#1792857
  Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
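  A hedged sketch of the safe pattern, in generic C with a hypothetical
  resolve_target() helper rather than the events code: copy whatever
  getaddrinfo() returns into a struct sockaddr_storage, which is large enough
  for both AF_INET and AF_INET6, instead of a fixed struct sockaddr_in.

      #include <netdb.h>
      #include <string.h>
      #include <sys/socket.h>

      static int resolve_target(const char *host, const char *port,
                                struct sockaddr_storage *out, socklen_t *outlen)
      {
          struct addrinfo hints = {0}, *res = NULL;

          hints.ai_family = AF_UNSPEC;      /* accept IPv4 and IPv6 */
          hints.ai_socktype = SOCK_DGRAM;

          if (getaddrinfo(host, port, &hints, &res) != 0 || res == NULL)
              return -1;

          /* ai_addrlen is 16 bytes for IPv4 but 28 for IPv6; copying into a
           * struct sockaddr_in here is exactly the overflow being fixed.
           * res->ai_family can likewise be used to set the socket family,
           * as the next entry below does. */
          memcpy(out, res->ai_addr, res->ai_addrlen);
          *outlen = res->ai_addrlen;

          freeaddrinfo(res);
          return 0;
      }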
* eventsapi: Set IPv4/IPv6 family based on input IP (Aravinda VK, 2020-02-28, 1 file, -1/+4)
  server.sin_family was set to AF_INET while creating the socket connection;
  this was failing if the input address is IPv6 (`::1`).
  With this patch, sin_family is set by reading the ai_family of the
  `getaddrinfo` result.
  Backport of:
  > Fixes: bz#1752330
  > Change-Id: I499f957b432842fa989c698f6e5b25b7016084eb
  > Signed-off-by: Aravinda VK <avishwan@redhat.com>
  Fixes: bz#1807786
  Change-Id: I499f957b432842fa989c698f6e5b25b7016084eb
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
* core: replace inet_addr with inet_pton (Rinku Kothiya, 2020-02-28, 1 file, -1/+7)
  Fixes a warning raised by RPMDiff on the use of inet_addr, which may impact
  IPv6 support.
  Backport of:
  > fixes: bz#1721385
  > Change-Id: Id2d9afa1747efa64bc79d90dd2566bff54deedeb
  > Signed-off-by: Rinku Kothiya <rkothiya@redhat.com>
  Fixes: bz#1807793
  Change-Id: Id2d9afa1747efa64bc79d90dd2566bff54deedeb
  Signed-off-by: Rinku Kothiya <rkothiya@redhat.com>
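  A short illustration of the replacement (not the exact GlusterFS call
  site): inet_addr() only handles IPv4 and its error return (INADDR_NONE)
  collides with a valid broadcast address, while inet_pton() covers both
  families and reports errors cleanly.

      #include <arpa/inet.h>
      #include <stdio.h>

      int main(void)
      {
          struct in_addr  v4;
          struct in6_addr v6;

          /* Old style: addr.s_addr = inet_addr("10.0.0.1");  error value is ambiguous */

          if (inet_pton(AF_INET, "10.0.0.1", &v4) != 1)
              fprintf(stderr, "not a valid IPv4 address\n");

          if (inet_pton(AF_INET6, "::1", &v6) != 1)
              fprintf(stderr, "not a valid IPv6 address\n");

          return 0;
      }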
* core: fix memory pool management races (Xavi Hernandez, 2020-02-28, 5 files, -105/+137)
  Objects allocated from a per-thread memory pool keep a reference to it to
  be able to return the object to the pool when not used anymore. The object
  holding this reference can have a long life cycle that could survive a
  glfs_fini() call.
  This means that it's unsafe to destroy memory pools from glfs_fini().
  Another side effect of destroying memory pools from glfs_fini() is that the
  TLS variable that points to one of those pools cannot be reset for all
  alive threads. This means that any attempt to allocate memory from those
  threads will access already free'd memory, which is very dangerous.
  To fix these issues, mem_pools_fini() doesn't destroy pool lists anymore.
  They should be destroyed when the library is unloaded or the process is
  terminated, but this cannot be done right now because gluster doesn't stop
  other threads before calling exit(), which could cause some races.
  This patch is the backport of 2 master patches:
  > Change-Id: Ib189a5510ab6bdac78983c6c65a022e9634b0965
  > Fixes: bz#1801684
  > Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
  >
  > Change-Id: Id7cfb4407fcf208e28f03a7c3cdc3ef9c1f3bf9b
  > Fixes: bz#1801684
  > Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
  Change-Id: Id7cfb4407fcf208e28f03a7c3cdc3ef9c1f3bf9b
  Fixes: bz#1805671
  Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
* cluster/ec: Change handling of heal failure to avoid crash (Ashish Pandey, 2020-02-28, 2 files, -13/+13)
  Problem: ec_getxattr_heal_cbk was called with NULL as the second argument
  when heal was failing. This function was dereferencing the "cookie"
  argument, which caused a crash.
  Solution: Cookie is changed to carry the value that was supposed to be
  stored in fop->data, so even in the error case when fop is NULL, there
  won't be any NULL dereference.
  Thanks to Xavi for the suggestion about the fix.
  Change-Id: I0798000d5cadb17c3c2fbfa1baf77033ffc2bb8c
  fixes: bz#1806836
* afr: prevent spurious entry heals leading to gfid split-brain (Ravishankar N, 2020-02-28, 7 files, -29/+69)
  Problem: In a hyperconverged setup with granular-entry-heal enabled, if a
  file is recreated while one of the bricks is down, and an index heal is
  triggered (with the brick still down), entry self-heal was doing a spurious
  heal with just the 2 good bricks. It was doing a post-op leading to removal
  of the filename from .glusterfs/indices/entry-changes as well as erroneous
  setting of afr xattrs on the parent. When the brick came up, the xattrs
  were cleared, resulting in the renamed file not getting healed and leading
  to gfid split-brain and EIO on the mount.
  Fix: Proceed with entry heal only when shd can connect to all bricks of the
  replica, just like in data and metadata heal.
  fixes: bz#1804594
  Change-Id: I916ae26ad1fabf259bc6362da52d433b7223b17e
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  (cherry picked from commit 06453d77d056fbaa393a137ca277a20e38d2f67e)
* lock: check null value of dict to avoid log flooding (Mohit Agrawal, 2020-02-27, 1 file, -1/+1)
  > updates: bz#1712322
  > Change-Id: I120a1d23506f9ebcf88c7ea2f2eff4978a61cf4a
  > Signed-off-by: Susant Palai <spalai@redhat.com>
  > (cherry picked from commit 2bb1807879493cb77ec9b5088485d88f13b84828)
  updates: bz#1797985
  Change-Id: I120a1d23506f9ebcf88c7ea2f2eff4978a61cf4a
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
* cluster/thin-arbiter: Wait for TA connection before ta-file lookup (Ashish Pandey, 2020-02-26, 1 file, -19/+21)
  Problem: When we mount a ta volume, as soon as 2 data bricks are connected
  we consider that the mount is done and then send a lookup/create on the ta
  file on the ta node. However, this connection with the ta node might not
  have been completed. Due to this delay, the ta replica id file will not be
  created and we will see an ENOTCONN error in the log file if we do a
  lookup.
  Solution: As we know that this ta node could have a higher latency, we
  should wait for a reasonable time for the connection to happen before
  sending the lookup/create on the replica id file.
  fixes: bz#1804546
  Change-Id: I36f90865afe617e4e84cee57fec832a16f5dd6cc
  (cherry picked from commit a7fa54ddea3fe429f143b37e4de06a93b49d776a)
* tools/glusterfind: Remove an extra argument (Shwetha K Acharya, 2020-02-26, 1 file, -1/+1)
  Backport of:
  > Upstream Patch: https://review.gluster.org/#/c/glusterfs/+/24011/
  > fixes: bz#1790748
  > Change-Id: I1cb12c975142794139456d0f8e99fbdbb03c53a1
  > Signed-off-by: Shwetha K Acharya <sacharya@redhat.com>
  > (cherry picked from commit d73872e764214f8071c8915536a75bdac1e5e685)
  fixes: bz#1790850
  Change-Id: I1cb12c975142794139456d0f8e99fbdbb03c53a1
  Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
* gf-event: Handle unix volfile-servers (Pranith Kumar K, 2020-02-26, 1 file, -1/+10)
  Problem: The glfsheal program uses a unix-socket-based volfile server, so
  the volfile server will be the path to the socket in this case. gf_event
  expects this to be a hostname in all cases, so getaddrinfo will fail on the
  unix-socket path and events won't be sent in this case.
  Fix: In the case of unix sockets, default to localhost.
  fixes: bz#1793096
  Change-Id: I60d27608792c29d83fb82beb5fde5ef4754bece8
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* cluster/ec: skip updating ctx->loc again when ec_fix_open/opendir (Kinglong Mee, 2020-02-26, 2 files, -10/+14)
  ec_manager_open/opendir memsets ctx->loc, which causes a memory/inode leak,
  and ec_fheal uses ctx->loc outside of fd->lock, so loc_copy may copy bad
  data while it is being memset.
  This patch skips updating ctx->loc when it is already initialized. With it,
  ctx->loc is filled once and never updated.
  Change-Id: I3bf5ffce4caf4c1c667f7acaa14b451d37a3550a
  fixes: bz#1806838
  Signed-off-by: Kinglong Mee <mijinlong@horiscale.com>
* Cluster/afr: Don't treat all bricks having metadata pending as split-brain (karthik-us, 2020-02-25, 4 files, -67/+133)
  Problem: We currently don't have a roll-back/undoing of post-ops if quorum
  is not met. Though the FOP is still unwound with failure, the xattrs remain
  on the disk. Due to these partial post-ops and partial heals (healing only
  when 2 bricks are up), we can end up in metadata split-brain purely from
  the afr xattrs point of view, i.e. each brick is blamed by at least one of
  the others for metadata. These scenarios are hit when there is frequent
  connect/disconnect of the client/shd to the bricks.
  Fix: Pick a source based on the xattr values. If 2 bricks blame one, the
  blamed one must be treated as a sink. If there is no majority, all are
  sources. Once we pick a source, self-heal will then do the heal instead of
  erroring out due to split-brain.
  This patch also adds the restriction that all the bricks must be up to
  perform metadata heal, to avoid any metadata loss.
  Removed the test case
  tests/bugs/replicate/bug-1468279-source-not-blaming-sinks.t as it was doing
  metadata heal even when only 2 of 3 bricks were up.
  Change-Id: I07a9d62f84ceda329dcab1f02a33aeed258dcb09
  fixes: bz#1805097
  Signed-off-by: karthik-us <ksubrahm@redhat.com>
* server: Mount fails after reboot 1/3 gluster nodes (Mohit Agrawal, 2020-02-11, 3 files, -16/+29)
  Problem: When one server node (of a 1x3 volume) comes up after a reboot,
  the client gets unmounted. The client is unmounted because it receives an
  AUTH_FAILED event and calls fini for the graph. The client gets AUTH_FAILED
  because the brick is not yet attached to a graph at that moment.
  Solution: To avoid unmounting the client graph, throw an ENOENT error from
  the server if the brick is not attached to the server at the time of
  authenticating clients.
  > Credits: Xavi Hernandez <xhernandez@redhat.com>
  > Change-Id: Ie6fbd73cbcf23a35d8db8841b3b6036e87682f5e
  > Fixes: bz#1793852
  > Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
  > (cherry picked from commit f6421dff22a6ddaf14134f6894deae219948c89d)
  Change-Id: Ie6fbd73cbcf23a35d8db8841b3b6036e87682f5e
  Fixes: bz#1794020
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
* To fix readdir-ahead memory leak (HuangShujun, 2020-02-11, 1 file, -0/+1)
  The GlusterFS client process has a memory leak if several files are created
  under one folder and the folder is then deleted. According to the
  statedump, the ref count held by readdir-ahead is bigger than zero in the
  inode table.
  Readdir-ahead gets the parent inode via inode_parent in
  rda_mark_inode_dirty on each rda_writev_cbk. The inode ref count of the
  parent folder is increased in inode_parent, but readdir-ahead does not
  unref it later.
  The correction is to unref the parent inode at the end of
  rda_mark_inode_dirty.
  Backport of:
  > Change-Id: Iee68ab1089cbc2fbc4185b93720fb1f66ee89524
  > Fixes: bz#1779055
  > Signed-off-by: HuangShujun <549702281@qq.com>
  Change-Id: Iee68ab1089cbc2fbc4185b93720fb1f66ee89524
  (cherry picked from commit 99044a5cedcff9a9eec40a07ecb32bd66271cd02)
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Fixes: bz#1789337
* glusterfind: python3 compatibility (Sunny Kumar, 2020-02-11, 1 file, -1/+1)
  Problem: While deleting a gluster volume, the hook script
  'S57glusterfind-delete-post.py' fails to execute and an error message can
  be observed in the glusterd log.
  Traceback:

      File "/var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post", line 69, in <module>
        main()
      File "/var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post", line 39, in main
        glusterfind_dir = os.path.join(get_glusterd_workdir(), "glusterfind")
      File "/usr/lib64/python3.7/posixpath.py", line 94, in join
        genericpath._check_arg_types('join', a, *p)
      File "/usr/lib64/python3.7/genericpath.py", line 155, in _check_arg_types
        raise TypeError("Can't mix strings and bytes in path components") from None
      TypeError: Can't mix strings and bytes in path components

  Solution: Added the 'universal_newlines' flag to Popen to support backward
  compatibility.
  Backport of:
  > Change-Id: Ie5655b11b55535c5ad2338108d0448e6fdaacf4f
  > Fixes: bz#1789478
  > Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
  > (cherry picked from commit 33c3cbe71b67f523538b04334f1ef962953281ed)
  Change-Id: Ie5655b11b55535c5ad2338108d0448e6fdaacf4f
  Fixes: bz#1790449
  Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
* tools/glusterfind: handle offline bricks (Milind Changire, 2020-02-11, 2 files, -25/+61)
  Problem: glusterfind is unable to copy the remote output file to the local
  node when a remove-brick is in progress on the remote node. After copying
  remote files, in the --full output listing path, a "sort -u" command is run
  on the collected files. However, "sort" exits with an error code if it
  finds any file missing.
  Solution: Maintain a map of (pid, output file) when the node commands are
  started and remove the mapping for the pid for which the command returns an
  error. Use the list of files present in the map for the "sort" command.
  Backport of:
  > Change-Id: Ie6e019037379f4cb163f24b1c65eb382efc2fb3b
  > fixes: bz#1410439
  > Signed-off-by: Milind Changire <mchangir@redhat.com>
  > Signed-off-by: Shwetha K Acharya <sacharya@redhat.com>
  > (cherry picked from commit 42c1605f42b89520d4d05806d7074e9e93b63640)
  Change-Id: Ie6e019037379f4cb163f24b1c65eb382efc2fb3b
  Fixes: bz#1790445
  Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
* extras: enable log rotation for USS logs (Sunny Kumar, 2020-02-11, 1 file, -0/+21)
  Added logrotate support for user serviceable snapshot's logs.
  Backport of:
  > Change-Id: Ic920eaa8ab5e44daf5937a027c6913d7bb26d517
  > Fixes: bz#1786722
  > Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
  Change-Id: Ic920eaa8ab5e44daf5937a027c6913d7bb26d517
  Fixes: bz#1786754
  Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
* cluster/dht: Correct fd processing loop (N Balachandran, 2019-12-30, 1 file, -22/+62)
  The fd processing loops in the dht_migration_complete_check_task and the
  dht_rebalance_inprogress_task functions were unsafe and could cause an open
  to be sent on an already freed fd. This has been fixed.
  > Change-Id: I0a3c7d2fba314089e03dfd704f9dceb134749540
  > Fixes: bz#1757399
  > Signed-off-by: N Balachandran <nbalacha@redhat.com>
  > (cherry picked from commit 9b15867070b0cc241ab165886292ecffc3bc0aed)
  Change-Id: I0a3c7d2fba314089e03dfd704f9dceb134749540
  Fixes: bz#1786983
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
* doc: Added release 6.7 notes [tag: v6.7] (hari gowtham, 2019-12-26, 2 files, -1/+33)
  Fixes: bz#1780540
  Change-Id: I02ef8c24bc19c321ab2db1d13224ea9e89325e3a
  Signed-off-by: hari gowtham <hgowtham@redhat.com>
* rpc: Synchronize slot allocation code (Mohit Agrawal, 2019-12-26, 1 file, -34/+42)
  Problem: The current slot allocation/deallocation code path is not
  synchronized. There are scenarios where, due to a race condition in the
  slot allocation/deallocation code path, the brick crashes.
  Solution: Synchronize the slot allocation/deallocation code path to avoid
  the issue.
  > Change-Id: I4fb659a75234218ffa0e5e0bf9308f669f75fc25
  > Fixes: bz#1763036
  > Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
  > (cherry picked from commit faf5ac13c4ee00a05e9451bf8da3be2a9043bbf2)
  Change-Id: I4fb659a75234218ffa0e5e0bf9308f669f75fc25
  Fixes: bz#1778182
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
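  A hedged sketch of the general fix, with a hypothetical slot table rather
  than the event-pool internals: one mutex guards both allocation and
  deallocation so two threads can never hand out or recycle the same slot.

      #include <pthread.h>

      #define MAX_SLOTS 64

      struct slot_table {
          pthread_mutex_t lock;
          unsigned char   used[MAX_SLOTS];
      };

      static struct slot_table table = { .lock = PTHREAD_MUTEX_INITIALIZER };

      static int slot_alloc(struct slot_table *t)
      {
          int idx = -1;

          pthread_mutex_lock(&t->lock);
          for (int i = 0; i < MAX_SLOTS; i++) {
              if (!t->used[i]) {
                  t->used[i] = 1;
                  idx = i;
                  break;
              }
          }
          pthread_mutex_unlock(&t->lock);
          return idx;               /* -1 if the table is full */
      }

      static void slot_free(struct slot_table *t, int idx)
      {
          if (idx < 0 || idx >= MAX_SLOTS)
              return;
          pthread_mutex_lock(&t->lock);
          t->used[idx] = 0;         /* no other thread can race this slot */
          pthread_mutex_unlock(&t->lock);
      }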
* geo-rep: Fix py2/py3 compatibility in repce (Kotresh HR, 2019-12-24, 1 file, -3/+2)
  Geo-rep fails to start on a python2-only machine like centos6. It fails
  with "ImportError no module named _io". This patch fixes the same.
  Backport of:
  > Patch: https://review.gluster.org//23702/
  > BUG: 1771577
  > Change-Id: I8228458a853a230546f9faf29a0e9e0f23b3efec
  > Signed-off-by: Kotresh HR <khiremat@redhat.com>
  (cherry picked from commit 9595ecca3de49fdf37d30b151f5c3e071e0a80d0)
  fixes: bz#1771842
  Change-Id: I8228458a853a230546f9faf29a0e9e0f23b3efec
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
* test: fix non-root test case for geo-rep (Sunny Kumar, 2019-12-24, 1 file, -1/+1)
  Problem: On a freshly installed system the non-root geo-rep test case gets
  blocked.
  Solution: On a freshly installed system, the remote key needs to be
  accepted automatically by ssh-copy-id.
  Credits: M. Scherer <mscherer@redhat.com>
  Backport of:
  > Change-Id: I5077f99a6681660f7e3e84c25ef216f521b7c29c
  > Fixes: bz#1779742
  > Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
  Change-Id: I5077f99a6681660f7e3e84c25ef216f521b7c29c
  Fixes: bz#1784796
  Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
* socket: fix error handling (Xavi Hernandez, 2019-12-13, 1 file, -84/+91)
  When __socket_proto_state_machine() detected a problem in the size of the
  request or it couldn't allocate an iobuf of the requested size, it returned
  -ENOMEM (-12). However the caller was expecting only -1 in case of error.
  For this reason the error passes undetected initially, adding back the
  socket to the epoll object. On further processing, however, the error is
  finally detected and the connection terminated. Meanwhile, another thread
  could receive a poll_in event from the same connection, which could cause
  races with the connection destruction. When this happened, the process
  crashed.
  To fix this, all error detection conditions have been hardened to be more
  strict on what is valid and what not. Also, we don't return -ENOMEM
  anymore. We always return -1 in case of error.
  An additional change has been done to prevent destruction of the transport
  object while it may still be needed.
  Backport of:
  > Change-Id: I6e59cd81cbf670f7adfdde942625d4e6c3fbc82d
  > Fixes: bz#1782495
  > Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
  Change-Id: I6e59cd81cbf670f7adfdde942625d4e6c3fbc82d
  Fixes: bz#1749625
  Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
* extras: Cgroup(CPU/Mem) restriction are not working on gluster process (Mohit Agrawal, 2019-11-19, 2 files, -2/+2)
  Problem: After configuring a cgroup (CPU/MEM) limit for a gluster process,
  the resource (CPU/MEM) limits are not applied to the gluster process. The
  cgroup limits are not applied because not all threads are moved into the
  newly created cgroup.
  Solution: Change the condition in the script so that every gluster thread
  is moved into the newly created cgroup.
  > Change-Id: I8ad81c69200e4ec43a74f6052481551cf835354c
  > Fixes: bz#1764208
  > Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
  > (cherry picked from commit f5811979935ce607391825ac6913a95f588818e3)
  > (Reviewed on upstream link https://review.gluster.org/#/c/glusterfs/+/23599/)
  Change-Id: I8ad81c69200e4ec43a74f6052481551cf835354c
  Fixes: bz#1766425
  Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>