path: root/xlators/features
* ctime: Fix ctime issue with utime family of syscalls (Kotresh HR, 2019-08-20, 1 file, -1/+12)

    When atime|mtime is updated via utime family of syscalls, ctime is not
    updated. This patch fixes the same.

    Change-Id: I7f86d8f8a1e06a332c3449b5bbdbf128c9690f25
    fixes: bz#1738786
    Signed-off-by: Kotresh HR <khiremat@redhat.com>
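    For illustration only (not part of the patch): the behaviour being fixed
    can be checked from user space with the POSIX utimensat() call, which is
    expected to bump ctime whenever it changes mtime. The path and timestamp
    below are made up.

        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/stat.h>

        int main(void)
        {
            const char *path = "/mnt/glustervol/testfile"; /* hypothetical mount */
            struct stat before, after;
            struct timespec times[2] = {
                { .tv_nsec = UTIME_OMIT },              /* leave atime untouched */
                { .tv_sec = 1000000000, .tv_nsec = 0 }  /* arbitrary new mtime */
            };

            if (stat(path, &before) || utimensat(AT_FDCWD, path, times, 0) ||
                stat(path, &after))
                return 1;

            /* POSIX requires ctime to change when mtime is explicitly set. */
            printf("ctime updated: %s\n",
                   after.st_ctime != before.st_ctime ? "yes" : "no");
            return 0;
        }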
* performance/md-cache: Do not skip caching of null character xattr values (Anoop C S, 2019-08-20, 1 file, -1/+11)

    A null character string is a valid xattr value in a file system. But for
    those xattrs processed by md-cache, it does not update its entries if the
    value is null ('\0'). This results in ENODATA when those xattrs are
    queried afterwards via getxattr(), causing failures in basic operations
    like create, copy etc. in a specially configured Samba setup for Mac OS
    clients.

    On the other hand, snapview-server internally sets an empty string ("")
    as the value for xattrs received as part of listxattr() which are not
    intended to be cached. Therefore we try to maintain that behaviour using
    an additional dictionary key to prevent updating of entries in getxattr()
    and fgetxattr() callbacks in md-cache.

    Credits: Poornima G <pgurusid@redhat.com>
    Change-Id: I7859cbad0a06ca6d788420c2a495e658699c6ff7
    Fixes: bz#1726205
    Signed-off-by: Anoop C S <anoopcs@redhat.com>
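    For reference, a small standalone sketch (not part of the patch) showing
    that a single '\0' byte is a legitimate xattr value at the filesystem
    level, which is why md-cache must cache it rather than treat it as "no
    data". Path and xattr name are made up.

        #include <stdio.h>
        #include <sys/xattr.h>

        int main(void)
        {
            const char *path = "/mnt/glustervol/file";  /* hypothetical */
            const char *name = "user.example";          /* hypothetical */
            char value = '\0';
            char buf[16];
            ssize_t len;

            /* Store a one-byte value containing only the null character. */
            if (setxattr(path, name, &value, 1, 0) != 0) {
                perror("setxattr");
                return 1;
            }

            /* Reading it back must return 1 byte, not fail with ENODATA. */
            len = getxattr(path, name, buf, sizeof(buf));
            if (len < 0) {
                perror("getxattr");
                return 1;
            }
            printf("value length: %zd\n", len);
            return 0;
        }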
* features/locks: avoid use after free of frame for blocked lock (Kinglong Mee, 2019-08-20, 5 files, -8/+14)

    The fop that contains a blocked lock may use freed frame info once
    another unlock fop has unwound the blocked lock. Because the blocked
    lock is added to the blocked list under the inode lock (or other lock),
    the fop that contains the blocked lock must not use it after leaving
    that lock.

    Change-Id: Icb309a1cc78380dc982b26d50c18d67e4f2c8915
    fixes: bz#1737291
    Signed-off-by: Kinglong Mee <mijinlong@horiscale.com>
* logging: Structured logging reference PR (Aravinda VK, 2019-08-20, 11 files, -184/+199)

    To convert the existing `gf_msg` to `gf_smsg`:

    - Define a `_STR` version of the respective Message ID as below (in
      `*-messages.h`):

          #define PC_MSG_REMOTE_OP_FAILED_STR "remote operation failed."

    - Change `gf_msg` to use `gf_smsg`. Convert values into fields and add
      any missing fields.

      Note: `errno` and `error` fields will be added automatically to the
      log message in case errnum is specified.

      Example:

          gf_smsg(
              this->name,                  // Name or log domain
              GF_LOG_WARNING,              // Log Level
              rsp.op_errno,                // Error number
              PC_MSG_REMOTE_OP_FAILED,     // Message ID
              "path=%s", local->loc.path,  // Key Value 1
              "gfid=%s", loc_gfid_utoa(&local->loc),  // Key Value 2
              NULL                         // Log End
          );

    Key value pairs formatting help:

        gf_slog(
            this->name,                  // Name or log domain
            GF_LOG_WARNING,              // Log Level
            rsp.op_errno,                // Error number
            PC_MSG_REMOTE_OP_FAILED,     // Message ID
            "op=CREATE",                 // Static Key and Value
            "path=%s", local->loc.path,  // Format for Value
            "brick-%d-status=%s", brkidx, brkstatus,  // Use format for key and val
            NULL                         // Log End
        );

    Before:

        [2019-07-03 08:16:18.226819] W [MSGID: 114031] [client-rpc-fops_v2.c \
        :2633:client4_0_lookup_cbk] 0-gv3-client-0: remote operation failed. \
        Path: / (00000000-0000-0000-0000-000000000001) [Transport endpoint \
        is not connected]

    After:

        [2019-07-29 07:50:15.773765] W [MSGID: 114031] \
        [client-rpc-fops_v2.c:2633:client4_0_lookup_cbk] 0-gv1-client-0: \
        remote operation failed. [{path=/f1}, \
        {gfid=00000000-0000-0000-0000-000000000000}, \
        {errno=107}, {error=Transport endpoint is not connected}]

    To add a new `gf_smsg`, add a Message ID in the respective `*-messages.h`
    file and then follow the steps mentioned above.

    Change-Id: I4e7d37f27f106ab398e991d931ba2ac7841a44b1
    Updates: #657
    Signed-off-by: Aravinda VK <avishwan@redhat.com>
* features/shard: Send correct size when reads are sent beyond file size (Krutika Dhananjay, 2019-08-12, 1 file, -0/+2)

    Change-Id: I0cebaaf55c09eb1fb77a274268ff564e871b743b
    fixes bz#1738419
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
* features/utime: always update ctime at setattr (Kinglong Mee, 2019-08-06, 1 file, -12/+1)

    An NFS EXCLUSIVE mode create may set mtime to a later time (the
    verifier); that value should not be copied into ctime, because
    storage/ctime does not allow setting ctime back to an earlier time.

        /* Earlier, mdata was updated only if the existing time is less
         * than the time to be updated. This would fail the scenarios
         * where mtime can be set to any time using the syscall. Hence
         * just updating without comparison. But the ctime is not
         * allowed to changed to older date. */

    Following the kernel's setattr behaviour, always set ctime at setattr,
    and do not set ctime from mtime in storage/ctime.

    Change-Id: I5cfde6cb7f8939da9617506e3dc80bd840e0d749
    fixes: bz#1737288
    Signed-off-by: Kinglong Mee <kinglongmee@gmail.com>
* locks/fencing: Address hang during lock preemption (Susant Palai, 2019-08-02, 3 files, -20/+29)

    The fop_wind_count can go negative on the unwind path of the IO when
    fencing is enabled, leading to a hang. Also changed the code so that
    fop_wind_count needs to be maintained only until fencing is enabled on
    the file.

    updates: bz#1717824
    Change-Id: Icd04b42bc16cd3d50eaa581ee57233910194f480
    Signed-off-by: Susant Palai <spalai@redhat.com>
* Multiple files: get trivial stuff done before lock (Yaniv Kaul, 2019-08-01, 3 files, -7/+7)

    Initializing a dictionary, for example, seems to be perfectly fine to do
    before taking a lock.

    Change-Id: Ib29516c4efa8f0e2b526d512beab488fcd16d2e7
    updates: bz#1193929
    Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
* event: rename event_XXX with gf_ prefix (Xiubo Li, 2019-07-29, 2 files, -3/+3)

    I hit a crash when using libgfapi. libgfapi calls glfs_poller() -->
    event_dispatch() in api/src/glfs.c:721, and event_dispatch() is defined
    locally by libglusterfs; the problem is that its name is exactly the
    same as the one from the OS's libevent package.

    For example, if an executable program Foo uses and links both libevent
    and libgfapi at the same time, I can hit a crash like:

        kernel: glfs_glfspoll[68486]: segfault at 1c0 ip 00007fef006fd2b8 sp \
        00007feeeaffce30 error 4 in libevent-2.0.so.5.1.9[7fef006ed000+46000]

    The link line for Foo is:

        lib_foo_LADD = -levent $(GFAPI_LIBS)

    It will crash. This is because glfs_poller() ends up calling
    event_dispatch() from libevent, not libglusterfs.

    The gfapi link info:

        GFAPI_LIBS = -lacl -lgfapi -lglusterfs -lgfrpc -lgfxdr -luuid

    If I link Foo like:

        lib_foo_LADD = $(GFAPI_LIBS) -levent

    it works well without any problem.

    And if Foo calls a private lib, such as handler_glfs.so, which links
    GFAPI_LIBS directly while Foo does not and instead dlopen()s
    handler_glfs.so, then the crash is hit every time. The link info is:

        foo_LADD = -levent
        libhandler_glfs_LIBADD = $(GFAPI_LIBS)

    I can avoid the crash temporarily by linking GFAPI_LIBS in Foo too:

        foo_LADD = $(GFAPI_LIBS) -levent
        libhandler_glfs_LIBADD = $(GFAPI_LIBS)

    But this is ugly, since Foo does not use any APIs from GFAPI_LIBS. And
    in some cases, when the --as-needed link option is added (on many
    distributions it is added by default), the crash is back again and the
    above workaround won't work.

    Fixes: #699
    Change-Id: I38f0200b941bd1cff4bf3066fca2fc1f9a5263aa
    Signed-off-by: Xiubo Li <xiubli@redhat.com>
* quiesce: add missing fops (Amar Tumballi, 2019-07-25, 1 file, -0/+30)

    Updates: bz#1693692
    Change-Id: I4f005e7168c201709a85db443d643b81e6d3d282
    Signed-off-by: Amar Tumballi <amarts@redhat.com>
* features/utime: Fix mem_put crash (Pranith Kumar K, 2019-07-22, 1 file, -1/+3)

    Problem:
    When frame->local is not NULL, FRAME_DESTROY calls mem_put on it. Since
    the stub is already destroyed in call_resume(), this leads to a crash.

    Fix:
    Set frame->local to NULL before calling call_resume().

    fixes: bz#1593542
    Change-Id: I0f8adf406f4cefdb89d7624ba7a9d9c2eedfb1de
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
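    The fix follows a common ownership-transfer pattern. A generic analogue
    in plain C (made-up types, not GlusterFS code) of why the pointer must
    be cleared before the hand-off:

        #include <stdlib.h>

        struct stub  { int op; };
        struct frame { void *local; };

        /* Consumes and frees the stub, the way call_resume() destroys it. */
        static void resume_stub(struct stub *s) { free(s); }

        /* Frees frame->local, the way FRAME_DESTROY calls mem_put on it. */
        static void frame_destroy(struct frame *f) { free(f->local); free(f); }

        static void resume_and_destroy(struct frame *f)
        {
            struct stub *s = f->local;

            f->local = NULL;   /* the fix: drop the reference before handing off */
            resume_stub(s);    /* stub is gone after this call */
            frame_destroy(f);  /* safe: local is NULL, so no double free */
        }

        int main(void)
        {
            struct frame *f = calloc(1, sizeof(*f));

            f->local = calloc(1, sizeof(struct stub));
            resume_and_destroy(f);
            return 0;
        }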
* (multiple files) use dict_allocate_and_serialize() where applicable. (Yaniv Kaul, 2019-07-22, 1 file, -20/+3)

    This function does length, allocation and serialization for you.

    Change-Id: I142a259952a2fe83dd719442afaefe4a43a8e55e
    updates: bz#1193929
    Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
* ctime: Set mdata xattr on legacy files (Kotresh HR, 2019-07-22, 2 files, -14/+143)

    Problem:
    Files which were created before ctime was enabled do not have the
    "trusted.glusterfs.mdata" xattr (which stores the time attributes). Upon
    a fop which modifies either ctime or mtime, the xattr gets created with
    the latest ctime, mtime and atime, which is incorrect. It should update
    only the corresponding time attribute and take the rest from the backend.

    Solution:
    Creating the xattr with values from the brick is not possible, as each
    brick of a replica set would have different times. So create the xattr
    upon a successful lookup if it is not already present.

    Note to reviewers:
    The time attributes used to set the xattr are obtained from a successful
    lookup. Instead of sending the whole iatt over the wire via setxattr, a
    structure called mdata_iatt is sent. The mdata_iatt contains only the
    time attributes.

    Change-Id: I5e535631ddef04195361ae0364336410a2895dd4
    fixes: bz#1593542
    Signed-off-by: Kotresh HR <khiremat@redhat.com>
* Fix spelling errors (Aravinda VK, 2019-07-14, 1 file, -1/+1)

    Fixes: bz#1728554
    Change-Id: I88357aed7c14988a12616035c3738c32c09a8f9a
    Signed-off-by: Patrick Matthäi <pmatthaei@debian.org>
    Signed-off-by: Aravinda VK <avishwan@redhat.com>
* features/snapview-server: obtain the list of snapshots inside the lock (Raghavendra Bhat, 2019-07-12, 1 file, -1/+1)

    The current list of snapshots from priv->dirents is obtained outside
    the lock.

    Change-Id: I8876ec0a38308da5db058397382fbc82cc7ac177
    Fixes: bz#1726783
* features/snapview-server: use the same volfile server for gfapi options (Raghavendra Bhat, 2019-07-03, 2 files, -4/+42)

    The snapview-server xlator uses "localhost" as the volfile server while
    initing the new glfs instance to talk to a snapshot. While localhost is
    fine, it is better to use the same volfile server that was used to start
    the snapshot daemon containing the snapview-server xlator.

    Change-Id: I4485d39b0e3d066f481adc6958ace53ea33237f7
    fixes: bz#1725211
    Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
* glusterfs-fops: fix the modularity (Amar Tumballi, 2019-07-02, 1 file, -1/+0)

    glusterfs-fops.h was moved to rpc/xdr to support compound fops (ref:
    https://review.gluster.org/14032, 2f945b86d3). This was fine as long as
    all these header files ended up in a single include directory after
    'install'.

    With the move to separate out glusterfs-specific header files into
    another directory inside /usr/include (ref:
    https://review.gluster.org/21746, 20ef211cfa), glusterfs-fops.h was no
    longer in the proper path when an external .c file tried to include any
    glusterfs-specific .h file (like xlator.h).

    Now that we have removed compound fops, none of the enums declared in
    glusterfs-fops.h are actually used on the wire anymore. Hence it makes
    sense to move this to libglusterfs/src as a single point of definition.
    With this change, external programs can use the glusterfs header files.

    Also remove some enum definitions which are not used in the code anymore.

    Updates: bz#1636297
    Change-Id: I423c44d3dbe2efc777299c544ece3cb172fc7e44
    Signed-off-by: Amar Tumballi <amarts@redhat.com>
* graph/shd: Use top-down approach while cleaning xlator (Mohammed Rafi KC, 2019-06-27, 9 files, -1/+12)

    We were cleaning xlators from bottom to top, which might lead to problems
    when upper xlators try to access an xlator object loaded below them. One
    such scenario is when an fd_unref happens as part of the fini call, which
    might end up calling releasedir on a lower xlator. This will lead to
    invalid memory access.

    Change-Id: I8a6cb619256fab0b0c01a2d564fc88287c4415a0
    Updates: bz#1716695
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
* graph/shd: Use glusterfs_graph_deactivate to free the xl rec (Mohammed Rafi KC, 2019-06-27, 1 file, -0/+3)

    We were using glusterfs_graph_fini to free the xlator record from
    glusterfs_process_volfp as well as glusterfs_graph_cleanup. Instead we
    can use glusterfs_graph_deactivate, which does fini as well as the other
    common record freeing.

    Change-Id: Ie4a5f2771e5254aa5ed9f00c3672a6d2cc8e4bc1
    Updates: bz#1716695
    Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
* locks: enable notify-contention by default (Xavi Hernandez, 2019-06-26, 1 file, -1/+1)

    This patch enables the lock contention notification by default.

    Change-Id: I10131b026a7cb09fc7c93e1e6c8549988c1d7751
    Fixes: bz#1717754
    Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
* lcov: add more tests to glfsxmp-coverage (Amar Tumballi, 2019-06-25, 1 file, -9/+0)

    * found a bug with quiesce fallocate() - fixed.
    * found a bug with cloudsync part of code in posix - fixed

    updates: bz#1693692
    Change-Id: I4f315ffebb612de072ae08761b8cd0f47714080a
    Signed-off-by: Amar Tumballi <amarts@redhat.com>
* WORM-Xlator: Avoid performing fsetxattr if fd is NULL (David Spisla, 2019-06-21, 1 file, -0/+7)

    If worm_create_cbk receives an error (op_ret == -1), fd will be NULL and
    performing fsetxattr would lead to a segfault and crash the brick
    process. To avoid this, we perform fsetxattr only if op_ret >= 0. If an
    error happens we explicitly unwind.

    Change-Id: Ie7f8a198add93e5cd908eb7029cffc834c3b58a6
    fixes: bz#1717757
    Signed-off-by: David Spisla <david.spisla@iternity.com>
* core: fedora 30 compiler warnings (SheetalPamecha, 2019-06-18, 1 file, -4/+4)

    warning: ‘%s’ directive argument is null [-Wformat-overflow=]

    Change-Id: I69b8d47f0002c58b00d1cc947fac6f1c64e0b295
    updates: bz#1193929
    Signed-off-by: SheetalPamecha <spamecha@redhat.com>
* uss: Fix tar issue with ctime and uss enabled (Kotresh HR, 2019-06-17, 1 file, -9/+13)

    Problem:
    If ctime and uss are enabled, tar still complains with 'file changed as
    we read it'.

    Cause:
    To clear the nfs cache (gluster-nfs), ctime was incremented in the
    snap-view client in the stat cbk.

    Fix:
    The ctime should not be incremented manually. Since gluster-nfs is
    planned for deprecation, this code is being removed to fix the issue.

    Change-Id: Iae7f100c20fce880a50b008ba716077350281404
    fixes: bz#1720290
    Signed-off-by: Kotresh HR <khiremat@redhat.com>
* multiple files: another attempt to remove includes (Yaniv Kaul, 2019-06-14, 13 files, -28/+4)

    There are many include statements that are not needed. A previous, more
    ambitious attempt failed because of the *BSD platforms (see
    https://review.gluster.org/#/c/glusterfs/+/21929/ ). Now trying a more
    conservative reduction.

    It does not solve all the circular deps that we have, but it does reduce
    some of them. There is just too much to handle reasonably (dht-common.h
    includes dht-lock.h which includes dht-common.h ...), but it does reduce
    the overall number of lines of include we need to look at in the future
    to understand and fix the mess later on.

    Change-Id: I550cd001bdefb8be0fe67632f783c0ef6bee3f9f
    updates: bz#1193929
    Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
* upcall: Avoid sending notifications for invalid inodes (Soumya Koduri, 2019-06-14, 1 file, -1/+18)

    For nameless LOOKUPs, the server creates a new inode which remains
    invalid until the fop is successfully processed, after which it is
    linked into the inode table. But in case there is an already linked
    inode for that entry, the newly created inode is discarded, which
    results in an upcall notification. This may result in the client being
    bombarded with unnecessary upcalls, affecting performance if the data
    set is huge.

    This issue can be avoided by looking up and storing the upcall context
    in the original linked inode (if it exists), thus saving up on those
    extra callbacks.

    Change-Id: I044a1737819bb40d1a049d2f53c0566e746d2a17
    fixes: bz#1718338
    Signed-off-by: Soumya Koduri <skoduri@redhat.com>
* libglusterfs: cleanup iovec functions (Xavi Hernandez, 2019-06-11, 2 files, -15/+5)

    This patch cleans some iovec code and creates two additional helper
    functions to simplify management of iovec structures.

        iov_range_copy(struct iovec *dst, uint32_t dst_count,
                       uint32_t dst_offset,
                       struct iovec *src, uint32_t src_count,
                       uint32_t src_offset,
                       uint32_t size);

    This function copies up to 'size' bytes from 'src' at offset
    'src_offset' to 'dst' at 'dst_offset'. It returns the number of bytes
    copied.

        iov_skip(struct iovec *iovec, uint32_t count, uint32_t size);

    This function removes the initial 'size' bytes from 'iovec' and returns
    the updated number of iovec vectors remaining.

    The signature of iov_subset() has also been modified to make it safer
    and easier to use. The new signature is:

        iov_subset(struct iovec *src, int src_count, uint32_t start,
                   uint32_t size, struct iovec **dst, int32_t dst_count);

    This function creates a new iovec array containing the subset of the
    'src' vector starting at 'start' with size 'size'. The resulting array
    is allocated if '*dst' is NULL, or copied to '*dst' if it fits (based
    on 'dst_count'). It returns the number of iovec vectors used.

    A new set of functions to iterate through an iovec array have been
    created. They can be used to simplify the implementation of other
    iovec-based helper functions.

    Change-Id: Ia5fe57e388e23392a8d6cdab17670e337cadd587
    Updates: bz#1193929
    Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
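    To make the iov_skip() semantics concrete, here is an illustrative
    standalone implementation of the behaviour described above (a sketch,
    not the actual libglusterfs code):

        #include <stdint.h>
        #include <string.h>
        #include <sys/uio.h>

        static uint32_t
        iov_skip_sketch(struct iovec *iov, uint32_t count, uint32_t size)
        {
            uint32_t i = 0;

            /* Drop whole vectors that are fully consumed by 'size'. */
            while (i < count && size >= iov[i].iov_len) {
                size -= iov[i].iov_len;
                i++;
            }

            /* Trim the partially consumed vector, if any. */
            if (i < count && size > 0) {
                iov[i].iov_base = (char *)iov[i].iov_base + size;
                iov[i].iov_len -= size;
            }

            /* Compact the remaining vectors to the front of the array. */
            if (i > 0 && i < count)
                memmove(iov, iov + i, (count - i) * sizeof(*iov));

            return count - i;
        }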
* features/shard: Fix extra unref when inode object is lru'd out and added back (Krutika Dhananjay, 2019-06-09, 1 file, -4/+2)

    Long tale of double unref! But do read...

    In cases where a shard base inode is evicted from the lru list while
    still being part of the fsync list, but added back soon before its
    unlink, there could be an extra inode_unref() leading to premature inode
    destruction leading to a crash.

    One such specific case is the following. Consider
    features.shard-deletion-rate = features.shard-lru-limit = 2. This is an
    oversimplified example but explains the problem clearly.

    First, a file is FALLOCATE'd to a size such that the number of shards
    under /.shard = 3 > lru-limit.

    Shards 1, 2 and 3 need to be resolved. 1 and 2 are resolved first.
    Resultant lru list:
        1 -----> 2
    refs on base inode - (1) + (1) = 2

    3 needs to be resolved. So 1 is lru'd out. Resultant lru list:
        2 -----> 3
    refs on base inode - (1) + (1) = 2

    Note that 1 is inode_unlink()d but not destroyed, because there are
    non-zero refs on it since it is still participating in this ongoing
    FALLOCATE operation.

    FALLOCATE is sent on all participant shards. In the cbk, all of them are
    added to the fsync list. Resulting fsync list:
        1 -----> 2 -----> 3 (order doesn't matter)
    refs on base inode - (1) + (1) + (1) = 3

    Total refs = 3 + 2 = 5

    Now an attempt is made to unlink this file. Background deletion is
    triggered. The first $shard-deletion-rate shards need to be unlinked in
    the first batch, so shards 1 and 2 need to be resolved. inode_resolve
    fails on 1 but succeeds on 2, and so 2 is moved to the tail of the list.
    lru list now:
        3 -----> 2
    No change in refs.

    Shard 1 is looked up. In lookup_cbk, it is linked and added back to the
    lru list at the cost of evicting shard 3.
    lru list now:
        2 -----> 1
    refs on base inode: (1) + (1) = 2
    fsync list now:
        1 -----> 2 (again, order doesn't matter)
    refs on base inode - (1) + (1) = 2
    Total refs = 2 + 2 = 4

    After eviction, it is found that 3 needs fsync. So fsync is wound, yet
    to be ack'd, so it is still inode_link()d.

    Now deletion of shards 1 and 2 completes. The lru list is empty. The
    base inode is unref'd and destroyed.

    In the next batched deletion, 3 needs to be deleted. It is
    inode_resolve()able. It is added back to the lru list, but the base
    inode passed to __shard_update_shards_inode_list() is NULL since the
    inode is destroyed. But its ctx->inode still contains the base inode ptr
    from its first addition to the lru list, though no additional ref is
    held on it.
    lru list now:
        3
    refs on base inode - (0)
    Total refs on base inode = 0

    Unlink is sent on 3. It completes. Now, since the ctx contains a ptr to
    the base_inode and the shard is part of the lru list, the base inode is
    unref'd, leading to a crash.

    FIX:
    When a shard is re-added back to the lru list, copy the base inode
    pointer as is into its inode ctx, even if it is NULL. This is needed to
    prevent double unrefs at the time of deleting it.

    Change-Id: I99a44039da2e10a1aad183e84f644d63ca552462
    Updates: bz#1696136
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
* uss: Ensure that snapshot is deleted before creating a new snapshot (Raghavendra Bhat, 2019-06-08, 3 files, -3/+17)

    * Also some logging enhancements in snapview-server

    Change-Id: I6a7646771cedf4bd1c62806eea69d720bbaf0c83
    fixes: bz#1715921
    Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
* across: clang-scan: fix NULL dereferencing warnings (Amar Tumballi, 2019-06-04, 6 files, -18/+33)

    All these checks are done after analyzing the clang-scan report produced
    by the CI job @ https://build.gluster.org/job/clang-scan

    updates: bz#1622665
    Change-Id: I590305af4ceb779be952974b2a36066ffc4865ca
    Signed-off-by: Amar Tumballi <amarts@redhat.com>
* features/shard: Fix block-count accounting upon truncate to lower size (Krutika Dhananjay, 2019-06-04, 2 files, -13/+49)

    The way delta_blocks is computed in shard is incorrect when a file is
    truncated to a lower size. The accounting only considers the change in
    size of the last of the truncated shards.

    FIX:
    Get the block-count of each shard just before an unlink at posix in
    xdata. Their summation plus the change in size of the last shard (from
    the actual truncate) is used to compute delta_blocks, which is used in
    the xattrop for the size update.

    Change-Id: I9128a192e9bf8c3c3a959e96b7400879d03d7c53
    fixes: bz#1705884
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
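    In other words, the intended accounting looks roughly like the
    hypothetical helper below (illustrative names, not the shard xlator's
    actual code):

        #include <stdint.h>

        /* Blocks removed by truncating to a lower size: every fully unlinked
         * shard (block counts reported by posix in xdata) plus the change in
         * the last surviving shard from the actual truncate. */
        static int64_t
        compute_delta_blocks(const int64_t *unlinked_shard_blocks, int nr_unlinked,
                             int64_t last_shard_blocks_before,
                             int64_t last_shard_blocks_after)
        {
            int64_t delta = 0;
            int i;

            for (i = 0; i < nr_unlinked; i++)
                delta -= unlinked_shard_blocks[i];

            delta += last_shard_blocks_after - last_shard_blocks_before;
            return delta;
        }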
* lcov: improve line coverage (Amar Tumballi, 2019-06-03, 2 files, -78/+36)

    upcall: remove extra variable assignment and use just one
    initialization.

    open-behind: reduce the overall number of lines, in functions not
    frequently called

    selinux: reduce some lines in init failure cases

    updates: bz#1693692
    Change-Id: I7c1de94f2ec76a5bfe1f48a9632879b18e5fbb95
    Signed-off-by: Amar Tumballi <amarts@redhat.com>
* across: coverity fixes (Amar Tumballi, 2019-06-03, 2 files, -2/+1)

    * locks/posix.c: key was not freed in one of the cases.
    * locks/common.c: lock was being free'd out of context.
    * nfs/exports: handle case of missing free.
    * protocol/client: handle case of entry not freed.
    * storage/posix: handle possible case of double free

    CID: 1398628, 1400731, 1400732, 1400756, 1124796, 1325526

    updates: bz#789278
    Change-Id: Ieeaca890288bc4686355f6565f853dc8911344e8
    Signed-off-by: Amar Tumballi <amarts@redhat.com>
    Signed-off-by: Sheetal Pamecha <spamecha@redhat.com>
* stack: Make sure to have unique call-stacks in all cases (Pranith Kumar K, 2019-05-30, 1 file, -3/+0)

    At the moment a new stack doesn't populate frame->root->unique in all
    cases. This makes it difficult to debug hung frames by examining
    successive state dumps. Fuse and server xlators populate it whenever
    they can, but other xlators won't be able to assign 'unique' when they
    need to create a new frame/stack, because they don't know what 'unique'
    the fuse/server xlators already used.

    What we need is for unique to be correct. If a stack with the same
    unique is present in successive statedumps, that means the same
    operation is still in progress. This makes the 'finding hung frames'
    part of debugging hung frames easier.

    fixes bz#1714098
    Change-Id: I3e9a8f6b4111e260106c48a2ac3a41ef29361b9e
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
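    A trivial sketch of the kind of generator such an id needs (illustrative
    only, not the stack code): a process-wide monotonic counter, so that the
    same value seen in two successive statedumps means the same still-pending
    operation.

        #include <inttypes.h>
        #include <stdatomic.h>
        #include <stdio.h>

        static _Atomic uint64_t next_unique = 1;

        /* Each call returns a value never handed out before in this process. */
        static uint64_t get_unique(void)
        {
            return atomic_fetch_add(&next_unique, 1);
        }

        int main(void)
        {
            uint64_t a = get_unique();
            uint64_t b = get_unique();

            printf("%" PRIu64 " then %" PRIu64 "\n", a, b);
            return 0;
        }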
* marker: remove some unused functions (Amar Tumballi, 2019-05-30, 7 files, -148/+8)

    After basic analysis, found that these methods were not being used at
    all.

    updates: bz#1693692
    Change-Id: If9cfa1ab189e6e7b56230c4e1d8e11f9694a9a65
    Signed-off-by: Amar Tumballi <amarts@redhat.com>
* Fix some "Null pointer dereference" coverity issues (Xavi Hernandez, 2019-05-26, 1 file, -1/+1)

    This patch fixes the following CID's:

    * 1124829
    * 1274075
    * 1274083
    * 1274128
    * 1274135
    * 1274141
    * 1274143
    * 1274197
    * 1274205
    * 1274210
    * 1274211
    * 1288801
    * 1398629

    Change-Id: Ia7c86cfab3245b20777ffa296e1a59748040f558
    Updates: bz#789278
    Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
* lock: check null value of dict to avoid log flooding (Susant Palai, 2019-05-23, 1 file, -1/+1)

    updates: bz#1712322
    Change-Id: I120a1d23506f9ebcf88c7ea2f2eff4978a61cf4a
    Signed-off-by: Susant Palai <spalai@redhat.com>
* features/shard: Fix crash during background shard deletion in a specific case (Krutika Dhananjay, 2019-05-16, 1 file, -3/+9)

    Consider the following case:

    1. A file gets FALLOCATE'd such that > "shard-lru-limit" number of
       shards are created.
    2. And then it is deleted after that.

    The unique thing about FALLOCATE is that, unlike WRITE, all of the
    participant shards are resolved and created and fallocated in a single
    batch. This means that in this case, after the first "shard-lru-limit"
    number of shards are resolved and added to the lru list, as part of
    resolution of the remaining shards, some of the existing shards in the
    lru list will need to be evicted. These evicted shards are
    inode_unlink()d as part of eviction. Now, once the fop gets to the
    actual FALLOCATE stage, the lru'd-out shards get added to the fsync
    list.

    Two things to note at this point:
    i.  the lru'd out shards are only part of the fsync list, so each holds
        1 ref on the base shard
    ii. the more recently used shards are part of both the fsync and lru
        lists. So each of these shards holds 2 refs on the base inode - one
        for being part of the fsync list, and the other for being part of
        the lru list.

    FALLOCATE completes successfully, then this very file is deleted and
    background shard deletion is launched. Here's where the ref counts get
    mismatched. First, as part of the inode_resolve()s during the deletion,
    the lru'd-out inodes return NULL because they are inode_unlink()'d by
    now. So these inodes need to be freshly looked up. But as part of
    linking them in lookup_cbk (precisely in shard_link_block_inode()),
    inode_link() returns the lru'd-out inode object. Its inode ctx is still
    valid, and ctx->base_inode is still valid from the last time it was
    added to the list.

    But shard_common_lookup_shards_cbk() passes NULL in the place of
    base_pointer to __shard_update_shards_inode_list(). This means that, as
    part of adding the lru'd-out inode back to the lru list, the base inode
    is not ref'd since it is NULL. Whereas post unlinking this shard, during
    shard_unlink_block_inode(), ctx->base_inode is accessible and is unref'd
    because the shard was found to be part of the LRU list, although the
    matching ref didn't occur. This at some point leads to the base_inode
    refcount becoming 0; it gets destroyed and released back while some of
    its associated shards are continuing to be unlinked in parallel, and the
    client crashes whenever it is accessed next.

    Fix is to pass the base shard correctly, if available, in
    shard_link_block_inode().

    Also, the patch fixes the ret value check in
    tests/bugs/shard/shard-fallocate.c

    Change-Id: Ibd0bc4c6952367608e10701473cbad3947d7559f
    Updates: bz#1696136
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
* features/shard: Fix integer overflow in block count accounting (Krutika Dhananjay, 2019-05-06, 1 file, -1/+1)

    ... by holding delta_blocks in 64-bit int as opposed to 32-bit int.

    Change-Id: I2c1ddab17457f45e27428575ad16fa678fd6c0eb
    updates: bz#1705884
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
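    A two-line illustration of why the width matters, assuming block counts
    are 512-byte blocks as in st_blocks (made-up numbers, not the shard
    xlator's code):

        #include <inttypes.h>
        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            /* ~3 TB expressed in 512-byte blocks overflows a signed 32-bit int. */
            int64_t blocks64 = 3LL * 1024 * 1024 * 1024 * 1024 / 512;
            int32_t blocks32 = (int32_t)blocks64;  /* value wraps/truncates */

            printf("64-bit: %" PRId64 ", 32-bit: %" PRId32 "\n",
                   blocks64, blocks32);
            return 0;
        }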
* cloudsync/plugin: coverity fixes (Susant Palai, 2019-04-30, 1 file, -6/+6)

    CID 1401087: Null pointer dereferences (REVERSE_INULL)
    CID 1401088: Null pointer dereferences (FORWARD_NULL)

    Change-Id: I71bf67af80e1b22bcd2eb997b01a1a5ef0b4d80b
    Updates: bz#789278
    Signed-off-by: Susant Palai <spalai@redhat.com>
* cloudsync: Fix bug in cloudsync-fops-c.py (Anuradha Talur, 2019-04-26, 1 file, -3/+21)

    In some of the fops generated by generator.py, the xdata request was not
    being wound to the child xlator correctly. This was happening because,
    even though the logic in cloudsync-fops-c.py was correct, generator.py
    was generating resultant code that omits this logic.

    Made changes in cloudsync-fops-c.py so that correct code is produced.

    Change-Id: I6f25bdb36ede06fd03be32c04087a75639d79150
    updates: bz#1642168
    Signed-off-by: Anuradha Talur <atalur@commvault.com>
* cloudsync/cvlt: Cloudsync plugin for commvault store (Anuradha Talur, 2019-04-26, 10 files, -2/+1204)

    Change-Id: Icbe53e78e9c4f6699c7a26a806ef4b14b39f5019
    updates: bz#1642168
    Signed-off-by: Anuradha Talur <atalur@commvault.com>
* features/locks: error-out {inode,entry}lk fops with all-zero lk-owner (Pranith Kumar K, 2019-04-26, 5 files, -15/+53)

    Problem:
    Sometimes we find that developers forget to assign an lk-owner for an
    inodelk/entrylk/lk before writing code to wind these fops. The locks
    xlator at the moment allows this operation. This leads to multiple
    threads in the same client being able to get locks on the inode, because
    the lk-owner is the same and the transport is the same. So isolation
    with locks can't be achieved.

    Fix:
    Disallow locks with lk-owner zero.

    fixes bz#1624701
    Change-Id: I1aadcfbaaa4d49308f7c819505857e201809b3bc
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
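    A generic sketch of the kind of guard this adds, using a made-up owner
    struct (the real gf_lkowner_t layout is not shown here):

        #include <stdbool.h>

        struct owner_sketch {          /* hypothetical lock-owner shape */
            int  len;
            char data[1024];
        };

        /* True when the caller never assigned an lk-owner: zero length or
         * every byte zero. The patch fails such lock requests instead of
         * granting them. */
        static bool owner_is_all_zero(const struct owner_sketch *o)
        {
            int i;

            if (o->len == 0)
                return true;
            for (i = 0; i < o->len; i++)
                if (o->data[i] != 0)
                    return false;
            return true;
        }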
* cloudsync: Make readdirp return stat info of all the dirents (Anuradha Talur, 2019-04-25, 2 files, -1/+36)

    This change got missed while the initial changes were sent. Should have
    been a part of:
    https://review.gluster.org/#/c/glusterfs/+/21757/

    Gist of the change:
    Function that fills in stat info for dirents is invoked in readdirp in
    posix when cloudsync populates xdata request with GF_CS_OBJECT_STATUS.

    Change-Id: Ide0c4e80afb74cd2120f74ba934ed40123152d69
    updates: bz#1642168
    Signed-off-by: Anuradha Talur <atalur@commvault.com>
* features/bit-rot: Unconditionally sign the files during oneshot crawl (Raghavendra Bhat, 2019-04-25, 1 file, -1/+14)

    Currently the bit-rot feature has an issue with disabling and re-enabling
    it on the same volume. Consider enabling bit-rot detection, which goes on
    to crawl and sign all the files present in the volume. Then some files
    are modified, and the bit-rot daemon goes on to sign the modified files
    with the correct signature. Now, disable the bit-rot feature. While
    signing and scrubbing are not happening, the previous checksums of the
    files continue to exist as extended attributes. Now, if some files with
    checksum xattrs get modified, they are not signed with a new signature
    as the feature is off.

    At this point, if the feature is enabled again, the bit-rot daemon will
    go and sign those files which do not have any bit-rot specific xattrs
    (i.e. those files which were created after bit-rot was disabled), whereas
    the files with bit-rot xattrs won't get signed with a proper new
    checksum. At this point, if the scrubber runs, it finds the on-disk
    checksum and the actual checksum of the file to be different (because
    the file got modified) and marks the file as corrupted.

    FIX:
    The fix is to unconditionally sign the files when the bit-rot daemon
    comes up (instead of skipping the files with bit-rot xattrs).

    Change-Id: Iadfb47dd39f7e2e77f22d549a4a07a385284f4f5
    fixes: bz#1700078
    Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
* core: avoid dynamic TLS allocation when possible (Xavi Hernandez, 2019-04-24, 2 files, -49/+5)

    Some interdependencies between logging and memory management functions
    make it impossible to use the logging framework before initializing
    memory subsystem because they both depend on Thread Local Storage
    allocated through pthread_key_create() during initialization.

    This causes a crash when we try to log something very early in the
    initialization phase.

    To prevent this, several dynamically allocated TLS structures have been
    replaced by static TLS reserved at compile time using the '__thread'
    keyword. This also reduces the number of error sources, making
    initialization simpler.

    Updates: bz#1193929
    Change-Id: I8ea2e072411e30790d50084b6b7e909c7bb01d50
    Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
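    A minimal standalone example of the approach (not the GlusterFS code):
    TLS reserved at compile time with __thread is usable as soon as a thread
    exists, with no pthread_key_create()-style runtime initialization.
    Buffer name and size are made up.

        #include <pthread.h>
        #include <stdio.h>

        /* Static TLS: each thread gets its own copy, no init call required. */
        static __thread char log_buf_sketch[256];

        static void *worker(void *arg)
        {
            (void)arg;
            /* Usable immediately, even very early in the thread's life. */
            snprintf(log_buf_sketch, sizeof(log_buf_sketch),
                     "early message from thread %lu",
                     (unsigned long)pthread_self());
            puts(log_buf_sketch);
            return NULL;
        }

        int main(void)
        {
            pthread_t t;

            pthread_create(&t, NULL, worker, NULL);
            pthread_join(t, NULL);
            return 0;
        }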
* features/sdfs: Assign unique lk-owner for entrylk fop (Pranith Kumar K, 2019-04-22, 1 file, -0/+6)

    sdfs is supposed to serialize entry fops by doing entrylk, but all the
    locks are being taken with an all-zero lk-owner. In essence, sdfs
    doesn't achieve its goal of mutual exclusion when conflicting operations
    are executed by the same client, because two locks on the same entry
    with the same all-zero owner will both be granted. Fixed this up by
    assigning an lk-owner before taking the entrylk.

    updates bz#1624701
    Change-Id: Ifabfc998c9f1724915d38e90ed8287e05797d769
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* features/locks: fix coverity issues (Xavi Hernandez, 2019-04-19, 2 files, -1/+6)

    This patch fixes the following NULL dereferences identified by Coverity:

    * CID 1398619
    * CID 1398621
    * CID 1398623
    * CID 1398625
    * CID 1398626

    Change-Id: Id6af0d7cba0bb3346005376bc27180e8476255a4
    Updates: bz#789278
    Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
* Revert "features/locks: error-out {inode,entry}lk fops with all-zero lk-owner" (Atin Mukherjee, 2019-04-17, 5 files, -53/+15)

    This reverts commit 3883887427a7f2dc458a9773e05f7c8ce8e62301 as it has
    broken sdfs-sanity.t.

    Updates: bz#1624701
    Change-Id: Icb2b0d6bfcce4d556f1cd0f11695c03ffc138736
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
* features/bit-rot-stub: clean the mutex after cancelling the signer thread (Raghavendra Bhat, 2019-04-17, 2 files, -7/+59)

    When the bit-rot feature is disabled, the signer thread from the
    bit-rot-stub xlator (the thread which performs the setxattr of the
    signature on to the disk) is cancelled. But if the cancelled signer
    thread had already held the mutex (&priv->lock) which it uses to monitor
    the queue of files to be signed, then the mutex is never released. This
    creates problems in the future when the feature is enabled again: both
    the new instance of the signer thread and the regular thread which
    enqueues the files to be signed will be blocked on this mutex.

    So, as part of cancelling the signer thread, unlock the mutex associated
    with it as well, using pthread_cleanup_push and pthread_cleanup_pop.

    Change-Id: Ib761910caed90b268e69794ddeb108165487af40
    updates: bz#1700078
    Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
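    The standard pthreads pattern being applied, shown standalone (not the
    bit-rot-stub code): register the unlock as a cancellation cleanup
    handler, so cancelling the thread while it holds the mutex cannot leave
    the mutex locked forever.

        #include <pthread.h>
        #include <unistd.h>

        static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
        static pthread_cond_t  queue_cond = PTHREAD_COND_INITIALIZER;

        static void unlock_cleanup(void *arg)
        {
            /* Runs if the thread is cancelled while holding the mutex. */
            pthread_mutex_unlock((pthread_mutex_t *)arg);
        }

        static void *signer_like_thread(void *arg)
        {
            (void)arg;
            pthread_mutex_lock(&queue_lock);
            pthread_cleanup_push(unlock_cleanup, &queue_lock);

            /* pthread_cond_wait() is a cancellation point; without the cleanup
             * handler a cancel here would leave queue_lock held forever. */
            for (;;)
                pthread_cond_wait(&queue_cond, &queue_lock);

            pthread_cleanup_pop(1);  /* never reached, but pairs the push */
            return NULL;
        }

        int main(void)
        {
            pthread_t t;

            pthread_create(&t, NULL, signer_like_thread, NULL);
            sleep(1);
            pthread_cancel(t);
            pthread_join(t, NULL);
            return 0;
        }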