path: root/tests/basic
Commit message (Author, Date; Files, Lines)
* protocol/server: don't assume there would be a volfile id (Amar Tumballi, 2018-05-08; 1 file, -0/+26)
  Earlier, glusterfs never assumed that someone would start it with the right arguments and that brick processes would be spawned by a management layer. It just assumed its role based on the volfile. Other than the volfile, no other argument should be technically mandatory for glusterfs to work. With this patch, that assumption holds true.
  Updates: github issue # 352
  A note on why this particular issue matters for this basic sanity test: as per the design of thin-arbiter/tie-breaker, it can be started independently on any machine, without needing glusterd. So, similar to 'glusterd', we should be able to spawn a process with any translator without options/volume id etc.
  fixes: bz#1569399
  Change-Id: I5c0650fe0bfde35ad94ccba60e63f6cdcd1ae5ff
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
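  A minimal sketch of what this makes possible, assuming a hand-written thin-arbiter volfile at a hypothetical path (no glusterd, no --volfile-id):

      # spawn a standalone glusterfs process straight from a volfile;
      # after this patch the volfile itself is the only mandatory input
      glusterfs -N -f /etc/glusterfs/thin-arbiter.vol -l /var/log/glusterfs/thin-arbiter.log
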
* glusterd: volume inode/fd status broken with brick mux (hari gowtham, 2018-04-19; 1 file, -0/+12)
  Problem: The values for inode/fd were populated from the ctx received from the server xlator. Without brick mux, every brick of a volume belonged to its own process, so searching for the server xlator and populating from it worked. With brick mux, a number of bricks can be confined to a single process, and these bricks can be from different volumes too (if we use the max-bricks-per-process option). If they are from different volumes, using the server xlator to populate the status causes problems.
  Fix: Use the brick to validate and populate the inode/fd status.
  Signed-off-by: hari gowtham <hgowtham@redhat.com>
  Change-Id: I2543fa5397ea095f8338b518460037bba3dfdbfd
  fixes: bz#1566067
* experimental/cloudsync: Download xlator for archival feature (Susant Palai, 2018-04-10; 1 file, -0/+22)
  spec-files: https://review.gluster.org/#/c/18854/
  Overview:
  * Cloudsync maintains three file states in its inode-ctx, i.e. 1 - LOCAL, 2 - REMOTE, 3 - DOWNLOADING.
  * A data-modifying fop is allowed only if the state is LOCAL. If the state is REMOTE or DOWNLOADING, the client will download the file or wait for the download initiated by another client to finish.
  * Multiple downloads and uploads from different clients are synchronized by inodelk.
  * In POSIX a state check is done (part of a different commit) before allowing the fop to continue. If the state is remote/downloading, the fop is unwound with EREMOTE. The client will then download the file and continue with the fop again.
  * Basic algo for a fop (let's say a write fop):
    - If LOCAL -> resume fop
    - If REMOTE ->
      - INODELK
      - STAT (this gets the state and heals the state if needed)
      - DOWNLOAD
      - resume fop
  Note:
  * Developers will need to write plugins for download, based on the remote store they choose. In phase-1, support will be added for one remote store per volume. In the future, more options for multiple remote stores will be explored.
  TODOs:
  - Implement stat/lookup/readdirp to return size info from xattr
  - Make plugins configurable
  - Implement unlink fop
  - Add metrics collection
  - Add sharding support
  Design Contributions:
  Aravinda V K <avishwan@redhat.com>
  Amar Tumballi <amarts@redhat.com>
  Ram Ankireddypalle <areddy@commvault.com>
  Susant Palai <spalai@redhat.com>
  updates: #387
  Change-Id: Iddf711ee7ab4e946ae3e472ff62791a7b85e6d4b
  Signed-off-by: Susant Palai <spalai@redhat.com>
* afr: add new value for read-hash-mode volume option (Ravishankar N, 2018-03-29; 1 file, -0/+56)
  Updates: #363
  This new value (3) will try to wind read requests to the child of AFR having the least amount of pending requests in its queue.
  Change-Id: If6bda2aac9bf7aec3fc39622f78659313c4b6508
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
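  For illustration, the new policy would presumably be selected through the usual volume-set path; the volume name is a placeholder:

      # value 3: read from the replica child with the fewest pending requests
      gluster volume set <volname> cluster.read-hash-mode 3
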
* cluster/ec: send list-node-uuids request to all subvolumes (Xavi Hernandez, 2018-03-28; 1 file, -0/+1)
  The xattr trusted.glusterfs.list-node-uuids was only sent to a single subvolume. This was returning null uuids from the other subvolumes as if they were down.
  This fix forces that xattr to be requested from all subvolumes.
  Change-Id: If62eb39a6857258923ba625e153d4ad79018ea2f
  fixes: bz#1561406
  Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
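  A quick way to exercise this, assuming a FUSE mount at a placeholder path, is to query the virtual xattr named in the commit:

      # with the fix, every subvolume contributes its node UUID instead of nulls
      getfattr -n trusted.glusterfs.list-node-uuids /mnt/glusterfs
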
* tests: fix nl-cache.t failure (Atin Mukherjee, 2018-03-26; 1 file, -1/+1)
  commit fef9293 changed network.inode-lru-limit from 50000 to 200000 in nl-cache group profile but the test wasn't changed to reflect it accordingly.
  Change-Id: Ibb5fb0a387f160f6b726246b161a9a7b33135755
  fixes: bz#1560589
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
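  As a hedged illustration of what the test now has to expect, apply the nl-cache group profile and read the option back:

      gluster volume set <volname> group nl-cache
      # should now report 200000 rather than the old 50000
      gluster volume get <volname> network.inode-lru-limit
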
* md-cache: fix ./tests/basic/md-cache/bug-1418249.t (Susant Palai, 2018-03-26; 1 file, -1/+1)
  The inode table size is currently set to 200000, hence the change in the testcase, which was expecting the old value 50000.
  Change-Id: I8e44b1d0a2da1e8100bebd25f48bb36e2897b4f8
  fixes: bz#1560393
  Signed-off-by: Susant Palai <spalai@redhat.com>
* cluster/afr: Switch to active-fd-count for open-fd checks (Pranith Kumar K, 2018-03-21; 1 file, -0/+20)
  BUG: 1557932
  Change-Id: I3783e41b3812267bc10c0d05d062a31396ce135b
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* cluster/ec: Add test cases for stripe-cache option (Ashish Pandey, 2018-03-20; 1 file, -0/+227)
  Change-Id: I1508a336a7a927b389a19815ef57001cdf29b109
  BUG: 1558074
  Signed-off-by: Ashish Pandey <aspandey@redhat.com>
* cluster/ec: Change default read policy to gfid-hash (Ashish Pandey, 2018-03-14; 1 file, -4/+3)
  Problem: Whenever we read data from a file over NFS, NFS reads more data than requested and caches it. Based on the stat information it checks whether the cached/pre-read data is still valid. Consider a 4 + 2 EC volume with all the bricks on different nodes. In EC, with the round-robin read policy, reads are sent to different sets of data bricks. This balances the read fops across all the bricks and avoids heating up (overloading) the same set of bricks. Due to small differences in clock speed, it is possible that we get minor differences in atime, mtime or ctime on different bricks. That can cause a different stat to be returned to NFS, based on which NFS discards cached/pre-read data that has not actually changed and could have been used.
  Solution: Change the default read policy for EC to gfid-hash. That forces all reads to go to the same set of bricks.
  Change-Id: I825441cc519e94bf3dc3aa0bd4cb7c6ae6392c84
  BUG: 1554743
  Signed-off-by: Ashish Pandey <aspandey@redhat.com>
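  The policy switch described above maps to a volume option; a sketch, with the volume name as a placeholder:

      # gfid-hash pins reads for a given file to one set of bricks
      gluster volume set <volname> disperse.read-policy gfid-hash
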
* tests/basic/namespace: Fix the namespace test failure (Varsha Rao, 2018-03-14; 1 file, -5/+7)
  In the jenkins regression tests, brick multiplexing is enabled by the is_brick_mx_enabled function and not by setting the cluster.brick-multiplex option. Hence check the count of bricks and their logs; this fixes the failure.
  Change-Id: Ibb2ed8fbffd3765f283da741689304a5579d447c
  BUG: 1555167
  Signed-off-by: Varsha Rao <varao@redhat.com>
* cluster/afr: Remove compound-fops usage in afr (Pranith Kumar K, 2018-03-06; 1 file, -37/+0)
  We are not seeing much improvement with this change. So removing the feature so that it doesn't need to be maintained anymore.
  Fixes: #414
  Change-Id: Ic7969b151544daf2547bd262a9fa03f575626411
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* features/shard: Fix shard inode refcount when it's part of priv->lru_list. (Krutika Dhananjay, 2018-03-02; 1 file, -17/+0)
  For as long as a shard's inode is in priv->lru_list, it should have a non-zero ref-count. This patch achieves it by taking a ref on the inode when it is added to lru list. When it's time for the inode to be evicted from the lru list, a corresponding unref is done.
  Change-Id: I289ffb41e7be5df7489c989bc1bbf53377433c86
  BUG: 1468483
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
* tests/basic/namespace: Check if brick multiplex is enabled (Varsha Rao, 2018-02-27; 1 file, -0/+23)
  This patch fixes the namespace test failure when brick multiplexing is enabled. It does so by changing the log file name, since only one log file is generated for all bricks when brick multiplexing is enabled.
  Change-Id: Ide941946e5e1b2676e7139e1b5bf6b93b93c0815
  Signed-off-by: Varsha Rao <varao@redhat.com>
* xlators/features/namespace: Add namespace xlator and link into brick graph (Varsha Rao, 2018-02-21; 1 file, -0/+104)
  The following release-3.8-fb branch patch is upstreamed:
  > features/namespace: Add namespace xlator and link into brick graph
  > Commit ID: dbd30776f26e
  > https://review.gluster.org/#/c/18041/
  > By Michael Goulet <mgoulet@fb.com>
  Changes in this patch:
  - Removes extra config.h and namespace.h file in namespace.c
  - Adds default_getspec_cbk to libglusterfs.sym
  - Rename dict_for_each to dict_foreach_inline
  - Remove fd.h header file stack.h
  - Add test case for truncate, open and symlink
  This patch is required to forward port io-threads namespace patch.
  Updates: #401
  Change-Id: Ib88c95b89eecee9b8957df8a4c8712c899c761d1
  Signed-off-by: Varsha Rao <varao@redhat.com>
* tests: Set timeout of 300 for self-heal.t (Nigel Babu, 2018-02-21; 1 file, -0/+2)
  There are a few tests that take more time on regression nodes.
  Change-Id: If126d5ebd422cd6d99125db040e74f0d104af7bc
  Signed-off-by: Nigel Babu <nigelb@redhat.com>
* tests: bring option of per test timeout (Amar Tumballi, 2018-02-15; 2 files, -0/+4)
  This uses the 'timeout' command with a 300-second default. Right now, there is just one test which takes more than that on a properly set-up machine.
  Ideally, the best case is to set the default to something like 30 seconds, and if a test is supposed to take more than that, the owner should knowingly add a timeout line to the test. That way, it also makes test writers think about a time limit.
  Change-Id: I747005ce1f208aeb2ecbf899e8feea487ecd21a0
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
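  Roughly what the harness does now, sketched with coreutils timeout and a placeholder test path:

      # abort the test run if it exceeds the 300-second default
      timeout 300 prove -v tests/basic/self-heal.t
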
* gfapi: return pre/post attributes at callback for glfs api (Kinglong Mee, 2018-02-12; 2 files, -2/+4)
  Updates: #389
  Change-Id: Ic71632722effe4b8855d5de3e65688efd9afe1e3
  Signed-off-by: Kinglong Mee <mijinlong@open-fs.com>
* gfapi: return pre/post attributes from glfs_ftruncate (Kinglong Mee, 2018-02-12; 1 file, -1/+1)
  Updates: #389
  Change-Id: I8faea0828921fb17f05f7321c3cb01747373f21e
  Signed-off-by: Kinglong Mee <mijinlong@open-fs.com>
* gfapi: return pre/post attributes from glfs_pread/pwrite (Kinglong Mee, 2018-02-12; 2 files, -2/+2)
  In nfs-ganesha, wcc data containing pre/post attributes is returned in the read/write RPC reply. Right now nfs-ganesha obtains those attributes with two getattr calls around the real read/write. Gluster already returns pre/post attributes from glusterfsd, but those attributes are skipped in syncop/gfapi. If gfapi returns them, the upper user (nfs-ganesha) can use them directly without any duplicate getattr.
  Updates: #389
  Change-Id: I7b643ae4241cfe2aeb17063de00192d81674024a
  Signed-off-by: Kinglong Mee <mijinlong@open-fs.com>
* performance/io-threads: expose io-thread queue depths (Varsha Rao, 2018-02-08; 1 file, -0/+7)
  The following release-3.8-fb branch patch is upstreamed:
  > io-stats: Expose io-thread queue depths
  > Commit ID: 69509ee7d2
  > https://review.gluster.org/#/c/18143/
  > By Shreyas Siravara <sshreyas@fb.com>
  Changes in this patch:
  - Replace iot_pri_t with gf_fop_pri_t
  - Replace IOT_PRI_{HI, LO, NORMAL, MAX, LEAST} with GF_FOP_PRI_{HI, LO, NORMAL, MAX, LEAST}
  - Use dict_unref() instead of dict_destroy()
  This patch is required to forward port io-threads namespace patch.
  Updates: #401
  Change-Id: I1b47a63185a441a30fbc423ca1015df7b36c2518
  Signed-off-by: Varsha Rao <varao@redhat.com>
* tests/dht: Non-root can delete stale linkto files (N Balachandran, 2018-02-08; 1 file, -0/+51)
  Test to check that non-root users can delete stale linkto files.
  Change-Id: Ic9bc76bc485cab839927af60cfce78a058eee2e4
  BUG: 1542318
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
* cluster/dht: avoid overwriting client writes during migration (Susant Palai, 2018-02-02; 2 files, -0/+51)
  For more details on this issue see https://github.com/gluster/glusterfs/issues/308
  Solution: This is a restrictive solution where a file will not be migrated if a client writes to it during the migration. This does not check if the writes from the rebalance and the client actually do overlap. If dht_writev_cbk finds that the file is being migrated (PHASE1) it will set an xattr on the destination file indicating the file was updated by a non-rebalance client. Rebalance checks if any other client has written to the dst file and aborts the file migration if it finds the xattr.
  updates gluster/glusterfs#308
  Change-Id: I73aec28bc9dbb8da57c7425ec88c6b6af0fbc9dd
  Signed-off-by: Susant Palai <spalai@redhat.com>
  Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
* sdfs: crash fixes (Amar Tumballi, 2018-02-01; 1 file, -0/+22)
  * In the patch that was tested in the experimental branch, a code cleanup missed setting a local variable, which led to a crash immediately after enabling the feature.
  * Added a sanity test case to validate all the fops of sdfs.
  Updates: #397
  Change-Id: I7e0bebfc195c344620577cb16c1afc5f4e7d2d92
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
* afr: don't treat all cases of all bricks being blamed as split-brain (Ravishankar N, 2018-02-01; 1 file, -0/+16)
  Problem: We currently don't have a roll-back/undoing of post-ops if quorum is not met. Though the FOP is still unwound with failure, the xattrs remain on the disk. Due to these partial post-ops and partial heals (healing only when 2 bricks are up), we can end up in split-brain purely from the afr xattrs point of view, i.e. each brick is blamed by at least one of the others. These scenarios are hit when there is frequent connect/disconnect of the client/shd to the bricks while I/O or heal is in progress.
  Fix: Instead of undoing the post-op, pick a source based on the xattr values. If 2 bricks blame one, the blamed one must be treated as a sink. If there is no majority, all are sources. Once we pick a source, self-heal will then do the heal instead of erroring out due to split-brain.
  Change-Id: I3d0224b883eb0945785ade0e9697a1c828aec0ae
  BUG: 1539358
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* quiesce, gfproxy: Implement failover across multiple gfproxy nodes (Poornima G, 2018-01-30; 1 file, -0/+2)
  Updates: #242
  Change-Id: I767e574a26e922760a7130bd209c178d74e8cf69
  Signed-off-by: Poornima G <pgurusid@redhat.com>
* libgfapi: Add new api for supporting mandatory-locks (Anoop C S, 2018-01-22; 3 files, -1/+543)
  The current API for byte-range locks [glfs_posix_lock()] doesn't allow applications to specify whether they want advisory or mandatory type locks. This change introduces an extended byte-range lock API with an additional argument for the byte-range lock mode, which is one of advisory (default) or mandatory. The patch also includes a gfapi test case which makes use of this new API to acquire mandatory locks.
  Ref: https://github.com/gluster/glusterfs-specs/blob/master/done/GlusterFS%203.8/Mandatory%20Locks.md
  Change-Id: Ia09042c755d891895d96da857321abc4ce03e20c
  Updates #393
  Signed-off-by: Anoop C S <anoopcs@redhat.com>
* locks: added inodelk/entrylk contention upcall notifications (Xavier Hernandez, 2018-01-16; 1 file, -0/+62)
  The locks xlator is now able to send a contention notification to the current owner of the lock. This is only a notification that can be used to improve the performance of some client-side operations that might benefit from extended duration of lock ownership. Nothing is done if the lock owner decides to ignore the message and not release the lock. For forced release of acquired resources, leases must be used.
  Change-Id: I7f1ad32a0b4b445505b09908a050080ad848f8e0
  Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
* cluster/ec: Mark ./tests/basic/ec/heal-info.t as bad test (Ashish Pandey, 2018-01-12; 1 file, -0/+1)
  Change-Id: I7369fdd7510cc7ebf051cc621fc83764ba9591f3
  BUG: 1533815
  Signed-off-by: Ashish Pandey <aspandey@redhat.com>
* tests: Use /dev/urandom instead of /dev/random for dd (Pranith Kumar K, 2018-01-08; 1 file, -1/+1)
  If there isn't enough entropy in the system, reading from /dev/random takes a significant time, because the /dev/random buffers take a long time to fill up to the amount this dd run wants. Milind found that because of this the test file takes almost 1000 seconds or more to pass instead of just a minute.
  BUG: 1431955
  Change-Id: I9145b17f77f09d0ab71816ae249c69b8fe14c1a5
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
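  The change boils down to the dd source; a sketch with placeholder mount path and sizes:

      # /dev/urandom never blocks waiting for entropy, unlike /dev/random
      dd if=/dev/urandom of=/mnt/glusterfs/testfile bs=1M count=10
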
* cluster/ec: OpenFD heal implementation for EC (Sunil Kumar Acharya, 2018-01-05; 1 file, -0/+109)
  The existing EC code doesn't try to heal the open FD so as to avoid unnecessary healing of the data later. The fix implements the healing of open FDs before carrying out file operations on them, by making an attempt to open the FDs on the required up nodes.
  BUG: 1431955
  Change-Id: Ib696f59c41ffd8d5678a484b23a00bb02764ed15
  Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
* cluster/ec: Change [f]getxattr to parallel-dispatch-one (Pranith Kumar K, 2017-12-22; 2 files, -0/+173)
  At the moment in EC, [f]getxattr operations wait to acquire a lock while other operations are in progress, even when it is in the same mount with a lock on the file/directory. This happens because [f]getxattr operations follow the model where the operation is wound on 'k' of the bricks and the results are matched to make sure the data returned is the same on all of them. This consistency check requires that no other operations are on-going while [f]getxattr operations are wound to the bricks.
  We can perform [f]getxattr in another way as well, where we find the good_mask from the lock that is already granted, wind the operation on any one of the good bricks, and unwind the answer after adjusting size/blocks to the parent xlator. Since we are taking into account good_mask, the reply we get will either be before or after a possible on-going operation. Using this method, the operation doesn't need to depend on completion of on-going operations, which could take a long time (in case of some slow disks while writes are in progress etc). Thus we reduce the time to serve [f]getxattr requests.
  I changed [f]getxattr to dispatch-one and added extra logic in ec_link_has_lock_conflict() to not have any conflicts for fops with EC_MINIMUM_ONE as fop->minimum, to achieve the effect described above. Modified scripts to make sure the READ fop is received in EC to trigger heals.
  Updates gluster/glusterfs#368
  Change-Id: I3b4ebf89181c336b7b8d5471b0454f016cdaf296
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* quick-read: Integrate quick read with upcall and increase cache time (Poornima G, 2017-12-13; 1 file, -0/+69)
  Fixes: #261
  Co-author: Subha sree Mohankumar <smohanku@redhat.com>
  Change-Id: Ie9dd94e86459123663b9b200d92940625ef68eab
  Signed-off-by: Poornima G <pgurusid@redhat.com>
* debug/io-stats: Adding stat for weighted & unweighted average latency (Richard Wareing, 2017-12-09; 1 file, -0/+43)
  Summary:
  - Our current approach to measuring "average fop latency" is badly flawed in that it doesn't weight the FOPs correctly according to how many occurred in the time interval. This makes statisticians very sad. This patch adds an internally computed weighted average latency which will be far more efficient to display via ODS, as well as having the benefit of not being complete nonsense.
  Reviewers: kvigor, dph, sshreyas
  Reviewed By: sshreyas
  Change-Id: Ie3618f279b545610b7ed1a8482243fcc8dc53217
  BUG: 1523353
  Reviewed-on: https://review.gluster.org/18192
  Reviewed-by: Shreyas Siravara <sshreyas@fb.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Signed-off-by: Ana M. Neri <amnerip@fb.com>
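  The weighted average being added is the usual count-weighted mean; a sketch in awk, assuming a hypothetical two-column input of per-interval call count and mean latency:

      # sum(count * latency) / sum(count)
      awk '{ sum += $1 * $2; calls += $1 } END { if (calls) print sum / calls }' fop-latency.txt
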
* storage/posix: Add limit to number of hard links (Shreyas Siravara, 2017-12-08; 1 file, -0/+44)
  Summary: Too many hard links blow up btrfs by exceeding the max xattr size (recording the pgfid for each hard link). Add a limit to prevent this explosion.
  > Reviewed-on: https://review.gluster.org/18232
  > Reviewed-by: Shreyas Siravara <sshreyas@fb.com>
  Fixes gluster/glusterfs#370
  Signed-off-by: ShyamsundarR <srangana@redhat.com>
  Change-Id: I614a247834fb8f2b2743c0c67d11cefafff0dbaa
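  A rough illustration of the new cap; the option name and default value below are assumptions, not confirmed by this log:

      # assumed option: cap hard links per inode to protect the pgfid xattr
      gluster volume set <volname> storage.max-hardlinks 100
      touch /mnt/glusterfs/file
      # links beyond the cap are expected to fail
      for i in $(seq 1 101); do ln /mnt/glusterfs/file /mnt/glusterfs/link-$i || break; done
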
* libglusterfs: specify ctx in gf_log_set_loglevel (Zhang Huan, 2017-12-06; 1 file, -3/+3)
  Specify ctx in gf_log_set_loglevel, instead of getting it from a thread specific variable.
  Change-Id: I498f826e8e32231235a6b0005026a27c327727fd
  BUG: 1521213
  Signed-off-by: Zhang Huan <zhanghuan@open-fs.com>
* Tier: Stop tierd for detach start (hari gowtham, 2017-12-01; 1 file, -7/+15)
  Problem: tierd was stopped only after detach commit. This makes the detach take a longer time. The detach demotes the files to the cold brick, and if the promotion frequency is hit, tierd starts to promote files to the hot tier again.
  Fix: Stop tierd after detach start so the files get demoted faster.
  Note: is_tier_enabled was not maintained properly. That has been fixed too. Some code cleanup has been done.
  Signed-off-by: hari gowtham <hgowtham@redhat.com>
  Change-Id: I532f7410cea04fbb960105483810ea3560ca149b
  BUG: 1446381
* tests: fix for bug-1260185-donot-allow-detach-commit-unnecessarily.t failure (hari gowtham, 2017-11-30; 1 file, -0/+47)
  Problem: detach commit was issued before detach start was completed.
  Fix: wait for detach start to finish and then issue detach commit.
  Change-Id: I639962be6de6dbd1512f0a5617050d1e6872eac8
  BUG: 1517961
  Signed-off-by: hari gowtham <hgowtham@redhat.com>
* tests: Re-enable basic/afr/split-brain-favorite-child-policy.t (Nigel Babu, 2017-11-29; 1 file, -6/+0)
  This test was failing due to an infra issue. The infra issue is now fixed.
  BUG: 1517961
  Change-Id: I09dfab9c0a3ebe73c738222e6269d9e35c85eddb
  Signed-off-by: Nigel Babu <nigelb@redhat.com>
* tests: mark currently failing regression tests as known issues (Amar Tumballi, 2017-11-28; 1 file, -0/+6)
  Change-Id: If6c36dc6c395730dfb17b5b4df6f24629d904926
  BUG: 1517961
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
* cluster/ec: EC DISCARD doesn't punch hole properly (Sunil Kumar Acharya, 2017-11-28; 1 file, -1/+9)
  Problem: The DISCARD operation on an EC volume was punching a hole of a smaller size than specified in some cases.
  Solution: EC was not handling the punch hole for the tail part in some cases. Updated the code to handle it appropriately.
  BUG: 1516206
  Change-Id: If3e69e417c3e5034afee04e78f5f78855e65f932
  Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
* tests/basic/inode-leak.t: mark as known issue (Amar Tumballi, 2017-11-27; 1 file, -0/+7)
  Mainly because the test is consistently taking more than 20 minutes per run. One of the samples from a regression run:
  > /tests/basic/inode-leak.t - 1643 second
  Change-Id: If11572203c702f64847794f6d578a6dc19a0dee8
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
* cluster/ec: Remove unneeded tests (Xavier Hernandez, 2017-11-23; 3 files, -42/+0)
  To reduce regression test execution time, some of the EC tests have been removed. These tests were only doing the same as other existing tests, but with different volume configurations.
  I keep ec-3-1.t, ec-4-1.t, ec-5-2.t and ec-6-2.t because they cover all the combinations of the most important cases:
  * Configurations with redundancy 1 and redundancy > 1
  * Configurations with #fragments = power of 2 and not a power of 2
  Change-Id: I0b1d15b50428b605c6a1c96df12d8054556b1f23
  Signed-off-by: Xavier Hernandez <jahernan@redhat.com>
* tests: add a test to check if there is an inode leak (Mohamed Ashiq Liyazudeen, 2017-11-22; 1 file, -0/+41)
  This test checks if there is an inode leak in the bricks. lru_size for the mount is expected to be zero. active_size for the mount is expected to be 1, which is the root inode.
  Change-Id: I18762b4255af411f1b55c0be98451c8ef1b35478
  BUG: 1370116
  Signed-off-by: Mohamed Ashiq Liyazudeen <mliyazud@redhat.com>
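  The counters the test looks at can be inspected by hand via a statedump; the paths below are the usual defaults and may differ on a given setup:

      gluster volume statedump <volname>
      # lru_size should drop to 0 and active_size to 1 (the root inode)
      grep -E 'lru_size|active_size' /var/run/gluster/*.dump.*
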
* afr: add checks for allowing lookups (Ravishankar N, 2017-11-18; 1 file, -23/+0)
  Problem: In an arbiter volume, lookup was being served from one of the sink bricks (source brick was down). shard uses the iatt values from lookup cbk to calculate the size and block count, which in this case were incorrect values. shard_local_t->last_block was thus initialised to -1, resulting in an infinite while loop in shard_common_resolve_shards().
  Fix: Use client quorum logic to allow or fail the lookups from afr if there are no readable subvolumes. So in replica-3 or arbiter vols, if there is no good copy or if quorum is not met, fail lookup with ENOTCONN.
  With this fix, we are also removing support for quorum-reads xlator option. So if quorum is not met, neither read nor write txns are allowed and we fail the fop with ENOTCONN.
  Change-Id: Ic65c00c24f77ece007328b421494eee62a505fa0
  BUG: 1467250
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* *.pc: Fix include path in Cflags (Andrea Bolognani, 2017-11-08; 1 file, -1/+1)
  The include path in glusterfs-api.pc looks like
    -I${includedir}/glusterfs
  However, client code will include the glusterfs headers using
    #include <glusterfs/api/glfs.h>
  rather than
    #include <api/glfs.h>
  which makes the "/glusterfs" part entirely unnecessary.
  More importantly, on some platforms such as FreeBSD, the header files for glusterfs will be installed in /usr/local/include, which is *not* part of the compiler's default include path, so compilation will fail with something like
    fatal error: 'glusterfs/api/glfs.h' file not found
    #include <glusterfs/api/glfs.h>
             ^~~~~~~~~~~~~~~~~~~~~~
  The fix is to simply drop the extra "/glusterfs". The same change is applied to the other *.pc files as well, although I haven't actually tested those.
  A test program (gfapi-load-volfile) and the glfsxmp example application were using the wrong include paths, so they had to be fixed as well.
  Change-Id: I9a16de47fee7ab9c12d1cb823bbe061a69352670
  BUG: 1508947
  Signed-off-by: Andrea Bolognani <abologna@redhat.com>
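  After the fix, a consumer build should need nothing more than pkg-config; a sketch, assuming an app.c that includes <glusterfs/api/glfs.h>:

      # Cflags no longer carry the redundant /glusterfs suffix
      pkg-config --cflags glusterfs-api
      cc -o app app.c $(pkg-config --cflags --libs glusterfs-api)
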
* cluster/ec: create eager-lock option for non-regular files (Xavier Hernandez, 2017-11-05; 2 files, -0/+2)
  A new option is added to allow independent configuration of eager locking for regular files and non-regular files.
  Change-Id: I8f80e46d36d8551011132b15c0fac549b7fb1c60
  BUG: 1502610
  Signed-off-by: Xavier Hernandez <jahernan@redhat.com>
* gfapi: Register/Unregister Upcall events' callback (Soumya Koduri, 2017-10-31; 2 files, -0/+311)
  Polling continuously for upcall events is not optimal. Hence new APIs have been added to allow applications to register and unregister the upcall events they are interested in, along with a callback function to be invoked in case of any such upcalls sent by the backend server.
  @TODO: Make changes in the upcall xlator so that events are sent only to those clients which either registered callbacks or started polling. Shall be addressed in a separate patch.
  Updates: #315
  Change-Id: I40473fd5cf689172ff2d7bb2869756b7fd5bc761
  Signed-off-by: Soumya Koduri <skoduri@redhat.com>
* tests: Update tier CLI in .t files (N Balachandran, 2017-10-30; 16 files, -21/+21)
  Update .t tier tests to use the new tier CLI.
  Change-Id: I0e7f1769071108d8266fc86378c4466bcaf96e7d
  BUG: 1505253
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
* cluster/afr: Fail open on split-brain (Pranith Kumar K, 2017-10-26; 1 file, -0/+38)
  Problem: Append on a file with split-brain succeeds. Open is intercepted by open-behind; when a write comes on the file, open-behind does open+write. Open succeeds because afr doesn't fail it. Then the write succeeds because write-behind intercepts it. Flush is also intercepted by write-behind, so the application never gets to know that the write failed.
  Fix: Fail open on split-brain, so that when open-behind does open+write, the open fails, which leads to write failure. The application will know about this failure.
  Change-Id: I4bff1c747c97bb2925d6987f4ced5f1ce75dbc15
  BUG: 1294051
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>