path: root/tests/bugs/core
Commit history (newest first). Each entry lists the commit summary, author, date, and files/lines changed, followed by the full commit message.

* glusterd: Introduce option to limit no. of muxed bricks per process (Samikshan Bairagya, 2017-07-10; 1 file changed, -0/+57)

  This commit introduces a new global option that can be set to limit the
  number of multiplexed bricks in one process.

  Usage:
  # gluster volume set all cluster.max-bricks-per-process <value>

  If this option is not set, then multiplexing happens with no limit; i.e. a
  brick process will have as many bricks multiplexed to it as possible. In
  other words, the current multiplexing behaviour does not change if this
  option is not set to any value.

  This commit also introduces a brick-process instance that holds information
  about brick processes: the number of bricks handled by the process (which
  is 1 in the non-multiplexing case), the list of bricks, and the port
  number, which also serves as a unique identifier for each brick-process
  instance. The brick-process list is maintained in 'glusterd_conf_t'.

  Updates: #151
  Change-Id: Ib987d14ab0a4f6034dac01b73a4b2839f7b0b695
  Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
  Reviewed-on: https://review.gluster.org/17469
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

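  Below is a minimal shell sketch of how the new option might be exercised,
  in the spirit of the regression test this commit adds; the
  cluster.brick-multiplex setting, the volume name and the brick/process
  counts are assumptions for illustration, not taken from the commit itself.

      #!/bin/bash
      # Sketch only: cap each brick process at 3 multiplexed bricks.
      gluster volume set all cluster.brick-multiplex on
      gluster volume set all cluster.max-bricks-per-process 3

      # With a 6-brick volume, two glusterfsd processes are expected instead
      # of a single fully multiplexed one (assumed volume name: patchy).
      gluster volume start patchy
      pgrep -c glusterfsd   # expected: 2
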
* tests: remove tests/bugs/core/bug-1421590-brick-mux-reuse-ports.t (Atin Mukherjee, 2017-04-12; 1 file changed, -60/+0)

  bug-1421590-brick-mux-reuse-ports.t seems to be a bad test to me, and here
  is my reasoning:

  This test tries to check whether ports are reused or not. When a volume is
  restarted, by the time glusterd tries to allocate a new port to one of the
  brick processes of the volume, there is no guarantee that the older port
  will be allocated, given that the kernel might take some extra time to
  free up the port within this time frame.

  From https://build.gluster.org/job/regression-test-burn-in/2932/console we
  can clearly see that after the volume restart, glusterd allocated ports
  49153 & 49155 for brick1 & brick2 respectively, but the test expected the
  ports to match 49155 & 49156, which were allocated before the volume was
  restarted.

  Change-Id: Id887bf28445261d4de04fc7502e58057659c9512
  BUG: 1441035
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: https://review.gluster.org/17033
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>

* glusterd: hold off volume deletes while still restarting bricks (Jeff Darcy, 2017-03-30; 2 files changed, -0/+96)

  We need to do this because modifying the volume/brick tree while
  glusterd_restart_bricks is still walking it can lead to segfaults. Without
  waiting we could accidentally "slip in" while attach_brick has released
  big_lock between retries and make such a modification.

  Change-Id: I30ccc4efa8d286aae847250f5d4fb28956a74b03
  BUG: 1432542
  Signed-off-by: Jeff Darcy <jeff@pl.atyp.us>
  Reviewed-on: https://review.gluster.org/16927
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* core: Clean up pmap registry correctly on volume/brick stop (Samikshan Bairagya, 2017-02-27; 1 file changed, -0/+55)

  This commit changes the following:
  1. In glusterfs_handle_terminate, send out individual pmap signout requests
     to glusterd for every brick.
  2. Add another parameter to the glusterfs_mgmt_pmap_signout function to
     pass the brickname that needs to be removed from the pmap registry.
  3. Make sure pmap_registry_search doesn't break out of the loop iterating
     over the list of bricks per port if the first brick entry corresponding
     to a port is whitespaced out.
  4. Make sure the pmap registry entries are removed for other daemons like
     snapd.

  Change-Id: I69949874435b02699e5708dab811777ccb297174
  BUG: 1421590
  Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
  Reviewed-on: https://review.gluster.org/16689
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Gaurav Yadav <gyadav@redhat.com>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>

* glusterd: take conn->lock around operations on conn->reconnect (Jeff Darcy, 2017-02-18; 1 file changed, -0/+25)

  Failure to do this could lead to a race in which a timer would be removed
  twice concurrently, corrupting the timer list (because gf_timer_call_cancel
  has no internal protection against this) and possibly causing a crash.

  Change-Id: Ic1a8b612d436daec88fd6cee935db0ae81a47d5c
  BUG: 1421721
  Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
  Reviewed-on: https://review.gluster.org/16662
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* tests: Fix spurious failure in bug-1402841.t-mt-dir-scan-race.t (Krutika Dhananjay, 2016-12-19; 1 file changed, -0/+10)

  Check that shd is up before executing the 'volume heal' command.

  Change-Id: Ib510a5de06d732fd3a738e90fa16376698479897
  BUG: 1405554
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-on: http://review.gluster.org/16169
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Ravishankar N <ravishankar@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Raghavendra Talur <rtalur@redhat.com>

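  A sketch of the guard described above, written in the style of the .t test
  framework used under tests/; the helper name glustershd_up_status and the
  timeout variable are assumptions about include.rc/volume.rc, not taken from
  this commit.

      #!/bin/bash
      # Sketch only: wait for the self-heal daemon before triggering a heal,
      # so 'volume heal' does not race with shd startup.
      . $(dirname $0)/../../include.rc
      . $(dirname $0)/../../volume.rc

      EXPECT_WITHIN $PROCESS_UP_TIMEOUT "Y" glustershd_up_status  # assumed helper
      TEST $CLI volume heal $V0
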
* syncop: fix conditional wait bug in parallel dir scan (Ravishankar N, 2016-12-09; 1 file changed, -0/+31)

  Problem:
  The issue as seen by the user is detailed in the BZ, but what is happening
  is this: if the number of items in the wait queue == max-qlen,
  syncop_mt_dir_scan() does a pthread_cond_wait until the launched synctask
  workers dequeue the queue. But if for some reason a worker fails, the
  queue is never emptied, due to which further invocations of
  syncop_mt_dir_scan() are blocked forever.

  Fix: Made some changes to _dir_scan_job_fn:
  - If a worker encounters an error while processing an entry, notify the
    readdir loop in syncop_mt_dir_scan() of the error, but continue to
    process other entries in the queue, decrementing the qlen as and when we
    dequeue elements, and ending only when the queue is empty.
  - If the readdir loop in syncop_mt_dir_scan() gets an error from the
    worker, stop the readdir and queueing of further entries.

  Change-Id: I39ce073e01a68c7ff18a0e9227389245a6f75b88
  BUG: 1402841
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-on: http://review.gluster.org/16073
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>

* logging: Fix per-xl log level (Poornima G, 2016-08-15; 1 file changed, -0/+43)

  Currently the per-xlator loglevel setting doesn't work, due to a flaw in
  the loglevel checking. Fix the same.

  Per-xlator logging can be set using the below command, e.g.:
  setfattr -n trusted.glusterfs.patchy-md-cache.set-log-level -v TRACE /mnt/glusterfs/0

  Change-Id: I8ff1d15bd5693b6f682d99bee22a4bbb5eee646c
  BUG: 1362520
  Signed-off-by: Poornima G <pgurusid@redhat.com>
  Reviewed-on: http://review.gluster.org/15071
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>

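  A short usage sketch based on the command quoted above; the mount path and
  client log-file name follow the usual conventions, and the grep pattern is
  only illustrative, so treat this as an assumption rather than the test
  added by this commit.

      #!/bin/bash
      # Sketch only: raise md-cache's log level to TRACE on a mounted client.
      setfattr -n trusted.glusterfs.patchy-md-cache.set-log-level -v TRACE /mnt/glusterfs/0

      # Generate some activity, then look for TRACE-level ('T') entries from
      # md-cache in the client log (path and pattern are illustrative).
      ls /mnt/glusterfs/0 > /dev/null
      grep ' T ' /var/log/glusterfs/mnt-glusterfs-0.log | grep -c md-cache
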
* libglusterfs: fix glusterd statedump crash (Atin Mukherjee, 2016-08-04; 1 file changed, -1/+6)

  Commit 3c04a91 removed setting typeStr to NULL if num_allocs is set to 0;
  this caused the regression. The code has been put back as it was earlier,
  and to avoid statedump printing all the NULL values, the check is modified
  to skip the records if num_allocs is 0 instead of total_allocs.

  Change-Id: Ib8bcc2fba908e88cf52b641c3f6bcba74f5e667c
  BUG: 1359190
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/14987
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: N Balachandran <nbalacha@redhat.com>
  Reviewed-by: Prashanth Pai <ppai@redhat.com>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>

* io-stats: Fix io-stat dump to dump at all levels (Shyam, 2016-06-15; 1 file changed, -5/+56)

  The previous commit to fix the bug, where io-stat-dump was overwriting the
  dump file when the client and a brick were on the same host, failed to
  consider the existing behaviour where io-stats can help generate a closely
  correlated set of stats across clients and bricks by triggering the dump
  using the same command. This was introduced in commit
  0facb11220aea20a6573b656785922219c9650cf.

  Further, by limiting the first io-stat to unwind the dump request, there
  is no way to trigger other io-stat xlators in the stack to dump their stat
  information.

  This bug is hence being fixed by this commit, keeping the following in
  mind:
  - We need to trigger io-stat-dump for all instances in the graph when this
    attr is set
  - We need to write the output to different files, so that they do not
    overwrite each other's data
  - We need to prevent this xattr from being set on the path that is used to
    trigger the io-stat-dump information

  Change-Id: I31ec380f0d85e10313a9d7b977da0e1ec74638a6
  BUG: 1322825
  Signed-off-by: Shyam <srangana@redhat.com>
  Reviewed-on: http://review.gluster.org/14552
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Ravishankar N <ravishankar@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>

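  A usage sketch of triggering the dump from a client mount after this fix;
  the dump prefix and mount path are illustrative, and the exact
  per-instance file naming is an assumption based on the "different files"
  point above.

      #!/bin/bash
      # Sketch only: ask every io-stats instance in the graph to dump its stats.
      # Each instance is expected to write to its own file derived from this
      # prefix, so brick-side dumps no longer clobber the client-side dump.
      setfattr -n trusted.io-stats-dump -v /tmp/gluster-io-stats /mnt/glusterfs/0
      ls /tmp/gluster-io-stats*   # one dump per io-stats instance (illustrative)
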
* glusterd: default value of nfs.disable, change from false to true (Kaleb S KEITHLEY, 2016-04-27; 1 file changed, -0/+1)

  Next step in the eventual deprecation of the glusterfs nfs server in favor
  of ganesha.nfsd. Also replace several open-coded strings with constants.

  Change-Id: If52f5e880191a14fd38e69b70a32b0300dd93a50
  BUG: 1092414
  Signed-off-by: Kaleb S KEITHLEY <kkeithle@redhat.com>
  Reviewed-on: http://review.gluster.org/13738
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>

* statedump: Prevent (null) typestr from being printed (Pranith Kumar K, 2016-04-24; 1 file changed, -6/+2)

  Problem:
  After the commits
  7e44c783ad731856956929f6614bbe045c26ea3a - lock: use spinlock only on multicore systems
  a6aecae2cd8171b8538bfe5d2800bdd157380b85 - nfs: fix lock variable type
  we see a lot of "[global.glusterfs - usage-type (null) memusage]" in
  statedump, because the lock status is not all-zeros after init, and the
  memcmp to check that a datatype is never allocated is invalid.

  Fix:
  Changed the check for whether a datatype has been allocated to be based on
  total_allocs. Also removed setting typestr to NULL in gf_free even when
  num_allocs is 0, because even that leads to a 'null' memusage string being
  printed in statedump.

  BUG: 1329870
  Change-Id: If2b01a557cbdc787625db32e276e06cee3ac46ee
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/14054
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>

* qemu-block: mop up leftover code (Prasanna Kumar Kalever, 2016-04-24; 1 file changed, -1/+1)

  This patch cleans off the code that was left over by '6860968', which
  basically removed qemu-block from the gluster code repo.

  Also update 'bug-1168803-snapd-option-validation-fix.t', which previously
  used 'features.file-snapshot' for checking 'volume set' for some reason.

  Change-Id: I2c4f28e186b74a4ce55d48c0fa7f3f79ca1901b5
  BUG: 1198849
  Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
  Reviewed-on: http://review.gluster.org/13964
  Tested-by: Prasanna Kumar Kalever <pkalever@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>

* io-stats: Fix overwriting of client profile by the bricks (Poornima G, 2016-04-12; 1 file changed, -0/+16)

  Issue:
  When the user executes the following command to generate the client perf
  profile, and the client is on the same node as the bricks, the bricks
  overwrite the profile info written by the clients. Also the xattr
  "trusted.io-stats-dump" gets set on the mount point.

  setfattr -n trusted.io-stats-dump -v /tmp/iostat.log /mnt/fuse

  Fix:
  Unwind from setxattr when the xattr is 'io-stats-dump'.

  Change-Id: Iba0e5df2f25f4ba3b1399ac176a3f8a916ff372e
  BUG: 1322825
  Signed-off-by: Poornima G <pgurusid@redhat.com>
  Reviewed-on: http://review.gluster.org/13872
  NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Ravishankar N <ravishankar@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>

* tests: Disable flush behind so that file will be closed (Pranith Kumar K, 2015-05-03; 1 file changed, -0/+1)

  Change-Id: I97eb8d236448e8783f09b4922aa465a2b1b3979b
  BUG: 1217788
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/10488
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>

* tests: move bug-1168875.t to tests/bugs/snapshot folder (Atin Mukherjee, 2015-04-06; 1 file changed, -96/+0)

  Change-Id: Ie4e960002b8de7e31f91365785c44df3ac04c88d
  BUG: 1178685
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/10131
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>

* snapshot: append timestamp to snapname (Mohammed Rafi KC, 2015-03-10; 1 file changed, -1/+1)

  Append a GMT timestamp to the snapname by default. If the no-timestamp
  flag is given during snapshot creation, the timestamp is not appended to
  the snapname.

  The initial consumer of this feature is Samba's Shadow Copy feature, which
  allows a Windows user to get previous revisions of a file. For this
  feature to work, snapshot names under the .snaps folder (USS) should have
  a timestamp appended in the following format:
  @GMT-YYYY.MM.DD-hh.mm.ss
  PS: https://www.samba.org/samba/docs/man/manpages/vfs_shadow_copy2.8.html

  This format is configurable through the Samba conf file. Due to a
  limitation in Windows directory access, the exact format cannot be used by
  USS; therefore we have modified the format to:
  _GMT-YYYY.MM.DD-hh.mm.ss

  The snapshot scheduling feature also requires a timestamp to be appended
  to the snapshot name, therefore the timestamp is appended during snapshot
  creation itself instead of doing the change in the snapview server.

  More info: https://www.mail-archive.com/gluster-users@gluster.org/msg18895.html

  Change-Id: Idac24670948cf4c0fbe916ea6690e49cbc832d07
  BUG: 1189473
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/9597
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>

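  A brief illustration of the naming behaviour described above; the snapshot
  and volume names and the timestamp shown are made up, and while the
  no-timestamp keyword comes from the commit message, its exact CLI
  placement is an assumption.

      # Default: a GMT timestamp is appended to the requested snapshot name.
      gluster snapshot create snap1 patchy
      # -> resulting snapshot name, e.g.: snap1_GMT-2015.03.10-06.30.12

      # With no-timestamp, the given name is used verbatim.
      gluster snapshot create snap1 patchy no-timestamp
      # -> resulting snapshot name: snap1
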
* uss: disable memory accounting for the snapshot daemon (Raghavendra Bhat, 2015-01-28; 1 file changed, -1/+3)

  * Bring in an option to disable memory accounting for a glusterfs process.
    This reverses the changes done by commit
    7fba3a88f1ced610eca0c23516a1e720d75160cd.
  * Change the key from "memory-accounting" to "no-memory-accounting", as by
    default all glusterfs processes enable memory accounting now. So to
    disable memory accounting for some process, the "no-mem-accounting"
    argument has to be passed.

  Change-Id: I39c7cefb0fe764ea3e48f4e73e1305b084c5f497
  BUG: 1184366
  Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
  Reviewed-on: http://review.gluster.org/9469
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* tests: move all test-cases into component subdirectories (Niels de Vos, 2015-01-06; 18 files changed, -0/+629)

  There are around 300 regression tests, 250 of them in tests/bugs. Running
  a partial set of tests/bugs is not easy because this is a flat directory
  with almost all tests inside. It would be valuable to make partial
  tests/bugs runs easier, and to allow the use of multiple build hosts for a
  single commit, each running a subset of the tests for a quicker result.

  Additional changes made:
  - correct the include path for *.rc shell libraries and *.py utils
  - make the testcases pass checkpatch
  - arequal-checksum in afr/self-heal.t was never executed, now it is
  - include.rc now complains loudly if it fails to find env.rc

  Change-Id: I26ffd067e9853d3be1fd63b2f37d8aa0fd1b4fea
  BUG: 1178685
  Reported-by: Emmanuel Dreyfus <manu@netbsd.org>
  Reported-by: Atin Mukherjee <amukherj@redhat.com>
  URL: http://www.gluster.org/pipermail/gluster-devel/2014-December/043414.html
  Signed-off-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-on: http://review.gluster.org/9353
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
  Reviewed-by: Emmanuel Dreyfus <manu@netbsd.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>