path: root/xlators/mgmt
Commit message (Author, Date; Files, Lines)
...
* quota: fixes issue in quota.conf when setting large number of limits (Sanoj Unnikrishnan, 2017-11-09; 1 file, -12/+33)

  Problem: It was not possible to configure more than 7712 quota
  limits. This was because a stack buffer of size 131072 was used to
  read from the quota.conf file. In the new quota.conf format each gfid
  entry takes 17 bytes (16-byte gfid + 1-byte type), so the buffer size
  was not a multiple of the gfid entry size, and the code treated the
  resulting partial entry as corruption.

  Solution: Make the buffer size a multiple of the gfid entry size.

  Change-Id: Id036225505a47a4f6fa515a572ee7b0c958f30ed
  BUG: 1510940
  Signed-off-by: Sanoj Unnikrishnan <sunnikri@redhat.com>
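  A minimal sketch of the sizing fix in C (sizes taken from the message
  above; the macro names are illustrative, not the actual quota code):

      #include <stdio.h>

      #define GFID_SIZE       16                /* raw gfid bytes */
      #define GFID_ENTRY_SIZE (GFID_SIZE + 1)   /* gfid + 1-byte type */

      int main(void)
      {
          /* 131072 / 17 = 7710 entries; 7710 * 17 = 131070 bytes, so
           * every full read now ends exactly on an entry boundary. */
          size_t buf_size = (131072 / GFID_ENTRY_SIZE) * GFID_ENTRY_SIZE;

          printf("buf_size = %zu (%zu entries per read)\n",
                 buf_size, buf_size / GFID_ENTRY_SIZE);
          return 0;
      }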
* glusterd: display gluster volume status, when quorum type is server (Sanju Rakonde, 2017-11-09; 1 file, -0/+6)

  Problem: When server-quorum-type is server, after restarting glusterd
  on the node which is up, gluster volume status gives incorrect
  information.

  Fix: Check whether the server is blank before adding other keys into
  the dictionary.

  Change-Id: I926ebdffab330ccef844f23f6d6556e137914047
  BUG: 1511339
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* Coverity Issue: PW.INCLUDE_RECURSION in several files (Girjesh Rajoria, 2017-11-09; 12 files, -13/+0)

  Coverity ID: 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417,
  418, 419, 423, 424, 425, 426, 427, 428, 429, 436, 437, 438, 439, 440,
  441, 442, 443

  Issue: Event include_recursion

  Removed redundant, recursive includes from the files.

  Change-Id: I920776b1fa089a2d4917ca722d0075a9239911a7
  BUG: 789278
  Signed-off-by: Girjesh Rajoria <grajoria@redhat.com>
* snapshot: fix coverity issue 'UNREACHABLE' (Sunny Kumar, 2017-11-09; 1 file, -34/+1)

  Problem: The code path at glusterd-snapshot-utils.c:2778 is
  unreachable; it was intended for n-way replication snapshot support.

  Fix: Removed the code for now; this change will be reverted once
  n-way replication is supported in snapshot.

  This patch fixes coverity issue 687 from [1].

  [1] https://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-10-30-9aa574a5/html/

  Change-Id: I163f318e62a7e84d211d9930dedee6587d37b2a0
  BUG: 789278
  Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
* glusterd: restart the brick if quorum status is NOT_APPLICABLE_QUORUM (Atin Mukherjee, 2017-11-09; 1 file, -1/+2)

  If a volume does not have server quorum enabled, and all the glusterd
  instances from other peers in the trusted storage pool are down, then
  on restarting glusterd the brick start is never triggered and the
  brick does not come up.

  Change-Id: If1458e03b50a113f1653db553bb2350d11577539
  BUG: 1509845
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: Coverity Issue FORWARD_NULL fix (Vishal Pandey, 2017-11-08; 1 file, -1/+1)

  Problems (Coverity Issue ID 207):
  a) The "this" pointer is dereferenced without checking it for NULL.
  b) The if condition on line 1750 should check whether any *one* of
     dict, this, or peerinfo is NULL, not whether *all* of them are.

  Fix: Replace the && operator with || so the out label is taken if any
  one of this, dict, or peerinfo is NULL.

  Change-Id: I40057d6cade71d3862c8e491bf4137cf25dda327
  BUG: 789278
  Signed-off-by: Vishal Pandey <vishpandey2014@gmail.com>
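  A sketch of the guard fix (hypothetical stand-in function, not the
  actual glusterd handler):

      #include <stddef.h>

      /* The buggy form,
       *     if (!this && !dict && !peerinfo) goto out;
       * only bails out when ALL three pointers are NULL, so a single
       * NULL pointer still reaches the dereferences below it. The fix
       * uses || so ANY NULL argument takes the error path. */
      static int
      validate_args(void *this, void *dict, void *peerinfo)
      {
          if (!this || !dict || !peerinfo)
              return -1;   /* the real code does 'goto out' here */
          return 0;
      }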
* snapshot: fix coverity issue in glusterd-snapshot.c (Sunny Kumar, 2017-11-08; 1 file, -1/+3)

  This patch fixes coverity issue 87 from [1].

  [1] https://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-10-30-9aa574a5/html/

  Change-Id: I5c9c67b4dd43ebf1a5a83532cbd06178000da145
  BUG: 789278
  Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
* snapshot: lvm cleanup during snapshot remove (Sunny Kumar, 2017-11-08; 1 file, -7/+13)

  Problem: During snapshot remove, LVM cleanup was skipped for
  deactivated snapshots on the assumption that their mount point is not
  present.

  Fix: Do not skip LVM cleanup; check for an active mount point instead.

  Change-Id: I856d2d647c75db8b37b7f430277daef6eb7580a8
  BUG: 1509254
  Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
* xlators/mgmt/glusterd/: Coverity Issue MISSING_BREAK in glusterd_op_stage_rebalance (Girjesh Rajoria, 2017-11-07; 1 file, -1/+2)

  Coverity ID: 297, 298

  Issue: Event unterminated_case

  Added a "Fall through" comment.

  Change-Id: Ied742e64d9741cc974906921dcca80f0b738b7d6
  BUG: 789278
  Signed-off-by: Girjesh Rajoria <grajoria@redhat.com>
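  An illustrative example of the annotation (case values are made up,
  not the actual rebalance command constants):

      #include <stdio.h>

      /* A "Fall through" comment tells Coverity (and human readers)
       * that the missing break is intentional. */
      static void
      stage_cmd(int cmd)
      {
          switch (cmd) {
          case 1:                        /* e.g. a "start force" variant */
              printf("force requested\n");
              /* Fall through */
          case 2:                        /* e.g. plain "start" */
              printf("staging start\n");
              break;
          default:
              break;
          }
      }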
* rpc: optimize fop program lookup (Milind Changire, 2017-11-06; 1 file, -1/+1)

  Ensure that the fop program is the first in the program list, so that
  minimal time is spent searching for the program in the most frequent
  use case.

  Change-Id: I45c3dcdbf39ec90ba39d914432d13a2ace00a5ee
  BUG: 1509647
  Signed-off-by: Milind Changire <mchangir@redhat.com>
* xlators/mgmt/glusterd/: DEADCODE in client_graph_set_rda_options (Girjesh Rajoria, 2017-11-06; 1 file, -1/+1)

  Coverity ID: 137

  Issue: Event dead_error_begin: Execution cannot reach this statement:
  "set_graph_errstr(graph, "in...".

  Assigned the return value of gf_string2bytesize_uint64() to ret.

  Change-Id: Ibef1b1398b233f6442b23282f5ca205c1869a8ee
  BUG: 789278
  Signed-off-by: Girjesh Rajoria <grajoria@redhat.com>
* glusterd: Fix a few coverity errors (Prashanth Pai, 2017-11-06; 6 files, -11/+34)

  Fixes issues 810, 248, 491, 499, 85, 786, 811, 43, and 44 from the
  report at [1].

  [1]: https://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-10-30-9aa574a5/html/

  BUG: 789278
  Change-Id: I27ebae2ffb2256b8eef0757d768cc46e5a942e9f
  Signed-off-by: Prashanth Pai <ppai@redhat.com>
* glusterd: Changed default op-version for some options (ShyamsundarR, 2017-11-06; 2 files, -6/+6)

  As 3.13 is branched at a point that includes the features changed
  with this commit, their minimum supported op-versions should also
  change to 3.13.

  Change-Id: I7ef8eccc13a16f93939c1edbff9508d1e167c5e4
  BUG: 1509412
  Signed-off-by: ShyamsundarR <srangana@redhat.com>
* cluster/ec: create eager-lock option for non-regular files (Xavier Hernandez, 2017-11-05; 1 file, -0/+5)

  A new option is added to allow independent configuration of eager
  locking for regular files and non-regular files.

  Change-Id: I8f80e46d36d8551011132b15c0fac549b7fb1c60
  BUG: 1502610
  Signed-off-by: Xavier Hernandez <jahernan@redhat.com>
* glusterd: coverity fixes (Sanju Rakonde, 2017-11-03; 1 file, -2/+15)

  This patch fixes coverity issues 780, 88, 692, 747, and 357.

  Change-Id: I44b0fe9119853a6d2210507c7247e168d7c3ae0a
  BUG: 789278
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* glusterd/mgmt: coverity fixes (Sanju Rakonde, 2017-11-03; 1 file, -12/+6)

  This patch fixes coverity issues 179 and 59.

  Problem: Control flow never enters the if block because volname is
  always NULL, and the return value of synctask_barrier_init() is
  unchecked.

  Fix: Remove the condition that always evaluates to false, along with
  the unreachable if block it guards. As a result of removing the
  condition, volname becomes an unused variable, so remove its
  declaration as well. Assign the return value of
  synctask_barrier_init() to ret, and goto out if it is nonzero.

  Change-Id: I870aeb020034e97a761e9314f7ace7573f7c520c
  BUG: 789278
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* core: make gf_boolean_t a C99 bool instead of an enum (Jeff Darcy, 2017-11-03; 1 file, -1/+3)

  This reduces the space used from four bytes to one, and allows new
  code to use familiar C99 types/values interoperably with our old
  cruft. It does *not* change current declarations or code; that will
  be left for a separate - much larger - patch.

  Updates: #80
  Change-Id: I5baedd17d3fb05b38f0d8b8bb9dd62824475842e
  Signed-off-by: Jeff Darcy <jdarcy@fb.com>
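  A small sketch of the before/after (the enum values mirror the old
  declaration; sizeof an enum is typically 4 bytes, though strictly
  implementation-defined):

      #include <stdbool.h>
      #include <stdio.h>

      typedef enum { _gf_false = 0, _gf_true = 1 } gf_boolean_enum_t; /* old */
      typedef bool gf_boolean_t;                                      /* new */

      int main(void)
      {
          gf_boolean_t b = true;  /* C99 literals now interoperate */

          printf("enum: %zu byte(s), bool: %zu byte(s)\n",
                 sizeof(gf_boolean_enum_t), sizeof(gf_boolean_t));
          return b ? 0 : 1;
      }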
* glusterd: UNUSED VALUE coverity fix (Sanju Rakonde, 2017-11-02; 1 file, -0/+2)

  ret is overwritten before it is used; adding a line that checks its
  value solves the issue.

  Change-Id: I4ac4ad33bc236d9294da64b19d16b3bc9bb8807b
  BUG: 789278
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* glusterd: PW.INCLUDE_RECURSION coverity fix (Sanju Rakonde, 2017-11-02; 1 file, -1/+0)

  Problem: The #include chain is recursive: glusterd.h includes
  glusterd-pmap.h, which includes glusterd.h again.

  Fix: Remove the inclusion statement.

  Change-Id: I28f2043a73a50ae2c7b670e2dece4a3f3fffbcd6
  BUG: 789278
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
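  A sketch of the corrected header shape (contents elided; only the
  structure is the point):

      /* glusterd-pmap.h */
      #ifndef _GLUSTERD_PMAP_H_
      #define _GLUSTERD_PMAP_H_

      /* Declarations only. The fix is simply NOT to include glusterd.h
       * back from here: glusterd.h already includes glusterd-pmap.h,
       * and that back-edge is what created the cycle Coverity flagged
       * as PW.INCLUDE_RECURSION. */

      #endif /* _GLUSTERD_PMAP_H_ */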
* glusterd/pmap: UNUSED VALUE coverity fix (Sanju Rakonde, 2017-11-02; 1 file, -1/+0)

  Problem: The port value is assigned to p, which is overwritten before
  being used.

  Fix: Remove the assignment line.

  Change-Id: If007167f37b0325c7bfe12f8e9075ca68524e40a
  BUG: 789278
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* glusterd: fix brick restart parallelism (Atin Mukherjee, 2017-11-01; 6 files, -32/+87)

  glusterd's brick restart logic is not always sequential, as there are
  at least three different ways bricks are restarted:

  1. through friend-sm and glusterd_spawn_daemons ()
  2. through friend-sm and handling volume quorum action
  3. through friend handshaking when there is a mismatch on quorum on
     friend import

  In a brick multiplexing setup, glusterd ended up trying to spawn the
  same brick process more than once, because two threads hit
  glusterd_brick_start () within a fraction of a millisecond, and
  glusterd had no grounds to reject either of them since the brick
  start criteria were met in both cases. As a solution, this madness is
  controlled by two different guards: a boolean called start_triggered,
  which indicates a brick start has been triggered and stays true until
  the brick dies or is killed, and a mutex lock to ensure that for a
  particular brick we never enter glusterd_brick_start () more than
  once at the same point in time.

  Change-Id: I292f1e58d6971e111725e1baea1fe98b890b43e2
  BUG: 1506513
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
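  A minimal sketch of the two guards (field and function names are
  illustrative, not the exact glusterd ones):

      #include <pthread.h>
      #include <stdbool.h>

      struct brickinfo {
          bool            start_triggered;  /* true until brick dies/killed */
          pthread_mutex_t restart_mutex;    /* serializes brick starts */
      };

      static int
      brick_start(struct brickinfo *b)
      {
          int ret = 0;

          pthread_mutex_lock(&b->restart_mutex);
          if (!b->start_triggered) {
              /* ret = spawn_brick_process(b); */
              if (ret == 0)
                  b->start_triggered = true;  /* later threads now skip */
          }
          pthread_mutex_unlock(&b->restart_mutex);
          return ret;
      }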
* snapshot: fix coverity issue 'DEADCODE' (Sunny Kumar, 2017-10-31; 1 file, -11/+1)

  Problem: Unreachable code at glusterd-snapshot.c:6718 and at
  glusterd-snapshot.c:7352.

  Fix: Remove the unreachable code. At glusterd-snapshot.c:6718, the if
  condition would require "snap" to be NULL for glusterd_snap_remove()
  to be called, which is not possible there.

  Change-Id: Id865bde7c1474a9b9ed11c0ed614676b4e2443c6
  BUG: 789278
  Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
* glusterd: delete source brick only once in reset-brick commit force (Atin Mukherjee, 2017-10-31; 1 file, -1/+1)

  While stopping the brick that is to be reset and replaced, the
  delete_brick flag was passed as true, which caused glusterd to free
  up the source brick before the actual operation. Commit force then
  failed because it could not find the source brickinfo.

  Change-Id: I1aa7508eff7cc9c9b5d6f5163f3bb92736d6df44
  BUG: 1507466
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: clean up portmap on brick disconnect (Atin Mukherjee, 2017-10-31; 4 files, -11/+46)

  GlusterD's portmap entry for a brick is cleaned up when a
  PMAP_SIGNOUT event is initiated by the brick process at shutdown. But
  if the brick process crashes or is killed with SIGKILL, this event is
  never initiated and glusterd ends up with a stale port. Since
  GlusterD's portmap traversal happens both ways (forward for
  allocation, backward for registry search), glusterd might end up
  running with a stale port for a brick, which eventually causes
  clients to fail to connect to the bricks.

  The solution is to clean up the port entry when the process is down,
  as part of the brick disconnect event. Although this makes handling
  the PMAP_SIGNOUT event redundant in most cases, it is kept as a
  safeguard against glusterd getting into stale-port issues.

  Change-Id: I04c5be6d11e772ee4de16caf56dbb37d5c944303
  BUG: 1503246
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
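  A sketch of the disconnect-side cleanup; every name below is an
  illustrative stand-in for the logic described above, not the actual
  glusterd API:

      #include <stdbool.h>

      struct brick { int port; int pid; };

      static bool brick_process_is_alive(const struct brick *b) { return b->pid > 0; }
      static void pmap_entry_remove(int port) { (void)port; }

      static void
      on_brick_disconnect(struct brick *brick)
      {
          /* Without this, a SIGKILL'd brick never sends PMAP_SIGNOUT
           * and its port entry goes stale. */
          if (!brick_process_is_alive(brick))
              pmap_entry_remove(brick->port);
      }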
* glusterd: persist brickinfo's port change into glusterd's store (Gaurav Yadav, 2017-10-31; 5 files, -10/+62)

  Problem: Consider a case where a node reboot is performed and prior
  to the reboot the brick was listening on port 49153. Post reboot,
  glusterd assigned 49152 to the brick and started the brick process,
  but the new port was never persisted. Now when glusterd restarts, it
  always reads the port from its persisted store, i.e. 49153, while the
  pmap sign-in happens with the correct port, i.e. 49152.

  Fix: Make sure that when glusterd_brick_start is called,
  glusterd_store_volinfo is eventually invoked.

  Change-Id: Ic0efbd48c51d39729ed951a42922d0e59f7115a1
  BUG: 1506589
  Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
* Make sure attach_brick returns a proper error code (Michael Scherer, 2017-10-26; 1 file, -1/+1)

  Coverity warns about "Using uninitialized value "ret".", which is
  indeed the case. If we can't attach after 15 tries, attach_brick
  returns an uninitialized return code, which is likely hard to
  trigger, but bad. I use -1 to follow the UNIX convention; no specific
  error code is tested for by the calling code.

  Change-Id: I43ad60d25ab169f9cea0db600805ced7f77c37ba
  BUG: 789278
  Signed-off-by: Michael Scherer <misc@redhat.com>
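  A sketch of the shape of the fix (the retry body is elided and the
  function name is illustrative):

      /* Give 'ret' a defined value up front so the "ran out of
       * retries" exit path returns -1 instead of garbage. */
      static int
      attach_brick_sketch(void)
      {
          int ret = -1;   /* previously uninitialized */
          int tries;

          for (tries = 0; tries < 15; tries++) {
              /* ret = try_attach(...); if (ret == 0) break; */
          }
          return ret;     /* still -1 if all 15 tries failed */
      }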
* cluster/ec: Allow parallel writes in EC if possible (Pranith Kumar K, 2017-10-24; 1 file, -0/+6)

  Problem: EC at the moment sends one modification fop after another,
  so if some of the disks become slow for a while, the wait time for
  the writes queued behind them becomes really bad.

  Fix: Allow parallel writes when possible. For this we need three
  changes:
  1) Each fop now has range parameters it will be updating.
  2) Xattrop is changed to handle parallel xattrop requests, where some
     would be modifying just the dirty xattr.
  3) Fops that refer to size now take locks and update the locks.

  Fixes #251
  Change-Id: Ibc3c15372f91bbd6fb617f0d99399b3149fa64b2
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
* gluster: IPv6 single stack support (Kevin Vigor, 2017-10-24; 1 file, -0/+4)

  Summary:
  - This diff changes all locations in the code to prefer the inet6
    family instead of inet. This allows GlusterFS to operate via IPv6
    instead of IPv4 for all internal operations while still being able
    to serve (FUSE or NFS) clients via IPv4.
  - The changes apply to NFS as well.
  - This diff ports D1892990, D1897341 & D1896522 to the 3.8 branch.

  Test Plan: Prove tests!

  Reviewers: dph, rwareing

  Signed-off-by: Shreyas Siravara <sshreyas@fb.com>
  Change-Id: I34fdaaeb33c194782255625e00616faf75d60c33
  BUG: 1406898
  Reviewed-on-3.8-fb: http://review.gluster.org/16059
  Reviewed-by: Shreyas Siravara <sshreyas@fb.com>
  Tested-by: Shreyas Siravara <sshreyas@fb.com>
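  A minimal resolver sketch of the "prefer inet6" idea, assuming a
  plain getaddrinfo-based lookup rather than the actual GlusterFS
  resolver code (24007 is glusterd's well-known port; AI_V4MAPPED lets
  IPv4-only peers still be reached through v4-mapped addresses):

      #include <sys/types.h>
      #include <sys/socket.h>
      #include <netdb.h>
      #include <stdio.h>
      #include <string.h>

      int main(void)
      {
          struct addrinfo hints, *res = NULL;

          memset(&hints, 0, sizeof(hints));
          hints.ai_family   = AF_INET6;                  /* prefer inet6 */
          hints.ai_flags    = AI_V4MAPPED | AI_ADDRCONFIG;
          hints.ai_socktype = SOCK_STREAM;

          if (getaddrinfo("localhost", "24007", &hints, &res) == 0) {
              printf("resolved, ai_family = %d\n", res->ai_family);
              freeaddrinfo(res);
          }
          return 0;
      }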
* snapshot: Issue with other processes accessing the mounted brick (Sunny Kumar, 2017-10-23; 5 files, -44/+218)

  Added code to unmount the activated snapshot brick during snapshot
  deactivation, which makes sense, as the mount point for deactivated
  bricks should not exist.

  Removed code for mounting a newly created snapshot, as newly created
  snapshots should not be mounted until they are activated.

  Added code for mount point creation and snapshot mount during
  snapshot activation.

  Added validation during glusterd init to mount only those snapshots
  whose status is either STARTED or RESTORED.

  During snapshot restore, the mount point for a stopped snap should
  exist, as it is required to set the extended attribute.

  During handshake, after getting updates from a friend, the mount
  point for an activated snapshot should exist, and should not for a
  deactivated one.

  While getting snap status we should show relevant information for
  deactivated snapshots; after this patch, the 'gluster snap status'
  command will show output like:

  Snap Name : snap1
  Snap UUID : snap-uuid
  Brick Path : server1:/run/gluster/snaps/snap-vol-name/brick
  Volume Group : N/A (Deactivated Snapshot)
  Brick Running : No
  Brick PID : N/A
  Data Percentage : N/A
  LV Size : N/A

  Fixes: #276
  Change-Id: I65783488e35fac43632615ce1b8ff7b8e84834dc
  BUG: 1482023
  Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
* glusterd: documenting server.allow-insecure (Sanju Rakonde, 2017-10-18; 1 file, -1/+1)

  Problem: "server.allow-insecure" is invisible in gluster volume set
  help.

  Fix: "server.allow-insecure" is defined with type NO_DOC; changing it
  to type DOC solves the problem.

  Change-Id: I327f1e4c1684ff846deb8b7df07d4d8a09073274
  BUG: 1503424
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* gfproxyd: Let glusterd manage gfproxy daemon (Poornima G, 2017-10-18; 15 files, -75/+849)

  Updates: #242
  BUG: 1428063
  Change-Id: Iaaf2edf99b2ecc75f6d30762c752a6d445c1c826
  Signed-off-by: Poornima G <pgurusid@redhat.com>
* glusterd: introduce timer in mgmt_v3_lock (Gaurav Yadav, 2017-10-17; 4 files, -19/+244)

  Problem: In a multinode environment, if two op-sm transactions are
  initiated on one of the receiver nodes at the same time, glusterd may
  end up in a stale lock.

  Solution: During mgmt_v3_lock, register with gf_timer_call_after a
  callback which releases the lock after a certain period of time.

  Change-Id: I16cc2e5186a2e8a5e35eca2468b031811e093843
  BUG: 1499004
  Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
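  Conceptually the registration looks like the fragment below.
  gf_timer_call_after is the libglusterfs timer API named in the
  message, but the surrounding field and callback names are assumptions
  and the fragment is not compilable standalone:

      /* When the mgmt_v3 lock is granted, arm a one-shot timer; if the
       * unlock never arrives, the callback fires and releases the
       * stale lock. */
      struct timespec delta = { .tv_sec = lock_timeout, .tv_nsec = 0 };

      lock_obj->timer = gf_timer_call_after(this->ctx, delta,
                                            mgmt_v3_unlock_timer_cbk,
                                            lock_obj);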
* glusterd: Marking all the brick status as stopped when a process goes down in brick multiplexing (Sanju Rakonde, 2017-10-12; 1 file, -1/+58)

  In a brick multiplexing environment, if a brick process goes down
  (i.e., if we kill it with SIGKILL), only the status of the brick for
  which the process first came up changes to stopped; all other brick
  statuses remain started. This happens because the process was killed
  abruptly with SIGKILL, so the signal handler was not invoked and no
  further cleanup was triggered.

  When we then try to start the volume using force, it fails with
  "Request timed out": since all the brickinfo->status fields are still
  in started state, we wait for one of the brick processes to come up,
  which is never going to happen because the brick process was killed.

  To resolve this, in the disconnect event we check all processes to
  find which one the disconnected brick belongs to. Once we have the
  process, we call glusterd_mark_bricks_stopped_by_proc(), passing the
  brick_proc_t object as an argument. From glusterd_brick_proc_t we can
  get all the bricks attached to that process, but those are
  duplicates; to get the original brickinfo we read the volinfo from
  each brick, since volinfo holds the original brickinfo copies. We
  then change brickinfo->status to stopped for all of those bricks.

  Change-Id: Ifb9054b3ee081ef56b39b2903ae686984fe827e7
  BUG: 1499509
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* gfproxy: Introduce new server-side daemon called GFProxy (Shreyas Siravara, 2017-10-10; 7 files, -21/+360)

  Summary: Adds a new server-side daemon called gfproxyd and a new FUSE
  client called gfproxy-client.

  Updates: #242
  BUG: 1428063
  Change-Id: I83210098d3a381922bc64fed1110ae1b76e6519f
  Tested-by: Shreyas Siravara <sshreyas@fb.com>
  Reviewed-by: Kevin Vigor <kvigor@fb.com>
  Signed-off-by: Shreyas Siravara <sshreyas@fb.com>
  Signed-off-by: Poornima G <pgurusid@redhat.com>
* glusterd: fix client io-threads option for replicate volumes (Ravishankar N, 2017-10-09; 6 files, -34/+92)

  Problem: Commit ff075a3d6f9b142911d25c27fd209838782bfff0 disabled
  loading client-io-threads for replicate volumes (it was set to on by
  default in commit e068c1997314046658dd502e9118dab32decf879) due to
  performance issues, but in doing so inadvertently failed to load the
  xlator even if the user explicitly enabled the option with the volume
  set command, despite returning success for the volume set.

  Fix: Modify the check in perfxl_option_handler() and add checks in
  the volume create/add-brick/remove-brick code paths, tying it all to
  GD_OP_VERSION_3_12_2.

  Change-Id: Ib612973a999a7da818cc926f5c2601b1f0794fcf
  BUG: 1498570
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* cli/afr: gluster volume heal info "healed" command output is not appropriate (Mohit Agrawal, 2017-10-04; 1 file, -0/+13)

  Problem: The "gluster volume heal info [healed] [heal-failed]"
  command output on the terminal is not appropriate when any brick of
  the volume is down.

  Solution: To make the message more appropriate, change the condition
  in the function "gd_syncop_mgmt_brick_op".

  Test: To verify the fix, the following procedure was used:
  1) Create a 2*3 distribute-replicate volume
  2) Set self-heal daemon off
  3) Kill two bricks (3, 6)
  4) Create some files on the mount point
  5) Bring bricks 3, 6 up
  6) Kill the other two bricks (2 and 4)
  7) Turn the self-heal daemon on
  8) Run "gluster v heal <vol-name>"

  Note: After applying the patch, the options (healed | heal-failed)
  will be deprecated from the command line.

  BUG: 1388509
  Change-Id: I229c320c9caeb2525c76b78b44a53a64b088545a
  Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
* glusterd/snapshot: Buffer Size Warning (Sanju Rakonde, 2017-10-03; 1 file, -1/+1)

  new_brickinfo->mnt_opts is allocated with calloc, so it is already
  zeroed out at allocation; we need not add a null character at the
  end. We pass one less than the maximum size to strncpy to make sure
  the destination string is terminated.

  Change-Id: I463dddd2171fb39a509bb75ffcc074d5b1cf7d62
  BUG: 789278
  Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
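  A sketch of the pattern (buffer size and names are illustrative, not
  the actual glusterd ones):

      #include <stdlib.h>
      #include <string.h>

      int main(void)
      {
          const char *opts = "rw,noatime";
          char *mnt_opts = calloc(1, 256);   /* all 256 bytes are '\0' */

          if (!mnt_opts)
              return 1;
          /* Copy at most size-1 bytes: byte 255 is never touched, so
           * the string is always NUL-terminated. */
          strncpy(mnt_opts, opts, 256 - 1);
          free(mnt_opts);
          return 0;
      }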
* glusterd: FORWARD_NULL coverity fix (Akarsha Rai, 2017-09-26; 1 file, -11/+22)

  Problem: The pointer used to print the xlator name could be NULL.

  Solution: Updated the code to use the xlator name as appropriate.

  BUG: 789278
  Change-Id: I26927ef1f33f362e17c104684d7f722a643c7f97
  Signed-off-by: Akarsha Rai <akrai@redhat.com>
* Coverity Issue Fix: IDENTICAL_BRANCHES (Girjesh Rajoria, 2017-09-26; 1 file, -2/+0)

  Issue: Event identical_branches: The same code is executed whether
  the condition "ret" is true or false, because the code in the if-then
  branch and after the if statement is identical.

  Function: glusterd_print_gsync_status_by_vol

  Fix: Removed the if and goto statements.

  Change-Id: I966d793c9f3b65487acfb07083a4039caf593105
  BUG: 789278
  Signed-off-by: Girjesh Rajoria <grajoria@redhat.com>
* glusterd: retrieve uuid under mutex lock (Atin Mukherjee, 2017-09-25; 1 file, -7/+15)

  In a multi-node cluster, if one of the glusterd instances goes down
  and comes back, there can be a race where glusterd needs to retrieve
  its uuid (glusterd_retrieve_uuid) while, as part of receiving a
  friend handshake from another peer, it simultaneously iterates over
  the volume information received from the remote node and checks
  whether a brick is local by calling MY_UUID, which in turn calls
  glusterd_retrieve_uuid. The same applies to the
  glusterd_store_global_info () function. This can end up with glusterd
  generating two UUID files in /var/lib/glusterd for the same node.

  The following log snippet confirms the above:

  [2017-09-01 03:09:24.458030] I [glusterd.c:146:glusterd_uuid_init] 0-management: retrieved UUID: fd46a495-7e33-468f-88f6-63c815fac640  // thread 1 retrieves the uuid from glusterd.info
  [2017-09-01 03:09:24.458034] E [glusterd-store.c:2109:glusterd_retrieve_uuid] 0-: No previous uuid is present  // thread 2 cannot retrieve the uuid, because in thread 1 the file pointer has already reached EOF
  [2017-09-01 03:09:24.458041] E [glusterd-store.c:2117:glusterd_retrieve_uuid] 0-: Returning -1
  [2017-09-01 03:09:24.458076] I [glusterd.c:176:glusterd_uuid_generate_save] 0-management: generated UUID: 190bb292-a296-4125-96da-42b247511cc4
  [2017-09-01 03:09:24.458129] E [store.c:367:gf_store_save_value] 0-: Able to store key: UUID,value: 190bb292-a296-4125-96da-42b247511cc4

  The fix is to retrieve the uuid under a mutex lock.

  Credits: cynthia.zhou@nokia-sbell.com
  Change-Id: Ib9a5e159c3febf2aef13aa5e38f0a51fe409dadb
  BUG: 1493967
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
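  A sketch of the serialization (names are illustrative): both the init
  path and the friend-handshake path funnel through one mutex, so only
  one thread at a time reads, or generates and saves, the node UUID.

      #include <pthread.h>

      static pthread_mutex_t uuid_mutex = PTHREAD_MUTEX_INITIALIZER;
      static char node_uuid[37];   /* canonical UUID string + NUL */

      static const char *
      retrieve_uuid_locked(void)
      {
          pthread_mutex_lock(&uuid_mutex);
          if (node_uuid[0] == '\0') {
              /* read /var/lib/glusterd/glusterd.info here; generate
               * and persist a new UUID only on a genuine miss */
          }
          pthread_mutex_unlock(&uuid_mutex);
          return node_uuid;
      }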
* glusterd: disallow replace brick for dist only volumes (Atin Mukherjee, 2017-09-19; 1 file, -1/+11)

  Allowing replace-brick on distribute-only volumes will lead to data
  loss. This patch makes replace brick commit force fail if a volume is
  distribute-only.

  Also removing tests/basic/pump.t, as it is of no use per the
  discussion in
  http://lists.gluster.org/pipermail/gluster-devel/2017-September/053652.html

  Change-Id: Iabb0c16f865f3fc361b64a19bfcf0c0fbb5c2682
  BUG: 1489432
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: https://review.gluster.org/18226
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: N Balachandran <nbalacha@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* heal: New feature heal info summary to list the status of brick and count of entries to be healed (Mohamed Ashiq Liyazudeen, 2017-09-15; 1 file, -0/+1)

  Command output:

  Brick 192.168.2.8:/brick/1
  Status: Connected
  Total Number of entries: 363
  Number of entries in heal pending: 362
  Number of entries in split-brain: 0
  Number of entries possibly healing: 1

  <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
  <cliOutput>
    <healInfo>
      <bricks>
        <brick hostUuid="9105dd4b-eca8-4fdb-85b2-b81cdf77eda3">
          <name>192.168.2.8:/brick/1</name>
          <status>Connected</status>
          <totalNumberOfEntries>363</totalNumberOfEntries>
          <numberOfEntriesInHealPending>362</numberOfEntriesInHealPending>
          <numberOfEntriesInSplitBrain>0</numberOfEntriesInSplitBrain>
          <numberOfEntriesPossiblyHealing>1</numberOfEntriesPossiblyHealing>
        </brick>
      </bricks>
    </healInfo>
    <opRet>0</opRet>
    <opErrno>0</opErrno>
    <opErrstr/>
  </cliOutput>

  Change-Id: I40cb6f77a14131c9e41b292f4901b41a228863d7
  BUG: 1261463
  Signed-off-by: Mohamed Ashiq Liyazudeen <mliyazud@redhat.com>
  Reviewed-on: https://review.gluster.org/12154
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Tested-by: Karthik U S <ksubrahm@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: fix invalid memory reference returned (Xavier Hernandez, 2017-09-13; 1 file, -2/+9)

  Change-Id: I0823c7b33060b48040c1d86ad346a5f6e15bc190
  BUG: 1490897
  Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
  Reviewed-on: https://review.gluster.org/18263
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
  Reviewed-by: Gaurav Yadav <gyadav@redhat.com>
* Command to identify client process (hari gowtham, 2017-09-06; 4 files, -4/+229)

  command: gluster volume status <volname/all> client-list

  output:
  Client connections for volume v1
  Name          count
  -----         ------
  fuse          2
  tierd         1
  total clients for volume v1 : 3
  -----------------------------------------------------------------
  Client connections for volume v2
  Name          count
  -----         ------
  tierd         1
  fuse.gsync    1
  total clients for volume v2 : 2
  -----------------------------------------------------------------

  Updates: #178
  Change-Id: I0ff2579d6adf57cc0d3bd0161a2ec6ac6c4747c0
  Signed-off-by: hari gowtham <hgowtham@redhat.com>
  Reviewed-on: https://review.gluster.org/18095
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Tested-by: hari gowtham <hari.gowtham005@gmail.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* snapshot: clang compile warning (Sunny Kumar, 2017-09-04; 1 file, -1/+1)

  Warning type: NO_EFFECT

  xlators/mgmt/glusterd/src/glusterd-snapshot.c:4004: warning:
  comparison of constant -1 with expression of type 'gf_boolean_t'
  (aka 'enum _gf_boolean') is always false.

  Solution: "timestamp" was formerly declared as gf_boolean_t; it is
  now changed to int.

  Change-Id: Ie461537f06fe2971e2b09a428a22331808c41a13
  BUG: 1335251
  Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
  Reviewed-on: https://review.gluster.org/18062
  Tested-by: Sunny Kumar
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
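  A small illustration of why the warning fires (the surrounding logic
  is made up; only the type change mirrors the fix):

      #include <time.h>

      typedef enum { _gf_false = 0, _gf_true = 1 } gf_boolean_t;

      int main(void)
      {
          /* With 'gf_boolean_t timestamp', the test below can never be
           * true: the enum only holds 0 or 1, hence clang's warning.
           * Declaring it as int makes the -1 sentinel representable. */
          int timestamp = -1;   /* was: gf_boolean_t timestamp */

          if (timestamp == -1)
              timestamp = (int)time(NULL);
          return timestamp != 0 ? 0 : 1;
      }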
* glusterd: spelling errors reported by Debian maintainer (Kaleb S. KEITHLEY, 2017-09-04; 2 files, -4/+4)

  Reported-by: "Patrick Matthäi" <pmatthaei@debian.org>
  Change-Id: I0dd6b7d88ddf3c98e8083b75f8dd848babcfd30a
  BUG: 1487840
  Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
  Reviewed-on: https://review.gluster.org/18185
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* debug/delay-gen: Implement delay-generation feature (Pranith Kumar K, 2017-08-31; 3 files, -26/+42)

  Background: I was working on a customer issue where the disks were
  sometimes responding only after seconds. It was becoming very
  difficult to recreate the issues in our labs, so I had to come up
  with this feature.

  Requirements: We need an xlator which can delay x% of ops for y
  microseconds, and we should be able to enable delays for specific
  fops. This feature is modeled after error-gen; most of the logic is
  borrowed from that xlator. This is a minimal implementation that
  satisfied the requirements I had; in future, with more requirements
  and further understanding of the problem, we can improve upon it.

  Here are the commands and what they do:

  Enable delay-gen (similar to how err-gen is enabled on the brick
  side):
  - gluster volume set <volname> delay-gen posix

  Set the percentage of fops that need to be delayed (default is 10%):
  - gluster volume set <volname> delay-gen.delay-percentage 50

  Set the delay in microseconds (default is 100000):
  - gluster volume set <volname> delay-gen.delay-duration 500000

  Set comma-separated fops to be delayed (default is all fops):
  - gluster v set r2 delay-gen.enable read,write

  Fixes #257
  Change-Id: Ib547bd39cc024c9cdb63754d21e3aa62fc9d6473
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: https://review.gluster.org/17591
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
* feature/posix: Enabled gfid2path by default (Kotresh HR, 2017-08-29; 1 file, -0/+1)

  Enable the gfid2path feature by default. Basic performance tests were
  carried out and do not show significant degradation; the results are
  posted in the issue.

  Updates: #139
  Change-Id: I5f1949a608d0827018ef9d548d5d69f3bb7744fd
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-on: https://review.gluster.org/17950
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
* libglusterfs: Add new fields to volume_options struct (Kaushal M, 2017-08-29; 4 files, -182/+182)

  The new fields are required to enable equivalent volume set and
  volgen features, and some additional features, in GD2. GD2 does not
  use a hard-coded volume options map like GD1, but builds one by
  reading the options tables directly from the xlators.

  The new fields introduced into the volume options struct include the
  following:
  - op-version  - version(s) the option was introduced in
  - deprecated  - version(s) the option was deprecated in
  - flags       - flags for the option (settable, client, global,
                  force, doc etc.)
  - tags        - descriptive tags that apply to this option, can be
                  used to group options
  - validate_fn - custom option validation function

  Enums for the currently available flags have also been defined. To
  avoid naming clashes, the flag enums in GD1 have been renamed.

  Updates #302
  Change-Id: Ic7e08aef9e051beb47e8dc17d7f7be211aed308a
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: https://review.gluster.org/18059
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
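  A hedged sketch of the extended struct shape; the field names follow
  the list above, but the exact types, array bounds, and flag names in
  libglusterfs are assumptions:

      typedef int (*opt_validate_fn_t)(const char *key, const char *value,
                                       char **op_errstr);

      struct volume_options_sketch {
          char              *key;
          char              *default_value;
          /* new fields */
          unsigned int       op_version[2];  /* version(s) introduced in */
          unsigned int       deprecated[2];  /* version(s) deprecated in */
          unsigned int       flags;          /* e.g. SETTABLE | CLIENT | DOC */
          char              *tags[4];        /* descriptive grouping tags */
          opt_validate_fn_t  validate_fn;    /* custom validation hook */
      };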
* glusterd: disable rpc_clnt_t after rebalance process disconnection (Milind Changire, 2017-08-24; 1 file, -1/+1)

  Problem: glusterd continues to connect to the rebalance process even
  after the socket connection has disconnected.

  Solution: rpc_clnt_disable() disables the rpc_clnt_t object, disarms
  all relevant timers, and drops refs to the rpc_clnt_t object and the
  transport as well.

  Change-Id: I981d6f1cc0087037f1927062c2770a4d5026a619
  BUG: 1484225
  Signed-off-by: Milind Changire <mchangir@redhat.com>
  Reviewed-on: https://review.gluster.org/18114
  Reviewed-by: MOHIT AGRAWAL <moagrawa@redhat.com>
  Tested-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Smoke: Gluster Build System <jenkins@build.gluster.org>
  CentOS-regression: Gluster Build System <jenkins@build.gluster.org>