path: root/xlators/mgmt/glusterd/src/glusterd-utils.c
Commit log (most recent first). Each entry shows the commit subject (author, commit date, files changed, lines -removed/+added), followed by the full commit message.
* glusterd: volume get fixes for client-io-threads & quorum-type (Ravishankar N, 2018-03-16, 1 file, -1/+26)
    1. If a replica volume created on glusterfs-3.8 was upgraded to glusterfs-3.12, `gluster vol get volname client-io-threads` displayed 'on' even though it wasn't enabled and the xlator wasn't loaded on the client graph. This was due to removing certain checks in glusterd_get_default_val_for_volopt as a part of commit 47604fad4c2a3951077e41e0c007ceb979bb2c24. Fix it.
    2. Also, as a part of the op-version bump-up, client-io-threads was being loaded on the clients during volfile regeneration. Prevent it.
    3. AFR assumes quorum-type to be auto in newly created replica 3 (odd replica in general) volumes, but `gluster vol get quorum-type` displays 'none'. Fix it.
    Change-Id: I19e586361ed1065c70fb378533d3b4dac1095df9
    BUG: 1552404
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
    (cherry picked from commit bd2c45fe3180fe36b042d5eabd348b6eaeb8d3e2)
* glusterd: import volumes in separate synctask (Atin Mukherjee, 2018-02-21, 1 file, -24/+142)
    With brick multiplexing, to attach a brick to an existing brick process the prerequisite is for the compatible brick to finish its initialization and portmap sign-in, so the thread may have to sleep and context-switch the synctask to allow the brick process to communicate with glusterd. In the normal code path this works fine, as glusterd_restart_bricks () is launched through a separate synctask. If there is a volume mismatch when glusterd restarts, glusterd_import_friend_volume is invoked and it then tries to call glusterd_start_bricks () from the main thread, which may land in the same situation. Since this is not done through a separate synctask, the first brick will never get its turn to finish all of its handshaking, and as a consequence all the bricks will fail to get attached to it.
    Solution: Execute import volume and glusterd restart bricks in a separate synctask. Importing snaps also had to be done through a synctask, as importing a snapshot depends on the parent volume being available.
    >mainline patch : https://review.gluster.org/#/c/19357/ https://review.gluster.org/#/c/19536/ https://review.gluster.org/#/c/19539/
    Change-Id: I290b244d456afcc9b913ab30be4af040d340428c
    BUG: 1543706
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    (cherry picked from commit cb0339f9229fc5c05d7ef4cfcc4ca9c4569f3755)
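    The fix hinges on synctask: work launched there can yield while the first multiplexed brick finishes its handshake. Below is a minimal sketch of that pattern, assuming libglusterfs's syncop API (synctask_new) and using illustrative helper names rather than the actual glusterd functions.

        #include <glusterfs/syncop.h>

        static int
        import_and_restart (void *opaque)
        {
            /* runs inside the synctask: it may sleep/yield here while the
             * first multiplexed brick finishes its portmap sign-in, without
             * blocking glusterd's main thread */
            return 0;
        }

        static int
        import_done (int ret, call_frame_t *frame, void *opaque)
        {
            return ret;
        }

        int
        launch_import_synctask (xlator_t *this)
        {
            /* schedule the task on the process-wide sync environment */
            return synctask_new (this->ctx->env, import_and_restart,
                                 import_done, NULL, this);
        }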
* glusterd: optimize glusterd import volumes code path (Atin Mukherjee, 2018-01-31, 1 file, -5/+7)
    In case a version mismatch is detected for one of the volumes, glusterd was ending up updating all the volumes, which is overkill.
    >mainline patch : https://review.gluster.org/#/c/19358/
    Change-Id: I6df792db391ce3a1697cfa9260f7dbc3f59aa62d
    BUG: 1540554
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    (cherry picked from commit bb34b07fd2ec5e6c3eed4fe0cdf33479dbf5127b)
* glusterd: connect to an existing brick process when quorum status is NOT_APPLICABLE_QUORUM (Atin Mukherjee, 2018-01-05, 1 file, -4/+9)
    First of all, this patch reverts commit 635c1c3 as the same is causing a regression with bricks not coming up on time when a node is rebooted. This patch tries to fix the problem in a different way by just trying to connect to an existing running brick when the quorum status is not applicable.
    Change-Id: I0efb5901832824b1c15dcac529bffac85173e097
    BUG: 1509845
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
* snapshot : after brick reset/replace snapshot creation fails (Sunny Kumar, 2017-12-29, 1 file, -9/+0)
    Problem: after brick reset/replace, snapshot creation fails.
    Solution: During brick reset/replace, when we validate and aggregate dictionary data from another node, it was rewriting the 'mount_dir' value to NULL, which is critical for snapshot creation.
    Change-Id: Iabefbfcef7d8ac4cbd2a241e821c0e51492c093e
    BUG: 1512451
    Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
* rchecksum/fips: Replace MD5 usage to enable fips support (Kotresh HR, 2017-12-21, 1 file, -1/+1)
    rchecksum uses MD5, which is not FIPS compliant. Hence sha256 is used instead.
    Updates: #230
    Change-Id: I7fad016fcc2a9900395d0da919cf5ba996ec5278
    Signed-off-by: Kotresh HR <khiremat@redhat.com>
* fips: Replace md5sum usage to enable fips support (Kotresh HR, 2017-12-19, 1 file, -3/+5)
    md5sum is not FIPS compliant. xxhash64 is used instead of md5sum for socket file generation in glusterd and changelog to enable FIPS support.
    NOTE: md5sum is a 128-bit hash; the xxhash used here is 64-bit.
    Updates: #230
    Change-Id: I1bf2ea05905b9151cd29fa951f903685ab0dc84c
    Signed-off-by: Kotresh HR <khiremat@redhat.com>
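    For illustration, a hedged sketch of deriving a socket-file name from a 64-bit xxHash of the brick path using the upstream xxHash API; the actual glusterd helper and path layout differ.

        #include <stdio.h>
        #include <string.h>
        #include <xxhash.h>

        /* Hash the brick path with the non-cryptographic (FIPS-unaffected)
         * XXH64 and use the digest to name the brick's unix socket file. */
        static void
        build_socket_path (const char *brick_path, char *sockpath, size_t len)
        {
            unsigned long long hash = XXH64 (brick_path, strlen (brick_path), 0);

            snprintf (sockpath, len, "/var/run/gluster/%016llx.socket", hash);
        }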
* glusterd: Fix buffer overflow in glusterd_get_volopt_content (Xavier Hernandez, 2017-12-15, 1 file, -14/+45)
    The size of the description of each option in gluster has become greater than the hardcoded maximum size, causing a buffer overflow when requesting a list of all options. This patch uses dynamic memory to store all the information.
    Change-Id: I115cb9b55fc440434638bf468a50db055cdf271d
    BUG: 1526402
    Signed-off-by: Xavier Hernandez <jahernan@redhat.com>
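    A small sketch of the general approach, with illustrative names rather than the glusterd ones: size the output buffer from the actual option text instead of a fixed array so a long description can never overflow it.

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        /* Allocate exactly as much as the formatted text needs; caller frees. */
        static char *
        format_option (const char *name, const char *def_val, const char *desc)
        {
            size_t len = strlen (name) + strlen (def_val) + strlen (desc) + 64;
            char  *buf = malloc (len);

            if (!buf)
                return NULL;
            snprintf (buf, len,
                      "Option: %s\nDefault Value: %s\nDescription: %s\n",
                      name, def_val, desc);
            return buf;
        }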
* glusterd : Fix glusterd mem leaks (Mohit Agrawal, 2017-12-12, 1 file, -0/+34)
    Problem: glusterd eats a huge amount of memory during volume set/stop/start.
    Solution: At the time of comparing graph topology, a graph is created and key values are populated in the dictionary; after the graph comparison finishes, the new graph is destroyed. While constructing a graph no reference is taken (for server xlators the reference is taken in server_setvolume), so in glusterd the reference is taken after preparing the new graph that is created to compare graph topology.
    BUG: 1520245
    Change-Id: I573133d57771b7dc431a04422c5001a06b7dda9a
    Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
* glusterd: Allow volume stop to succeed if certain processes are already dead (Shreyas Siravara, 2017-12-08, 1 file, -4/+14)
    Summary:
    - Sometimes the process that glusterd is trying to kill is already dead.
    - In that case, if it can't find the pid, it should just continue on and not fail the entire operation.
    Change-Id: Ic96952a8d31927446f648830ede6ccd82512663f
    BUG: 1522968
    Reviewed-on: https://review.gluster.org/18234
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Shreyas Siravara <sshreyas@fb.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Signed-off-by: Ana M. Neri <amnerip@fb.com>
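    A sketch of the idea, assuming the stop path signals the brick by pid: treat ESRCH from kill() as "already gone" rather than a failure of the whole operation.

        #include <errno.h>
        #include <signal.h>
        #include <sys/types.h>

        /* Returns 0 if the process was signalled or is already gone. */
        static int
        stop_brick_process (pid_t pid)
        {
            if (kill (pid, SIGTERM) == -1) {
                if (errno == ESRCH)
                    return 0;   /* no such process: nothing left to stop */
                return -1;      /* a real error, e.g. EPERM */
            }
            return 0;
        }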
* glusterd: Free up svc->conn on volume delete (Atin Mukherjee, 2017-12-07, 1 file, -0/+5)
    Daemons like snapd, tierd and gfproxyd are maintained on a per-volume basis, and on a volume delete we should destroy the rpc connection established for them.
    Change-Id: Id1440e39da07b990fdb9b207df18da04b1ca8014
    BUG: 1522775
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
* Tier: Stop tierd for detach start (hari gowtham, 2017-12-01, 1 file, -0/+10)
    Problem: tierd was stopped only after detach commit. This makes the detach take a longer time: the detach demotes the files to the cold brick, and if the promotion frequency is hit, tierd starts to promote files to the hot tier again.
    Fix: stop tierd after detach start so the files get demoted faster.
    Note: is_tier_enabled was not maintained properly. That has been fixed too, and some code cleanup has been done.
    Signed-off-by: hari gowtham <hgowtham@redhat.com>
    Change-Id: I532f7410cea04fbb960105483810ea3560ca149b
    BUG: 1446381
* snapshot : snapshot creation failed after brick reset/replace (Sunny Kumar, 2017-11-28, 1 file, -0/+9)
    Problem: snapshot creation was failing after brick reset/replace.
    Fix: changed the code to set the mount_dir value in rsp_dict during the prerequisites phase, i.e. the glusterd_brick_op_prerequisites call, and removed it from the prevalidate phase.
    Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
    Change-Id: Ief5d0fafe882a7eb1a7da8535b7c7ce6f011604c
    BUG: 1512451
* glusterd: dead code coverity fix (Kartik_Burmee, 2017-11-14, 1 file, -2/+0)
    Function: glusterd_volume_rebalance_use_rsp_dict
    Problem: Execution cannot reach this statement: "goto out;"
    Fix: removed the condition 'if (!ctx_dict)' and its 'goto out;' because they can never be executed: if execution reaches this condition, '!ctx_dict' is always false; otherwise execution would already have reached the 'out' label, skipping this conditional statement.
    html link of issue: https://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-11-10-0f524f07/html/1/99glusterd-utils.c.html#error
    Change-Id: I7ab6b2386bb01c54edd872f9f83bb8d2a4cd499f
    BUG: 789278
    Signed-off-by: Kartik_Burmee <kburmee@redhat.com>
* glusterd: checked_return coverity fix (Kartik_Burmee, 2017-11-14, 1 file, -1/+1)
    Issue: Calling "recursive_rmdir" without checking the return value.
    Fix: cast the return value of 'recursive_rmdir' to void.
    Change-Id: Ie95c2a2c503bb247afa69823d0043c3af5e036e8
    BUG: 789278
    Signed-off-by: Kartik_Burmee <kburmee@redhat.com>
* glusterd: buffer_size_warning coverity fix (Kartik_Burmee, 2017-11-13, 1 file, -1/+1)
    Function: glusterd_import_volinfo
    Problem: Calling strncpy with a maximum size argument of 256 bytes on the destination array "new_volinfo->parent_volname" of size 256 bytes might leave the destination string unterminated.
    Fix: The third argument of strncpy specifies the number of characters to be copied from the source string to the destination. To make sure the final string in the destination is always null terminated, we copy one character less than the total capacity of the destination array, leaving the last element free to hold the terminating '\0'.
    html link of issue: https://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-11-10-0f524f07/html/1/39glusterd-utils.c.html#error
    Change-Id: I76b0d10e6a932b0885531c9be3c4f4ce7239f3e1
    BUG: 789278
    Signed-off-by: Kartik_Burmee <kburmee@redhat.com>
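    The bounded-copy idiom the fix relies on, sketched with an illustrative 256-byte field (the real field sizes come from glusterd's own headers):

        #include <string.h>

        #define NAME_MAX_LEN 256   /* illustrative stand-in for the 256-byte field */

        /* Copy at most size-1 bytes and terminate explicitly, so the
         * destination can never be left without a trailing '\0'. */
        static void
        copy_name (char dst[NAME_MAX_LEN], const char *src)
        {
            strncpy (dst, src, NAME_MAX_LEN - 1);
            dst[NAME_MAX_LEN - 1] = '\0';
        }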
* glusterd: display gluster volume status when quorum type is server (Sanju Rakonde, 2017-11-09, 1 file, -0/+6)
    Problem: when server-quorum-type is server, after restarting glusterd on the node which is up, gluster volume status gives incorrect information.
    Fix: check whether the server is blank before adding other keys into the dictionary.
    Change-Id: I926ebdffab330ccef844f23f6d6556e137914047
    BUG: 1511339
    Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
* Coverity Issue: PW.INCLUDE_RECURSION in several files (Girjesh Rajoria, 2017-11-09, 1 file, -1/+0)
    Coverity ID: 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 423, 424, 425, 426, 427, 428, 429, 436, 437, 438, 439, 440, 441, 442, 443
    Issue: Event include_recursion
    Removed redundant, recursive includes from the files.
    Change-Id: I920776b1fa089a2d4917ca722d0075a9239911a7
    BUG: 789278
    Signed-off-by: Girjesh Rajoria <grajoria@redhat.com>
* glusterd: fix brick restart parallelism (Atin Mukherjee, 2017-11-01, 1 file, -9/+30)
    glusterd's brick restart logic is not always sequential, as there are at least three different ways bricks are restarted:
    1. through friend-sm and glusterd_spawn_daemons ()
    2. through friend-sm and handling volume quorum action
    3. through friend handshaking when there is a mismatch on quorum on friend import.
    In a brick multiplexing setup, glusterd ended up trying to spawn the same brick process a couple of times, because two threads hit glusterd_brick_start () within a fraction of a millisecond and glusterd had no way of rejecting either of them, as in both cases the brick start criteria were met. As a solution, it'd be better to control this madness with two different flags: one is a boolean called start_triggered, which indicates a brick start has been triggered and continues to be true till the brick dies or is killed; the second is a mutex lock to ensure that for a particular brick we don't end up getting into glusterd_brick_start () more than once at the same point in time.
    Change-Id: I292f1e58d6971e111725e1baea1fe98b890b43e2
    BUG: 1506513
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
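    A sketch of the two guards described above, using stand-in types rather than glusterd_brickinfo_t: a per-brick mutex serializes concurrent callers and a start_triggered flag makes the start idempotent.

        #include <pthread.h>
        #include <stdbool.h>

        typedef struct brick {
            pthread_mutex_t restart_mutex;
            bool            start_triggered;  /* stays true until the brick dies */
        } brick_t;

        static int
        brick_start_once (brick_t *b, int (*do_start) (brick_t *))
        {
            int ret = 0;

            pthread_mutex_lock (&b->restart_mutex);
            if (!b->start_triggered) {
                b->start_triggered = true;      /* claim the start before doing it */
                ret = do_start (b);             /* spawn or attach the brick */
                if (ret)
                    b->start_triggered = false; /* allow a retry after a failure */
            }
            pthread_mutex_unlock (&b->restart_mutex);
            return ret;
        }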
* glusterd: persist brickinfo's port change into glusterd's store (Gaurav Yadav, 2017-10-31, 1 file, -0/+19)
    Problem: Consider a case where a node reboot is performed and prior to the reboot the brick was listening on 49153. Post reboot, glusterd assigned 49152 to the brick and started the brick process, but the new port was never persisted. Now when glusterd restarts, it always reads the port from its persisted store, i.e. 49153, whereas the pmap sign-in happens with the correct port, i.e. 49152.
    Fix: Make sure that when glusterd_brick_start is called, glusterd_store_volinfo is eventually invoked.
    Change-Id: Ic0efbd48c51d39729ed951a42922d0e59f7115a1
    BUG: 1506589
    Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
* Make sure attach_brick returns a proper error code (Michael Scherer, 2017-10-26, 1 file, -1/+1)
    Coverity warns about "Using uninitialized value "ret".", which is indeed the case. If we can't attach after 15 tries, attach_brick returns an uninitialized return code, which is likely hard to trigger, but bad. I use -1 to follow the UNIX convention, but no specific error code is tested by the code.
    Change-Id: I43ad60d25ab169f9cea0db600805ced7f77c37ba
    BUG: 789278
    Signed-off-by: Michael Scherer <misc@redhat.com>
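    The pattern of the fix, sketched with an illustrative retry loop: give ret a failing default so exhausting the attempts returns a defined error instead of stack garbage.

        /* Illustrative only; attach_fn stands in for the real attach attempt. */
        static int
        attach_with_retries (int (*attach_fn) (void))
        {
            int ret = -1;   /* defined failure if every attempt below fails */
            int i;

            for (i = 0; i < 15; i++) {
                ret = attach_fn ();
                if (ret == 0)
                    break;
            }
            return ret;
        }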
* gfproxyd: Let glusterd manage gfproxy daemon (Poornima G, 2017-10-18, 1 file, -13/+3)
    Updates: #242
    BUG: 1428063
    Change-Id: Iaaf2edf99b2ecc75f6d30762c752a6d445c1c826
    Signed-off-by: Poornima G <pgurusid@redhat.com>
* gfproxy: Introduce new server-side daemon called GFProxy (Shreyas Siravara, 2017-10-10, 1 file, -0/+39)
    Summary: Adds a new server-side daemon called gfproxyd and a new FUSE client called gfproxy-client.
    Updates: #242
    BUG: 1428063
    Change-Id: I83210098d3a381922bc64fed1110ae1b76e6519f
    Tested-by: Shreyas Siravara <sshreyas@fb.com>
    Reviewed-by: Kevin Vigor <kvigor@fb.com>
    Signed-off-by: Shreyas Siravara <sshreyas@fb.com>
    Signed-off-by: Poornima G <pgurusid@redhat.com>
* glusterd : fix client io-threads option for replicate volumes (Ravishankar N, 2017-10-09, 1 file, -19/+1)
    Problem: Commit ff075a3d6f9b142911d25c27fd209838782bfff0 disabled loading client-io-threads for replicate volumes (it was set to on by default in commit e068c1997314046658dd502e9118dab32decf879) due to performance issues, but in doing so inadvertently failed to load the xlator even if the user explicitly enabled the option using the volume set command. This was despite returning success for the volume set.
    Fix: Modify the check in perfxl_option_handler() and add checks in the volume create/add-brick/remove-brick code paths, tying it all to GD_OP_VERSION_3_12_2.
    Change-Id: Ib612973a999a7da818cc926f5c2601b1f0794fcf
    BUG: 1498570
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* Command to identify client process (hari gowtham, 2017-09-06, 1 file, -2/+208)
    command: gluster volume status <volname/all> client-list
    output:
    Client connections for volume v1
    Name          count
    -----         ------
    fuse               2
    tierd              1
    total clients for volume v1 : 3
    -----------------------------------------------------------------
    Client connections for volume v2
    Name          count
    -----         ------
    tierd              1
    fuse.gsync         1
    total clients for volume v2 : 2
    -----------------------------------------------------------------
    Updates: #178
    Change-Id: I0ff2579d6adf57cc0d3bd0161a2ec6ac6c4747c0
    Signed-off-by: hari gowtham <hgowtham@redhat.com>
    Reviewed-on: https://review.gluster.org/18095
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Tested-by: hari gowtham <hari.gowtham005@gmail.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: do not create .glusterfs/indices (Prashanth Pai, 2017-08-23, 1 file, -13/+13)
    glusterd shouldn't concern itself with creating directories specific to certain xlators. The index xlator will now proceed with creating the '.glusterfs/indices' dir only if the parent '.glusterfs' directory exists, which still fixes the original problem reported, i.e. the 'volume start force' command shouldn't create the brick path if it doesn't exist (BUG 1457202).
    This reverts most of the changes done by commit b58a15948fb3fc37b6c0b70171482f50ed957f42.
    Change-Id: I7fc52ad64dce220e336c218fb4d85933ca2e61c0
    Signed-off-by: Prashanth Pai <ppai@redhat.com>
    Reviewed-on: https://review.gluster.org/18003
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* glusterd: introduce max-port range (Atin Mukherjee, 2017-08-17, 1 file, -1/+17)
    The glusterd.vol file always had a (commented-out) option to indicate the base-port to start the portmapper allocation from. This patch brings in the max-port configuration, with which one can limit the range of ports gluster is allowed to bind.
    Fixes: #305
    Change-Id: Id7a864f818227b9530a07e13d605138edacd9aa9
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-on: https://review.gluster.org/18016
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Prashanth Pai <ppai@redhat.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Gaurav Yadav <gyadav@redhat.com>
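    For illustration, a hedged sketch of what the relevant glusterd.vol stanza could look like with both bounds set; the port numbers and surrounding options here are only examples, not the shipped defaults.

        volume management
            type mgmt/glusterd
            option working-directory /var/lib/glusterd
            # lower and upper bounds for ports handed out by the portmapper
            option base-port 49152
            option max-port  60999
        end-volume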
* Infra to identify process (hari gowtham, 2017-08-16, 1 file, -0/+1)
    Problem: currently we can't identify which process is running and how many instances of it are available.
    Fix: name the process when it is spawned, send the name to the server and save it in the client_t. The processes covered by this patch are: 1) fuse mount, 2) rebalance, 3) selfheal, 4) tier, 5) quota, 6) snapshot, 7) brick, 8) gfapi (by default; gfapi.<processname> if a processname is found).
    Note: fuse gets the process name native-fuse-client by default. If the user gives a name for the fuse mount when spawning it, it will be of the form --process-name native-fuse-client.<name_specified>. This can be used by processes like the aux mount done by quota, geo-rep, etc. by adding another option to the aux mount, e.g. " -o process-name=gsync_mount".
    Updates: #178
    Signed-off-by: hari gowtham <hgowtham@redhat.com>
    Change-Id: Ie4d02257216839338043737691753bab9a974d5e
    Reviewed-on: https://review.gluster.org/17957
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Tested-by: hari gowtham <hari.gowtham005@gmail.com>
    Reviewed-by: Amar Tumballi <amarts@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-by: Aravinda VK <avishwan@redhat.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* glusterd: Gluster should keep PID file in correct location (Gaurav Kumar Garg, 2017-08-11, 1 file, -8/+6)
    Currently Gluster keeps the process pid information of all the daemons and brick processes in the Gluster configuration file directory (i.e., /var/lib/glusterd/*). These pid files should be separate from configuration files: deletion of the configuration file directory might result in serious problems. Also, /var/run/gluster is the default placeholder directory for pid files. So, with this fix Gluster will keep the pid information of all processes in the /var/run/gluster/* directory.
    Change-Id: Idb09e3fccb6a7355fbac1df31082637c8d7ab5b4
    BUG: 1258561
    Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
    Signed-off-by: Saravanakumar Arumugam <sarumuga@redhat.com>
    Reviewed-on: https://review.gluster.org/13580
    Tested-by: MOHIT AGRAWAL <moagrawa@redhat.com>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: Block brick attach request till the brick's ctx is set (Mohit Agrawal, 2017-08-08, 1 file, -24/+49)
    Problem: In a multiplexing setup in a container environment we hit a race where, before the first brick finishes its handshake with glusterd, the subsequent attach requests went through; they actually failed and glusterd has no mechanism to realize it. This resulted in all such bricks not becoming active, leaving clients unable to connect.
    Solution: Introduce a new flag port_registered in glusterd_brickinfo to make sure the pmap_signin has finished before the subsequent attach-brick requests are processed.
    Test: To reproduce the issue, follow these steps:
    1) Create 100 volumes on 3 nodes (1x3) in a CNS environment
    2) Enable brick multiplexing
    3) Reboot one container
    4) Run the command below
       for v in `gluster v list`
       do
           glfsheal $v | grep -i "transport"
       done
    After applying the patch the command should not fail.
    Note: A big thanks to Atin for suggesting the fix.
    BUG: 1478710
    Change-Id: I8e1bd6132122b3a5b0dd49606cea564122f2609b
    Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
    Reviewed-on: https://review.gluster.org/17984
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
* logging: localtime logging, cmdline, volume set option (Kaleb S. KEITHLEY, 2017-08-03, 1 file, -0/+7)
    Despite the fact that appliances generally use UTC, some users really want log entries in localtime.
    fixes gluster/glusterfs#272
    feature page: https://review.gluster.org/17807
    Change-Id: I5fbf2c3eedd9eb128fb3f851dd67b2f4081c8bba
    Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: https://review.gluster.org/16911
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Samikshan Bairagya <samikshan@gmail.com>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
* glusterd : glusterd fails to start when peer's network interface is down (Gaurav Yadav, 2017-07-28, 1 file, -2/+9)
    Problem: glusterd fails to start on nodes where glusterd tries to come up even before the network is up.
    Fix: On startup glusterd tries to resolve the brick path, which is based on hostname/ip; in the above scenario, when the network interface is not up, glusterd is not able to resolve the brick path using the ip address or hostname. With this fix glusterd will use the UUID to resolve the brick path.
    Change-Id: Icfa7b2652417135530479d0aa4e2a82b0476f710
    BUG: 1472267
    Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
    Reviewed-on: https://review.gluster.org/17813
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Prashanth Pai <ppai@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: fix compile warning in glusterd_volinfo_copy_brickinfo() (Niels de Vos, 2017-07-25, 1 file, -1/+1)
    gcc 7 (default in Fedora 26) complains about the following:
      glusterd-utils.c: In function 'glusterd_volinfo_copy_brickinfo':
      glusterd-utils.c:4279:54: warning: comparison between pointer and zero character constant [-Wpointer-compare]
          if (old_brickinfo->real_path == '\0') {
      glusterd-utils.c:4279:29: note: did you mean to dereference the pointer?
    Comparing a char* with a char is not correct in any case. Instead, compare the pointer to NULL and the first character, real_path[0], with '\0'.
    Change-Id: Ie5b925cd200416a1e2fa035046005f421994e641
    Updates: #259
    Signed-off-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-on: https://review.gluster.org/17847
    Tested-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-by: Prashanth Pai <ppai@redhat.com>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Samikshan Bairagya <samikshan@gmail.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* posix: option to handle the shared bricks for statvfs() (Amar Tumballi, 2017-07-24, 1 file, -0/+28)
    Currently the 'storage/posix' xlator has an option called `export-statfs-size no`, which exports zero as the value for a few fields in `struct statvfs`. In the case of a backend brick shared between multiple brick processes, the values of these fields should instead be `field_value / number-of-bricks-at-node`. This way, even the 'min-free-disk' checks etc. at different layers are handled properly when the statfs() syscall is made.
    Fixes #241
    Change-Id: I2e320e1fdcc819ab9173277ef3498201432c275f
    Signed-off-by: Amar Tumballi <amarts@redhat.com>
    Reviewed-on: https://review.gluster.org/17618
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
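    A sketch of the adjustment, assuming the number of bricks sharing the backend filesystem is known; illustrative code, not the posix xlator implementation.

        #include <sys/statvfs.h>

        /* Divide the capacity/free counters by the number of bricks sharing
         * the backend filesystem so each brick reports only its share. */
        static int
        shared_brick_statfs (const char *path, unsigned long bricks_on_fs,
                             struct statvfs *buf)
        {
            if (statvfs (path, buf) != 0)
                return -1;
            if (bricks_on_fs > 1) {
                buf->f_blocks /= bricks_on_fs;
                buf->f_bfree  /= bricks_on_fs;
                buf->f_bavail /= bricks_on_fs;
            }
            return 0;
        }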
* glusterd: fix brick start race (Atin Mukherjee, 2017-07-20, 1 file, -0/+19)
    Problem: Another race where, when glusterd was restarted, glusterd_brick_start () is called multiple times due to friend handshaking. In one instance, when one of the bricks was attempted to be attached to the existing brick process, send_attach_req failed as the first brick itself was still not up; we then did a synclock_unlock () followed by a sleep of 1 sec, and before the same thread woke up, another thread tried to start the same brick process and assumed that it had to start a fresh brick process.
    Solution:
    1. If the brick is in the starting phase (brickinfo->status == GF_BRICK_STARTING), there is no need to reattempt to start the brick.
    2. While initiating attach_req, set brickinfo->status to GF_BRICK_STARTING.
    Change-Id: Ib007b6199ec36fdab4214a1d37f99d7f65ef64da
    BUG: 1465559
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-on: https://review.gluster.org/17840
    Reviewed-by: Amar Tumballi <amarts@redhat.com>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
* glusterd: Set default value for cluster.max-bricks-per-process to 0 (Samikshan Bairagya, 2017-07-19, 1 file, -9/+12)
    When brick-multiplexing is enabled and "cluster.max-bricks-per-process" isn't explicitly set, multiplexing happens without any limit. But the default value set for that tunable is 1, which is confusing. This commit sets the default value to 0 and prevents the user from setting this value to 1 when brick-multiplexing is enabled. The default value of 0 denotes that brick-multiplexing can happen without any limit on the number of bricks per process.
    Change-Id: I4647f7bf5837d520075dc5c19a6e75bc1bba258b
    BUG: 1472417
    Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
    Reviewed-on: https://review.gluster.org/17819
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* glusterd: Introduce option to limit no. of muxed bricks per process (Samikshan Bairagya, 2017-07-10, 1 file, -52/+342)
    This commit introduces a new global option that can be set to limit the number of multiplexed bricks in one process.
    Usage: `# gluster volume set all cluster.max-bricks-per-process <value>`
    If this option is not set then multiplexing will happen for now with no limitations set; i.e. a brick process will have as many bricks multiplexed to it as possible. In other words, the current multiplexing behaviour won't change if this option isn't set to any value.
    This commit also introduces a brick process instance that contains information about brick processes, like the number of bricks handled by the process (which is 1 in non-multiplexing cases), the list of bricks, and the port number, which also serves as a unique identifier for each brick process instance. The brick process list is maintained in 'glusterd_conf_t'.
    Updates: #151
    Change-Id: Ib987d14ab0a4f6034dac01b73a4b2839f7b0b695
    Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
    Reviewed-on: https://review.gluster.org/17469
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: mark brickinfo to started on successful attach (Atin Mukherjee, 2017-06-28, 1 file, -5/+4)
    brickinfo's port & status should be filled up only when attach brick is successful.
    Change-Id: I68b181be37cb94d176f0f4692e8d9dac5493181c
    BUG: 1465559
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-on: https://review.gluster.org/17640
    Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Samikshan Bairagya <samikshan@gmail.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* glusterd: brick process fails to restart after gluster pod failure (Mohit Agrawal, 2017-06-27, 1 file, -10/+31)
    Problem: In a container environment, sometimes after deleting a gluster pod and creating a new one, the brick process doesn't come up.
    Solution: On the basis of the logs it seems glusterd is trying to attach to a non-glusterfs process. Change the code of the function glusterd_get_sock_from_brick_pid to fetch the socketpath from the arguments of the running brick process.
    BUG: 1464072
    Change-Id: Ida6af00066341b683bbb4440d7a0d8042581656a
    Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
    Reviewed-on: https://review.gluster.org/17601
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
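    A sketch of recovering the socket path from the brick's own command line via /proc/<pid>/cmdline (argv is NUL-separated there); the "-S <socketpath>" convention is an assumption based on how glusterd spawns glusterfsd.

        #include <stdio.h>
        #include <string.h>
        #include <sys/types.h>

        static int
        sock_path_from_pid (pid_t pid, char *sockpath, size_t len)
        {
            char   procfile[64];
            char   buf[4096];
            size_t n, i;
            FILE  *fp;

            snprintf (procfile, sizeof (procfile), "/proc/%d/cmdline", (int) pid);
            fp = fopen (procfile, "r");
            if (!fp)
                return -1;
            n = fread (buf, 1, sizeof (buf) - 1, fp);
            fclose (fp);
            buf[n] = '\0';

            /* walk the NUL-separated argv looking for "-S"; the next
             * argument is the unix socket path */
            for (i = 0; i < n; i += strlen (buf + i) + 1) {
                if (strcmp (buf + i, "-S") == 0 && i + 3 < n) {
                    snprintf (sockpath, len, "%s", buf + i + 3);
                    return 0;
                }
            }
            return -1;
        }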
* index: Do not proceed with init if brick is not mounted (Ravishankar N, 2017-06-19, 1 file, -2/+16)
    ...or else, when a volume start force is given, we end up creating the /brick-path/.glusterfs/indices folder and various subdirs under it and eventually starting the brick process. As a part of this patch, glusterd_get_index_basepath() is added in glusterd, which is then used to create the basepath during volume-create, add-brick, replace-brick and reset-brick. It is also used to set the 'index-base' xlator option for the index translator.
    Change-Id: Id018cf3cb6f1e2e35b5c4cf438d1e939025cb0fc
    BUG: 1457202
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-on: https://review.gluster.org/17426
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* glusterd: fix brick start race (Atin Mukherjee, 2017-06-06, 1 file, -14/+9)
    This commit tries to handle a race where we might end up trying to spawn the brick process twice with two different sets of ports, resulting in the glusterd portmapper having the same brick entry on two different ports, which in turn causes clients to fail to connect to bricks because incorrect ports are communicated back by glusterd.
    In glusterd_brick_start (), checking the brickinfo->status flag to identify whether a brick has been started by glusterd or not is not sufficient. There might be cases where, while glusterd restarts, glusterd_restart_bricks () is called through glusterd_spawn_daemons () in a synctask, and immediately glusterd_do_volume_quorum_action (), with server-side quorum set to on, again tries to start the brick. If the RPC_CLNT_CONNECT event for the same brick hasn't been processed by glusterd by that time, brickinfo->status will still be marked GF_BRICK_STOPPED, resulting in a reattempt to start the brick on a different port, which throws the portmap off and causes clients to fetch an incorrect port.
    The fix is to introduce another enum value, GF_BRICK_STARTING, for brickinfo->status, which is set when a brick start is attempted by glusterd and is changed to started through the RPC_CLNT_CONNECT event. For brick multiplexing this value has no effect, since on an attach brick request the brickinfo->status flag is marked started directly. This patch also removes the started_here flag, as it is redundant with brickinfo->status.
    Change-Id: I9dda1a9a531b67734a6e8c7619677867b520dcb2
    BUG: 1457981
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-on: https://review.gluster.org/17447
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
* glusterd: Fix regression wrt add-brick on replica count change (Samikshan Bairagya, 2017-06-01, 1 file, -3/+5)
    tests/bugs/glusterd/bug-1406411-fail-add-brick-on-replica-count-change.t was failing on CentOS machines with brick multiplexing enabled. This is because detaching individual bricks manually from the backend, as done in the regression test framework by 'kill_brick', fails to send an RPC_CLNT_DISCONNECT to glusterd when multiplexing is enabled. This causes the add-brick command to not fail when one of the bricks is killed using kill_brick in the regression test framework.
    To fix this, set the brick status to GF_BRICK_STOPPED on the glusterd end during portmap signout. This commit also sets the brick status in the glusterd_brick_stop() function so that the brick status is correctly set to 'stopped' even when the function is called independently for individual bricks.
    Change-Id: I4d6f7b579069d0cfa53cb2b0cff78876e1f31594
    BUG: 1456898
    Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
    Reviewed-on: https://review.gluster.org/17422
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* glusterd: Eliminate race in brick compatibility checking stage (Samikshan Bairagya, 2017-05-24, 1 file, -2/+5)
    In https://review.gluster.org/17307/, while looking for compatible bricks for multiplexing, it is checked whether the brick pidfile exists before checking if the corresponding brick process is running. However, checking if the brick process is running just after checking if the pidfile exists isn't enough, since there might be race conditions where the pidfile has been created but hasn't been updated with a pid value yet. This commit solves that by making sure that we wait iteratively till the pid value is updated as well.
    Change-Id: Ib7a158f95566486f7c1f84b6357c9b89e4c797ae
    BUG: 1451248
    Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
    Reviewed-on: https://review.gluster.org/17375
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
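    A sketch of the iterative wait, with illustrative retry counts: poll the pidfile until a pid actually appears (the file may exist before it is written), then probe the process with kill(pid, 0).

        #include <signal.h>
        #include <stdio.h>
        #include <sys/types.h>
        #include <unistd.h>

        static int
        wait_for_pidfile (const char *pidfile, pid_t *out_pid)
        {
            int tries;

            for (tries = 0; tries < 30; tries++) {
                FILE *fp  = fopen (pidfile, "r");
                int   pid = 0;

                if (fp) {
                    int got = fscanf (fp, "%d", &pid);
                    fclose (fp);
                    if (got == 1 && pid > 0) {
                        if (kill (pid, 0) != 0)
                            return -1;  /* pid recorded but process not alive */
                        *out_pid = pid;
                        return 0;
                    }
                }
                usleep (100000);        /* pidfile not filled yet; retry */
            }
            return -1;
        }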
* glusterd: Don't spawn new glusterfsds on node reboot with brick-mux (Samikshan Bairagya, 2017-05-18, 1 file, -0/+18)
    With brick multiplexing enabled, upon a node reboot new bricks were not being attached to the first spawned brick process even though there weren't any compatibility issues. The reason for this is that upon glusterd restart after a node reboot, since brick services aren't running, glusterd starts the bricks in a "no-wait" mode. So after a brick process is spawned for the first brick, there isn't enough time for the corresponding pid file to get populated with a value before the compatibility check is made for the next brick. This commit solves this by iteratively waiting for the pidfile to be populated in the brick compatibility comparison stage before checking if the brick process is alive.
    Change-Id: Ibd1f8e54c63e4bb04162143c9d70f09918a44aa4
    BUG: 1451248
    Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
    Reviewed-on: https://review.gluster.org/17307
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* glusterd: coverity fix for string overflow (Sakshi Bansal, 2017-05-12, 1 file, -2/+3)
    coverity CID: 1124852
    Change-Id: Ifb04ad36b0652474007d2768737722231a5c1df0
    BUG: 789278
    Signed-off-by: Sakshi Bansal <sabansal@redhat.com>
    Reviewed-on: https://review.gluster.org/9539
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    Tested-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
* glusterd: Make reset-brick work correctly if brick-mux is on (Samikshan Bairagya, 2017-05-10, 1 file, -11/+33)
    Reset brick currently kills off the corresponding brick process. However, with brick multiplexing enabled, stopping the brick process would render all bricks attached to it unavailable. To handle this correctly, we need to make sure that the brick process is terminated only if brick multiplexing is disabled. Otherwise, we should send the GLUSTERD_BRICK_TERMINATE rpc to the respective brick process to detach the brick that is to be reset.
    Change-Id: I69002d66ffe6ec36ef48af09b66c522c6d35ac58
    BUG: 1446172
    Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
    Reviewed-on: https://review.gluster.org/17128
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: socketfile & pidfile related fixes for brick multiplexing featureMohit Agrawal2017-05-091-25/+97
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Problem: While brick-muliplexing is on after restarting glusterd, CLI is not showing pid of all brick processes in all volumes. Solution: While brick-mux is on all local brick process communicated through one UNIX socket but as per current code (glusterd_brick_start) it is trying to communicate with separate UNIX socket for each volume which is populated based on brick-name and vol-name.Because of multiplexing design only one UNIX socket is opened so it is throwing poller error and not able to fetch correct status of brick process through cli process. To resolve the problem write a new function glusterd_set_socket_filepath_for_mux that will call by glusterd_brick_start to validate about the existence of socketpath. To avoid the continuous EPOLLERR erros in logs update socket_connect code. Test: To reproduce the issue followed below steps 1) Create two distributed volumes(dist1 and dist2) 2) Set cluster.brick-multiplex is on 3) kill glusterd 4) run command gluster v status After apply the patch it shows correct pid for all volumes BUG: 1444596 Change-Id: I5d10af69dea0d0ca19511f43870f34295a54a4d2 Signed-off-by: Mohit Agrawal <moagrawa@redhat.com> Reviewed-on: https://review.gluster.org/17101 Smoke: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Prashanth Pai <ppai@redhat.com> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org> CentOS-regression: Gluster Build System <jenkins@build.gluster.org> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: cleanup pidfile on pmap signout (Atin Mukherjee, 2017-05-08, 1 file, -0/+62)
    This patch ensures that
    1. the brick pidfile is cleaned up on pmap signout
    2. the pmap signout event is sent for all the bricks when a brick process shuts down.
    Change-Id: I7606a60775b484651d4b9743b6037b40323931a2
    BUG: 1444596
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-on: https://review.gluster.org/17168
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
* Fixes quota aux mount failure (Sanoj Unnikrishnan, 2017-05-08, 1 file, -10/+1)
    The aux mount is created on the first limit/remove_limit/list command and it remains until the volume is stopped / deleted / (quota is disabled), at which point we do a lazy unmount. If the process is uncleanly terminated, the mount entry remains and we get a (Transport disconnected) error on subsequent attempts to run quota list/limit-usage/remove commands.
    Second issue: there is also a risk of an inadvertent rm -rf on /var/run/gluster causing data loss for the user. Ideally, /var/run is a temp path for application use and should not cause any data loss to persistent storage.
    Solution:
    1) unmount the aux mount after each use.
    2) clean up a stale mount before mounting, if any.
    One caveat with doing mount/unmount on each command is that we cannot use the same mount point for both list and limit commands. The reason for this is that the list command needs the mount to be accessible in the cli after the response from glusterd, so it could be unmounted by a limit command executed in parallel (had we used the same mount point). Hence we use separate mount points for list and limit commands.
    Change-Id: I4f9e39da2ac2b65941399bffb6440db8a6ba59d0
    BUG: 1433906
    Signed-off-by: Sanoj Unnikrishnan <sunnikri@redhat.com>
    Reviewed-on: https://review.gluster.org/16938
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Manikandan Selvaganesh <manikandancs333@gmail.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
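    A sketch of the per-command lifecycle on Linux, with the actual mount step left as a placeholder: detach any stale mount first, run the quota operation, then lazily unmount after each use.

        #include <sys/mount.h>

        static int
        with_quota_aux_mount (const char *mountpoint,
                              int (*do_mount) (const char *mountpoint),
                              int (*quota_op) (const char *mountpoint))
        {
            int ret;

            (void) umount2 (mountpoint, MNT_DETACH);   /* clear any stale mount */

            ret = do_mount (mountpoint);   /* e.g. spawn a glusterfs client mount */
            if (ret)
                return ret;

            ret = quota_op (mountpoint);   /* list / limit-usage / remove */

            (void) umount2 (mountpoint, MNT_DETACH);   /* lazy unmount after use */
            return ret;
        }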
* build: conditionally build legacy gNFS server and associated sub-packaging (Kaleb S. KEITHLEY, 2017-04-28, 1 file, -5/+2)
    Plus some additional logic in glusterd to ensure gnfs (glusterfs) daemons are never started if the server/nfs xlator is not installed. As a service, nfs is still initialized. The glusterfs-gnfs RPM may be installed or uninstalled independent of anything else, including on a system where gluster is actively running, so the existence of the xlator is always tested before trying to start gnfs.
    Change-Id: I56743ad1cb36a84917226d7d26cb9d015d441e66
    BUG: 1326219
    Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: https://review.gluster.org/16958
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>