path: root/cli/src/cli-rpc-ops.c
Commit log, most recent first. Each entry: commit message (Author, Date, Files changed, Lines +/-)
* features/quota : Fix XML output for quota list command. (Sachin Pandit, 2015-03-18, 1 file changed, +126/-35)

    Sample output:
    ---------------
    Sample 1)
    ----------
    [root@snapshot-28 glusterfs]# gluster volume quota vol1 list /dir1 /dir4 /dir5 --xml
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <cliOutput>
      <opRet>0</opRet>
      <opErrno>0</opErrno>
      <opErrstr/>
      <volQuota>
        <limit>
          <path>/dir1</path>
          <hard_limit>10.0MB</hard_limit>
          <soft_limit>80%</soft_limit>
          <used_space>0Bytes</used_space>
          <avail_space>10.0MB</avail_space>
          <hl_exceeded>No</hl_exceeded>
          <sl_exceeded>No</sl_exceeded>
        </limit>
        <limit>
          <path>/dir4</path>
          <path>No such file or directory</path>
        </limit>
        <limit>
          <path>/dir5</path>
          <path>No such file or directory</path>
        </limit>
      </volQuota>
    </cliOutput>

    Sample 2)
    ---------
    gluster volume quota vol1 list --xml
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <cliOutput>
      <opRet>0</opRet>
      <opErrno>0</opErrno>
      <opErrstr/>
      <volQuota/>
    </cliOutput>

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <cliOutput>
      <volQuota>
        <limit>
          <path>/dir</path>
          <hard_limit>10.0MB</hard_limit>
          <soft_limit>80%</soft_limit>
          <used_space>0Bytes</used_space>
          <avail_space>10.0MB</avail_space>
          <hl_exceeded>No</hl_exceeded>
          <sl_exceeded>No</sl_exceeded>
        </limit>
        <limit>
          <path>/dir1</path>
          <hard_limit>10.0MB</hard_limit>
          <soft_limit>80%</soft_limit>
          <used_space>0Bytes</used_space>
          <avail_space>10.0MB</avail_space>
          <hl_exceeded>No</hl_exceeded>
          <sl_exceeded>No</sl_exceeded>
        </limit>
      </volQuota>
    </cliOutput>

    Change-Id: I8a8d83cff88f778e5ee01fbca07d9f94c412317a
    BUG: 1200297
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/9481
    Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/9847
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* glusterd : release cluster wide locks in op-sm during failures (Atin Mukherjee, 2015-03-03, 1 file changed, +15/-5)

    glusterd's op-sm infrastructure has some loopholes in handling error cases
    in the locking/unlocking phases, which end up leaving stale locks that
    prevent further transactions from going through. This patch still doesn't
    handle all possible unlocking error cases, as the framework has neither a
    retry mechanism nor a lock timeout. For example, if unlocking fails on one
    of the peers, the cluster-wide lock is not released and no further
    transaction can be made until the originator node, or the node where
    unlocking failed, is restarted.

    Following test cases were executed (with the help of gdb) after applying
    this patch:
    * RPC times out in lock cbk
    * Decoding of RPC response in lock cbk fails
    * RPC response is received from unknown peer in lock cbk
    * Setting peerinfo in dictionary fails while sending lock request for first peer in the list
    * Setting peerinfo in dictionary fails while sending lock request for other peers
    * Lock RPC could not be sent for peers

    For all the above test cases the success criterion is not to have any
    stale locks.

    Patch link : http://review.gluster.org/9012
    Change-Id: Ia1550341c31005c7850ee1b2697161c9ca04b01a
    BUG: 1179136
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-on: http://review.gluster.org/9012
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/9393
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* glusterd/snapshot: Snapshot should be deactivated when it is created. (vmallika, 2014-12-18, 1 file changed, +10/-2)

    By default a snapshot should be deactivated on creation, and this
    behaviour should be configurable. It can be configured with the command
    below:

    gluster snapshot config activate-on-create <enable|disable>

    Change-Id: I1911595c32beed43bb2fca4bf99f0d264b422513
    BUG: 1170921
    Signed-off-by: vmallika <vmallika@redhat.com>
    Reviewed-on: http://review.gluster.org/8985
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/9241
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
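    A quick usage sketch of the option above; the snapshot and volume names
    are hypothetical, and `snapshot activate` comes from the activation
    feature added earlier in this log:

        gluster snapshot config activate-on-create disable
        gluster snapshot create snap1 vol1    # snap1 now starts out deactivated
        gluster snapshot activate snap1       # bring it online only when needed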
* porting: OSX build fixes (Harshavardhana, 2014-10-23, 1 file changed, +2/-0)

    - xml build
    - do not redefine AT_SYMLINK_FOLLOW

    Change-Id: I516b3713904a6bad946a30f76fe4821f2ac61fd3
    BUG: 1130307
    Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
    Reviewed-on: http://review.gluster.org/8970
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
* feature/snapshot : Interface to delete all snapshots belonging to a system as well as to a particular volume (Sachin Pandit, 2014-09-23, 1 file changed, +212/-4)

    Problem : With the current design we can only delete a single snapshot,
    and deletion of a volume which contains snapshots is not allowed. Because
    of that the user may be forced to delete all the snapshots manually
    before being allowed to delete a volume.

    Solution: Following is the interface with which the user can delete all
    the snapshots of a system, or those belonging to a particular volume.

    Syntax : gluster snapshot delete all
    *To delete all the snapshots present in a system

    Syntax : gluster snapshot delete volume <volname>
    *To delete all the snapshots present in the specified volume.

    ========================================================================
    Sample Output:

    Case 1 : Deleting a single snapshot.
    [root@snapshot-24 glusterfs]# gluster snapshot delete snap1
    Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
    snapshot delete: snap1: snap removed successfully
    -----------------------------------------------------------------
    Case 2 : Deleting all the snapshots in a Volume.
    [root@snapshot-24 glusterfs]# gluster snapshot delete volume vol1
    Volume (vol1) contains 9 snapshot(s). Do you still want to continue and delete them? (y/n) y
    snapshot delete: snap2: snap removed successfully
    snapshot delete: snap3: snap removed successfully
    snapshot delete: snap4: snap removed successfully
    snapshot delete: snap5: snap removed successfully
    .
    .
    .
    -----------------------------------------------------------------
    Case 3 : Deleting all the snapshots in a system.
    [root@snapshot-24 glusterfs]# gluster snapshot delete all
    System contains 4 snapshot(s). Do you still want to continue and delete them? (y/n) y
    snapshot delete: snap7: snap removed successfully
    snapshot delete: snap8: snap removed successfully
    snapshot delete: snap9: snap removed successfully
    snapshot delete: snap10: snap removed successfully
    ========================================================================

    Change-Id: Ifec8e128ab2011cbbba208376b9c92cfbe7d8d71
    BUG: 1145083
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/8162
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/8798
    Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
* cli/snapshot : gluster volume info should not show the options which are not set explicitly. (Sachin Pandit, 2014-09-23, 1 file changed, +6/-7)

    Problem : Even though the snap-max-hard-limit, snap-max-soft-limit and
    auto-delete values were not set explicitly, they were being shown in the
    output of gluster volume info.

    Solution : Check whether the value is already present in the dictionary
    (which means it was set); if the value is not present, fall back to the
    default value.

    NOTE : This patch doesn't solve the problem where values that are set
    globally are displayed in gluster volume info.

    Change-Id: I61445b3d2a12eb68c38a19bea53b9051ad028050
    BUG: 1145020
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/8191
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/8793
    Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
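    To illustrate the intended behaviour, a hypothetical session (the volume
    name and the snapshot-config syntax are assumptions based on the snapshot
    config command elsewhere in this log):

        gluster volume info vol1                              # no snap-max-hard-limit line shown
        gluster snapshot config vol1 snap-max-hard-limit 100  # set the value explicitly
        gluster volume info vol1                              # the configured limit now appears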
* build: make GLUSTERD_WORKDIR rely on localstatedir (Harshavardhana, 2014-09-03, 1 file changed, +1/-1)

    Backport from master branch - http://review.gluster.org/#/c/8246/

    - Break away from the previously hard-coded '/var/lib/glusterd';
      instead rely on the 'configure' value from 'localstatedir'
    - Provide 's/lib/db' as default working directory for the gluster
      management daemon for BSD and Darwin based installations
    - loff_t is really off_t on Darwin
    - fix the warnings generated by clang on FreeBSD/Darwin
    - Now 'tests/*' use GLUSTERD_WORKDIR, a common variable for all
      platforms.
    - Define a proper environment for running tests: define correct PATH
      and LD_LIBRARY_PATH when running tests, so that the desired version
      of glusterfs is used, regardless of where it is installed.
      (Thanks to manu@netbsd.org for this additional work)

    Change-Id: I06e684ac4c26d1e74c9daf76753403ad15f79276
    BUG: 1130308
    Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
    Reviewed-on: http://review.gluster.org/8486
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
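    For example, a build that keeps the working directory under /var; the
    paths are illustrative, not mandated by the patch:

        ./configure --localstatedir=/var
        make && make install
        # GLUSTERD_WORKDIR now derives from ${localstatedir}
        # (e.g. /var/lib/glusterd on Linux) instead of being hard-coded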
* cli: Xml output for geo-replication status command (ndarshan, 2014-08-28, 1 file changed, +3/-0)

    This patch adds XML output for the geo-replication status and status
    detail commands.

    sample:
    --------------------------------------------------------------
    <geoRep>
      <volume>
        <name>master</name>
        <sessions>
          <session>
            <session_slave>:2a301d66-b9d2-44b4-b827-d680d67123eb:ssh://XXXXXXXXXX::slave</session_slave>
            <pair>
              <master_node>localhost.localdomain</master_node>
              <master_node_uuid>2a301d66-b9d2-44b4-b827-d680d67123eb</master_node_uuid>
              <master_brick>/root/master_b1</master_brick>
              <slave>ssh://XXXXXXXXXXX::slave</slave>
              <status>faulty</status>
              <checkpoint_status>N/A</checkpoint_status>
              <crawl_status>N/A</crawl_status>
            </pair>
          </session>
        </sessions>
      </volume>
    </geoRep>
    -------------------------------------------------------------

    Change-Id: Ia19dbe751c3ab1ec7cb8923cdd6c8b99c374072f
    BUG: 1133464
    Signed-off-by: ndarshan <dnarayan@redhat.com>
    Reviewed-on: http://review.gluster.org/8089
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Signed-off-by: ndarshan <dnarayan@redhat.com>
    Reviewed-on: http://review.gluster.org/8532
    Reviewed-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Improvements to peer identification (Kaushal M, 2014-07-15, 1 file changed, +39/-0)

    This patch improves the peer identification mechanism in glusterd and
    lays down the framework for further improvements, including better
    multi-network support in glusterd.

    This patch mainly does two things:
    1. Extend the peerinfo object to store a list of addresses instead of
       the single hostname it stores now. This also includes changes to make
       the peer update behaviour of 'peer probe' add to the list.
    2. Improve glusterd_friend_find_by_hostname() to perform better matching
       of hostnames. glusterd_friend_find_by_hostname() now does an initial
       quick string compare against all the peer addresses known to
       glusterd, after which it tries a more thorough search using address
       resolution and matching the struct sockaddrs.

    The above two changes together improve the peer identification situation
    in glusterd a lot. More information regarding the problem this patch
    attempts to resolve and the approach chosen can be found at
    http://www.gluster.org/community/documentation/index.php/Features/Better_peer_identification

    This commit is a squashed commit of the following changes, the
    development branch of which can be viewed at
    https://github.com/kshlm/glusterfs/tree/better-peer-identification or
    https://forge.gluster.org/~kshlm/glusterfs-core/kshlms-glusterfs/commits/better-peer-identification

    commit 198f86e60fab74faf082eaa02657a4d8f60b92f0 (Kaushal M, Tue Jul 15 14:34:06 2014 +0530): Update gluster.8
    commit 35d597f3a6b3248373e727f7b7e889c92554d56c (Kaushal M, Tue Jul 15 09:01:01 2014 +0530): Address review comments https://review.gluster.org/#/c/8238/3
    commit 47b5331e17304477322bd2daed5bbed503c34ca1 (Merge: c71b12c 78128af; Kaushal M, Tue Jul 15 08:41:39 2014 +0530): Merge branch 'master' into better-peer-identification
    commit c71b12c164330e8d19d1df4734ab34ef9a8caad2 (Merge: 57bc9de 0f5719a; Kaushal M, Thu Jul 10 19:50:19 2014 +0530): Merge branch 'master' into better-peer-identification
    commit 57bc9de9e4f49ff2b1620df9906cda50a3527a25 (Kaushal M, Thu Jul 10 19:49:08 2014 +0530): More fixes to review comments
    commit 5482cc363a687a9e246a0780ec88acd53e218501 (Kaushal M, Thu Jul 10 18:36:40 2014 +0530): Code refactoring in peer-utils based on review comments https://review.gluster.org/#/c/8238/2/xlators/mgmt/glusterd/src/glusterd-peer-utils.c
    commit 89b22c34757178f64d5fbaffa31e6302f841c060 (Kaushal M, Thu Jul 10 12:30:00 2014 +0530): Hostnames in peer status
    commit 63ebf9485cf50d736cf640238a1ab241671fcaf1 (Merge: c8c8fdd f5f9721; Kaushal M, Thu Jul 10 12:06:33 2014 +0530): Merge remote-tracking branch 'origin/master' into better-peer-identification
    commit c8c8fdd2104b5b6b8a1af739b1dd952b74e6dd66 (Kaushal M, Wed Jul 9 18:35:27 2014 +0530): Hostnames in xml output
    commit 732a92a0167ad7b1d70edbc35ebd8307c2766ae1 (Kaushal M, Wed Jul 9 15:12:10 2014 +0530): Add hostnames to cli rsp dict during list-friends
    commit fcf43e3e317508f0c225024738a988a4af8e9205 (Merge: c0e2624 72d96e2; Kaushal M, Wed Jul 9 12:53:03 2014 +0530): Merge branch 'master' into better-peer-identification
    commit c0e262416728a3c536a8347a216e471eb2251535 (Kaushal M, Mon Jul 7 16:11:19 2014 +0530): Use list_for_each_entry_safe when cleaning peer hostnames
    commit 6132e60224eb592f3657e535a12a3e72c772da42 (Kaushal M, Mon Jul 7 15:52:19 2014 +0530): Fix crash in gd_add_friend_to_dict
    commit 88ffa9a508fd5aac0b2a76e6e76487ce0cab786a (Kaushal M, Mon Jul 7 13:19:44 2014 +0530): gd_peerinfo_destroy -> glusterd_peerinfo_destroy
    commit 4b36930a715b1e13cd1a77d136ef1cf78a06d574 (Kaushal M, Mon Jul 7 12:50:12 2014 +0530): More refactoring
    commit ee559b081d608c6501c10ae22166f26eeb65690e (Kaushal M, Mon Jul 7 12:14:40 2014 +0530): Major refactoring of code based on review comments at https://review.gluster.org/#/c/8238/1/xlators/mgmt/glusterd/src/glusterd-peer-utils.h
    commit e96dbc7bbb05fad2a9c424de41a394b8023fe48d (Merge: 2613d1d 83c09b7; Kaushal M, Mon Jul 7 09:47:05 2014 +0530): Merge remote-tracking branch 'origin/master' into better-peer-identification
    commit 2613d1daebff0c56812de821c06ed4c16bb9d447 (Merge: b242cf6 9a50211; Kaushal M, Fri Jul 4 15:28:57 2014 +0530): Merge remote-tracking branch 'origin/master' into better-peer-identification
    commit b242cf66d95dd3dd5e3975aa430baa6bd74b8a29 (Kaushal M, Fri Jul 4 15:08:18 2014 +0530): Fix a silly mistake, if (ctx->req) => if (ctx->req == NULL)
    commit c835ed26433830ceed57289143f596cf60421558 (Kaushal M, Fri Jul 4 14:58:23 2014 +0530): Fix reverse probe.
    commit 9ede17f9329b854b02e8ad159f173244789fd08c (Kaushal M, Fri Jul 4 13:31:32 2014 +0530): Fix friend import for existing peers
    commit 891bf74c7350064dfb008d1b7294bcec28d680fd (Kaushal M, Fri Jul 4 13:08:36 2014 +0530): Set first hostname in peerinfo->hostnames to peerinfo->hostname
    commit 9421d6a217381a7427a7d84f369280883ca4297a (Kaushal M, Fri Jul 4 12:21:40 2014 +0530): Fix gf_asprintf return val check in glusterd_store_peer_write
    commit defac978c1d94011ce8195e311839b9ffce057e7 (Kaushal M, Fri Jul 4 11:16:13 2014 +0530): Fix store_retrieve_peers to correctly cleanup.
    commit 00a799f5de1121b0cb7421da8285f9407063e1bd (Kaushal M, Fri Jul 4 10:52:11 2014 +0530): Update address list in glusterd_probe_cbk only when needed.
    commit 7a628e8a9c562d85709c69cfa13fb1774c521b75 (Merge: d191985 dc46d5e; Kaushal M, Fri Jul 4 09:24:12 2014 +0530): Merge remote-tracking branch 'origin/master' into better-peer-identification
    commit d1919858e6639d2b54d716a61f662d9752ec5ff1 (Kaushal M, Tue Jul 1 18:59:49 2014 +0530): gf_compare_addrinfo -> gf_compare_sockaddr
    commit 31d8ef730d408f8d9ba8f504fa648f7dcd59da87 (Merge: 93bbede 86ee233; Kaushal M, Tue Jul 1 18:16:13 2014 +0530): Merge remote-tracking branch 'origin/master' into better-peer-identification
    commit 93bbedeac5181e29f59b2acd08f638146812ec41 (Kaushal M, Tue Jul 1 18:15:16 2014 +0530): Improve glusterd_friend_find_by_hostname
        glusterd_friend_find_by_hostname will now do an initial quick search
        for the peerinfo, performing string comparisons on the given host
        string. It follows this with a more thorough match, by resolving the
        addresses and comparing addrinfos instead of strings.
    commit 2542cdbc45aa9cfcaf1f174686158d5565cdd07b (Kaushal M, Tue Jul 1 17:21:10 2014 +0530): New utility gf_compare_addrinfo
    commit 338676e8389a44bd91136eebd110197429c2566c (Kaushal M, Tue Jul 1 14:55:56 2014 +0530): Use gd_peer_has_address instead of strcmp
    commit 28d45be51f594328741c44455bd80ac9d64ca501 (Merge: 728266e 991dd5e; Kaushal M, Tue Jul 1 14:54:40 2014 +0530): Merge branch 'master' into better-peer-identification
    commit 728266eb16d5f5a4bf36266044425ae164337f99 (Merge: 7d9b87b 2417de9; Kaushal M, Tue Jul 1 09:55:13 2014 +0530): Merge remote-tracking branch 'origin/master' into better-peer-identification
    commit 7d9b87b84955ec17daeaf88a3e7462914039430f (Merge: b890625 e02275c; Kaushal M, Tue Jul 1 08:41:40 2014 +0530): Merge pull request #4 from vpshastry/better-peer-identification (Better peer identification)
    commit e02275c52fb83c72ad082c098fd3e432c2b9c526 (Merge: 75ee90d b890625; Varun Shastry, Mon Jun 30 16:44:29 2014 +0530): Merge branch 'better-peer-identification' of https://github.com/kshlm/glusterfs into better-peer-identification-kaushal-github
    commit 75ee90d2f272e49b94d24c9ca4571e89a83055ff (Varun Shastry, Mon Jun 30 15:36:10 2014 +0530): glusterd: add to the list if the probed uuid pre-exists (Signed-off-by: Varun Shastry <vshastry@redhat.com>)
    commit b890625d8164c660695daef3285c67979eef723e (Merge: 04c5d60 187a7a9; Kaushal M, Mon Jun 30 11:44:13 2014 +0530): Merge remote-tracking branch 'origin/master' into better-peer-identification
    commit 04c5d60cb938c8d94b214689580b40abb1b0ffcd (Merge: 3a5bfa1 e01edb6; Kaushal M, Sat Jun 28 19:23:33 2014 +0530): Merge pull request #3 from vpshastry/better-peer-identification (glusterd: search through the list of hostnames in the peerinfo)
    commit 0c64f3346a977f9165ac55a84a1e03c40a7573a7 (Merge: e01edb6 3a5bfa1; Varun Shastry, Sat Jun 28 10:43:29 2014 +0530): Merge branch 'better-peer-identification' of https://github.com/kshlm/glusterfs into better-peer-identification-kaushal-github
    commit e01edb63153a1008db70b8fa76ae5b535e099326 (Varun Shastry, Fri Jun 27 12:29:36 2014 +0530): glusterd: search through the list of hostnames in the peerinfo (Signed-off-by: Varun Shastry <vshastry@redhat.com>)
    commit 3a5bfa15855e660db2bfde644727371dd2d618cc (Merge: cda6d31 371ea35; Kaushal M, Fri Jun 27 11:31:17 2014 +0530): Merge pull request #1 from vpshastry/better-peer-identification (glusterd: Add hostname to list instead of replacing upon update)
    commit 371ea354f198b4182382d5403c5960c0b2add6b6 (Varun Shastry, Fri Jun 27 11:24:54 2014 +0530): glusterd: Add hostname to list instead of replacing upon update (Signed-off-by: Varun Shastry <vshastry@redhat.com>)
    commit cda6d3152886623ecbf46baf0048ebe0119b30b6 (Kaushal M, Thu Jun 26 19:52:52 2014 +0530): Import address lists
    commit 6649b54aa0440130c08e827e0a1d1bbfb840eca9 (Kaushal M, Thu Jun 26 19:15:37 2014 +0530): Implement export address list
    commit 55990034eead92bc9b936240029e460a4bf152d5 (Kaushal M, Thu Jun 26 18:11:59 2014 +0530): Use first address in list when setting up the peer RPC.
    commit a35fde8d19b9988eb04c652fb3a5e4f84d90ad00 (Kaushal M, Thu Jun 26 18:03:04 2014 +0530): Properly free addresses on glusterd_peer_destroy
    commit 1988081db09ac9205f3dc7268cef8be267f3ce8b (Kaushal M, Thu Jun 26 17:52:35 2014 +0530): Restore peerinfo with address list implemented.
    commit 66f524d5749a12f4910dd6b06c9d91f37e1d831e (Kaushal M, Mon Jun 23 13:02:23 2014 +0530): Move out all peer related utilities from glusterd-utils to glusterd-peer-utils
    commit 14a2a326a4dff11b55490dca2a14f39320931340 (Kaushal M, Tue May 27 12:16:41 2014 +0530): Compilation fix
    commit c59cd351d0a102d0d5f3ea9001fd33c4edcb262f (Kaushal M, Mon May 5 12:51:11 2014 +0530): Add store support for hostname list
    commit b70325f0beb884ad12645ef40185f0bf6cedd741 (Kaushal M, Fri May 2 15:58:07 2014 +0530): Add a hostnames list to glusterd_peerinfo_t
        glusterd_peerinfo_new will now init this list and add the given
        hostname as the list's first member.

    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Signed-off-by: Varun Shastry <vshastry@redhat.com>
    Change-Id: Ief3c5d6d6f16571ee2fab0a45e638b9d6506a06e
    BUG: 1119547
    Reviewed-on: http://review.gluster.org/8238
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
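    In practice the probe change means a peer first probed by one name can
    later be referred to by another; a sketch with hypothetical hostnames:

        gluster peer probe server1.example.com        # peer added, first address recorded
        gluster peer probe server1-alt.example.com    # same host: the address is appended to
                                                      # the existing peerinfo's address list
                                                      # instead of creating a duplicate peer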
* cli: Changed "rebalance start" output (Venkatesh Somyajulu, 2014-07-14, 1 file changed, +4/-2)

    Change-Id: Ie87f1a2107b07a6e519ed894a74edf3b3e0a8340
    BUG: 1063230
    Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
    Reviewed-on: http://review.gluster.org/6946
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli/snapshot: provide --xml support for all snapshot commands (Rajesh Joseph, 2014-07-12, 1 file changed, +110/-60)

    Now the --xml option can be used with all snapshot commands. It produces
    the cli output in XML form.

    Change-Id: Ifc0ac31d2a9f91e136e87f3b51a629df7dba94e8
    BUG: 1096610
    Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-on: http://review.gluster.org/7663
    Reviewed-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
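    Usage sketch; the snapshot name is hypothetical:

        gluster snapshot list --xml
        gluster snapshot info snap1 --xml
        gluster snapshot status snap1 --xml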
* cli/glusterd: Added support for dispersed volumes (Xavier Hernandez, 2014-07-11, 1 file changed, +20/-1)

    Two new options have been added to the 'create' command of the cli
    interface:

        disperse [<count>] redundancy <count>

    Both are optional. A dispersed volume is created by specifying at least
    one of them. If 'disperse' is missing, or it's present but '<count>' is
    not, the number of bricks enumerated on the command line is taken as the
    disperse count.

    If 'redundancy' is missing, the lowest optimal value is assumed. A
    configuration is considered optimal (for most workloads) when the
    disperse count minus the redundancy count is a power of 2. If the
    resulting redundancy is 1, the volume is created normally, but if it's
    greater than 1, a warning is shown to the user and he/she must answer
    yes/no to continue volume creation. If there isn't any optimal value for
    the given number of bricks, a warning is also shown and, if the user
    accepts, a redundancy of 1 is used. If 'redundancy' is specified and the
    resulting volume is not optimal, another warning is shown to the user.

    A distributed-disperse volume can be created using a number of bricks
    that is a multiple of the disperse count.

    Change-Id: Iab93efbe78e905cdb91f54f3741599f7ea6645e4
    BUG: 1118629
    Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
    Reviewed-on: http://review.gluster.org/7782
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
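    A sketch of creating an optimal dispersed volume; the hostnames, brick
    paths and volume name are hypothetical:

        # 6 bricks with redundancy 2: 6 - 2 = 4 is a power of 2,
        # so no warning is issued
        gluster volume create disp-vol disperse 6 redundancy 2 \
            server{1..6}:/bricks/disp-vol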
* cli: Format the hostname column properly in the 'pool list' o/p (Vijaikumar M, 2014-07-02, 1 file changed, +26/-9)

    In the pool list output, if a hostname is lengthy, the indentation was
    not proper.

    Solution:
    1) Get the full list of hostnames first (prior to display)
    2) Determine the maximum length of the hostname strings from that
    3) Create an appropriate display padding amount, using the length
       from (2)

    Change-Id: Icc3724975a5e30b02b8e06db709930cbac5e0875
    BUG: 1028871
    Signed-off-by: Vijaikumar M <vmallika@redhat.com>
    Reviewed-on: http://review.gluster.org/8127
    Tested-by: Justin Clift <justin@gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>
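    The same two-pass padding technique, sketched in shell rather than the C
    used in cli-rpc-ops.c; hostnames and the "Connected" column value are
    hypothetical:

        hosts=("node1" "a-very-long-hostname.example.com" "n2")
        max=0
        for h in "${hosts[@]}"; do                 # pass 1: find the longest hostname
            (( ${#h} > max )) && max=${#h}
        done
        for h in "${hosts[@]}"; do                 # pass 2: left-justify to that width
            printf '%-*s %s\n' "$((max + 1))" "$h" "Connected"
        done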
* mgmt/glusterd: display snapd status as part of volume status (Raghavendra Bhat, 2014-06-30, 1 file changed, +3/-2)

    * Made changes to save the port used by snapd in the info file for the
      volume, i.e. <glusterd-working-directory>/vols/<volname>/info

    This is how the gluster volume status of a volume would look for which
    the uss feature is enabled.

    [root@tatooine ~]# gluster volume status vol
    Status of volume: vol
    Gluster process                         Port    Online  Pid
    ------------------------------------------------------------------------------
    Brick tatooine:/export1/vol             49155   Y       5041
    Snapshot Daemon on localhost            49156   Y       5080
    NFS Server on localhost                 2049    Y       5087

    Task Status of Volume vol
    ------------------------------------------------------------------------------
    There are no active volume tasks

    Change-Id: I8f3e5d7d764a728497c2a5279a07486317bd7c6d
    BUG: 1111041
    Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
    Reviewed-on: http://review.gluster.org/8114
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd/snapshot: cli error message corrected (Rajesh Joseph, 2014-06-23, 1 file changed, +8/-7)

    snapshot delete used to give an invalid error message on failure.

    Change-Id: I65d6edf8004c9a1bb91f28fa987b2d1629134013
    BUG: 1111603
    Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-on: http://review.gluster.org/8137
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd/snapshot: Fix snap delete cleanup (Avra Sengupta, 2014-06-13, 1 file changed, +9/-8)

    During commit the first thing that should happen is that the snaps are
    marked as GD_SNAP_STATUS_DECOMMISSION, so that if the node goes down the
    marked snap can be cleaned up when the node comes back. Also during snap
    handshake, while accepting a snapshot from a peer, we should check if the
    snap in question is decommissioned. In that case we shouldn't be
    accepting the peer data.

    Change-Id: Ib4e38d1b6bf49411928623fbc9f72f2b37b72086
    BUG: 1104714
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/7996
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd/snapshot : Provide enable/disable option for snapshot auto-delete feature. (Sachin Pandit, 2014-06-13, 1 file changed, +35/-2)

    This patch provides an interface to enable or disable the auto-delete
    feature.

    Syntax : gluster snapshot config auto-delete <enable/disable>

    DETAILS :
    1) When the auto-delete feature is disabled, if the soft-limit is
    reached then the user is given a warning about exceeding the soft-limit
    along with a successful snapshot creation message (the oldest snapshot is
    not deleted). And upon reaching the hard-limit, further snapshot creation
    is not allowed.

    Example :
    ------------------------------------------------------------------
    |Case - 1: Upon reaching soft-limit
    |
    |Snapshot create : snap successfully created.
    |Warning : soft-limit of volume (vol) is reached. Snapshot creation
    |is not possible once hard-limit is reached.
    |
    |-----------------------------------------------------
    |Case - 2: Upon reaching hard-limit
    |
    |Snapshot create : snap creation failed.
    |Error : hard-limit of volume (vol) is reached, Hence it is not
    |possible to take further snapshots. Please delete few snapshots
    |of the volume (vol) before taking another snapshot.
    ------------------------------------------------------------------

    2) When the auto-delete feature is enabled, then as soon as the
    soft-limit is reached the oldest snapshot is deleted for every successful
    snapshot creation (same as the existing method). With this it is made
    sure that the number of snapshots created is not more than
    snap-max-hard-limit.

    Change-Id: Ie3ca64bbd2c763371f541cd2e378314e73b695b4
    BUG: 1105415
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/8017
    Tested-by: Justin Clift <justin@gluster.org>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
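    Usage sketch of the two modes described above:

        gluster snapshot config auto-delete enable    # oldest snap auto-removed at soft-limit
        gluster snapshot config auto-delete disable   # warn at soft-limit, refuse at hard-limit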
* gsyncd / geo-rep: FSH recommended log locations (Venky Shankar, 2014-06-10, 1 file changed, +1/-0)

    Upgrading "working_dir" on the fly is a bit unclean yet (though it
    works), as config upgrade does not currently support expanding "old"
    values using configuration variables.

    Change-Id: I44ed65c281f2e0ce3b6b467addc5c1c88ac674e7
    BUG: 1077516
    Signed-off-by: Venky Shankar <vshankar@redhat.com>
    Signed-off-by: Kotresh H R <khiremat@redhat.com>
    Signed-off-by: Aravinda VK <avishwan@redhat.com>
    Signed-off-by: Ajeet Jha <ajha@redhat.com>
    Reviewed-on: http://review.gluster.org/7375
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* glusterd/status : First fetch the snapcount, and then send the rpc call for individual snapshots, for snapshot status (Sachin Pandit, 2014-06-03, 1 file changed, +216/-62)

    Problem : Initially we used to do all the calculation on the glusterd
    side; once all the information related to a snap was fetched, it was
    aggregated into one dictionary and sent back to the CLI. The problem with
    this approach was that when the number of snapshots is very high, the CLI
    times out.

    Solution : First fetch the snapcount and snapnames from glusterd, then
    make individual calls using the snapnames fetched. This resolves the
    timeout problem.

    Change-Id: I32609b3898ed227c804dd4d8ee4516f081240756
    BUG: 1087676
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/7456
    Reviewed-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* cli: 'Snapshot Volume: yes/no' for volume info needs to be removed (Vijaikumar M, 2014-05-29, 1 file changed, +0/-12)

    With the initial design, where snap volumes were displayed in gluster
    volume info, we used "Snap Volume: yes/no" to distinguish whether a
    volume was a snap volume or the original volume. But with the new design
    the snap volumes are not listed in volume info, so this "Snap Volume:"
    entry no longer makes sense to show.

    Change-Id: Ic5b9948bf4ef74e89a611742c74a8989cb406866
    BUG: 1098910
    Signed-off-by: Vijaikumar M <vmallika@redhat.com>
    Reviewed-on: http://review.gluster.org/7794
    Reviewed-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-by: Sachin Pandit <spandit@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* geo-rep/glusterd: Pause and Resume feature for geo-replication (Kotresh H R, 2014-05-13, 1 file changed, +14/-0)

    This patch introduces pause and resume cli commands for geo-replication.

    Signed-off-by: Kotresh H R <khiremat@redhat.com>
    Change-Id: I4f5e58e9175fe85077d56088473252391fb57de7
    BUG: 1093602
    Signed-off-by: Kotresh H R <khiremat@redhat.com>
    Reviewed-on: http://review.gluster.org/7643
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-by: Venky Shankar <vshankar@redhat.com>
    Tested-by: Venky Shankar <vshankar@redhat.com>
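    A usage sketch; the master volume, slave host and slave volume names are
    hypothetical, and the exact session syntax is assumed from the standard
    geo-replication command form:

        gluster volume geo-replication mastervol slavehost::slavevol pause
        # ... maintenance window on the slave ...
        gluster volume geo-replication mastervol slavehost::slavevol resume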
* glusterd/snapshot: Activation and De-activation of snapshot (Joseph Fernandes, 2014-05-02, 1 file changed, +48/-0)

    Previously, snapshots were by default activated on creation, and there
    was no option to activate or deactivate them on demand. This change
    allows the user to activate and deactivate snapshots on demand. The CLI
    goes as follows:
    1) Activate a snap using the command "gluster snapshot activate <snapname> [force]"
    2) Deactivate a snap using the command "gluster snapshot deactivate <snapname>"

    Note: Even now the snapshot will be activated during creation.

    Change-Id: I0946d800780f26c63fa1fcaf29aabc900140448f
    BUG: 1061685
    Signed-off-by: Joseph Fernandes <josferna@redhat.com>
    Reviewed-on: http://review.gluster.org/7476
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* [glusterd/snapshot]: snapshot create CLI msg correction (Joseph Fernandes, 2014-04-30, 1 file changed, +2/-2)

    Uniformity in cli output while creating snapshots.

    Change-Id: Ic0fd09bbde9a1f55c441e1745f93c588d2e4c1a1
    BUG: 1090041
    Signed-off-by: Joseph Fernandes <josferna@redhat.com>
    Reviewed-on: http://review.gluster.org/7518
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* cli: Add a cli command to enable/disable barrier (Kaushal M, 2014-04-29, 1 file changed, +63/-0)

    This patch adds a new 'gluster volume barrier <VOLNAME> {enable|disable}'
    cli command. This helps in testing the brick op code path when testing
    the barrier xlator. This patch can be reverted later if not required for
    end users.

    Change-Id: Icd86a2d13e7f276dda1ecbb2593d60638ece7dcd
    BUG: 1060002
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/6958
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
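    A sketch of the intended test flow; the volume name is hypothetical:

        gluster volume barrier vol1 enable     # engage the barrier xlator on the bricks
        # ... exercise the brick op code path ...
        gluster volume barrier vol1 disable    # release the barrier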
* build: MacOSX Porting fixes (Harshavardhana, 2014-04-24, 1 file changed, +5/-5)

    git@forge.gluster.org:~schafdog/glusterfs-core/osx-glusterfs

    Working functionality on MacOSX:
    - GlusterD (management daemon)
    - GlusterCLI (management cli)
    - GlusterFS FUSE (using OSXFUSE)
    - GlusterNFS (without NLM - issues with rpc.statd)

    Change-Id: I20193d3f8904388e47344e523b3787dbeab044ac
    BUG: 1089172
    Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
    Signed-off-by: Dennis Schafroth <dennis@schafroth.com>
    Tested-by: Harshavardhana <harsha@harshavardhana.net>
    Tested-by: Dennis Schafroth <dennis@schafroth.com>
    Reviewed-on: http://review.gluster.org/7503
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* gluster: GlusterFS Volume Snapshot Feature (Avra Sengupta, 2014-04-11, 1 file changed, +1040/-1)

    This is the initial patch for the Snapshot feature. The current patch
    includes the following features:
    * Snapshot create
    * Snapshot delete
    * Snapshot restore
    * Snapshot list
    * Snapshot info
    * Snapshot status
    * Snapshot config

    Change-Id: I2f46920c0d61c515f6a60e0f8b46fff886d9f6a9
    BUG: 1061685
    Signed-off-by: shishir gowda <sgowda@redhat.com>
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Signed-off-by: Vijaikumar M <vmallika@redhat.com>
    Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
    Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
    Signed-off-by: Joseph Fernandes <josferna@redhat.com>
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/7128
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* Made spelling changes to 19 files (AkshataDM, 2014-03-16, 1 file changed, +7/-7)

    Change-Id: If91cf44578fe0b5176ea01ae5c5962e31606f640
    BUG: 1075417
    Signed-off-by: AkshataDM <oxta28@gmail.com>
    Reviewed-on: http://review.gluster.org/7280
    Reviewed-by: Varun Shastry <vshastry@redhat.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
    Tested-by: Anand Avati <avati@redhat.com>
* quota : changes in quota list command (Anuradha, 2014-02-08, 1 file changed, +59/-26)

    Changes are made to the quota list command such that it also shows
    whether the hard-limit and soft-limit are exceeded or not. A test case to
    check the same is added.

    Change-Id: Idb365acfc5d1f2d9f3373dd5f98573d5fe87b50f
    BUG: 1038598
    Signed-off-by: Anuradha <atalur@redhat.com>
    Signed-off-by: Anuradha Talur <atalur@redhat.com>
    Reviewed-on: http://review.gluster.org/6441
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
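    Usage sketch; the volume name and path are hypothetical:

        gluster volume quota vol1 list /dir1
        # the listing now carries extra columns showing whether the
        # soft-limit and hard-limit have been exceeded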
* cli/cli-xml : skipped files should be treated as failures for remove-brick operation. (Susant Palai, 2014-02-05, 1 file changed, +15/-3)

    Fix: For the remove-brick operation, the skipped count is included in the
    failure count.

    CLI XML output: the skipped count will now always be zero for
    remove-brick status.

    Change-Id: Ic0bb23b89e0cf5b884b6d1ae42bbf98deedc9173
    BUG: 1060209
    Signed-off-by: Susant Palai <spalai@redhat.com>
    Reviewed-on: http://review.gluster.org/6889
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* cli: Add options to the CLI that let the user control the reset of stats (Dawit Alemu, 2014-01-24, 1 file changed, +27/-12)

    "volume profile info" automatically clears incremental stats. There isn't
    a command to:
    - fetch stats without clearing incremental stats, and
    - clear cumulative and incremental stats

    This change introduces two arguments (i.e. peek and clear). 'clear' will
    wipe both incremental and cumulative stats. 'peek' fetches stats without
    wiping incremental stats.

    'volume profile info peek' - fetches incremental and cumulative stats
    without wiping incremental stats
    'volume profile info incremental peek' - fetches incremental stats
    without wiping incremental stats
    'volume profile info clear' - clears both incremental and cumulative
    stats

    Change-Id: I91834515ad672eca5f882809941147d7d997c4c9
    BUG: 1047416
    Signed-off-by: Dawit Alemu <dalemu@redhat.com>
    Reviewed-on: http://review.gluster.org/6620
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
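    Usage sketch of the three variants listed above; the volume name is
    hypothetical:

        gluster volume profile vol1 info peek               # read both, wipe nothing
        gluster volume profile vol1 info incremental peek   # read incremental, wipe nothing
        gluster volume profile vol1 info clear              # wipe incremental and cumulative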
* glusterd/geo-rep: more glusterd and cli fixes for geo-rep. (Ajeet Jha, 2013-12-12, 1 file changed, +61/-197)

    -> handle option validation cases in reset case.
    -> Creating valid conf path when glusterd restarts.
    -> Reading the gsyncd worker thread status and displaying it.
    -> Displaying status-detail per worker.
    -> Fetch checkpoint info in geo-rep status.
    -> use-tarssh value validation added.

    misc: misc geo-rep fixes based on cluster, logrotate etc.
    -> cluster/dht: fix 'stime' getxattr getting overwritten.
    -> cluster/afr: return max of 'stime' values in subvol.
    -> geo-rep-logrotate: Sending SIGHUP to geo-rep auxiliary.
    -> cluster/dht: fix convoluted logic while aggregating.
    -> cluster/*: fix 'stime' min/max fetch logic.

    Change-Id: I811acea0bbd6194797a3e55d89295d1ea021ac85
    BUG: 1036552
    Signed-off-by: Ajeet Jha <ajha@redhat.com>
    Reviewed-on: http://review.gluster.org/6405
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Amar Tumballi <amarts@gmail.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* cli: Add an option to fetch just incremental or cumulative I/O information (Dawit Alemu, 2013-12-03, 1 file changed, +3/-1)

    'volume profile info' fetches both cumulative and incremental I/O
    statistics. There isn't a way to fetch just cumulative or incremental
    statistics.

    This change introduces two optional arguments, namely "incremental" and
    "cumulative", that can be tacked on to 'volume profile info'. In other
    words, the new command format is

    volume profile <VOLNAME> {start | info [incremental | cumulative] | stop} [nfs]

    'volume profile info incremental' - fetches incremental stats
    'volume profile info cumulative' - fetches cumulative stats
    'volume profile info' - fetches incremental and cumulative stats

    Change-Id: I5ddb45d990542ea611d23d251efebfec46f472d0
    BUG: 1030580
    Signed-off-by: Dawit Alemu <dalemu@redhat.com>
    Reviewed-on: http://review.gluster.org/6264
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* cli: More checks in rebalance status output (Kaushal M, 2013-12-03, 1 file changed, +16/-4)

    Change-Id: Ibd2edc5608ae6d3370607bff1c626c8347c4deda
    BUG: 1031887
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/6337
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* cli, glusterd: More quota fixes ... (Krutika Dhananjay, 2013-11-30, 1 file changed, +46/-18)

    ... which may be grouped under the following categories:

    1. Fix incorrect cli exit status for 'quota list' cmd
    2. Print appropriate error message on quota parse errors in cli
       Authored by: Anuradha Talur <atalur@redhat.com>
    3. glusterd: Improve quota validation during stage-op
    4. Fix peer probe issues resulting from quota conf checksum mismatches
    5. Enhancements to CLI output in the event of quota command failures
       Authored by: Kaushal Madappa <kmadappa@redhat.com>
    7. Move aux mount location from /tmp to /var/run/gluster
       Authored by: Krishnan Parthasarathi <kparthas@redhat.com>
    8. Fix performance issues in quota limit-usage
       Authored by: Krutika Dhananjay <kdhananj@redhat.com>

    Note: Some functions that were used in an earlier version of quota, and
    that aren't called anymore, have been removed.

    Change-Id: I9d874f839ae5fdcfbe6d4f2d727eac091f27ac57
    BUG: 969461
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/6366
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli/glusterd: Changes to quota command (Raghavendra G, 2013-11-26, 1 file changed, +370/-139)

    Quota feature re-work. Following are the cli commands that are
    new/re-worked:
    ======================================================
    volume quota <VOLNAME> {enable|disable|list [<path> ...]|remove <path>| default-soft-limit <percent>} |
    volume quota <VOLNAME> {limit-usage <path> <size> [<percent>]} |
    volume quota <VOLNAME> {alert-time|soft-timeout|hard-timeout} {<time>}

    volume status [all | <VOLNAME> [nfs|shd|<BRICK>|quotad]] [detail|clients|mem|inode|fd|callpool]
    volume statedump <VOLNAME> [nfs|quotad] [all|mem|iobuf|callpool|priv|fd|inode|history]

    glusterd changes:
    =================
    * Quota limits are now set as extended attributes by glusterd from the
      aux mount created by the cli.
    * The gfids of the directories on which quota limits are set for a given
      volume are stored in the /var/lib/glusterd/vols/<volname>/quota.conf
      file in binary format, and their cksum and version are stored in
      /var/lib/glusterd/vols/<volname>/quota.cksum.

    Original-author: Krutika Dhananjay <kdhananj@redhat.com>
    Original-author: Krishnan Parthasarathi <kparthas@redhat.com>
    BUG: 969461
    Change-Id: If32bba36c67f9c2a30417af9c6389045b2b7c13b
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-on: http://review.gluster.org/6003
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
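    A minimal end-to-end sketch of the re-worked commands; the volume name,
    path and sizes are hypothetical:

        gluster volume quota vol1 enable
        gluster volume quota vol1 limit-usage /dir 10GB 80   # 10GB hard limit, 80% soft limit
        gluster volume quota vol1 list /dir
        gluster volume quota vol1 default-soft-limit 90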
* cli: List only nodes which have rebalance started in rebalance status (Kaushal M, 2013-11-20, 1 file changed, +127/-209)

    Listing the nodes on which rebalance hasn't been started just gives out
    extraneous information. Also, refactor the rebalance status printing code
    into a single function and use it for both rebalance and remove-brick
    status.

    BUG: 1031887
    Change-Id: I47bd561347dfd6ef76c52a1587916d6a71eac369
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/6300
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* bd: posix/multi-brick support to BD xlator (M. Mohan Kumar, 2013-11-13, 1 file changed, +42/-0)

    The current BD xlator (block backend) has a few limitations, such as:
    * Creation of directories not supported
    * Supports only a single brick
    * Does not use extended attributes (and client gfid) like the posix xlator
    * Creation of special files (symbolic links, device nodes etc.) not supported

    The basic limitation of not allowing directory creation blocks
    oVirt/VDSM from consuming the BD xlator as part of a Gluster domain,
    since VDSM creates multi-level directories when GlusterFS is used as the
    storage backend for storing VM images.

    To overcome these limitations a new BD xlator with the following
    improvements is suggested:
    * New hybrid BD xlator that handles both regular files and block device files
    * The volume will have both POSIX and BD bricks. Regular files are created
      on POSIX bricks; block devices are created on the BD brick (VG)
    * The BD xlator leverages the existing POSIX xlator for most POSIX calls
      and hence sits above the POSIX xlator
    * A block device file is differentiated from a regular file by an
      extended attribute
    * The xattr 'user.glusterfs.bd' (BD_XATTR) plays a role in mapping a
      posix file to a Logical Volume (LV).
    * When a client sends a request to set BD_XATTR on a posix file, a new LV
      is created and mapped to the posix file. So every block device will
      have a representative file in the POSIX brick with 'user.glusterfs.bd'
      (BD_XATTR) set.
    * Hereafter all operations on this file result in LV related operations.
      For example, opening a file that has BD_XATTR set results in opening
      the LV block device; reading results in reading the corresponding LV
      block device.

    When the BD xlator gets a request to set BD_XATTR via a setxattr call, it
    creates an LV and places information about this LV in the xattr of the
    posix file. The xattr "user.glusterfs.bd" is used to identify that the
    posix file is mapped to a BD.

    Usage:
    Server side:
    [root@host1 ~]# gluster volume create bdvol host1:/storage/vg1_info?vg1 host2:/storage/vg2_info?vg2
    It creates a distributed gluster volume 'bdvol' with Volume Group vg1
    using posix brick /storage/vg1_info in host1 and Volume Group vg2 using
    /storage/vg2_info in host2.
    [root@host1 ~]# gluster volume start bdvol

    Client side:
    [root@node ~]# mount -t glusterfs host1:/bdvol /media
    [root@node ~]# touch /media/posix
    It creates regular posix file 'posix' in either host1:/vg1 or host2:/vg2 brick
    [root@node ~]# mkdir /media/image
    [root@node ~]# touch /media/image/lv1
    It also creates regular posix file 'lv1' in either host1:/vg1 or host2:/vg2 brick
    [root@node ~]# setfattr -n "user.glusterfs.bd" -v "lv" /media/image/lv1
    [root@node ~]#
    The above setxattr results in creating a new LV in the corresponding
    brick's VG and it sets 'user.glusterfs.bd' with value
    'lv:<default-extent-size>'
    [root@node ~]# truncate -s5G /media/image/lv1
    It results in resizing LV 'lv1' to 5G

    The new BD xlator code is placed in the xlators/storage/bd directory.

    Also add volume-uuid to the VG so that the same VG can't be used for
    other bricks/volumes. After deleting a gluster volume, one has to
    manually remove the associated tag using
    vgchange <vg-name> --deltag <trusted.glusterfs.volume-id:<volume-id>>

    Changes from previous version V5:
    * Removed support for delayed deleting of LVs

    Changes from previous version V4:
    * Consolidated the patches
    * Removed usage of BD_XATTR_SIZE and consolidated it in BD_XATTR.

    Changes from previous version V3:
    * Added support in FUSE to support full/linked clone
    * Added support to merge snapshots and provide information about origin
    * bd_map xlator removed
    * iatt structure used in inode_ctx. iatt is cached and updated during
      fsync/flush
    * aio support
    * Type and capabilities of volume are exported through getxattr

    Changes from version 2:
    * Used inode_context for caching BD size and to check if loc/fd is BD or
      not.
    * Added GlusterFS server offloaded copy and snapshot through setfattr
      FOP. As part of this libgfapi is modified.
    * BD xlator supports stripe
    * During unlinking, if a LV file is already opened, it is added to the
      delete list and bd_del_thread tries to delete from this list when the
      last reference to that file is closed.

    Changes from previous version:
    * gfid is used as the name of the LV
    * ? is used to specify the VG name for creating a BD volume in volume
      create, add-brick: gluster volume create volname host:/path?vg
    * open-behind issue is fixed
    * A replicate brick can be added dynamically and LVs from the source
      brick are replicated to the destination brick
    * A distribute brick can be added dynamically and the rebalance operation
      distributes existing LVs/files to the new brick
    * Thin provisioning support added.
    * bd_map xlator support retained
    * setfattr -n user.glusterfs.bd -v "lv" creates a regular LV and
      setfattr -n user.glusterfs.bd -v "thin" creates a thin LV
    * Capability and backend information added to gluster volume info (and
      --xml) so that management tools can exploit the BD xlator.
    * tracing support for bd xlator added

    TODO:
    * Add support to display snapshots for a given LV
    * Display posix filename for list-origin instead of gfid

    Change-Id: I00d32dfbab3b7c806e0841515c86c3aa519332f2
    BUG: 1028672
    Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
    Reviewed-on: http://review.gluster.org/4809
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* bd_map: Remove bd_map xlator (M. Mohan Kumar, 2013-11-13, 1 file changed, +0/-156)

    Remove bd_map xlator and CLI related changes.

    Change-Id: If7086205df1907127c1a1fa4ba603f1c48421d09
    BUG: 1028672
    Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
    Reviewed-on: http://review.gluster.org/5747
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* cli: Set the o/p width of hostname to 8 characters (Vijaykumar M, 2013-11-11, 1 file changed, +1/-1)

    Change-Id: I91dcb19ba4d31c17e6041155c0e59af457b87f1b
    BUG: 1028871
    Signed-off-by: Vijaykumar M <vmallika@redhat.com>
    Reviewed-on: http://review.gluster.org/6245
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* cli: write 'volume rebalance' error message in xml format when --xml is specified (Dawit Alemu, 2013-11-10, 1 file changed, +12/-4)

    When 'volume rebalance' encounters an error, the cli prints the error
    message in plain text regardless of whether --xml is specified. This
    throws off client applications that expect xml output (as mentioned in
    bz1026143). Now, if the --xml flag is supplied, the cli prints 'volume
    rebalance' error messages in xml format.

    Change-Id: I16c6a7a4cdd2819eb73422ab849125986dc299a6
    BUG: 1026143
    Signed-off-by: Dawit Alemu <dalemu@redhat.com>
    Reviewed-on: http://review.gluster.org/6242
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
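    Usage sketch; the volume name is hypothetical:

        gluster volume rebalance vol1 start --xml
        # on failure, the error now arrives as an XML <cliOutput> document
        # instead of a plain-text message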
* glusterd : Improved quota volume reset command (Anuradha, 2013-10-28, 1 file changed, +2/-2)

    The quota volume reset command without the "force" option is fixed and
    doesn't fail anymore. It resets unprotected fields and not the protected
    ones.

    Also, an appropriate message is provided to the user for the following
    cases:
    1. Only unprotected fields are reset; the "force" option should be used
       to reset protected fields.
    2. Both protected and unprotected fields are reset.
    3. No field was reset; the "force" option is required.

    A test case for the same has also been added.

    Change-Id: I24e8f1be87b79ccd81bf6f933e00608b861c7a16
    BUG: 1022905
    Signed-off-by: Anuradha <atalur@redhat.com>
    Reviewed-on: http://review.gluster.org/6135
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cluster/afr: [Feature] Command implementation to get heal-count (Venkatesh Somyajulu, 2013-10-14, 1 file changed, +59/-0)

    Currently, to know the number of files to be healed, the user either has
    to go to the backend and check the number of entries present in the
    indices/xattrop directory, or run "gluster volume heal vol-name info".
    But if a volume consists of a large number of bricks, going to each
    backend and counting the entries is a time-consuming task, and with the
    info command, if the number of entries in the indices/xattrop directory
    is very high, it also consumes time.

    So as a feature, a new command is implemented.

    Command 1: gluster volume heal vn statistics heal-count
    This command will get the number of entries present in every brick of a
    volume. The output displays only the entries count.

    Command 2: gluster volume heal vn statistics heal-count replica 192.168.122.1:/home/user/brickname
    Here, if we are concerned with just one replica, providing any one brick
    of a replica will get the number of entries to be healed for that replica
    only.

    Example: Replicate volume with replica count 2.

    Backend status:
    --------------
    [root@dhcp-0-17 xattrop]# ls -lia | wc -l
    1918

    NOTE: Out of 1918, 2 entries are <xattrop-gfid> dummy entries, so the
    actual number of entries to be healed is 1916.

    [root@dhcp-0-17 xattrop]# pwd
    /home/user/2ty/.glusterfs/indices/xattrop

    Command output:
    --------------
    Gathering count of entries to be healed on volume volume3 has been successful

    Brick 192.168.122.1:/home/user/22iu
    Status: Brick is Not connected
    Entries count is not available

    Brick 192.168.122.1:/home/user/2ty
    Number of entries: 1916

    Change-Id: I72452f3de50502dc898076ec74d434d9e77fd290
    BUG: 1015990
    Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
    Reviewed-on: http://review.gluster.org/6044
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* cluster/afr : Implementation of command "gluster volume heal vn statistics"Venkatesh Somyajulu2013-10-141-2/+110
The "gluster volume heal volumename statistics" command gives a
summary of the afr crawls done, based on the entries present in the
xattrop directory. Whenever an afr crawl is attempted, the starting
time of the crawl, the ending time of the crawl, the number of files
healed, the heal-failed count and the number of files in split-brain
are shown along with the type of the crawl. If a crawl is already in
progress, the counts from the beginning of the crawl are given and,
instead of the ending time, a "CRAWL IN PROGRESS" message is shown.

Output format:
command: "gluster volume heal volume-name statistics"
Output:
Gathering afr crawl statistics
crawl statistics on volume volume-name has been successful
------------------------------------------------
Crawl statistics for brick no 0
Hostname of brick 192.168.122.248

Starting time of crawl: Wed Jul 10 15:52:38 2013
Ending time of crawl: Wed Jul 10 15:52:38 2013
Type of crawl: INDEX
No. of entries healed: 0
No. of entries in split-brain: 0
No. of heal failed entries: 0

Starting time of crawl: Wed Jul 10 15:52:38 2013
Ending time of crawl: Wed Jul 10 15:52:38 2013
Type of crawl: INDEX
No. of entries healed: 0
No. of entries in split-brain: 0
No. of heal failed entries: 0
------------------------------------------------
Crawl statistics for brick no 1
Hostname of brick 192.168.122.1

Starting time of crawl: Wed Jul 10 15:52:42 2013
Ending time of crawl: Wed Jul 10 15:52:42 2013
Type of crawl: INDEX
No. of entries healed: 0
No. of entries in split-brain: 0
No. of heal failed entries: 0

Starting time of crawl: Wed Jul 10 15:52:42 2013
Ending time of crawl: Wed Jul 10 15:52:42 2013
Type of crawl: INDEX
No. of entries healed: 0
No. of entries in split-brain: 0
No. of heal failed entries: 0
--------------------------------------------------

Change-Id: I10bf9d10b005741db9973fb1352e0dd59ed99aa9
BUG: 949400
Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
Reviewed-on: http://review.gluster.org/4790
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
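A sketch of rendering one crawl record in the format above; the struct
is illustrative, since the real values travel in a dict from glusterd
rather than a C struct:

#include <stdio.h>
#include <time.h>

struct crawl_stats {
        time_t      start;
        time_t      end;          /* 0 => crawl still in progress */
        const char *type;         /* e.g. "INDEX" */
        unsigned    healed;
        unsigned    split_brain;
        unsigned    heal_failed;
};

static void
print_crawl_stats (const struct crawl_stats *s)
{
        /* ctime() already appends a newline */
        printf ("Starting time of crawl: %s", ctime (&s->start));
        if (s->end)
                printf ("Ending time of crawl: %s", ctime (&s->end));
        else
                printf ("CRAWL IN PROGRESS\n");
        printf ("Type of crawl: %s\n", s->type);
        printf ("No. of entries healed: %u\n", s->healed);
        printf ("No. of entries in split-brain: %u\n", s->split_brain);
        printf ("No. of heal failed entries: %u\n", s->heal_failed);
}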
* cli,glusterd: Implement 'volume status tasks'Krutika Dhananjay2013-10-081-46/+109
oVirt's Gluster Integration needs an inexpensive command that can be
executed every 10 seconds to monitor async tasks and their parameters,
for all volumes.

The solution involves adding a 'tasks' sub-command to 'volume status'
to fetch only the async task IDs, type and other relevant parameters.
Only the originator glusterd participates in this command, as all the
information needed is available on all the nodes. This is to make the
command suitable for being executed every 10 seconds.

Change-Id: I1edc607baf29b001a5585079dec681d7c641b3d1
BUG: 1012346
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/6006
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
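A sketch of the minimal per-task record such a cheap poll needs (type,
ID, status) and a plain-text printer for it; the names are
illustrative, not glusterd's actual structures:

#include <stdio.h>

struct task_info {
        const char *type;        /* "Rebalance", "Remove brick", ... */
        const char *id;          /* task UUID string */
        const char *status_str;  /* "in progress", "completed", ... */
};

static void
print_tasks (const struct task_info *tasks, int n)
{
        int i;
        printf ("%-16s %-38s %s\n", "Task", "ID", "Status");
        for (i = 0; i < n; i++)
                printf ("%-16s %-38s %s\n",
                        tasks[i].type, tasks[i].id, tasks[i].status_str);
}

Because only the originator answers, the cost of the poll stays
constant regardless of cluster size.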
* cli: add node uuid in rebalance and remove brick status xml outputBala.FA2013-10-031-10/+10
This patch adds the node uuid to the rebalance/remove-brick status XML
output. The output XML will look like:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
<opRet>0</opRet>
<opErrno>0</opErrno>
<opErrstr/>
<volRebalance>
<op>3</op>
<nodeCount>1</nodeCount>
<node>
<nodeName>localhost</nodeName>
==>> <id>883626f8-4d29-4d02-8c5d-c9f48c5b2445</id>
<files>0</files>
<size>0</size>
<lookups>0</lookups>
<failures>0</failures>
<status>3</status>
<statusStr>completed</statusStr>
</node>
<aggregate>
<files>0</files>
<size>0</size>
<lookups>0</lookups>
<failures>0</failures>
<status>3</status>
<statusStr>completed</statusStr>
</aggregate>
</volRebalance>
</cliOutput>

Change-Id: I5a1d4f9043b33b9e88150647a243ddb16154e843
BUG: 1012296
Signed-off-by: Bala.FA <barumuga@redhat.com>
Reviewed-on: http://review.gluster.org/6005
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
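A sketch of emitting the new <id> element, assuming libxml2's text
writer and libuuid for formatting the uuid; the function name is
hypothetical:

#include <libxml/xmlwriter.h>
#include <uuid/uuid.h>

/* Write <id>883626f8-...</id> inside the current <node> element. */
static void
write_node_id (xmlTextWriterPtr w, const uuid_t node_uuid)
{
        char str[37];                   /* 36 uuid chars + NUL */
        uuid_unparse (node_uuid, str);
        xmlTextWriterWriteElement (w, (xmlChar *) "id", (xmlChar *) str);
}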
* cli/glusterd: improve rebalance fix-layout status reportingRavishankar N2013-09-191-3/+9
Problem: Currently the CLI rebalance status command output does not
indicate the 'type' of rebalance, i.e. whether a full rebalance or
only a fix-layout was carried out.

Fix: After the rebalance status of all peers is received by the
originator glusterd, alter it to reflect the type of rebalance before
passing it on to the CLI process.

Change-Id: I1940ffda0d36e25e5b33c84a0ea210394cc9e1d3
BUG: 1004744
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/5826
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
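One way the originator-side alteration could look, sketched with
hypothetical enum and function names (the actual status codes live in
glusterd's rebalance definitions, not here):

/* Fold the command type into the aggregated status so the CLI can
 * print "fix-layout in progress" rather than a generic status. */
enum rebal_cmd  { REBAL_FULL, REBAL_FIX_LAYOUT };
enum rebal_stat { REBAL_STARTED, REBAL_FIX_LAYOUT_STARTED,
                  REBAL_COMPLETE, REBAL_FIX_LAYOUT_COMPLETE };

static enum rebal_stat
fold_type_into_status (enum rebal_stat st, enum rebal_cmd cmd)
{
        if (cmd != REBAL_FIX_LAYOUT)
                return st;
        switch (st) {
        case REBAL_STARTED:  return REBAL_FIX_LAYOUT_STARTED;
        case REBAL_COMPLETE: return REBAL_FIX_LAYOUT_COMPLETE;
        default:             return st;
        }
}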
* cli: Add statusStr xml tag to task list and rebalance/remove brick statusAravinda VK2013-09-121-19/+22
A new XML tag, statusStr, is added to the output of the following
gluster CLI commands:

gluster volume status all --xml (for task status)
gluster volume rebalance <VOLNAME> status --xml
gluster volume remove-brick <VOLNAME> <BRICK1..> status --xml

Example (volume status all):
<task>
<type>Rebalance</type>
<id>82d8d122-8738-4144-8507-d93fc98b61df</id>
<status>3</status>
<statusStr>completed</statusStr>
</task>

Example (volume rebalance <VOL> status):
<node>
<nodeName>localhost</nodeName>
<files>0</files>
<size>0</size>
<lookups>0</lookups>
<failures>0</failures>
<status>3</status>
<statusStr>completed</statusStr>
</node>

Also, the task status in 'gluster volume status all' is now shown as a
string instead of a number.

Example:
Status of volume: gv1
Gluster process                             Port    Online  Pid
------------------------------------------------------------------------------
Brick sumne.sumne:/gfs/b1                   49154   Y       15489
Brick sumne.sumne:/gfs/b2                   49155   Y       15493
NFS Server on localhost                     N/A     N       15913

Task                 ID                                     Status
----                 --                                     ------
Rebalance            82d8d122-8738-4144-8507-d93fc98b61df   completed

BUG: 1003521
Change-Id: Ib283016af4c18132fb13fb33d44075782d77823c
Signed-off-by: Aravinda VK <avishwan@redhat.com>
Reviewed-on: http://review.gluster.org/5739
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
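A sketch of the status-to-string mapping behind the new tag; only
3 => "completed" is confirmed by the samples above, the other table
entries are assumptions:

static const char *
cli_status_str_sketch (int status)
{
        static const char *tbl[] = {
                "not started",   /* 0 - assumed */
                "in progress",   /* 1 - assumed */
                "stopped",       /* 2 - assumed */
                "completed",     /* 3 - matches the sample output */
                "failed",        /* 4 - assumed */
        };
        if (status < 0 || status >= (int) (sizeof (tbl) / sizeof (*tbl)))
                return "unknown";
        return tbl[status];
}

Keeping the numeric <status> alongside <statusStr> preserves backward
compatibility for existing consumers of the XML.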
* cli: Fix 'status all' xml output when volumes are not startedKaushal M2013-09-111-10/+12
The CLI now outputs only one XML document for 'status all', containing
only those volumes which are started.

BUG: 1004218
Change-Id: Id4130fe59b3b74475d8bd1cc8134ac59a28f1b7e
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/5773
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
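A sketch of the filtering described, with illustrative types and tag
names rather than the CLI's actual ones:

#include <stdbool.h>
#include <libxml/xmlwriter.h>

struct vol { const char *name; bool started; };

/* Emit one <volumes> list, skipping volumes that are not started,
 * so a single well-formed document covers 'status all'. */
static void
status_all_sketch (xmlTextWriterPtr w, const struct vol *vols, int n)
{
        int i;
        xmlTextWriterStartElement (w, (xmlChar *) "volumes");
        for (i = 0; i < n; i++) {
                if (!vols[i].started)
                        continue;       /* stopped volumes are left out */
                xmlTextWriterStartElement (w, (xmlChar *) "volume");
                xmlTextWriterWriteElement (w, (xmlChar *) "volName",
                                           (xmlChar *) vols[i].name);
                xmlTextWriterEndElement (w);    /* </volume> */
        }
        xmlTextWriterEndElement (w);            /* </volumes> */
}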
* glusterd/cli: Geo-Replication "status detail" cmdVenky Shankar2013-09-041-138/+333
Provides detailed status info in the following format:

MASTER <master-vol> SLAVE <slave-vol>

NODE  HEALTH  UPTIME  FILES SYNCD  FILES PENDING  BYTES PENDING  DELETES PENDING
-----------------------------------------------------------------------------------

This patch introduces the "status detail" command to show
crawl-related information in the CLI. These values are "pulled" from
gsyncd when "status detail" is executed.

Change-Id: I1fdaf7180eacce054a864d34971dc160bd7301e1
BUG: 990420
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Reviewed-on: http://review.gluster.org/5590
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Tested-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
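A sketch of printing the header and one row of this table; the struct,
field names and column widths are illustrative:

#include <stdio.h>

struct gsync_detail {
        const char *node, *health, *uptime;
        long files_syncd, files_pending, bytes_pending, deletes_pending;
};

static void
print_detail_header (const char *master, const char *slave)
{
        printf ("MASTER %s SLAVE %s\n\n", master, slave);
        printf ("%-16s %-10s %-8s %12s %14s %14s %16s\n",
                "NODE", "HEALTH", "UPTIME", "FILES SYNCD",
                "FILES PENDING", "BYTES PENDING", "DELETES PENDING");
}

static void
print_detail_row (const struct gsync_detail *d)
{
        printf ("%-16s %-10s %-8s %12ld %14ld %14ld %16ld\n",
                d->node, d->health, d->uptime,
                d->files_syncd, d->files_pending,
                d->bytes_pending, d->deletes_pending);
}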
* glusterd: Saving geo-rep session details in a more specific pathVenky Shankar2013-09-041-60/+8
The session details are now saved in the
/var/lib/glusterd/geo-replication/<mastervol>_<slaveip>_<slavevol>
directory, to distinguish between two master-slave sessions where the
slave name is the same across two different clusters.

Change-Id: I57c93f55cc9bd4fe2bffe579028aaf5e4335b223
BUG: 991501
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Reviewed-on: http://review.gluster.org/5488
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
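A sketch of composing that per-session directory path, assuming
/var/lib/glusterd as the glusterd working directory; the function name
is illustrative:

#include <stdio.h>

#define GLUSTERD_WORKDIR "/var/lib/glusterd"

/* Build ".../geo-replication/<mastervol>_<slaveip>_<slavevol>";
 * returns the length snprintf would have written. */
static int
build_session_dir (char *buf, size_t len, const char *mastervol,
                   const char *slaveip, const char *slavevol)
{
        return snprintf (buf, len, "%s/geo-replication/%s_%s_%s",
                         GLUSTERD_WORKDIR, mastervol, slaveip, slavevol);
}

Embedding the slave IP in the name keeps two sessions distinct even
when both clusters use the same slave volume name.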