path: root/cli/src/cli-rpc-ops.c
Commit message (Author, Age, Files, Lines)
* snapshot/restore : Snapshot restore changes. (Sachin Pandit, 2014-03-10, 1 file, -40/+20)

    This patch includes CLI changes and a few backend changes.

    Syntax: gluster snapshot restore <snap-name>

    Also removed unwanted snapshot remove parsing code.

    Change-Id: Ie32590ccd4080da9409fd16c543866c14fae28f5
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/7191
    Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Tested-by: Rajesh Joseph <rjoseph@redhat.com>
* glusterd/snapshot: Snapshot create and delete changes (Vijaikumar M, 2014-03-06, 1 file, -3/+3)

    With the snap-driven approach, while creating a snapshot we have to
    mention the snap name first and then the volumes to be associated
    with it. While deleting a snapshot, we have to mention only the snap
    name. Corresponding changes have been made in glusterd for both.

    CLI changes for the same can be found here:
    http://review.gluster.org/#/c/6947/

    Change-Id: I8bd8f471da5b728165da5f331faad3dde3486823
    Signed-off-by: Vijaikumar M <vmallika@redhat.com>
    Reviewed-on: http://review.gluster.org/7123
    Reviewed-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Tested-by: Rajesh Joseph <rjoseph@redhat.com>
* cli/snapshot : snapshot list CLI (Sachin Pandit, 2014-03-05, 1 file, -3/+45)

    Syntax: gluster snapshot list [volname]

    This lists all snapshots, or the snapshots of a particular volume.

    Change-Id: If879e06fe13caf2236f48df345857f833ae83c5b
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/7143
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Tested-by: Rajesh Joseph <rjoseph@redhat.com>
* CLI/snapshot : Snapshot info CLI changes (Sachin Pandit, 2014-03-05, 1 file, -365/+284)

    snapshot info [(snapname | volume <volname>)]

    Snapshot info lists the basic information.

    Syntax:
    ** gluster snapshot info **
       Lists all snap objects, and for each also prints the snap volume
       name, UUID and status.
    ** gluster snapshot info <snap-name> **
       Lists only the mentioned snap object, along with its snap volume
       information.
    ** gluster snapshot info volume <volname> **
       Lists all the snaps present in the mentioned volume.

    Change-Id: I1e92774cb08eaebbfe141b9b47d1a887d76916a4
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/6996
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Tested-by: Rajesh Joseph <rjoseph@redhat.com>
* Merge "cli: fix displaying of different delete message" into developmentRajesh Joseph2014-01-151-1/+1
|\
| * cli: fix displaying of different delete message (Raghavendra Bhat, 2014-01-07, 1 file, -1/+1)

    Change-Id: Ife2395a92997168bb147a7db4bba346d3adc916b
    BUG: 1048126
    Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
* | Merge "CLI/snapshot : Aligning the snapshot list output." into developmentRajesh Joseph2014-01-151-19/+47
|\ \
| * | CLI/snapshot : Aligning the snapshot list output. (Sachin Pandit, 2014-01-15, 1 file, -19/+47)
| |/
    Change-Id: Id98d6e2d0436621486c311889f128077558e59f8
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
* / Snapshot: Gluster snapshot restore feature (Rajesh Joseph, 2014-01-15, 1 file, -2/+2)
|/
    Implemented the gluster snapshot restore feature. The restore is done
    by replacing the origin volume with the snap volume.

    TODO: After the restore the snapshot volume should be deleted. As of
    now the deletion work is pending.

    Change-Id: Ib137fb6bb84a74030607ffa47f89cd705dc7e1ff
    Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
* glusterd/snapshot: Defining snap-max-soft-limit as a percentage of snap-max-hard-limit. (Avra Sengupta, 2014-01-07, 1 file, -37/+67)

    This patch also prohibits configuration of snap-max-hard-limit and
    snap-max-soft-limit for snap volumes. Also displaying the snapshot
    configs by reading data only from the local node, as all config data
    will be in sync across the cluster.

    Change-Id: I635b925c02ed5b108cd10c7193b154ad82d5afad
    BUG: 1043792
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
* glusterd/snapshot: Introducing snap-max-hard-limit and snap-max-soft-limit (Avra Sengupta, 2014-01-06, 1 file, -44/+103)

    Note: Manually adding this patch again as this patch got missed in
    the git reset done on the remote development branch.

    Change-Id: I9e81c5ec003c1e1722d0fcb27dd87c365ee43ff4
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
* glusterd/snapshot : Fix for CG ID and Name not getting displayed. (Sachin Pandit, 2014-01-06, 1 file, -2/+2)

    The CG ID was not getting initiated during snapshot create, hence
    there was a problem in listing the CG ID and CG Name.

    Note: Manually adding this patch again as this patch got missed in
    the git reset done on the remote development branch.

    Change-Id: I81951b42292912c98bab5964fc732b630ff66d14
    BUG: 1040435
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
* cli/glusterd: implement the snap and cg delete functionalities (Raghavendra Bhat, 2013-12-12, 1 file, -0/+61)

    Change-Id: Icdb66c89acdd043d0d6368c48ce2e01b1a40966f
    Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
* mgmt/glusterd : snapshot list, minor fixes. (Sachin Pandit, 2013-11-28, 1 file, -22/+59)

    This patch fixes the below mentioned issues:
    Snapshot list: listing the number of snaps available.
    Display a proper message if no snapshot is present.

    Change-Id: Iabfc47430a9c89fb5114e33e9feb7ef21973fc6a
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
* cli/snapshot : Minor fix, string literal. (Sachin Pandit, 2013-11-15, 1 file, -1/+1)

    Change-Id: I2a0b7e244256f1df82beb3e4815d6cacfee50603
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
* mgmt/glusterd : Printing error message if volume does not exist. (Sachin Pandit, 2013-11-15, 1 file, -1/+12)

    If the user tries to list the snap details of a volume which does not
    exist, a corresponding error message is now displayed.

    Change-Id: I205738be3dc632ccb074b639a2088cdd44aa35a7
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
* glusterd/Jarvis: Added aggr rsp dict in mgmt framework (Avra Sengupta, 2013-11-15, 1 file, -4/+5)

    Also fixes snapshot config output.

    Change-Id: Ia50d94492009cf73dbb99ba20117b9fa4c41048a
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
* snapshot: Snapshot restore (Rajesh Joseph, 2013-11-15, 1 file, -0/+32)

    GL-31: Ability to restore snapshot

    Implemented snapshot restore for thin logical volumes. As of now
    snapshot restore for CG is not tested.

    Testing for snapshot restore of a volume is done by changing the
    snapshot create process to create a thick snapshot. This is done
    because the --merge option to restore a thin volume is not working in
    the latest kernel.

    Change-Id: Ia3ded7e6c4da5957a74e269a25ba3200e6fb2d8b
    Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
* mgmt/glusterd: changes to create consistency group out of volumes (Raghavendra Bhat, 2013-11-15, 1 file, -0/+12)

    * Also send the proper error back to cli in case of any failure
    * Before taking the snap, check whether a snap with the requested
      name already exists

    Change-Id: I0830b31b1f095dd1d3d968c4f8b3cf46dc32d259
    Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
* mgmt/glusterd: snapshot config changes (shishir gowda, 2013-11-15, 1 file, -0/+49)

    Also refactored code in glusterd for the create command.
    Additionally, removed brick-op func from mgmt_iniate_all_phases.

    Change-Id: Iddcc332009c5716adee7f2b04c93b352fb983446
    Signed-off-by: shishir gowda <sgowda@redhat.com>
* CLI : Snapshot List, Integration with glusterd (Sachin Pandit, 2013-11-15, 1 file, -27/+39)

    Change in naming convention: "snap_details", "snap_count" and so on
    are replaced by "snap-details", "snap-count" and so on.

    Total snapcount introduced.

    A separate check is made for repeated volume names.
    Ex: "gluster snapshot list vol1 vol2 vol1 vol2" is considered as
        "gluster snapshot list vol1 vol2"

    *This is still a work in progress*
    *Have to test CG list once CG Store is ready*

    Change-Id: I45e2904eb8bdbf78de8665f20ba9605c38320307
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
* CLI : snapshot list cli interface (Sachin Pandit, 2013-11-15, 1 file, -39/+330)

    $ gluster snapshot list
      *prints snaps of all volumes*
    $ gluster snapshot list -d
      *prints snaps of all volumes with details*
    $ gluster snapshot list vol1
      *prints snaps of volume "vol1"*
    $ gluster snapshot list vol1 -d
      *prints snaps of volume "vol1" with details*
    $ gluster snapshot list vol1 vol2
      *prints snaps of volumes "vol1" & "vol2"*
    $ gluster snapshot list vol1 vol2 -d
      *prints snaps of volumes "vol1" & "vol2" with details*
    $ gluster snapshot list -c cgname
      *prints snaps of all volumes present in the group "cgname"*
    $ gluster snapshot list -c cgname -d
      *prints snaps of all volumes present in the group "cgname" with details*

    ** As of now you won't be able to see any output, as the actual snap
    create is not integrated **

    Change-Id: I60eeafc715a51f1c564a270bb4124368038012b1
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
* cli: snapshot create cli interface. (Avra Sengupta, 2013-11-15, 1 file, -0/+122)

    $ gluster snapshot help
      snapshot help - display help for snapshot commands
      snapshot create <volnames> [-n <snap-name/cg-name>] [-d <description>] - Snapshot Create.

    $ gluster snapshot create vol1
      snapshot create: ???: snap created successfully
    $ gluster snapshot create vol1 vol2
      snapshot create: ???: consistency group created successfully
      (The ??? will be replaced by the glusterd snap create command with
      the generated snap-name or cg-name)
    $ gluster snapshot create vol1 vol2 -n CG1
      snapshot create: CG1: consistency group created successfully
    $ gluster snapshot create vol1 -n snap1 -d Description
      snapshot create: snap1: snap created successfully
    $ gluster snapshot create vol1 -n snap1 -d "Description can have -d within quotes"
      snapshot create: snap1: snap created successfully
    $ gluster snapshot create vol1 -n snap1 -d Description cant have -d without quotes
      snapshot create: failed: Options(-n/-d) are not valid descriptions
      Usage: snapshot create <volnames> [-n <snap-name/cg-name>] [-d <description>]
    $ gluster snapshot create vol1 -n "Multi word snap name" -d Description
      snapshot create: failed: Invalid snap name
      Usage: snapshot create <volnames> [-n <snap-name/cg-name>] [-d <description>]
    $ gluster snapshot create vol1 -d Description -n "-d"
      snapshot create: failed: Options(-n/-d) are not valid snap names
      Usage: snapshot create <volnames> [-n <snap-name/cg-name>] [-d <description>]
    $ gluster snapshot create vol1 -d -n snap1
      snapshot create: failed: No description provided
      Usage: snapshot create <volnames> [-n <snap-name/cg-name>] [-d <description>]

    Change-Id: I74b5a8406d72282fbb7ba7d07e0c7fe395148d38
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
* bd: posix/multi-brick support to BD xlator (M. Mohan Kumar, 2013-11-13, 1 file, -0/+42)

    The current BD xlator (block backend) has a few limitations, such as:
    * Creation of directories not supported
    * Supports only a single brick
    * Does not use extended attributes (and client gfid) like the posix xlator
    * Creation of special files (symbolic links, device nodes etc.) not supported

    The basic limitation of not allowing directory creation is blocking
    oVirt/VDSM from consuming the BD xlator as part of a Gluster domain,
    since VDSM creates multi-level directories when GlusterFS is used as
    the storage backend for storing VM images.

    To overcome these limitations a new BD xlator with the following
    improvements is suggested:
    * New hybrid BD xlator that handles both regular files and block
      device files
    * The volume will have both POSIX and BD bricks. Regular files are
      created on POSIX bricks, block devices are created on the BD brick (VG)
    * The BD xlator leverages the existing POSIX xlator for most POSIX
      calls and hence sits above the POSIX xlator
    * A block device file is differentiated from a regular file by an
      extended attribute
    * The xattr 'user.glusterfs.bd' (BD_XATTR) plays a role in mapping a
      posix file to a Logical Volume (LV)
    * When a client sends a request to set BD_XATTR on a posix file, a
      new LV is created and mapped to the posix file. So every block
      device will have a representative file in the POSIX brick with
      'user.glusterfs.bd' (BD_XATTR) set.
    * Hereafter all operations on this file result in LV related
      operations. For example, opening a file that has BD_XATTR set
      results in opening the LV block device; reading results in reading
      the corresponding LV block device.

    When the BD xlator gets a request to set BD_XATTR via the setxattr
    call, it creates an LV, and information about this LV is placed in
    the xattr of the posix file. The xattr "user.glusterfs.bd" is used to
    identify that the posix file is mapped to a BD.

    Usage:
    Server side:
    [root@host1 ~]# gluster volume create bdvol host1:/storage/vg1_info?vg1 host2:/storage/vg2_info?vg2
    It creates a distributed gluster volume 'bdvol' with Volume Group vg1
    using posix brick /storage/vg1_info in host1 and Volume Group vg2
    using /storage/vg2_info in host2.
    [root@host1 ~]# gluster volume start bdvol

    Client side:
    [root@node ~]# mount -t glusterfs host1:/bdvol /media
    [root@node ~]# touch /media/posix
    It creates a regular posix file 'posix' in either the host1:/vg1 or
    host2:/vg2 brick.
    [root@node ~]# mkdir /media/image
    [root@node ~]# touch /media/image/lv1
    It also creates a regular posix file 'lv1' in either the host1:/vg1
    or host2:/vg2 brick.
    [root@node ~]# setfattr -n "user.glusterfs.bd" -v "lv" /media/image/lv1
    The above setxattr results in creating a new LV in the corresponding
    brick's VG, and it sets 'user.glusterfs.bd' with value
    'lv:<default-extent-size'.
    [root@node ~]# truncate -s5G /media/image/lv1
    It results in resizing LV 'lv1' to 5G.

    The new BD xlator code is placed in the xlators/storage/bd directory.

    Also add volume-uuid to the VG so that the same VG can't be used for
    other bricks/volumes. After deleting a gluster volume, one has to
    manually remove the associated tag using
    vgchange <vg-name> --deltag <trusted.glusterfs.volume-id:<volume-id>>

    Changes from previous version V5:
    * Removed support for delayed deleting of LVs

    Changes from previous version V4:
    * Consolidated the patches
    * Removed usage of BD_XATTR_SIZE and consolidated it in BD_XATTR

    Changes from previous version V3:
    * Added support in FUSE to support full/linked clone
    * Added support to merge snapshots and provide information about origin
    * bd_map xlator removed
    * iatt structure used in inode_ctx; iatt is cached and updated during
      fsync/flush
    * aio support
    * Type and capabilities of volume are exported through getxattr

    Changes from version 2:
    * Used inode_context for caching BD size and to check if loc/fd is BD
      or not
    * Added GlusterFS server offloaded copy and snapshot through setfattr
      FOP. As part of this libgfapi is modified.
    * BD xlator supports stripe
    * During unlinking, if an LV file is already opened it is added to a
      delete list and bd_del_thread tries to delete from this list when
      the last reference to that file is closed

    Changes from previous version:
    * gfid is used as the name of the LV
    * ? is used to specify the VG name for creating a BD volume in volume
      create, add-brick: gluster volume create volname host:/path?vg
    * open-behind issue is fixed
    * A replicate brick can be added dynamically and LVs from the source
      brick are replicated to the destination brick
    * A distribute brick can be added dynamically and the rebalance
      operation distributes existing LVs/files to the new brick
    * Thin provisioning support added
    * bd_map xlator support retained
    * setfattr -n user.glusterfs.bd -v "lv" creates a regular LV and
      setfattr -n user.glusterfs.bd -v "thin" creates a thin LV
    * Capability and backend information added to gluster volume info
      (and --xml) so that management tools can exploit the BD xlator
    * tracing support for the bd xlator added

    TODO:
    * Add support to display snapshots for a given LV
    * Display posix filename for list-origin instead of gfid

    Change-Id: I00d32dfbab3b7c806e0841515c86c3aa519332f2
    BUG: 1028672
    Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
    Reviewed-on: http://review.gluster.org/4809
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* bd_map: Remove bd_map xlator (M. Mohan Kumar, 2013-11-13, 1 file, -156/+0)

    Remove bd_map xlator and CLI related changes.

    Change-Id: If7086205df1907127c1a1fa4ba603f1c48421d09
    BUG: 1028672
    Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
    Reviewed-on: http://review.gluster.org/5747
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* cli: Set the o/p width of hostname to 8 characters (Vijaykumar M, 2013-11-11, 1 file, -1/+1)

    Change-Id: I91dcb19ba4d31c17e6041155c0e59af457b87f1b
    BUG: 1028871
    Signed-off-by: Vijaykumar M <vmallika@redhat.com>
    Reviewed-on: http://review.gluster.org/6245
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* cli: write 'volume rebalance' error message in xml format when --xml is specified (Dawit Alemu, 2013-11-10, 1 file, -4/+12)

    When 'volume rebalance' encounters an error, the cli prints the error
    message in plain text regardless of whether --xml is specified. This
    throws off client applications that expect xml output (as mentioned
    in bz1026143).

    Now, if the --xml flag is supplied, the cli prints 'volume rebalance'
    error messages in xml format.

    Change-Id: I16c6a7a4cdd2819eb73422ab849125986dc299a6
    BUG: 1026143
    Signed-off-by: Dawit Alemu <dalemu@redhat.com>
    Reviewed-on: http://review.gluster.org/6242
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
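    The shape of such a fix in the CLI is usually a branch on the XML
    mode flag; a rough sketch of the idea (the helper name
    cli_xml_output_str and its arguments are assumptions here, not
    necessarily what this patch calls):

        /* Sketch only: route the rebalance error through the XML writer
         * when the CLI was started with --xml, otherwise keep the
         * existing plain-text path. */
        if (global_state->mode & GLUSTER_MODE_XML)
                ret = cli_xml_output_str ("volRebalance", msg,
                                          rsp.op_ret, rsp.op_errno,
                                          rsp.op_errstr);  /* assumed helper */
        else
                cli_err ("%s", msg);                       /* plain text */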
* glusterd : Improved quota volume reset command (Anuradha, 2013-10-28, 1 file, -2/+2)

    The quota volume reset command without the "force" option is fixed
    and doesn't fail anymore. It resets unprotected fields and not the
    protected ones.

    Also, an appropriate message is provided to the user for the
    following cases:
    1. Only unprotected fields are reset; the "force" option should be
       used to reset protected fields.
    2. Both protected and unprotected fields are reset.
    3. No field was reset, "force" option required.

    A test case for the same is also added.

    Change-Id: I24e8f1be87b79ccd81bf6f933e00608b861c7a16
    BUG: 1022905
    Signed-off-by: Anuradha <atalur@redhat.com>
    Reviewed-on: http://review.gluster.org/6135
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cluster/afr: [Feature] Command implementation to get heal-count (Venkatesh Somyajulu, 2013-10-14, 1 file, -0/+59)

    Currently, to know the number of files to be healed, the user either
    has to go to the backend and check the number of entries present in
    the indices/xattrop directory, or run
    "gluster volume heal vol-name info". But if a volume consists of a
    large number of bricks, going to each backend and counting the
    entries is time-consuming, and with the info command the output
    itself takes a long time when the number of entries in the
    indices/xattrop directory is very large.

    So, as a feature, a new command is implemented.

    Command 1: gluster volume heal vn statistics heal-count
    This command will get the number of entries present in every brick of
    a volume. The output displays only the entry count.

    Command 2: gluster volume heal vn statistics heal-count replica
               192.168.122.1:/home/user/brickname
    Here, if we are concerned with just one replica, providing any one
    brick of that replica will get the number of entries to be healed for
    that replica only.

    Example: Replicate volume with replica count 2.

    Backend status:
    --------------
    [root@dhcp-0-17 xattrop]# ls -lia | wc -l
    1918

    NOTE: Out of 1918, 2 entries are <xattrop-gfid> dummy entries, so the
    actual number of entries to be healed is 1916.

    [root@dhcp-0-17 xattrop]# pwd
    /home/user/2ty/.glusterfs/indices/xattrop

    Command output:
    --------------
    Gathering count of entries to be healed on volume volume3 has been successful

    Brick 192.168.122.1:/home/user/22iu
    Status: Brick is Not connected
    Entries count is not available

    Brick 192.168.122.1:/home/user/2ty
    Number of entries: 1916

    Change-Id: I72452f3de50502dc898076ec74d434d9e77fd290
    BUG: 1015990
    Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
    Reviewed-on: http://review.gluster.org/6044
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* cluster/afr : Implementation of command "gluster volume heal vn statistics" (Venkatesh Somyajulu, 2013-10-14, 1 file, -2/+110)

    The "gluster volume heal volumename statistics" command gives the
    summary of the afr crawls done, based on the entries present in the
    xattrop directory. Whenever afr crawls are attempted, the beginning
    time of the crawl, end time of the crawl, number of files healed,
    heal-failed count and number of files in split-brain are shown along
    with the type of the crawl. If a crawl is already in progress then it
    will give the number of files healed, heal-failed count and number of
    files in split-brain from the beginning of the crawl, and instead of
    the end time of the crawl a "CRAWL IN PROGRESS" message will be
    shown.

    Output format:
    command: "gluster volume heal volume-name statistics"
    Output:
    Gathering afr crawl statistics
    crawl statistics on volume volume-name has been successful
    ------------------------------------------------
    Crawl statistics for brick no 0
    Hostname of brick 192.168.122.248

    Starting time of crawl: Wed Jul 10 15:52:38 2013
    Ending time of crawl: Wed Jul 10 15:52:38 2013
    Type of crawl: INDEX
    No. of entries healed: 0
    No. of entries in split-brain: 0
    No. of heal failed entries: 0

    Starting time of crawl: Wed Jul 10 15:52:38 2013
    Ending time of crawl: Wed Jul 10 15:52:38 2013
    Type of crawl: INDEX
    No. of entries healed: 0
    No. of entries in split-brain: 0
    No. of heal failed entries: 0
    ------------------------------------------------
    Crawl statistics for brick no 1
    Hostname of brick 192.168.122.1

    Starting time of crawl: Wed Jul 10 15:52:42 2013
    Ending time of crawl: Wed Jul 10 15:52:42 2013
    Type of crawl: INDEX
    No. of entries healed: 0
    No. of entries in split-brain: 0
    No. of heal failed entries: 0

    Starting time of crawl: Wed Jul 10 15:52:42 2013
    Ending time of crawl: Wed Jul 10 15:52:42 2013
    Type of crawl: INDEX
    No. of entries healed: 0
    No. of entries in split-brain: 0
    No. of heal failed entries: 0
    --------------------------------------------------

    Change-Id: I10bf9d10b005741db9973fb1352e0dd59ed99aa9
    BUG: 949400
    Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
    Reviewed-on: http://review.gluster.org/4790
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* cli,glusterd: Implement 'volume status tasks' (Krutika Dhananjay, 2013-10-08, 1 file, -46/+109)

    oVirt's Gluster Integration needs an inexpensive command that can be
    executed every 10 seconds to monitor async tasks and their
    parameters, for all volumes.

    The solution involves adding a 'tasks' sub-command to 'volume status'
    to fetch only the async task IDs, type and other relevant parameters.
    Only the originator glusterd participates in this command as all the
    information needed is available on all the nodes. This is to make the
    command suitable for being executed every 10 seconds.

    Change-Id: I1edc607baf29b001a5585079dec681d7c641b3d1
    BUG: 1012346
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/6006
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>
* cli: add node uuid in rebalance and remove brick status xml output (Bala.FA, 2013-10-03, 1 file, -10/+10)

    This patch adds node uuid in rebalance/remove-brick status xml
    output.

    Output XML will look like:

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <cliOutput>
      <opRet>0</opRet>
      <opErrno>0</opErrno>
      <opErrstr/>
      <volRebalance>
        <op>3</op>
        <nodeCount>1</nodeCount>
        <node>
          <nodeName>localhost</nodeName>
    ==>>  <id>883626f8-4d29-4d02-8c5d-c9f48c5b2445</id>
          <files>0</files>
          <size>0</size>
          <lookups>0</lookups>
          <failures>0</failures>
          <status>3</status>
          <statusStr>completed</statusStr>
        </node>
        <aggregate>
          <files>0</files>
          <size>0</size>
          <lookups>0</lookups>
          <failures>0</failures>
          <status>3</status>
          <statusStr>completed</statusStr>
        </aggregate>
      </volRebalance>
    </cliOutput>

    Change-Id: I5a1d4f9043b33b9e88150647a243ddb16154e843
    BUG: 1012296
    Signed-off-by: Bala.FA <barumuga@redhat.com>
    Reviewed-on: http://review.gluster.org/6005
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* cli/glusterd: improve rebalance fix-layout status reporting (Ravishankar N, 2013-09-19, 1 file, -3/+9)

    Problem:
    Currently the CLI rebalance status command output does not indicate
    the 'type' of rebalance, i.e. whether a full rebalance or only a
    fix-layout was carried out.

    Fix:
    After the rebalance status of all peers is received by the originator
    glusterd, alter it to reflect the type of rebalance before passing it
    on to the CLI process.

    Change-Id: I1940ffda0d36e25e5b33c84a0ea210394cc9e1d3
    BUG: 1004744
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-on: http://review.gluster.org/5826
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
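    On the CLI side this kind of change typically amounts to picking the
    heading from the rebalance command type carried in the status dict; a
    rough sketch of the idea (the "command" key and the exact enum value
    are assumptions, not necessarily this patch's code):

        /* Sketch only: print a fix-layout specific heading when the
         * status dict says the operation was a layout fix rather than a
         * full rebalance. */
        int32_t cmd = 0;

        ret = dict_get_int32 (dict, "command", &cmd);   /* assumed key */
        if (!ret && cmd == GF_DEFRAG_CMD_START_LAYOUT_FIX)
                cli_out ("Fix-layout %s", status_str);
        else
                cli_out ("Rebalance %s", status_str);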
* cli: Add statusStr xml tag to task list and rebalance/remove brick status (Aravinda VK, 2013-09-12, 1 file, -19/+22)

    New xml tag statusStr added to the following gluster cli commands:

    gluster volume status all --xml (for Task status)
    gluster volume rebalance <VOLNAME> status --xml
    gluster volume remove-brick <VOLNAME> <BRICK1..> status --xml

    Example (volume status all):
    <task>
      <type>Rebalance</type>
      <id>82d8d122-8738-4144-8507-d93fc98b61df</id>
      <status>3</status>
      <statusStr>completed</statusStr>
    </task>

    Example (volume rebalance <VOL> status):
    <node>
      <nodeName>localhost</nodeName>
      <files>0</files>
      <size>0</size>
      <lookups>0</lookups>
      <failures>0</failures>
      <status>3</status>
      <statusStr>completed</statusStr>
    </node>

    Also modified the task status to show a string instead of a number in
    gluster volume status all.

    Example:
    Status of volume: gv1
    Gluster process                        Port    Online  Pid
    ------------------------------------------------------------------------------
    Brick sumne.sumne:/gfs/b1              49154   Y       15489
    Brick sumne.sumne:/gfs/b2              49155   Y       15493
    NFS Server on localhost                N/A     N       15913

    Task       ID                                    Status
    ----       --                                    ------
    Rebalance  82d8d122-8738-4144-8507-d93fc98b61df  completed

    BUG: 1003521
    Change-Id: Ib283016af4c18132fb13fb33d44075782d77823c
    Signed-off-by: Aravinda VK <avishwan@redhat.com>
    Reviewed-on: http://review.gluster.org/5739
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* cli: Fix 'status all' xml output when volumes are not started (Kaushal M, 2013-09-11, 1 file, -10/+12)

    CLI now outputs only one XML document for 'status all', containing
    only those volumes which are started.

    BUG: 1004218
    Change-Id: Id4130fe59b3b74475d8bd1cc8134ac59a28f1b7e
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/5773
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
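    Conceptually, the CLI keeps a single XML document open across all
    volumes and simply skips the ones that are not started; a rough
    sketch of that idea (the dict key format and the helper name are
    assumptions, not the actual patch):

        /* Sketch only: append a section per started volume into the one
         * <cliOutput> document, leaving stopped volumes out entirely. */
        for (i = 0; i < vol_count; i++) {
                snprintf (key, sizeof (key), "vol%d.status", i); /* assumed key */
                ret = dict_get_int32 (dict, key, &vol_status);
                if (ret || !vol_status)
                        continue;                 /* volume not started */
                cli_xml_output_vol_status (local, dict);  /* assumed helper */
        }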
* glusterd/cli: Geo-Replication "status detail" cmd (Venky Shankar, 2013-09-04, 1 file, -138/+333)

    Provides detailed status info in the following format:

    MASTER <master-vol> SLAVE <slave-vol>
    NODE  HEALTH  UPTIME  FILES SYNCD  FILES PENDING  BYTES PENDING  DELETES PENDING
    -----------------------------------------------------------------------------------

    This patch introduces the "status detail" command to show crawl
    related information in CLI. These values are "pulled" from gsyncd
    when "status detail" is executed.

    Change-Id: I1fdaf7180eacce054a864d34971dc160bd7301e1
    BUG: 990420
    Signed-off-by: Venky Shankar <vshankar@redhat.com>
    Reviewed-on: http://review.gluster.org/5590
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Avra Sengupta <asengupt@redhat.com>
    Tested-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: Saving geo-rep session details in a more specific path (Venky Shankar, 2013-09-04, 1 file, -60/+8)

    Now saving the session details in the
    /var/lib/glusterd/geo-replication/<mastervol>_<slaveip>_<slavevol>
    repo to distinguish between two master-slave sessions where the
    slavename is same across two different clusters.

    Change-Id: I57c93f55cc9bd4fe2bffe579028aaf5e4335b223
    BUG: 991501
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Signed-off-by: Venky Shankar <vshankar@redhat.com>
    Reviewed-on: http://review.gluster.org/5488
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* cluster/dht: Treat migration failures due to space constraints as skipped (shishir gowda, 2013-07-30, 1 file, -12/+28)

    Currently rebalance/remove-brick ops display a migration-failed count
    even for files which failed due to space issues (not enough space for
    the file, or migration leading to cluster imbalance).

    These will now be counted as skipped, and rebalance/remove-brick
    status will display the additional counter.

    Change-Id: I674904d380b5f8300e9ca9e6af557c3d30d6cff4
    BUG: 989846
    Signed-off-by: shishir gowda <sgowda@redhat.com>
    Reviewed-on: http://review.gluster.org/5399
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
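    In the migration path this boils down to bumping a separate counter
    when the failure is space-related, so the CLI can report it in its
    own column; a rough sketch (field and variable names are illustrative
    only, not the actual DHT code):

        /* Sketch only: classify ENOSPC-style outcomes as "skipped"
         * instead of "failed". */
        ret = dht_migrate_file (this, loc, from, to, flag);
        if (ret == -1) {
                if (op_errno == ENOSPC)
                        defrag->skipped += 1;         /* no room / would imbalance */
                else
                        defrag->total_failures += 1;  /* a real failure */
        }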
* glusterd: Fixing create force issues while it returned true everytime. (Avra Sengupta, 2013-07-29, 1 file, -5/+1)

    Now geo-rep create force will return true if a node is down, and log
    an appropriate message. It will also return true, with an appropriate
    log message, if the slave verification fails.

    However it will not return true if the config file is deleted or
    corrupted, so as not to get the state_file's path. It will also fail
    if the slave url is invalid. If the push-pem option is given and
    /var/lib/glusterd/geo-replication/common_secret.pem.pub is not
    present, then also the create force command will fail.

    Change-Id: Ie7532a0884ddf9c3008bd30832d171d5b53b540e
    BUG: 988314
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/5405
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Amar Tumballi <amarts@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli, glusterd: Cleanup logging of bd op commands. (Vijay Bellur, 2013-07-27, 1 file, -4/+4)

    This patch prevents messages of the form "bd op: %s : SUCCESS" from
    being logged in .cmd_log_history.

    Change-Id: Iebeb7e26d409bf99b9c8df0a5c1c5a5d30d78a61
    BUG: 823081
    Signed-off-by: Vijay Bellur <vbellur@redhat.com>
    Reviewed-on: http://review.gluster.org/4871
    Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-by: M. Mohan Kumar <mohan@in.ibm.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd/cli changes for distributed geo-rep (Avra Sengupta, 2013-07-26, 1 file, -36/+550)

    Commands:
    gluster system:: execute gsec_create
    gluster volume geo-rep <master> <slave-url> create [push-pem] [force]
    gluster volume geo-rep <master> <slave-url> start [force]
    gluster volume geo-rep <master> <slave-url> stop [force]
    gluster volume geo-rep <master> <slave-url> delete
    gluster volume geo-rep <master> <slave-url> config
    gluster volume geo-rep <master> <slave-url> status

    The geo-replication is distributed. The session will be created, and
    gsyncd will be spawned on all relevant nodes, instead of only one
    node.

    geo-rep: Collecting status detail related data

    Added persistent store for saving information about TotalFilesSynced,
    TotalSyncTime, TotalBytesSynced.

    Changes in the status information in socket:
    Existing (Ex):
    FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;
    New (Ex):
    FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;SyncTime=0.69978;
    TotalSyncTime=2.890044;TotalFilesSynced=6;TotalBytesSynced=143640;

    Persistent details stored in
    /var/lib/glusterd/geo-replication/${mastervol}/${eSlave}-detail.status

    Change-Id: I1db7fc13ffca2e415c05200b0109b1254067f111
    BUG: 847839
    Original Author: Avra Sengupta <asengupt@redhat.com>
    Original Author: Venky Shankar <vshankar@redhat.com>
    Original Author: Aravinda VK <avishwan@redhat.com>
    Original Author: Amar Tumballi <amarts@redhat.com>
    Original Author: Csaba Henk <csaba@redhat.com>
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/5132
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    Tested-by: Vijay Bellur <vbellur@redhat.com>
* cli : remove-brick process output leads to ambiguity (susant, 2013-07-24, 1 file, -3/+6)

    The remove-brick status output showing "Not started" leads to
    ambiguity. We should not show the status of server nodes which do not
    participate in the remove-brick process.

    Change-Id: I85fea40deb15f3e2dd5487d881f48c9aff7221de
    BUG: 986896
    Signed-off-by: susant <spalai@redhat.com>
    Reviewed-on: http://review.gluster.org/5383
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
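    In the status-printing loop of this file, a fix like this typically
    reduces to skipping entries whose recorded state is still "not
    started"; a rough sketch (the dict key format is an assumption):

        /* Sketch only: nodes that never took part in the remove-brick
         * report a not-started status, so skip them instead of printing
         * an ambiguous row. */
        snprintf (key, sizeof (key), "status-%d", i);   /* assumed key */
        ret = dict_get_int32 (dict, key, &status_rcd);
        if (ret || status_rcd == GF_DEFRAG_STATUS_NOT_STARTED)
                continue;                               /* don't show this node */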
* cli: gluster volume heal commands are more elaborative (Venkatesh Somyajulu, 2013-07-24, 1 file, -5/+30)

    1. "gluster volume heal volume-name"
       Output: Launching heal operation to perform index self heal on
       volume volume-name has been successful
    2. "gluster volume heal volume-name full"
       Output: Launching heal operation to perform full self heal on
       volume volume-name has been successful
    3. "gluster volume heal volume-name info"
       Output: Gathering list of entries to be healed on volume
       volume-name has been successful
    4. "gluster volume heal volume-name info healed"
       Output: Gathering list of healed entries on volume volume-name has
       been successful
    5. "gluster volume heal volume-name info split-brain"
       Output: Gathering list of split brain entries on volume
       volume-name has been successful
    6. "gluster volume heal volume-name info heal-failed"
       Output: Gathering list of heal failed entries on volume
       volume-name has been successful

    Change-Id: I74c90e8129d23d513ddb7879358a9d21c94a5c0d
    BUG: 978936
    Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
    Reviewed-on: http://review.gluster.org/5286
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli: check for null in is_server_debug_xlator() (Ravishankar N, 2013-07-12, 1 file, -0/+2)

    Command:
    gluster volume set <volname> diagnostics.client-log-level trace

    Expected output:
    "volume set: failed: option log-level trace: 'trace' is not valid
    (possible options are DEBUG, WARNING, ERROR, INFO, CRITICAL, NONE,
    TRACE.)"

    Current output: gluster cli receives a segmentation fault

    Fix: check for NULL before calling strstr

    Change-Id: If4c7a85a635849a388cf122543e12349c109643c
    BUG: 982174
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-on: http://review.gluster.org/5298
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
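    The guard itself is the usual pattern of checking a possibly-absent
    value before handing it to strstr; a minimal sketch (variable names
    here are illustrative, not the exact hunk):

        /* Sketch only: the looked-up value may legitimately be NULL,
         * and strstr(NULL, ...) is what crashed the CLI. */
        if (!value)
                goto out;
        if (strstr (value, "debug"))
                is_debug = _gf_true;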
* cli: Fix remove brick cli out for wrong volume name (Venkatesh Somyajulu, 2013-07-04, 1 file, -6/+3)

    Problem:
    The gluster volume remove-brick command was not printing an error in
    case the volume name specified is wrong.

    Fix:
    Print an error message to indicate that the provided volume name is
    invalid. Although the patch for bug 961669
    http://review.gluster.org/#/c/4975/ does print cli output now, the
    xml is still unable to use the response values.

    Change-Id: I2ee1df86c1e756fb8e93b4d6bbdd102b4f368f87
    BUG: 961307
    Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
    Reviewed-on: http://review.gluster.org/4972
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli: Fix in letter case in volume heal output (Venkatesh Somyajulu, 2013-07-03, 1 file, -2/+2)

    Change-Id: I25d13444c2cbff9b26642e91677ad1e09e77aa1e
    BUG: 978936
    Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
    Reviewed-on: http://review.gluster.org/5259
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Log peer op status at the appropriate time (Krutika Dhananjay, 2013-06-18, 1 file, -142/+38)

    Change-Id: Ia8e1af082078f2f791708ba4faa4992bf291dd6e
    BUG: 961339
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/5023
    Reviewed-by: Amar Tumballi <amarts@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Add a cmd for getting uuid of local node (Krishnan Parthasarathi, 2013-06-10, 1 file, -0/+106)

    Usage: gluster system:: uuid get

    This is needed since we generate the uuid of a node in a lazy manner,
    i.e. we generate a uuid for the node only on the first volume or peer
    operation, when the node needs an external identity. With this
    command, we can force[1] the uuid generation without a volume or peer
    operation being performed.

    [1]: Querying for uuid (or uuid get) forces the uuid to come into
    existence.

    Change-Id: I62c8b6754117756aa4d773dd48af4ddeb1a1d878
    BUG: 971661
    Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-on: http://review.gluster.org/5175
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>
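    The "force into existence" behaviour described above is essentially
    generate-if-missing; a rough sketch of the idea on the glusterd side
    (the function and the persistence step are illustrative assumptions,
    not the actual handler):

        #include <uuid/uuid.h>

        /* Sketch only: return this node's uuid, generating one first if
         * no peer/volume operation has created it yet. */
        static int
        local_node_uuid_get (uuid_t out, uuid_t stored)
        {
                if (uuid_is_null (stored)) {      /* node has no identity yet */
                        uuid_generate (stored);
                        /* persist to glusterd's info file here (assumed) */
                }
                uuid_copy (out, stored);
                return 0;
        }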
* cli: Remove unused port info from peer status. (Venkatesh Somyajulu, 2013-06-05, 1 file, -15/+2)

    Problem:
    "gluster peer status" gives port info on some nodes and fails to give
    it on others, even though it is a hard-coded value.

    Fix:
    Remove the port info from the command output.

    Change-Id: I919f0349f252e658bfc13e60bb8e171da32eaf25
    BUG: 964026
    Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
    Reviewed-on: http://review.gluster.org/5027
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli: set min-op-version and max-op-version for getspec (Jeff Darcy, 2013-05-30, 1 file, -0/+34)

    Change-Id: I2185df5d6b560d9367ae404c91812048e1655180
    BUG: 969193
    Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
    Reviewed-on: http://review.gluster.org/5119
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
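    Setting min/max op-version for getspec usually means serializing the
    two values into the request dictionary sent to glusterd so it can
    pick a volfile the client understands; a rough sketch (the key names
    below are assumptions, the constants exist in globals.h):

        /* Sketch only: advertise the op-version range this client can
         * handle as part of the getspec request. */
        dict_t *dict = dict_new ();

        ret = dict_set_int32 (dict, "min-op-version", GD_OP_VERSION_MIN); /* assumed key */
        if (!ret)
                ret = dict_set_int32 (dict, "max-op-version", GD_OP_VERSION_MAX);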