path: root/cli/src/cli-xml-output.c
Commit message | Author | Date | Files | Lines
* build: MacOSX Porting fixes | Harshavardhana | 2014-04-24 | 1 | -1/+1
  git@forge.gluster.org:~schafdog/glusterfs-core/osx-glusterfs
  Working functionality on MacOSX:
  - GlusterD (management daemon)
  - GlusterCLI (management cli)
  - GlusterFS FUSE (using OSXFUSE)
  - GlusterNFS (without NLM - issues with rpc.statd)
  Change-Id: I20193d3f8904388e47344e523b3787dbeab044ac
  BUG: 1089172
  Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
  Signed-off-by: Dennis Schafroth <dennis@schafroth.com>
  Tested-by: Harshavardhana <harsha@harshavardhana.net>
  Tested-by: Dennis Schafroth <dennis@schafroth.com>
  Reviewed-on: http://review.gluster.org/7503
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* gluster: GlusterFS Volume Snapshot Feature | Avra Sengupta | 2014-04-11 | 1 | -0/+13
  This is the initial patch for the Snapshot feature. The current patch includes the following features:
  * Snapshot create
  * Snapshot delete
  * Snapshot restore
  * Snapshot list
  * Snapshot info
  * Snapshot status
  * Snapshot config
  Change-Id: I2f46920c0d61c515f6a60e0f8b46fff886d9f6a9
  BUG: 1061685
  Signed-off-by: shishir gowda <sgowda@redhat.com>
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Signed-off-by: Vijaikumar M <vmallika@redhat.com>
  Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
  Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
  Signed-off-by: Joseph Fernandes <josferna@redhat.com>
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/7128
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli/cli-xml: skipped files should be treated as failures for remove-brick operation | Susant Palai | 2014-02-05 | 1 | -18/+17
  Fix: For the remove-brick operation, the skipped count is included in the failure count.
  cli xml output: the skipped count will always be zero for remove-brick status.
  Change-Id: Ic0bb23b89e0cf5b884b6d1ae42bbf98deedc9173
  BUG: 1060209
  Signed-off-by: Susant Palai <spalai@redhat.com>
  Reviewed-on: http://review.gluster.org/6889
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
* cli: Add options to the CLI that let the user control the reset of stats | Dawit Alemu | 2014-01-24 | 1 | -13/+33
  "volume profile info" automatically clears incremental stats. There isn't a command to:
  - fetch stats without clearing incremental stats and
  - clear cumulative and incremental stats
  This change introduces two arguments (i.e. peek and clear). 'clear' will wipe both incremental and cumulative stats. 'peek' fetches stats without wiping incremental stats.
  'volume profile info peek' - fetches incremental and cumulative stats without wiping incremental stats
  'volume profile info incremental peek' - fetches incremental stats without wiping incremental stats
  'volume profile info clear' - clears both incremental and cumulative stats
  Change-Id: I91834515ad672eca5f882809941147d7d997c4c9
  BUG: 1047416
  Signed-off-by: Dawit Alemu <dalemu@redhat.com>
  Reviewed-on: http://review.gluster.org/6620
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli: Addition of new child elements under brick in volume info xml | ndarshan | 2014-01-16 | 1 | -0/+10
  Added new child elements, name and hostUuid, under brick in the volume info xml, where the name and host uuid of the bricks are stored. This does not break backward compatibility as the old value under brick is not removed.
  Change-Id: Ib9e388889c8dc0c7cd34dcc1871a59003f982f36
  Signed-off-by: ndarshan <dnarayan@redhat.com>
  Reviewed-on: http://review.gluster.org/6604
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
* cli: Fix xml output for volume status | Kaushal M | 2013-12-25 | 1 | -6/+7
  The XML output for volume status was malformed when one of the nodes is down, leading to outputs like
    <node>
    <node>
    <hostname>NFS Server</hostname>
    <path>localhost</path>
    <peerid>63ca3d2f-8c1f-4b84-b797-b4baddab81fb</peerid>
    <status>1</status>
    <port>2049</port>
    <pid>2130</pid>
    </node>
  This was happening because we were starting the <node> element before determining if the node was present, and were not closing it or clearing it when not finding the node in the dict. To fix this, the <node> element is only started once a node has been found in the dict.
  Change-Id: I6b6205f14b27a69adb95d85db7b48999aa48d400
  BUG: 1046020
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/6571
  Reviewed-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
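  For context, a minimal libxml2 sketch of the pattern this fix describes: only open the <node> element once the lookup has succeeded, so a missing node never leaves a dangling tag. This is an illustration under assumed names, not the actual cli-xml-output.c code.
    /* Sketch only: open <node> after we know the entry exists. */
    #include <libxml/xmlwriter.h>

    static int
    write_node_entry (xmlTextWriterPtr writer, const char *hostname)
    {
            if (!hostname)          /* node not found in the dict: emit nothing */
                    return 0;

            xmlTextWriterStartElement (writer, (const xmlChar *)"node");
            xmlTextWriterWriteFormatElement (writer, (const xmlChar *)"hostname",
                                             "%s", hostname);
            xmlTextWriterEndElement (writer);       /* </node> */
            return 0;
    }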
* glusterd: Aggregate tasks status in 'volume status [tasks]' | Kaushal M | 2013-12-04 | 1 | -4/+21
  Previously, glusterd used to just send back the local status of a task in a 'volume status [tasks]' command. As the rebalance operation is distributed and asynchronous, this meant that different peers could give different status values for a rebalance or remove-brick task.
  With this patch, all the peers will send back the tasks status as a part of the 'volume status' commit op, and the origin peer will aggregate these to arrive at a final status for the task.
  The aggregation is only done for rebalance or remove-brick tasks. The replace-brick task will have the same status on all the peers (see comment in glusterd_volume_status_aggregate_tasks_status() for more information) and need not be aggregated.
  The rebalance process has 5 states:
  NOT_STARTED - rebalance process has not been started on this node
  STARTED - rebalance process has been started and is still running
  STOPPED - rebalance process was stopped by a 'rebalance/remove-brick stop' command
  COMPLETED - rebalance process completed successfully
  FAILED - rebalance process failed to complete successfully
  The aggregation is done using the following precedence:
  STARTED > FAILED > STOPPED > COMPLETED > NOT_STARTED
  The new changes make the 'volume status tasks' command a distributed command as we need to get the task status from all peers. The following tests were performed:
  - Start a remove-brick task and do a status command on a peer which doesn't have the brick being removed. The remove-brick status was given correctly as 'in progress' and 'completed', instead of 'not started'.
  - Start a rebalance task, run the status command. The status moved to 'completed' only after rebalance completed on all nodes.
  Also, change the CLI xml output code for rebalance status to use the same algorithm for status aggregation.
  Change-Id: Ifd4aff705aa51609a612d5a9194acc73e10a82c0
  BUG: 1027094
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/6230
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
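  A minimal sketch of the precedence-based aggregation described above, with illustrative enum values and function names (not glusterd's actual symbols): map each state to a rank and keep the highest-ranked state seen across peers.
    /* Illustrative only: aggregate task status by the precedence
     * STARTED > FAILED > STOPPED > COMPLETED > NOT_STARTED. */
    enum task_status { NOT_STARTED, STARTED, STOPPED, COMPLETED, FAILED };

    static int
    status_rank (enum task_status s)
    {
            switch (s) {
            case STARTED:   return 4;
            case FAILED:    return 3;
            case STOPPED:   return 2;
            case COMPLETED: return 1;
            default:        return 0;   /* NOT_STARTED */
            }
    }

    enum task_status
    aggregate_status (const enum task_status *per_peer, int npeers)
    {
            enum task_status agg = NOT_STARTED;
            int              i;

            for (i = 0; i < npeers; i++)
                    if (status_rank (per_peer[i]) > status_rank (agg))
                            agg = per_peer[i];
            return agg;
    }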
* cli: Add an option to fetch just incremental or cumulative I/O information | Dawit Alemu | 2013-12-03 | 1 | -1/+1
  'volume profile info' fetches both cumulative and incremental I/O statistics. There isn't a way to fetch just cumulative or incremental statistics.
  This change introduces two optional arguments, namely "incremental" and "cumulative", that can be tacked on to 'volume profile info'. In other words, the new command format is
    volume profile <VOLNAME> {start | info [incremental | cumulative] | stop} [nfs]
  'volume profile info incremental' - fetches incremental stats
  'volume profile info cumulative' - fetches cumulative stats
  'volume profile info' - fetches incremental and cumulative stats
  Change-Id: I5ddb45d990542ea611d23d251efebfec46f472d0
  BUG: 1030580
  Signed-off-by: Dawit Alemu <dalemu@redhat.com>
  Reviewed-on: http://review.gluster.org/6264
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
* cli: xml: Rebalance status (xml) was empty when a glusterd is down | Aravinda VK | 2013-12-02 | 1 | -1/+4
  When a glusterd is down in the cluster, 'rebalance/remove-brick status --xml' will fail to get the status and returns null. This patch skips collecting status if glusterd is down, and collects status from all the other up nodes.
  Change-Id: I6df0feef41b5cc817cc8d7820ee2acac95176a98
  BUG: 1036564
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-on: http://review.gluster.org/6391
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli: List only nodes which have rebalance started in rebalance status | Kaushal M | 2013-11-20 | 1 | -5/+10
  Listing the nodes on which rebalance hasn't been started is just giving out extraneous information.
  Also, refactor the rebalance status printing code into a single function and use it for both rebalance and remove-brick status.
  BUG: 1031887
  Change-Id: I47bd561347dfd6ef76c52a1587916d6a71eac369
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/6300
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* Fix xml compilation error | M. Mohan Kumar | 2013-11-19 | 1 | -1/+6
  Compiling GlusterFS without the xml package results in the following build error:
    cli-rpc-ops.o: In function `gf_cli_status_cbk':
    /home/mohan/Work/glusterfs/cli/src/cli-rpc-ops.c:6430: undefined reference to `cli_xml_output_vol_status_tasks_detail'
  Change-Id: I49b3c46ac3340c40e372bef4690cedb41df20e8a
  Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
  Reviewed-on: http://review.gluster.org/6295
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli: add peerid to volume status xml output | Bala.FA | 2013-11-14 | 1 | -0/+10
  This patch adds <peerid> tag to bricks and nfs/shd like services to volume status xml output.
  BUG: 955548
  Change-Id: I9aaa9266e4d56f632235eaeef565e92d757c0694
  Signed-off-by: Bala.FA <barumuga@redhat.com>
  Reviewed-on: http://review.gluster.org/6162
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* bd: posix/multi-brick support to BD xlator | M. Mohan Kumar | 2013-11-13 | 1 | -1/+58
  The current BD xlator (block backend) has a few limitations, such as:
  * Creation of directories not supported
  * Supports only a single brick
  * Does not use extended attributes (and client gfid) like the posix xlator
  * Creation of special files (symbolic links, device nodes etc) not supported
  The basic limitation of not allowing directory creation is blocking oVirt/VDSM from consuming the BD xlator as part of a Gluster domain, since VDSM creates multi-level directories when GlusterFS is used as the storage backend for storing VM images.
  To overcome these limitations a new BD xlator with the following improvements is suggested:
  * New hybrid BD xlator that handles both regular files and block device files
  * The volume will have both POSIX and BD bricks. Regular files are created on POSIX bricks, block devices are created on the BD brick (VG)
  * BD xlator leverages the existing POSIX xlator for most POSIX calls and hence sits above the POSIX xlator
  * Block device file is differentiated from regular file by an extended attribute
  * The xattr 'user.glusterfs.bd' (BD_XATTR) plays a role in mapping a posix file to a Logical Volume (LV)
  * When a client sends a request to set BD_XATTR on a posix file, a new LV is created and mapped to the posix file. So every block device will have a representative file in the POSIX brick with 'user.glusterfs.bd' (BD_XATTR) set.
  * Hereafter all operations on this file result in LV related operations. For example, opening a file that has BD_XATTR set results in opening the LV block device, reading results in reading the corresponding LV block device.
  When the BD xlator gets a request to set BD_XATTR via a setxattr call, it creates a LV and information about this LV is placed in the xattr of the posix file. The xattr "user.glusterfs.bd" is used to identify that the posix file is mapped to BD.
  Usage:
  Server side:
    [root@host1 ~]# gluster volume create bdvol host1:/storage/vg1_info?vg1 host2:/storage/vg2_info?vg2
  It creates a distributed gluster volume 'bdvol' with Volume Group vg1 using posix brick /storage/vg1_info in host1 and Volume Group vg2 using /storage/vg2_info in host2.
    [root@host1 ~]# gluster volume start bdvol
  Client side:
    [root@node ~]# mount -t glusterfs host1:/bdvol /media
    [root@node ~]# touch /media/posix
  It creates regular posix file 'posix' in either host1:/vg1 or host2:/vg2 brick.
    [root@node ~]# mkdir /media/image
    [root@node ~]# touch /media/image/lv1
  It also creates regular posix file 'lv1' in either host1:/vg1 or host2:/vg2 brick.
    [root@node ~]# setfattr -n "user.glusterfs.bd" -v "lv" /media/image/lv1
  The above setxattr results in creating a new LV in the corresponding brick's VG and it sets 'user.glusterfs.bd' with value 'lv:<default-extent-size'.
    [root@node ~]# truncate -s5G /media/image/lv1
  It results in resizing LV 'lv1' to 5G.
  New BD xlator code is placed in the xlators/storage/bd directory. Also add volume-uuid to the VG so that the same VG can't be used for other bricks/volumes. After deleting a gluster volume, one has to manually remove the associated tag using vgchange <vg-name> --deltag <trusted.glusterfs.volume-id:<volume-id>>
  Changes from previous version V5:
  * Removed support for delayed deleting of LVs
  Changes from previous version V4:
  * Consolidated the patches
  * Removed usage of BD_XATTR_SIZE and consolidated it in BD_XATTR
  Changes from previous version V3:
  * Added support in FUSE to support full/linked clone
  * Added support to merge snapshots and provide information about origin
  * bd_map xlator removed
  * iatt structure used in inode_ctx. iatt is cached and updated during fsync/flush
  * aio support
  * Type and capabilities of volume are exported through getxattr
  Changes from version 2:
  * Used inode_context for caching BD size and to check if loc/fd is BD or not
  * Added GlusterFS server offloaded copy and snapshot through setfattr FOP. As part of this libgfapi is modified.
  * BD xlator supports stripe
  * During unlinking, if a LV file is already opened, it is added to a delete list and bd_del_thread tries to delete from this list when the last reference to that file is closed
  Changes from previous version:
  * gfid is used as name of LV
  * ? is used to specify the VG name for creating a BD volume in volume create, add-brick: gluster volume create volname host:/path?vg
  * open-behind issue is fixed
  * A replicate brick can be added dynamically and LVs from the source brick are replicated to the destination brick
  * A distribute brick can be added dynamically and the rebalance operation distributes existing LVs/files to the new brick
  * Thin provisioning support added
  * bd_map xlator support retained
  * setfattr -n user.glusterfs.bd -v "lv" creates a regular LV and setfattr -n user.glusterfs.bd -v "thin" creates a thin LV
  * Capability and backend information added to gluster volume info (and --xml) so that management tools can exploit the BD xlator
  * Tracing support for the bd xlator added
  TODO:
  * Add support to display snapshots for a given LV
  * Display posix filename for list-origin instead of gfid
  Change-Id: I00d32dfbab3b7c806e0841515c86c3aa519332f2
  BUG: 1028672
  Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
  Reviewed-on: http://review.gluster.org/4809
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* cli,glusterd: Implement 'volume status tasks' | Krutika Dhananjay | 2013-10-08 | 1 | -1/+30
  oVirt's Gluster Integration needs an inexpensive command that can be executed every 10 seconds to monitor async tasks and their parameters, for all volumes.
  The solution involves adding a 'tasks' sub-command to 'volume status' to fetch only the async task IDs, type and other relevant parameters. Only the originator glusterd participates in this command as all the information needed is available on all the nodes. This is to make the command suitable for being executed every 10 seconds.
  Change-Id: I1edc607baf29b001a5585079dec681d7c641b3d1
  BUG: 1012346
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-on: http://review.gluster.org/6006
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* cli: add node uuid in rebalance and remove brick status xml output | Bala.FA | 2013-10-03 | 1 | -1/+11
  This patch adds node uuid in rebalance/remove-brick status xml output.
  Output XML will look like:
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <cliOutput>
      <opRet>0</opRet>
      <opErrno>0</opErrno>
      <opErrstr/>
      <volRebalance>
        <op>3</op>
        <nodeCount>1</nodeCount>
        <node>
          <nodeName>localhost</nodeName>
    ==>>  <id>883626f8-4d29-4d02-8c5d-c9f48c5b2445</id>
          <files>0</files>
          <size>0</size>
          <lookups>0</lookups>
          <failures>0</failures>
          <status>3</status>
          <statusStr>completed</statusStr>
        </node>
        <aggregate>
          <files>0</files>
          <size>0</size>
          <lookups>0</lookups>
          <failures>0</failures>
          <status>3</status>
          <statusStr>completed</statusStr>
        </aggregate>
      </volRebalance>
    </cliOutput>
  Change-Id: I5a1d4f9043b33b9e88150647a243ddb16154e843
  BUG: 1012296
  Signed-off-by: Bala.FA <barumuga@redhat.com>
  Reviewed-on: http://review.gluster.org/6005
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
* cli: runtime in xml output of rebalance/remove-brick status | Aravinda VK | 2013-10-03 | 1 | -0/+21
  "runtime in secs" is available in the CLI output of rebalance status and remove-brick status, but not available in xml output when --xml is passed. runtime in the aggregate section will be the max of all nodes' runtimes.
  Example output:
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <cliOutput>
      <opRet>0</opRet>
      <opErrno>0</opErrno>
      <opErrstr/>
      <volRebalance>
        <op>3</op>
        <nodeCount>1</nodeCount>
        <node>
          <nodeName>localhost</nodeName>
          <files>0</files>
          <size>0</size>
          <lookups>0</lookups>
          <failures>0</failures>
          <skipped>0</skipped>
          <runtime>1.00</runtime>
          <status>3</status>
          <statusStr>completed</statusStr>
        </node>
        <aggregate>
          <files>0</files>
          <size>0</size>
          <lookups>0</lookups>
          <failures>0</failures>
          <skipped>0</skipped>
          <runtime>1.00</runtime>
          <status>3</status>
          <statusStr>completed</statusStr>
        </aggregate>
      </volRebalance>
    </cliOutput>
  BUG: 1012773
  Change-Id: I8deaba08922a53cd2d3b411e097a7b3cf591b127
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-on: http://review.gluster.org/5997
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
* cli: skipped tag in xml output of rebalance/remove-brick status | Aravinda VK | 2013-10-03 | 1 | -3/+39
  Skipped files count is available in CLI output of rebalance status and remove-brick status, but not available in xml output.
  Example output:
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <cliOutput>
      <opRet>0</opRet>
      <opErrno>0</opErrno>
      <opErrstr/>
      <volRebalance>
        <op>3</op>
        <nodeCount>1</nodeCount>
        <node>
          <nodeName>localhost</nodeName>
          <files>0</files>
          <size>0</size>
          <lookups>0</lookups>
          <failures>0</failures>
          <skipped>0</skipped>
          <status>0</status>
          <statusStr>completed</statusStr>
        </node>
        <aggregate>
          <files>0</files>
          <size>0</size>
          <lookups>0</lookups>
          <failures>0</failures>
          <skipped>0</skipped>
          <status>0</status>
          <statusStr>completed</statusStr>
        </aggregate>
      </volRebalance>
    </cliOutput>
  BUG: 1012772
  Change-Id: I05191293403e66e0d681f0cd0422aa3c78a2d91d
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-on: http://review.gluster.org/6000
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
* cli: add aggregate status for rebalance and remove-brick status xml output | Timothy Asir | 2013-09-18 | 1 | -1/+15
  Add aggregate status information in the <aggregate> section of the gluster volume 'rebalance status' and 'remove-brick status' cli xml output. The aggregate status is determined based on the most critical level, and the aggregate status will be 'Complete' only when all individual statuses are 'Complete'.
  Change-Id: Ie805b9dd52fd82fd277c3da9ee91cc8b6dea8212
  BUG: 1006813
  Signed-off-by: Timothy Asir <tjeyasin@redhat.com>
  Reviewed-on: http://review.gluster.org/4950
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* cli,glusterd: Task parameters in xml output | Kaushal M | 2013-09-13 | 1 | -1/+95
  This patch introduces task parameters for the asynchronous tasks shown in volume status. The parameters are only given for xml output. The parameters shown currently are:
  - source and destination bricks for replace-brick tasks
      <tasks>
        <task>
          <type>Replace brick</type>
          <id>3d1a1005-9d2e-4ae0-bd62-577bc1d333a3</id>
          <status>1</status>
          <params>
            <srcBrick>archm:/export/test4</srcBrick>
            <dstBrick>archm:/export/test-replace1</dstBrick>
          </params>
        </task>
      </tasks>
  - list of bricks being removed for remove-brick tasks
      <tasks>
        <task>
          <type>Remove brick</type>
          <id>901c20ca-0da2-41de-8669-5f0caca6b846</id>
          <status>1</status>
          <params>
            <brick>archm:/export/test2</brick>
            <brick>archm:/export/test3</brick>
          </params>
        </task>
      </tasks>
  The changes for non-xml output will be done in a subsequent patch.
  Change-Id: I322afe2f83ed8adeddb99f7962c25911204dc204
  BUG: 916577
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/5771
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
  Tested-by: Vijay Bellur <vbellur@redhat.com>
* cli: Add statusStr xml tag to task list and rebalance/remove brick status | Aravinda VK | 2013-09-12 | 1 | -0/+21
  New xml tag statusStr added to the following gluster cli commands:
  gluster volume status all --xml (for task status)
  gluster volume rebalance <VOLNAME> status --xml
  gluster volume remove-brick <VOLNAME> <BRICK1..> status --xml
  Example (volume status all):
    <task>
      <type>Rebalance</type>
      <id>82d8d122-8738-4144-8507-d93fc98b61df</id>
      <status>3</status>
      <statusStr>completed</statusStr>
    </task>
  Example (volume rebalance <VOL> status):
    <node>
      <nodeName>localhost</nodeName>
      <files>0</files>
      <size>0</size>
      <lookups>0</lookups>
      <failures>0</failures>
      <status>3</status>
      <statusStr>completed</statusStr>
    </node>
  Also modified task status to be shown as a string instead of a number in 'gluster volume status all'.
  Example:
    Status of volume: gv1
    Gluster process                            Port    Online  Pid
    ------------------------------------------------------------------------------
    Brick sumne.sumne:/gfs/b1                  49154   Y       15489
    Brick sumne.sumne:/gfs/b2                  49155   Y       15493
    NFS Server on localhost                    N/A     N       15913
    Task                 ID                                     Status
    ----                 --                                     ------
    Rebalance            82d8d122-8738-4144-8507-d93fc98b61df   completed
  BUG: 1003521
  Change-Id: Ib283016af4c18132fb13fb33d44075782d77823c
  Signed-off-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-on: http://review.gluster.org/5739
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
* cli: Add server uuid into volume brick info xml | Timothy Asir | 2013-08-18 | 1 | -3/+23
  Add server uuid as an attribute to the existing brick details in the volume info cli xml output. Currently, when a node has more than one ip, the oVirt-engine fails to map the corresponding server using the ip alone. If we get the host uuid along with brick details in volume info command it will be easy for ovirt-engine to find out the server and thereby we can avoid confusion in finding the server.
  Change-Id: I3c9c9acea80e10e0b2977477759d9af045e48959
  BUG: 955588
  Signed-off-by: Timothy Asir <tjeyasin@redhat.com>
  Reviewed-on: http://review.gluster.org/4875
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli,glusterd: Fix when tasks are shown in 'volume status' | Kaushal M | 2013-08-03 | 1 | -1/+5
  Asynchronous tasks are shown in 'volume status' only for a normal volume status request for either all volumes or a single volume.
  Change-Id: I9d47101511776a179d213598782ca0bbdf32b8c2
  BUG: 888752
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/5308
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* cli: Remove unused port info from peer status | Venkatesh Somyajulu | 2013-06-05 | 1 | -13/+0
  Problem: "gluster peer status" on some nodes gives port info and fails to give it on others, even though it is a hard-coded value.
  Fix: Remove the port info from the command.
  Change-Id: I919f0349f252e658bfc13e60bb8e171da32eaf25
  BUG: 964026
  Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
  Reviewed-on: http://review.gluster.org/5027
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli: add a command 'gluster pool list [--xml]' | Niels de Vos | 2013-04-26 | 1 | -13/+14
  * Unlike 'gluster peer status', which lists only info about peers, this command lists localhost also in the list, so the sorted output from all the nodes should match.
  * Made the output script friendly by keeping it one output per line.
  Change-Id: I853656753b35c617debbcceecbb71c8d6dd3c334
  BUG: 764638
  Original-review: http://review.gluster.org/4221
  Original-author: Amar Tumballi <amarts@redhat.com>
  Signed-off-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-on: http://review.gluster.org/4862
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli: Address a double free with volume info | Vijay Bellur | 2013-04-08 | 1 | -2/+2
  A crash is observed when volume info is performed on a non-existing volume name and the output format is xml.
  Change-Id: I88aa5d9dc954b1352f5cc3b5b38742c832bc1bb8
  BUG: 949298
  Signed-off-by: Vijay Bellur <vbellur@redhat.com>
  Reviewed-on: http://review.gluster.org/4785
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* cli: output xml in pretty format | Kaushal M | 2013-01-16 | 1 | -58/+52
  The gluster cli now prints XML output in 'pretty' format. This solves the problem of empty elements occurring as two tags instead of being collapsed into one.
  Change-Id: Iab7aeadcff29c18ae388b58e446a16e937ac09ed
  BUG: 849293
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/4355
  Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
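  As background, one way libxml2 produces indented ('pretty') output is to enable indentation on the text writer. The sketch below is illustrative of that general mechanism only; it is not necessarily what this patch does, and the function name is made up.
    /* Sketch: request indented output from libxml2's text writer. */
    #include <libxml/xmlwriter.h>

    static xmlTextWriterPtr
    new_pretty_writer (xmlBufferPtr buf)
    {
            xmlTextWriterPtr writer = xmlNewTextWriterMemory (buf, 0);
            if (!writer)
                    return NULL;
            xmlTextWriterSetIndent (writer, 1);
            xmlTextWriterSetIndentString (writer, (const xmlChar *)"  ");
            return writer;
    }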
* cli: Fix task-id xml compilation | Kaushal M | 2013-01-08 | 1 | -0/+2
  Change-Id: I92ada7d5ba1eb46024f47c4f32c517db27ada576
  BUG: 857330
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/4342
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
  Tested-by: Vijay Bellur <vbellur@redhat.com>
* core: remove all the 'inner' functions in codebase | Amar Tumballi | 2012-12-19 | 1 | -22/+34
  * move 'dict_keys_join()' from api/glfs_fops.c to libglusterfs/dict.c
    - also added an argument which is treated as a filter function if required, currently useful for fuse.
  * now 'make CFLAGS="-std=gnu99 -pedantic" 2>&1 | grep nested' gives no output.
  Change-Id: I4e18496fbd93ae1d3942026ef4931889cba015e8
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
  BUG: 875913
  Reviewed-on: http://review.gluster.org/4187
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd, cli: Task id's for async tasks | Kaushal M | 2012-12-19 | 1 | -0/+101
  This patch introduces task-id's for async tasks like rebalance, remove-brick and replace-brick. An id is generated for each task when it is started and displayed to the user in cli output. The status of running tasks is also included in the output of "volume status" along with its id, so that a user can easily track the progress of an async task.
  Also,
  * added tests for this feature into the regression test suite.
  * added a python script for creating files, 'create-files.py', courtesy Vijaykumar Koppad (vkoppad@redhat.com), into the test suite.
  This patch reverts the revert commit 698deb33d731df6de84da8ae8ee4045e1543a168.
  BUG: 857330
  Change-Id: Id43d7cb629a38f47f733fbc18cb4c5f2f0327c7a
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/4294
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* cli: Fixing the xml output in failure cases for gluster peer probe | Avra Sengupta | 2012-12-18 | 1 | -6/+12
  Change-Id: I9ebec995cbf47d9ced7140e37787e74ff9c63769
  BUG: 879490
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/4301
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
* Revert "glusterd, cli: Task id's for async tasks"Anand Avati2012-12-041-101/+0
| | | | | | | | | | | This reverts commit ed15521d4e5af2b52b78fd33711e7562f5273bc6 Strangely, the test scripts are "silently" passing for failures too. Reverting patch for now. Change-Id: I802ec1634c7863dc373cc7dc4a47bd4baa72764e Reviewed-on: http://review.gluster.org/4267 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd, cli: Task id's for async tasks | Kaushal M | 2012-12-04 | 1 | -0/+101
  This patch introduces task-id's for async tasks like rebalance, remove-brick and replace-brick. An id is generated for each task when it is started and displayed to the user in cli output. The status of running tasks is also included in the output of "volume status" along with its id, so that a user can easily track the progress of an async task.
  Also,
  * added tests for this feature into the regression test suite.
  * added a python script for creating files, 'create-files.py', courtesy Vijaykumar Koppad (vkoppad@redhat.com), into the test suite.
  Change-Id: Ib0c0d12e0d6c8f72ace48d303d7ff3102157e876
  BUG: 857330
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/3942
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* cli: fix incorrect xml output of brick tag | JulesWang | 2012-12-03 | 1 | -5/+0
  Incorrect output:
    <brick>
    <brick>
    </brick>
    </brick>
  Expected output:
    <brick></brick>
    <brick></brick>
  Change-Id: I40c30feda4e6f3f1ec5da4122f8965b61c511ae6
  BUG: 880993
  Signed-off-by: JulesWang <w.jq0722@gmail.com>
  Reviewed-on: http://review.gluster.org/4262
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: Kaushal M <kaushal@redhat.com>
* cli: Fix build when libxml2 is absent | Kaushal M | 2012-12-03 | 1 | -0/+29
  Also added a note to the top of cli-xml-output.c, explaining the style of coding to be followed when adding code to it.
  Change-Id: I7f90a9c075adb49a9e071771d136b6f01ea68d11
  BUG: 882780
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/4256
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
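  The usual pattern for keeping a build working without libxml2 is to compile the real implementations only under the feature macro and provide stubs otherwise. A hedged sketch follows; HAVE_LIB_XML is the macro this codebase mentions elsewhere in this log, but the function name here is illustrative, not an actual cli-xml-output.c symbol.
    /* Illustrative sketch of guarding an XML output routine so the CLI
     * still links when libxml2 is not available. */
    #if (HAVE_LIB_XML)

    int
    cli_xml_output_example (int op_ret, int op_errno, char *op_errstr)
    {
            /* a real libxml2-based implementation would go here;
             * this sketch just returns success */
            return 0;
    }

    #else /* no libxml2: stub keeps callers linking */

    int
    cli_xml_output_example (int op_ret, int op_errno, char *op_errstr)
    {
            return 0;
    }

    #endif /* HAVE_LIB_XML */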
* cli: XML output for "gluster volume geo-replication status"Avra Sengupta2012-11-271-1/+79
| | | | | | | | | | Change-Id: I1a64bd5eb9b1040a2a2d9b97bfe9cc756835596e BUG: 864506 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/4227 Reviewed-by: Kaushal M <kaushal@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli: XML output for "geo-replication <VOL> {start|stop}"Kaushal M2012-11-231-0/+78
| | | | | | | | | Change-Id: I8d2213cc00deb458fb765b848d0e3452574cc98f BUG: 864499 Signed-off-by: Kaushal M <kaushal@redhat.com> Reviewed-on: http://review.gluster.org/4217 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli: Mark port as N/A in volume status when process is not online | Krutika Dhananjay | 2012-10-30 | 1 | -9/+21
  Change-Id: Ie11c7331e3bc58c0f934f424dde4341cdffb9e2c
  BUG: 861542
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-on: http://review.gluster.org/4048
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli: Changes and enhancements to XML output | Kaushal M | 2012-10-11 | 1 | -47/+590
  This patch contains several xml related changes which fix some bugs and introduce xml output for commands which were missing it. These include:
  * XML output for rebalance & remove-brick status
  * XML output for replace-brick
  * XML output for 'volume status all' in one xml document
  * proper XML output for "volume {create|start|stop|delete}"
  * type & status of a volume in 'volume info' is now given as a string as well
  This patch also cleans up the '#if (HAVE_LIB_XML)' sections from the code-base, so that it is not littered around.
  Change-Id: I5bb022adf0fedf7e3ead92b4b79bfa02b0b5fef5
  BUG: 828131
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/3869
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* All: License message change | Varun Shastry | 2012-09-13 | 1 | -7/+6
  License message changed for server-side, dual license GPLv2 and LGPLv3+.
  Change-Id: Ia9e53061b9d2df3b3ef3bc9778dceff77db46a09
  BUG: 852318
  Signed-off-by: Varun Shastry <vshastry@redhat.com>
  Reviewed-on: http://review.gluster.org/3940
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* libglusterfs/dict: make 'dict_t' an opaque object | Amar Tumballi | 2012-09-06 | 1 | -27/+25
  * i.e., don't dereference the dict_t pointer, instead use APIs everywhere
  * other than dict_t, only 'data_t' should be the valid export from dict.h
  * added 'dict_foreach_fnmatch()' API
  * changed dict_lookup() to use data_t, instead of data_pair_t
  Change-Id: I400bb0dd55519a7c5d2a107e67c8e7a7207228dc
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
  BUG: 850917
  Reviewed-on: http://review.gluster.org/3829
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* cli, glusterd: Changes to 'peer status' xml output | Kaushal M | 2012-09-01 | 1 | -4/+16
  Glusterd now returns the status of a peer as both a string and a number. The xml output for peer status has been modified, such that the <status> element now contains the status number and a new <statusStr> element contains the status string.
  Change-Id: I0d4b74b84a991893d1029b8408d66ff078bbd254
  BUG: 847760
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/3868
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* All: License message change | Varun Shastry | 2012-08-28 | 1 | -14/+5
  The license message is changed to:
    Copyright (c) 2008-2012 Red Hat, Inc. <http://www.redhat.com>
    This file is part of GlusterFS.
    This file is licensed to you under your choice of the GNU Lesser General Public License, version 3 or any later version (LGPLv3 or later), or the GNU General Public License, version 2 (GPLv2), in all cases as published by the Free Software Foundation.
  Change-Id: I07d2b63ed5fbbbd1884f1e74f2dd56013d15b0f4
  BUG: 852318
  Signed-off-by: Varun Shastry <vshastry@redhat.com>
  Reviewed-on: http://review.gluster.org/3858
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli: Proper xml output for "gluster peer status" | Kaushal M | 2012-08-20 | 1 | -0/+114
  Change-Id: I5d72d3844aba0417652498f4dc706b4a68d36bd8
  BUG: 847760
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/3814
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <krishnan.parthasarathi@gmail.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* cli-xml-output.c: avoid NULL-deref upon OOM | Jim Meyering | 2012-07-13 | 1 | -1/+1
  Fix typo: s/buf/*buf/ in test after *buf = xmlBufferPtr(...
  Spotted by coverity.
  Change-Id: I92cd317832818647009a74f2a8407b5d3ecb958c
  BUG: 789278
  Signed-off-by: Jim Meyering <meyering@redhat.com>
  Reviewed-on: http://review.gluster.com/3670
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* remove useless if-before-free (and free-like) functions | Jim Meyering | 2012-07-13 | 1 | -2/+1
  See comments in http://bugzilla.redhat.com/839925 for the code to perform this change.
  Signed-off-by: Jim Meyering <meyering@redhat.com>
  BUG: 839925
  Change-Id: I10e4ecff16c3749fe17c2831c516737e08a3205a
  Reviewed-on: http://review.gluster.com/3661
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* localtime and ctime are not MT-SAFE | Kaleb S. KEITHLEY | 2012-06-29 | 1 | -15/+9
  There are a number of nit-level issues throughout the source with the use of localtime and ctime. While they apparently aren't causing too many problems, apart from the one in bz 828058, they ought to be fixed.
  Among the "real" problems that are fixed in this patch:
  1) general localtime and ctime not MT-SAFE. There's a non-zero chance that another thread calling localtime (or ctime) will over-write the static data about to be used in another thread
  2) localtime(& <64-bit-type>) or ctime(& <64-bit-type>) generally not a problem on 64-bit or little-endian 32-bit. But even though we probably have zero users on big-endian 32-bit platforms, it's still incorrect.
  3) multiple nested calls passed as params. Last one wins, i.e. over-writes result of prior calls.
  4) Inconsistent error handling. Most of these calls are for logging, tracing, or dumping. I submit that if an error somehow occurs in the call to localtime or ctime, the log/trace/dump should still occur.
  5) Appliances should all have their clocks set to UTC, and all log entries, traces, and dumps should use GMT.
  6) fix strtok(), change to strtok_r()
  Other things this patch fixes/changes (that aren't bugs per se):
  1) Change "%Y-%m-%d %H:%M:%S" and similar to their equivalent shorthand, e.g. "%F %T"
  2) change sizeof(timestr) to sizeof timestr. sizeof is an operator, not a function. You don't use i +(32), why use sizeof(<var>). (And yes, you do use parens with sizeof(<type>).)
  3) change 'char timestr[256]' to 'char timestr[32]' where appropriate. Per-thread stack is limited. Time strings are never longer than ~20 characters, so why waste 220+ bytes on the stack?
  Things this patch doesn't fix:
  1) hodgepodge of %Y-%m-%d %H:%M:%S versus %Y/%m/%d-%H%M%S and other variations. It's not clear to me whether this ever matters, not to mention 3rd party log filtering tools may already rely on a particular format. Still it would be nice to have a single manifest constant and have every call to localtime/strftime consistently use the same format.
  Change-Id: I827cad7bf53e57b69c0173f67abe72884249c1a9
  BUG: 832173
  Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
  Reviewed-on: http://review.gluster.com/3568
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
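  A small sketch of the thread-safe replacement pattern this commit describes: gmtime_r plus strftime into a caller-owned buffer, using the "%F %T" shorthand mentioned above. The function name and fallback formatting are illustrative, not the patch's actual code.
    #include <stdio.h>
    #include <time.h>

    /* Sketch: format a UTC timestamp without the static buffer that
     * localtime() and ctime() share across threads. */
    static void
    format_utc_time (time_t when, char *timestr, size_t len)
    {
            struct tm tm_utc;

            if (gmtime_r (&when, &tm_utc) != NULL)
                    strftime (timestr, len, "%F %T", &tm_utc);
            else
                    snprintf (timestr, len, "%ld", (long)when); /* still log something */
    }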
* cli,glusterd: more volume status improvements | Kaushal M | 2012-03-29 | 1 | -10/+32
  The major changes are:
  * "volume status" now supports getting details of the self-heal daemon processes for replica volumes. A new cli option "shd", similar to "nfs", has been introduced for this. "detail", "fd" and "clients" status ops are not supported for self-heal daemons.
  * The default/normal output of "volume status" has been enhanced to contain information about nfs-server and self-heal daemon processes as well. Some tweaks have been done to the cli output to show appropriate output.
  Also, changes have been done to rebalance/remove-brick status, so that hostnames are displayed instead of uuids.
  Change-Id: I3972396dcf72d45e14837fa5f9c7d62410901df8
  BUG: 803676
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.com/3016
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kp@gluster.com>
  Reviewed-by: Vijay Bellur <vijay@gluster.com>
* cli: More xml output changes | Kaushal M | 2012-03-13 | 1 | -38/+245
  * Added xml output for "volume quota" which was missing.
  * Fixed xml output for "volume info all" so that it contains only one xml document.
  * Fixed no xml output for normal "volume status".
  * Fixed normal output for "volume set".
  Change-Id: I3d85b6800e428226f2970d669e38e4331c99a218
  BUG: 799957
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.com/2868
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  Reviewed-by: Vijay Bellur <vijay@gluster.com>
* mempool: add more counters to understand the usage scenarios properly | Amar Tumballi | 2012-02-27 | 1 | -0/+21
  The current design of mempool is to fall back to standard calloc/free if all the buffers in the pool are exhausted. Understanding more about those numbers will help us to tune mempool parameters properly over time.
  Change-Id: I2c94373186f7c6a486caff2611c2d9df2c37db3c
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
  BUG: 797730
  Reviewed-on: http://review.gluster.com/2804
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Reviewed-by: Vijay Bellur <vijay@gluster.com>
* cli: Enable output in XML | Kaushal M | 2012-02-15 | 1 | -0/+2258
  This patch enables the gluster cli to output data in xml format. XML output can be obtained by passing "--xml" as an argument.
  A new "volume list" command, which lists the volumes present in a cluster, has been added. This can be used for obtaining a quick list of volumes.
  Several commands, including "volume top", "volume profile", "volume status", "volume info" and "volume list", have custom XML output routines. Other commands use either one of the 2 generic output routines, cli_xml_output_str() & cli_xml_output_dict().
  NOTE: When using "all" for "volume status" and "volume info" the XML output will have multiple roots.
  Change-Id: I6117baa02ec06fda116177dbd401f66521263ac6
  BUG: 790713
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.com/2753
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  Reviewed-by: Vijay Bellur <vijay@gluster.com>
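  For orientation, a minimal, hedged sketch of what emitting the <cliOutput> envelope with libxml2's text writer looks like. This is an illustration only, not the actual cli_xml_output_str()/cli_xml_output_dict() code, and error handling is reduced to a single NULL check for brevity.
    #include <libxml/xmlwriter.h>
    #include <stdio.h>

    /* Illustrative only: write <cliOutput><opRet>..</opRet>..</cliOutput>
     * into an in-memory buffer and print it. */
    int
    main (void)
    {
            xmlBufferPtr     buf    = xmlBufferCreate ();
            xmlTextWriterPtr writer = xmlNewTextWriterMemory (buf, 0);

            if (!writer)
                    return 1;

            xmlTextWriterStartDocument (writer, "1.0", "UTF-8", "yes");
            xmlTextWriterStartElement (writer, (const xmlChar *)"cliOutput");
            xmlTextWriterWriteFormatElement (writer, (const xmlChar *)"opRet", "%d", 0);
            xmlTextWriterWriteFormatElement (writer, (const xmlChar *)"opErrno", "%d", 0);
            xmlTextWriterWriteElement (writer, (const xmlChar *)"opErrstr",
                                       (const xmlChar *)"");
            xmlTextWriterEndElement (writer);        /* </cliOutput> */
            xmlTextWriterEndDocument (writer);
            xmlFreeTextWriter (writer);

            printf ("%s", (const char *)xmlBufferContent (buf));
            xmlBufferFree (buf);
            return 0;
    }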