path: root/xlators/mgmt/glusterd/src/glusterd-utils.h

* glusterd/snapshot: Compare and update snapshots during peer handshake
  Avra Sengupta, 2014-04-28 (1 file changed, -1/+21)

  During a peer-handshake, after the volumes have synced and the list of
  missed snapshots has synced, the node will perform the pending deletes
  and restores on this list. At this point the current snapshot list in
  the node will be updated, and hence in case of conflicts arising during
  snapshot handshake, the peer hosting the bricks will be given
  precedence. Likewise, if there is a conflict and both peers are in the
  same state, i.e. either both are hosting bricks or neither is hosting
  bricks, then a decision can't be taken and a peer-reject will happen.

  glusterd_compare_and_update_snap() implements the following algorithm
  to perform the above task:

  Step 1: Start.
  Step 2: Check if the peer is missing a delete on the said snap.
          If yes, goto step 6.
  Step 3: Check if there is a conflict between the peer's data and the
          local snap. If no, goto step 5.
  Step 4: As there is a conflict, check if both the peer and the local
          node are hosting bricks. Based on the results perform the
          following:

          Peer Hosts Bricks    Local Node Hosts Bricks    Action
          Yes                  Yes                        Goto Step 7
          No                   No                         Goto Step 7
          Yes                  No                         Goto Step 8
          No                   Yes                        Goto Step 6

  Step 5: Check if the local node is missing the peer's data.
          If yes, goto step 9.
  Step 6: It's a no-op. Goto step 10.
  Step 7: Peer Reject. Goto step 10.
  Step 8: Delete local node's data.
  Step 9: Accept Peer Data.
  Step 10: Stop.

  Change-Id: I79be0f0f5f2a4f5c72277a4e77c2be732af432e1
  BUG: 1061685
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/7525
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
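
  The step table above boils down to a small decision function. The
  following is a minimal, self-contained C sketch; the enum, the function
  and its parameters are hypothetical illustrations, not the actual
  glusterd types or symbols.

      #include <stdbool.h>
      #include <stdio.h>

      /* Possible outcomes of comparing a peer's snapshot data with our own. */
      typedef enum {
          ACTION_NOOP,          /* steps 2/4 -> 6: nothing to do           */
          ACTION_PEER_REJECT,   /* step 7: conflicting, undecidable state  */
          ACTION_DELETE_LOCAL,  /* step 8: peer hosts bricks, we don't     */
          ACTION_ACCEPT_PEER    /* step 9: take the peer's data            */
      } snap_action_t;

      static snap_action_t
      decide_snap_action(bool peer_missed_delete, bool conflict,
                         bool peer_hosts_bricks, bool local_hosts_bricks,
                         bool local_missing_peer_data)
      {
          if (peer_missed_delete)                      /* step 2 -> step 6 */
              return ACTION_NOOP;
          if (conflict) {                              /* step 4           */
              if (peer_hosts_bricks == local_hosts_bricks)
                  return ACTION_PEER_REJECT;           /* both or neither  */
              return peer_hosts_bricks ? ACTION_DELETE_LOCAL
                                       : ACTION_NOOP;
          }
          if (local_missing_peer_data)                 /* step 5 -> step 9 */
              return ACTION_ACCEPT_PEER;
          return ACTION_NOOP;
      }

      int main(void)
      {
          /* Conflict where only the peer hosts bricks: drop the local data. */
          printf("%d\n", decide_snap_action(false, true, true, false, false));
          return 0;
      }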

* glusterd: Rename the export dictionary as peer_data
  Avra Sengupta, 2014-04-28 (1 file changed, -5/+6)

  During a glusterd handshake, a dictionary is passed among the peers
  which contains info of volumes, global opts, and now also info of snaps
  and the list of missed snaps.

  As it now contains more than just volume-specific data, rename the dict
  in the code-base from "vols" to "peer_data".

  Change-Id: Ib457172789ddd0d8978b08bceab0988c48e9eea7
  BUG: 1061685
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/7524
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* glusterd/snapshot: Recreate the mount dirs and mount the lvm snapshots on node reboot.
  Avra Sengupta, 2014-04-28 (1 file changed, -0/+3)

  The lvm snapshots of the bricks are mounted at /var/run/gluster/snaps/
  or /run/gluster/snaps. These paths, being on a tmpfs, are removed on
  reboot. So when glusterd starts, we need to recreate these paths,
  activate the respective logical volumes (lvm snapshots of the bricks),
  and mount these logical volumes at their respective paths.

  Change-Id: Ic5ef61e79a25d9830df717c592391965fe09db62
  BUG: 1061685
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/7452
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* glusterd/snapshot: Perform missed snap deletes and restores.
  Avra Sengupta, 2014-04-28 (1 file changed, -0/+5)

  Replacing is_volume_restored (gf_boolean_t) with restored_from_snap
  (uuid_t) in glusterd_volinfo_.

  Also moved gd_restore_snap_volume from glusterd-volgen.c to
  glusterd-snapshot.c.

  Change-Id: Ic615a1658cfaffa98d4590506ac82f20bf709ad6
  BUG: 1089906
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/7455
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* glusterd/snapshot: Adding snap_vol_id and snap_uuid to missed_snap_list
  Avra Sengupta, 2014-04-27 (1 file changed, -1/+2)

  Persisting missing snapshot info on disk as well as in memory in the
  following format:

  NODE-UUID:SNAP-UUID=SNAP-VOL-ID:BRICKNUM:BRICKPATH:OPERATION:STATUS

  927cb5fe-63da-48f5-82f6-e6a09ddc81c4:8258b18f-d408-483d-8239-204039dc6397=a17b4fe42c5a45f7a916438643edaa13: 3 :/brick/brick-dirs/brick3: 1 : 1
  927cb5fe-63da-48f5-82f6-e6a09ddc81c4:8258b18f-d408-483d-8239-204039dc6397=a17b4fe42c5a45f7a916438643edaa13: 3 :/brick/brick-dirs/brick3: 3 : 1
  927cb5fe-63da-48f5-82f6-e6a09ddc81c4:8258b18f-d408-483d-8239-204039dc6397=83a3cc05453b46b2a7eda4c9a9208638: 3 :/brick/brick-dirs/brick3: 1 : 1

  This data will be stored on disk at
  /var/lib/glusterd/snaps/missed_snaps_list.

  In memory we maintain the data as a list of glusterd_missed_snap_info
  in conf; the key for this list is the first two fields, i.e.
  NODE-UUID:SNAP-UUID. For every NODE-UUID:SNAP-UUID there can be
  multiple operations missed on multiple bricks, so we maintain a list of
  glusterd_snap_op_t for every node of glusterd_missed_snap_info.

  This list is maintained or updated during snapshot create, delete, and
  restore operations, which are the only operations that, if missed, are
  recorded in this list.

  During snapshot create, if a node is down or a brick is down, we don't
  receive their mount point info. The snap_status of such bricks is
  marked as -1, and their brick details are added to this list.

  During snapshot delete, we check from the originator node if any other
  nodes holding bricks of the said snap are down. Those are also added to
  the list. Also, if a node is up but the snapshot was pending for a snap
  brick and its snap_status is -1, we add that to the list too. When a
  subsequent delete entry is processed for an already existing create
  entry, we just mark the create entry's status as done (2), and don't
  add the delete entry to the list.

  During snapshot restore, we check from the originator node if any other
  nodes holding bricks of the said snap are down. Those are also added to
  the list. Also, if a node is up but the snapshot was pending for a snap
  brick and its snap_status is -1, we add that to the list too. Like
  delete, when a subsequent restore entry is processed for an already
  existing create entry, we just mark the create entry's status as done
  (2), and don't add the restore entry to the list.

  Change-Id: I54f63e28d3c40555d0f84528f38227103171f594
  BUG: 1061685
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/7454
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
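
  The on-disk line format above is simple enough to parse with sscanf.
  Below is a rough, self-contained C sketch; the struct and field names
  are made up for illustration and do not match the real glusterd code.

      #include <stdio.h>

      struct missed_snap_entry {
          char node_uuid[64], snap_uuid[64], snap_vol_id[64], brick_path[256];
          int  brick_num, op, status;
      };

      /* NODE-UUID:SNAP-UUID=SNAP-VOL-ID:BRICKNUM:BRICKPATH:OP:STATUS */
      static int parse_missed_snap(const char *line, struct missed_snap_entry *e)
      {
          int n = sscanf(line, "%63[^:]:%63[^=]=%63[^:]: %d :%255[^:]: %d : %d",
                         e->node_uuid, e->snap_uuid, e->snap_vol_id,
                         &e->brick_num, e->brick_path, &e->op, &e->status);
          return (n == 7) ? 0 : -1;   /* all seven fields must be present */
      }

      int main(void)
      {
          struct missed_snap_entry e;
          const char *line =
              "927cb5fe-63da-48f5-82f6-e6a09ddc81c4:"
              "8258b18f-d408-483d-8239-204039dc6397="
              "a17b4fe42c5a45f7a916438643edaa13: 3 :/brick/brick-dirs/brick3: 1 : 1";
          if (parse_missed_snap(line, &e) == 0)
              printf("key=%s:%s brick=%d op=%d status=%d\n",
                     e.node_uuid, e.snap_uuid, e.brick_num, e.op, e.status);
          return 0;
      }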

* glusterd/snapshot-handshake: Perform handshake of missed_snaps_list.
  Avra Sengupta, 2014-04-25 (1 file changed, -0/+6)

  In a handshake, create a union of the missed_snap_lists of the two
  peers.

  If an entry is present in both, it's a no-op.
  If an entry is pending locally, and the peer entry is done, mark our
  own entry as done.
  If an entry is done locally, and the peer entry is pending, it's a
  no-op.
  If it's a new entry, add it.

  Change-Id: Idbfa49cc34871631ba8c7c56d915666311024887
  BUG: 1061685
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/7453
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
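
  A minimal sketch of the per-entry merge rule follows. The "done (2)"
  value comes from the previous commit; treating 1 as "pending" is an
  assumption made only for this illustration, and the names are not
  glusterd's.

      #include <stdio.h>

      /* Assumed status encoding for the sketch: 1 = pending, 2 = done. */
      typedef enum { SNAP_OP_PENDING = 1, SNAP_OP_DONE = 2 } snap_op_status_t;

      /* Returns the status the local entry should end up with after the
       * handshake; entries present only on the peer are simply appended. */
      static snap_op_status_t
      merge_missed_entry(snap_op_status_t local, snap_op_status_t peer)
      {
          if (local == SNAP_OP_PENDING && peer == SNAP_OP_DONE)
              return SNAP_OP_DONE;   /* peer already performed it: mark ours done */
          return local;              /* every other combination is a no-op */
      }

      int main(void)
      {
          printf("%d\n", merge_missed_entry(SNAP_OP_PENDING, SNAP_OP_DONE)); /* 2 */
          printf("%d\n", merge_missed_entry(SNAP_OP_DONE, SNAP_OP_PENDING)); /* 2 */
          return 0;
      }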

* build: MacOSX Porting fixes
  Harshavardhana, 2014-04-24 (1 file changed, -3/+5)

  git@forge.gluster.org:~schafdog/glusterfs-core/osx-glusterfs

  Working functionality on MacOSX:
  - GlusterD (management daemon)
  - GlusterCLI (management cli)
  - GlusterFS FUSE (using OSXFUSE)
  - GlusterNFS (without NLM - issues with rpc.statd)

  Change-Id: I20193d3f8904388e47344e523b3787dbeab044ac
  BUG: 1089172
  Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
  Signed-off-by: Dennis Schafroth <dennis@schafroth.com>
  Tested-by: Harshavardhana <harsha@harshavardhana.net>
  Tested-by: Dennis Schafroth <dennis@schafroth.com>
  Reviewed-on: http://review.gluster.org/7503
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>

* cli,glusterd: Improve detach check validation
  Kaushal M, 2014-04-11 (1 file changed, -1/+1)

  This patch improves the validation for the 'peer detach' command.

  A validation is added to the peer detach code flow to check whether any
  volume exists with some of its bricks on the peer being detached (even
  force performs this validation). This patch also guarantees that peer
  detach doesn't fail for a volume with all its bricks on the peer which
  is getting detached, when there are no other bricks on this peer.

  The following steps need to be followed for removing a downed and
  unrecoverable peer.

  * If a replacement system is available:
    - add it to the cluster
    - use replace-brick to migrate bricks of the downed peer to the new
      peer (since data cannot be recovered anyway, use the
      'replace-brick commit force' command)

  * If no replacement system is available:
    - remove bricks of the downed peer using 'remove-brick'

  Change-Id: Ie85ac5b66e87bec365fdedd8352b645bb25e1c33
  BUG: 983590
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/5325
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* gluster: GlusterFS Volume Snapshot Feature
  Avra Sengupta, 2014-04-11 (1 file changed, -1/+58)

  This is the initial patch for the Snapshot feature. The current patch
  includes the following features:
  * Snapshot create
  * Snapshot delete
  * Snapshot restore
  * Snapshot list
  * Snapshot info
  * Snapshot status
  * Snapshot config

  Change-Id: I2f46920c0d61c515f6a60e0f8b46fff886d9f6a9
  BUG: 1061685
  Signed-off-by: shishir gowda <sgowda@redhat.com>
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Signed-off-by: Vijaikumar M <vmallika@redhat.com>
  Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
  Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
  Signed-off-by: Joseph Fernandes <josferna@redhat.com>
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/7128
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* glusterd: op-version check for brickops.
  Ravishankar N, 2014-03-24 (1 file changed, -0/+3)

  The cluster op-version must be at least 4 for add/remove brick to
  proceed. This change is required for the new afr-changelog xattr
  changes that will be done for glusterFS 3.6
  (http://review.gluster.org/#/c/7155/).

  In add-brick, the check is done only when the replica count is
  increased, because only that will affect the AFR xattrs. In
  remove-brick, the check is unconditional, failing which there will be
  inconsistencies in the client xlator names amongst the volfiles of
  different peers.

  Change-Id: If981da2f33899aed585ab70bb11c09a093c9d8e6
  BUG: 1066778
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-on: http://review.gluster.org/7122
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* glusterd: persistent client xlator/afr changelog names
  Ravishankar N, 2014-03-24 (1 file changed, -0/+7)

  - Add a unique brick-id field to glusterd_brickinfo_t.
  - Persist the id to the brickinfo file.
  - Use the brick-id as the client xlator name during vol create,
    add-brick and replace-brick operations.
  - For older volumes, generate the id in-memory during glusterd restore
    but defer writing it to the brickinfo file until the next volume set
    operation.
  - Send and receive the brick-ids during peer probe.

  Feature page:
  www.gluster.org/community/documentation/index.php/Features/persistent-AFR-changelog-xattributes

  Related patch: http://review.gluster.org/#/c/7122

  Change-Id: Ib7f1570004e33f4144476410eec2b84df4e41448
  BUG: 1066778
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-on: http://review.gluster.org/7155
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* glusterd: Volume locks and transaction specific opinfos
  Avra Sengupta, 2014-02-10 (1 file changed, -1/+4)

  With this patch we are replacing the existing cluster-wide lock taken
  on glusterds across the cluster with volume locks, which are also taken
  on glusterds across the cluster but are volume specific. With the
  volume locks we are able to perform more than one gluster operation at
  the same time, as long as the operations are being performed on
  different volumes.

  We maintain a global list of volume-locks (using a dict for a list)
  where the key is the volume name and which saves the uuid of the
  originator glusterd. These locks are held and released per volume
  transaction.

  In order to achieve multiple gluster operations occurring at the same
  time, we also separate opinfos in the op-state-machine as a part of
  this patch. To do so, we generate a unique transaction-id (uuid) per
  gluster transaction. An opinfo is then associated with this transaction
  id, which is used throughout the transaction. We maintain a run-time
  global list (using a dict) of transaction-ids and their respective
  opinfos to achieve this.

  Upstream Feature Page:
  http://www.gluster.org/community/documentation/index.php/Features/glusterd-volume-locks

  Change-Id: Iaad505a854bac8de8f83beec0357eb6cde3f7ea8
  BUG: 1011470
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/5994
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
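
  As a rough illustration of volume-name-keyed locking with an originator
  uuid, here is a self-contained C sketch; a fixed-size table stands in
  for glusterd's dict, and all names are hypothetical.

      #include <stdio.h>
      #include <string.h>

      #define MAX_LOCKS 16

      struct vol_lock { char volname[64]; char owner_uuid[64]; int in_use; };
      static struct vol_lock locks[MAX_LOCKS];

      /* Returns 0 if the lock is granted, -1 if the volume is already locked. */
      static int vol_lock_acquire(const char *volname, const char *owner_uuid)
      {
          int free_slot = -1;
          for (int i = 0; i < MAX_LOCKS; i++) {
              if (locks[i].in_use && strcmp(locks[i].volname, volname) == 0)
                  return -1;                    /* another transaction holds it */
              if (!locks[i].in_use && free_slot < 0)
                  free_slot = i;
          }
          if (free_slot < 0)
              return -1;
          snprintf(locks[free_slot].volname, sizeof(locks[free_slot].volname),
                   "%s", volname);
          snprintf(locks[free_slot].owner_uuid, sizeof(locks[free_slot].owner_uuid),
                   "%s", owner_uuid);
          locks[free_slot].in_use = 1;
          return 0;
      }

      static void vol_lock_release(const char *volname, const char *owner_uuid)
      {
          for (int i = 0; i < MAX_LOCKS; i++)
              if (locks[i].in_use && strcmp(locks[i].volname, volname) == 0 &&
                  strcmp(locks[i].owner_uuid, owner_uuid) == 0)
                  locks[i].in_use = 0;          /* released at end of transaction */
      }

      int main(void)
      {
          printf("%d\n", vol_lock_acquire("vol1", "uuid-a"));  /*  0          */
          printf("%d\n", vol_lock_acquire("vol2", "uuid-b"));  /*  0: allowed */
          printf("%d\n", vol_lock_acquire("vol1", "uuid-b"));  /* -1: held    */
          vol_lock_release("vol1", "uuid-a");
          return 0;
      }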

* glusterd: add volinfo to the list data structure in an order
  Vijaykumar M, 2014-02-08 (1 file changed, -0/+3)

  Currently volinfo is added at the end of the list while creating a
  volume. On gluster restart, readdir will not provide an ordered list,
  and the data is populated in the same order as readdir returns it.

  The solution is to insert the volinfo into the list in sorted order.

  Change-Id: I1716ac6abbd7dd301a7125425fc413c6833f7a48
  BUG: 1039912
  Signed-off-by: Vijaykumar M <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/6472
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
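
  A minimal C sketch of an ordered list insert of this kind, assuming the
  ordering is by volume name; the struct is illustrative only, not the
  real glusterd_volinfo_t.

      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      struct volinfo { char name[64]; struct volinfo *next; };

      /* Insert v so the list stays sorted by name, independent of readdir order. */
      static void volinfo_insert_sorted(struct volinfo **head, struct volinfo *v)
      {
          while (*head && strcmp((*head)->name, v->name) < 0)
              head = &(*head)->next;    /* walk until the first name >= v->name */
          v->next = *head;
          *head = v;
      }

      int main(void)
      {
          const char *names[] = { "vol-c", "vol-a", "vol-b" };
          struct volinfo *head = NULL;
          for (int i = 0; i < 3; i++) {
              struct volinfo *v = calloc(1, sizeof(*v));
              snprintf(v->name, sizeof(v->name), "%s", names[i]);
              volinfo_insert_sorted(&head, v);
          }
          for (struct volinfo *v = head; v; v = v->next)
              printf("%s\n", v->name);  /* vol-a, vol-b, vol-c */
          return 0;
      }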

* glusterd: make volinfo a refcnt'ed object.
  Krishnan Parthasarathi, 2013-12-20 (1 file changed, -0/+6)

  Add glusterd_volinfo_remove(..), which removes @volinfo from the list
  of volumes in the cluster and performs an unref on @volinfo.

  Change-Id: I5f546ca58f61bc334ab1bab4c51c4a21e1f66161
  BUG: 1038051
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/6521
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* rpc,glusterd: Use rpc_clnt notifyfn to cleanup mydata
  Kaushal M, 2013-12-16 (1 file changed, -0/+3)

  rpc:
  - On a RPC_TRANSPORT_CLEANUP event, rpc_clnt_notify calls the
    registered notifyfn with a RPC_CLNT_DESTROY event. The notifyfn
    should properly cleanup the saved mydata on this event.
  - Break the reconnect chain when an rpc client is disabled. This will
    prevent new disconnect events which can lead to crashes.

  glusterd:
  - Added support for RPC_CLNT_DESTROY in glusterd_brick_rpc_notify.
  - Use a common glusterd_rpc_clnt_unref() function throughout glusterd
    in place of rpc_clnt_unref(). This function correctly gives up the
    big-lock before performing the unref.

  Change-Id: I93230441c5089039643fc9f5632477ef1b695348
  BUG: 962619
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/5512
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* glusterd/geo-rep: more glusterd and cli fixes for geo-rep.
  Ajeet Jha, 2013-12-12 (1 file changed, -0/+11)

  -> handle option validation cases in reset case.
  -> Creating valid conf path when glusterd restarts.
  -> Reading the gsyncd worker thread status and displaying it.
  -> Displaying status-detail per worker.
  -> Fetch checkpoint info in geo-rep status.
  -> use-tarssh value validation added.

  misc: misc geo-rep fixes based on cluster, logrotate etc.
  -> cluster/dht: fix 'stime' getxattr getting overwritten.
  -> cluster/afr: return max of 'stime' values in subvol.
  -> geo-rep-logrotate: Sending SIGHUP to geo-rep auxiliary.
  -> cluster/dht: fix convoluted logic while aggregating.
  -> cluster/*: fix 'stime' min/max fetch logic.

  Change-Id: I811acea0bbd6194797a3e55d89295d1ea021ac85
  BUG: 1036552
  Signed-off-by: Ajeet Jha <ajha@redhat.com>
  Reviewed-on: http://review.gluster.org/6405
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Amar Tumballi <amarts@gmail.com>
  Reviewed-by: Anand Avati <avati@redhat.com>

* glusterd: Improve rebalance handling during volume sync
  Kaushal M, 2013-12-10 (1 file changed, -0/+3)

  Glusterd will now correctly copy existing rebalance information when a
  volinfo is updated during volume sync. If the existing rebalance
  information was stale, then any existing rebalance process will be
  terminated. A new rebalance process will be started only if there is no
  existing rebalance process. The rebalance process will not be started
  if the existing rebalance session had completed, failed or been
  stopped.

  Change-Id: I68c5984267c188734da76770ba557662d4ea3ee0
  BUG: 1036464
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/6334
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
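
  A condensed sketch of the restart decision described above, with
  hypothetical status values and names (not the actual glusterd code):

      #include <stdbool.h>
      #include <stdio.h>

      typedef enum { REBAL_NOT_STARTED, REBAL_STARTED, REBAL_STOPPED,
                     REBAL_COMPLETED, REBAL_FAILED } rebal_status_t;

      static bool should_start_rebalance(bool process_running,
                                         rebal_status_t old_status)
      {
          if (process_running)
              return false;   /* never start a second rebalance process */
          switch (old_status) {
          case REBAL_COMPLETED:
          case REBAL_FAILED:
          case REBAL_STOPPED:
              return false;   /* the old session already ended */
          default:
              return true;    /* resume a session that was still running */
          }
      }

      int main(void)
      {
          printf("%d\n", should_start_rebalance(false, REBAL_STARTED));   /* 1 */
          printf("%d\n", should_start_rebalance(false, REBAL_COMPLETED)); /* 0 */
          printf("%d\n", should_start_rebalance(true,  REBAL_STARTED));   /* 0 */
          return 0;
      }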

* glusterd: Aggregate tasks status in 'volume status [tasks]'
  Kaushal M, 2013-12-04 (1 file changed, -0/+2)

  Previously, glusterd used to just send back the local status of a task
  in a 'volume status [tasks]' command. As the rebalance operation is
  distributed and asynchronous, this meant that different peers could
  give different status values for a rebalance or remove-brick task.

  With this patch, all the peers will send back the task's status as a
  part of the 'volume status' commit op, and the origin peer will
  aggregate these to arrive at a final status for the task.

  The aggregation is only done for rebalance or remove-brick tasks. The
  replace-brick task will have the same status on all the peers (see the
  comment in glusterd_volume_status_aggregate_tasks_status() for more
  information) and need not be aggregated.

  The rebalance process has 5 states:
  NOT_STARTED - rebalance process has not been started on this node
  STARTED     - rebalance process has been started and is still running
  STOPPED     - rebalance process was stopped by a 'rebalance/remove-brick
                stop' command
  COMPLETED   - rebalance process completed successfully
  FAILED      - rebalance process failed to complete successfully

  The aggregation is done using the following precedence:
  STARTED > FAILED > STOPPED > COMPLETED > NOT_STARTED

  The new changes make the 'volume status tasks' command a distributed
  command, as we need to get the task status from all peers.

  The following tests were performed:
  - Start a remove-brick task and do a status command on a peer which
    doesn't have the brick being removed. The remove-brick status was
    given correctly as 'in progress' and 'completed', instead of 'not
    started'.
  - Start a rebalance task and run the status command. The status moved
    to 'completed' only after rebalance completed on all nodes.

  Also, change the CLI xml output code for rebalance status to use the
  same algorithm for status aggregation.

  Change-Id: Ifd4aff705aa51609a612d5a9194acc73e10a82c0
  BUG: 1027094
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/6230
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
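
  The precedence rule can be realized by ordering the status values and
  taking the maximum across peers. A small, self-contained C sketch
  (names and enum are illustrative only):

      #include <stdio.h>

      /* Ordered so that a larger value wins the aggregation:
       * STARTED > FAILED > STOPPED > COMPLETED > NOT_STARTED */
      typedef enum { TASK_NOT_STARTED, TASK_COMPLETED, TASK_STOPPED,
                     TASK_FAILED, TASK_STARTED } task_status_t;

      static task_status_t aggregate_status(const task_status_t *per_peer, int n)
      {
          task_status_t result = TASK_NOT_STARTED;
          for (int i = 0; i < n; i++)
              if (per_peer[i] > result)
                  result = per_peer[i];   /* higher precedence wins */
          return result;
      }

      int main(void)
      {
          task_status_t s[] = { TASK_COMPLETED, TASK_STARTED, TASK_COMPLETED };
          /* One peer is still running, so the aggregate is "started". */
          printf("%d\n", aggregate_status(s, 3) == TASK_STARTED);
          return 0;
      }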

* cli/glusterd: Changes to quota command - Quota feature re-work
  Raghavendra G, 2013-11-26 (1 file changed, -4/+47)

  Following are the cli commands that are new/re-worked:
  ======================================================
  volume quota <VOLNAME> {enable|disable|list [<path> ...]|remove <path>|
                          default-soft-limit <percent>} |
  volume quota <VOLNAME> {limit-usage <path> <size> [<percent>]} |
  volume quota <VOLNAME> {alert-time|soft-timeout|hard-timeout} {<time>}

  volume status [all | <VOLNAME> [nfs|shd|<BRICK>|quotad]]
                [detail|clients|mem|inode|fd|callpool]
  volume statedump <VOLNAME> [nfs|quotad]
                [all|mem|iobuf|callpool|priv|fd|inode|history]

  glusterd changes:
  =================
  * Quota limits are now set as extended attributes by glusterd from the
    aux mount created by the cli.
  * The gfids of the directories on which quota limits are set for a
    given volume are stored in
    /var/lib/glusterd/vols/<volname>/quota.conf in binary format, and
    their cksum and version are stored in
    /var/lib/glusterd/vols/<volname>/quota.cksum.

  Original-author: Krutika Dhananjay <kdhananj@redhat.com>
  Original-author: Krishnan Parthasarathi <kparthas@redhat.com>
  BUG: 969461
  Change-Id: If32bba36c67f9c2a30417af9c6389045b2b7c13b
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
  Reviewed-on: http://review.gluster.org/6003
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>

* glusterd: Start rebalance only where required
  Kaushal M, 2013-11-20 (1 file changed, -0/+3)

  Gluster was starting rebalance processes on peers where it wasn't
  required in two cases:
  - For a normal rebalance command on a volume, rebalance processes were
    started on all peers instead of just the peers which contain bricks
    of the volume.
  - For rebalance processes being restarted by a volume sync, caused by a
    new peer being probed or a peer restarting, rebalance processes were
    started on all peers, for both a normal rebalance and for
    remove-brick needing rebalance.

  This patch adds a new check before starting a rebalance process in the
  above two cases:
  - For a rebalance process required by a rebalance command, each peer
    will check if it contains at least one brick of the volume.
  - For a rebalance process required by a remove-brick command, each peer
    will check if it contains at least one of the bricks being removed.

  Change-Id: I512da16994f0d5482889c3a009c46dc20a8a15bb
  BUG: 1031887
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/6301
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
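
  A rough sketch of the "does this peer need a rebalance process?" check,
  using hypothetical types; the real code walks glusterd's brickinfo
  lists rather than a plain array.

      #include <stdbool.h>
      #include <stdio.h>
      #include <string.h>

      struct brick { const char *hostname; bool being_removed; };

      static bool peer_needs_rebalance(const struct brick *bricks, int n,
                                       const char *myhost, bool remove_brick_op)
      {
          for (int i = 0; i < n; i++) {
              if (strcmp(bricks[i].hostname, myhost) != 0)
                  continue;                 /* not one of this peer's bricks */
              if (!remove_brick_op || bricks[i].being_removed)
                  return true;              /* local brick (or local brick being removed) */
          }
          return false;                     /* no reason to spawn a rebalance here */
      }

      int main(void)
      {
          struct brick b[] = { { "peer1", false }, { "peer2", true } };
          printf("%d\n", peer_needs_rebalance(b, 2, "peer3", false)); /* 0 */
          printf("%d\n", peer_needs_rebalance(b, 2, "peer2", true));  /* 1 */
          return 0;
      }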

* gNFS: RFE for NFS connection behavior
  Santosh Kumar Pradhan, 2013-11-14 (1 file changed, -0/+6)

  Implement reconfigure() for the NFS xlator so that volume set/reset
  won't restart the NFS server process. A few options cannot be
  reconfigured dynamically, e.g. nfs.mem-factor, nfs.port etc., which
  need NFS to be restarted.

  Change-Id: Ic586fd55b7933c0a3175708d8c41ed0475d74a1c
  BUG: 1027409
  Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
  Reviewed-on: http://review.gluster.org/6236
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Anand Avati <avati@redhat.com>

* mgmt/glusterd: Relax extended attribute checks for volume create and add brick force.
  Vijay Bellur, 2013-10-17 (1 file changed, -1/+1)

  The expectation with force is that the user is aware of the
  consequences of sanity checks not being triggered.

  Change-Id: I79dfeed16a23829a7217cef33ab83f9f0ffae336
  Signed-off-by: Vijay Bellur <vbellur@redhat.com>
  BUG: 1007509
  Reviewed-on: http://review.gluster.org/5746
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>

* cli,glusterd: Implement 'volume status tasks'
  Krutika Dhananjay, 2013-10-08 (1 file changed, -0/+2)

  oVirt's Gluster Integration needs an inexpensive command that can be
  executed every 10 seconds to monitor async tasks and their parameters,
  for all volumes.

  The solution involves adding a 'tasks' sub-command to 'volume status'
  to fetch only the async task IDs, type and other relevant parameters.
  Only the originator glusterd participates in this command as all the
  information needed is available on all the nodes. This is to make the
  command suitable for being executed every 10 seconds.

  Change-Id: I1edc607baf29b001a5585079dec681d7c641b3d1
  BUG: 1012346
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-on: http://review.gluster.org/6006
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>

* glusterd: Allowing root@hostname::slave georep sessions to be created.
  Venky Shankar, 2013-09-04 (1 file changed, -1/+1)

  non-root@hostname::slave-vol geo-rep sessions are not supported. Only
  hostname and root@hostname sessions are supported, and are treated as
  the same.

  Change-Id: I87551e1bd4ff4e0e6520c34eb3d944587cc65476
  BUG: 998933
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Signed-off-by: Venky Shankar <vshankar@redhat.com>
  Reviewed-on: http://review.gluster.org/5659
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>

* glusterd : initiating gsyncd restart during add-brick
  Avra Sengupta, 2013-07-31 (1 file changed, -0/+16)

  During add-brick, when a new brick is added on one of the nodes that
  was already a part of the existing volume, and gsyncd was already
  running on that node, then all gsyncd processes running on that node
  for that particular master and any slave sessions will be restarted.

  If a new brick is added on a new node, then after adding the brick the
  user has to perform the following steps:
  1. gluster system:: execute gsec_create
  2. gluster volume geo-replication <master-vol> <slave-vol> create
     push-pem force
  3. gluster volume geo-replication <master-vol> <slave-vol> start force

  Change-Id: I4b9633e176c80e4a7cf33f42ebfa47ab8fc283f1
  BUG: 989532
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/5416
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* glusterd/cli changes for distributed geo-rep
  Avra Sengupta, 2013-07-26 (1 file changed, -1/+19)

  Commands:
  gluster system:: execute gsec_create
  gluster volume geo-rep <master> <slave-url> create [push-pem] [force]
  gluster volume geo-rep <master> <slave-url> start [force]
  gluster volume geo-rep <master> <slave-url> stop [force]
  gluster volume geo-rep <master> <slave-url> delete
  gluster volume geo-rep <master> <slave-url> config
  gluster volume geo-rep <master> <slave-url> status

  The geo-replication is distributed. The session will be created, and
  gsyncd will be spawned on all relevant nodes, instead of only one node.

  geo-rep: Collecting status detail related data

  Added persistent store for saving information about TotalFilesSynced,
  TotalSyncTime, TotalBytesSynced.

  Changes in the status information in socket:
  Existing (Ex):
  FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;
  New (Ex):
  FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;SyncTime=0.69978;
  TotalSyncTime=2.890044;TotalFilesSynced=6;TotalBytesSynced=143640;

  Persistent details stored in
  /var/lib/glusterd/geo-replication/${mastervol}/${eSlave}-detail.status

  Change-Id: I1db7fc13ffca2e415c05200b0109b1254067f111
  BUG: 847839
  Original Author: Avra Sengupta <asengupt@redhat.com>
  Original Author: Venky Shankar <vshankar@redhat.com>
  Original Author: Aravinda VK <avishwan@redhat.com>
  Original Author: Amar Tumballi <amarts@redhat.com>
  Original Author: Csaba Henk <csaba@redhat.com>
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/5132
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
  Tested-by: Vijay Bellur <vbellur@redhat.com>

* glusterd/common-utils: move hostname helper functions to common-utils
  Krishnan Parthasarathi, 2013-07-04 (1 file changed, -3/+0)

  Change-Id: If47e209cb61ea0eb74ee2d6ef9e9342b2d6ee13a
  BUG: 980838
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/5261
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* glusterd: More checks before starting rebalance/remove-brick
  Kaushal M, 2013-07-02 (1 file changed, -0/+4)

  Check if a previous remove-brick operation has been committed before
  starting a new rebalance/remove-brick task.

  Change-Id: I553e5ba64a6a352ca91032ab1a17997051a4494e
  BUG: 963541
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/5019
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>

* glusterd: Disable transport before cleaning up rpc object
  Krishnan Parthasarathi, 2013-06-18 (1 file changed, -0/+3)

  Problem:
  The rpc_transport object, which is part of rpc_clnt, is destroyed
  prematurely. This is because the rpc_transport object is ref'd by the
  socket layer and the rpc layer. These refs, until the synctask'izing of
  operations, were unref'd sequentially in the epoll thread. With more
  threads at play, the sequential unref guarantee is off.

  Fix:
  Shutting down the transport before proceeding with cleaning up of the
  rpc_clnt object serializes the unrefs on the rpc_transport object and
  thus eliminates the race.

  Also, we don't store the address of brickinfo in the brick's rpc notify
  function, to avoid the possibility of referring to a freed brickinfo.
  Instead we use a string based id to 'reach' the corresponding
  brickinfo.

  Change-Id: If2739e2eeaee1e8b071ab2b6754b7ea0f81cfceb
  BUG: 962619
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/5000
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* mgmt/glusterd: Make sure peerinfo->uuid_str is assigned
  Pranith Kumar K, 2013-05-31 (1 file changed, -0/+2)

  Change-Id: I9e2743ab61c8baee92a1dfd376ec4bb145776176
  BUG: 963524
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/5016
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>

* glusterd: Log hostname of the peer where there is cksum/version mismatch
  Krutika Dhananjay, 2013-05-02 (1 file changed, -1/+1)

  Change-Id: I08065aaa3c140d4b02af4ca38f5f4d00d7f0c2bb
  BUG: 958739
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-on: http://review.gluster.org/4937
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* glusterd: Introduce volume op-versions
  Kaushal M, 2013-04-26 (1 file changed, -0/+3)

  Each volume is now associated with two op-versions:
  * op_version - the op-version of the highest op-versioned feature
    enabled
  * client_op_version - the op-version of the highest op-versioned
    feature enabled which affects the clients only

  These two op-versions are generated dynamically and kept updated during
  runtime. Glusterd now uses the respective volume's client-op-version
  during getspec requests.

  To achieve the above, a new field is introduced in the vme table,
  client_option; this boolean field tells if the option is a client side
  option.

  Change-Id: I12c83b1dd29ab506026efd50d448cebbcee53c27
  BUG: 907311
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/4584
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
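
  A hedged sketch of how the two op-versions could be derived as maxima
  over the options in effect; the vme-style entries, option keys and
  op-version numbers below are invented for illustration and are not the
  real table contents.

      #include <stdio.h>

      /* Minimal stand-in for a vme-table row. */
      struct vme_entry { const char *key; int op_version; int client_option; };

      static void compute_volume_op_versions(const struct vme_entry *opts, int n,
                                             int *op_version, int *client_op_version)
      {
          *op_version = 1;
          *client_op_version = 1;
          for (int i = 0; i < n; i++) {
              if (opts[i].op_version > *op_version)
                  *op_version = opts[i].op_version;          /* any feature counts */
              if (opts[i].client_option && opts[i].op_version > *client_op_version)
                  *client_op_version = opts[i].op_version;   /* client-side only   */
          }
      }

      int main(void)
      {
          struct vme_entry enabled[] = {
              { "some.server.option", 2, 0 },   /* server-side only, hypothetical */
              { "some.client.option", 3, 1 },   /* affects clients, hypothetical  */
          };
          int ov, cov;
          compute_volume_op_versions(enabled, 2, &ov, &cov);
          printf("op_version=%d client_op_version=%d\n", ov, cov);
          return 0;
      }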

* glusterd: big lock - a coarse-grained locking to prevent races
  Krishnan Parthasarathi, 2013-04-12 (1 file changed, -0/+2)

  There are primarily three lists that are part of the glusterd process
  that are concurrently accessed, namely priv->volumes, priv->peers and
  volinfo->bricks_list.

  Big-lock approach
  -----------------
  WHAT IS IT?
  Big lock is a coarse-grained lock which protects all three lists,
  mentioned above, from racy access.

  HOW DOES IT WORK?
  At any given point in time, glusterd's thread(s) are in execution _iff_
  there is a preceding, inbound network event. Of course, the sigwaiter
  thread and timer thread are exceptions. A network event is an external
  trigger to glusterd, via the epoll thread, in the form of POLLIN and
  POLLERR. As long as we take the big-lock at all such entry points and
  yield it when we are done, we are guaranteed that all the network
  events accessing the global lists are serialised.

  This amounts to holding the big lock at:
  - all the handlers of all the actors in glusterd (POLLIN)
  - all the cbks in glusterd (POLLIN)
  - rpc_notify (DISCONNECT event), if we access/modify one of the three
    lists (POLLERR)

  In the case of synctask'ized volume operations, we must remember that
  if we held the big lock for the entire duration of the handler, we may
  block other non-synctask rpc actors from executing. For example,
  volume-start would block in PMAP SIGNIN if done incorrectly. To prevent
  this, we need to yield the big lock when we yield the synctask, and
  reacquire it on waking up of the synctask.

  Change-Id: Ib929f9905b55fb6c3fc27fefb497a26dba058e4f
  BUG: 948686
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/4784
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
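
  A toy illustration of the big-lock pattern using a plain pthread mutex;
  the real implementation uses glusterd's own locking around its handler
  entry points and synctask yields, so this is only a sketch of the
  acquire/yield/reacquire shape described above.

      #include <pthread.h>
      #include <stdio.h>

      static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;

      static void blocking_step(void)
      {
          /* Stands in for a synctask yield or an operation we must not
           * hold the big lock across (e.g. waiting on PMAP SIGNIN). */
      }

      static void handler(const char *event)
      {
          pthread_mutex_lock(&big_lock);    /* entry point: POLLIN/POLLERR handler */
          printf("handling %s under the big lock\n", event);

          pthread_mutex_unlock(&big_lock);  /* yield the big lock before blocking  */
          blocking_step();
          pthread_mutex_lock(&big_lock);    /* reacquire after waking up           */

          printf("finishing %s under the big lock\n", event);
          pthread_mutex_unlock(&big_lock);  /* done with the global lists          */
      }

      int main(void)
      {
          handler("volume-start");
          return 0;
      }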

* glusterd: changes in 'volume create' behaviour
  Krutika Dhananjay, 2013-04-09 (1 file changed, -2/+7)

  This patch incorporates all the changes suggested on the behaviour of
  'volume create' command in http://review.gluster.org/#change,4214
  (comment #14, to be precise).

  Change-Id: Iaac524a59738b177415595b18aa8a136090d3d25
  BUG: 948729
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-on: http://review.gluster.org/4740
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>

* glusterd: Added the validation function for subvols-per-directory
  Avra Sengupta, 2013-02-28 (1 file changed, -0/+3)

  Change-Id: Ie2259023b9001311a2032792639c3093054f6750
  BUG: 896431
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/4552
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>

* glusterd: allow multiple instances of glusterd on one machine
  Jeff Darcy, 2013-02-21 (1 file changed, -0/+3)

  This is needed to support automated testing of cluster-communication
  features such as probing and quorum. In order to use this, you need to
  do the following preparatory steps.

  * Copy /var/lib/glusterd to another directory for each virtual host
  * Ensure that each virtual host has a different UUID in its
    glusterd.info

  Now you can start each copy of glusterd with the following
  xlator-options.

  * management.transport.socket.bind-address=$ip_address
  * management.working-directory=$unique_working_directory

  You can use 127.x.y.z addresses for binding without needing to assign
  them to interfaces explicitly. Note that you must use addresses, not
  names, because of some stuff in the socket code that's not worth fixing
  just for this usage, but after that you can use names in /etc/hosts
  instead.

  At this point you can issue CLI commands to a specific glusterd using
  the --remote-host option. So far probe, volume create/start/stop,
  mount, and basic I/O all seem to work as expected with multiple
  instances.

  Change-Id: I1beabb44cff8763d2774bc208b2ffcda27c1a550
  BUG: 913555
  Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
  Reviewed-on: http://review.gluster.org/4556
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>

* glusterd: Made volume-quota use synctask framework.
  Avra Sengupta, 2013-02-16 (1 file changed, -1/+1)

  Change-Id: I4c275253144ed3ac11a701a56dd1116c002471ba
  BUG: 852147
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/4495
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>

* glusterd : Made volume clear-locks use synctask framework.
  Avra Sengupta, 2013-02-08 (1 file changed, -0/+2)

  Change-Id: Ia1fe3d0500d999c1f95b43c9e53947834e39d680
  BUG: 852147
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/4490
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>

* glusterd: Moved node rsp functions to glusterd-utils.c
  Krishnan Parthasarathi, 2013-02-03 (1 file changed, -0/+15)

  Change-Id: Ib4c4794563a5a694fab16f17c642f788399462f6
  BUG: 852147
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/4295
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Reviewed-by: Anand Avati <avati@redhat.com>

* glusterd: make 'glusterd_is_local_addr' return bool
  JulesWang, 2013-01-26 (1 file changed, -1/+1)

  Change-Id: Id3bd0bfc4802c166f7a32b0cc6a726aeb5617b5d
  BUG: 890618
  Signed-off-by: JulesWang <w.jq0722@gmail.com>
  Reviewed-on: http://review.gluster.org/4427
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>

* glusterd, cli: Task id's for async tasks
  Kaushal M, 2012-12-19 (1 file changed, -0/+6)

  This patch introduces task-ids for async tasks like rebalance,
  remove-brick and replace-brick. An id is generated for each task when
  it is started and displayed to the user in cli output. The status of
  running tasks is also included in the output of "volume status" along
  with its id, so that a user can easily track the progress of an async
  task.

  Also,
  * added tests for this feature into the regression test suite.
  * added a python script for creating files, 'create-files.py', courtesy
    Vijaykumar Koppad (vkoppad@redhat.com), into the test suite.

  This patch reverts the revert commit
  698deb33d731df6de84da8ae8ee4045e1543a168.

  BUG: 857330
  Change-Id: Id43d7cb629a38f47f733fbc18cb4c5f2f0327c7a
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/4294
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
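
  A minimal sketch of generating a task-id of this kind with libuuid
  (build with -luuid); storing the id with the task and showing it in the
  "volume status" output is only indicated by the comments, not
  implemented here.

      #include <stdio.h>
      #include <uuid/uuid.h>

      int main(void)
      {
          uuid_t task_id;
          char   task_id_str[37];              /* 36 characters + NUL */

          uuid_generate(task_id);              /* new id when the async task starts */
          uuid_unparse(task_id, task_id_str);  /* canonical form shown to the user  */
          printf("Rebalance task id: %s\n", task_id_str);
          return 0;
      }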
* Revert "glusterd, cli: Task id's for async tasks"Anand Avati2012-12-041-6/+0
| | | | | | | | | | | This reverts commit ed15521d4e5af2b52b78fd33711e7562f5273bc6 Strangely, the test scripts are "silently" passing for failures too. Reverting patch for now. Change-Id: I802ec1634c7863dc373cc7dc4a47bd4baa72764e Reviewed-on: http://review.gluster.org/4267 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>

* glusterd, cli: Task id's for async tasks
  Kaushal M, 2012-12-04 (1 file changed, -0/+6)

  This patch introduces task-ids for async tasks like rebalance,
  remove-brick and replace-brick. An id is generated for each task when
  it is started and displayed to the user in cli output. The status of
  running tasks is also included in the output of "volume status" along
  with its id, so that a user can easily track the progress of an async
  task.

  Also,
  * added tests for this feature into the regression test suite.
  * added a python script for creating files, 'create-files.py', courtesy
    Vijaykumar Koppad (vkoppad@redhat.com), into the test suite.

  Change-Id: Ib0c0d12e0d6c8f72ace48d303d7ff3102157e876
  BUG: 857330
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/3942
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>

* mgmt/glusterd: Consider nodesvc to be running after online
  Pranith Kumar K, 2012-11-29 (1 file changed, -2/+2)

  The definition of "online" here is that the RPC_CLNT_CONNECT event
  arrives for the nfs/self-heal-daemon process.

  For automated tests, sometimes the script needs to wait until the
  self-heal-daemon comes online, so that the relevant commands can be
  executed. Gluster volume status before this change printed whether the
  self-heal-daemon is running or not based on the lock availability on
  the pidfile. But there is a small window where the lock on the pid file
  is present but the process is still not online. So the commands that
  were depending on this kept failing in the test script.

  Change-Id: I0e44e18b08d7b653d34fa170c1f187d91c888cd9
  BUG: 858212
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/4236
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>

* geo-rep / gsyncd,glusterd: do not hardcode socket path
  Csaba Henk, 2012-11-28 (1 file changed, -0/+2)

  ... in gsyncd python code. Indeed, use the configuration mechanism to
  set it suitably from glusterd.

  Change-Id: I9fe2088b14d28588d1e64fe892740cc5755b8365
  BUG: 868877
  Signed-off-by: Csaba Henk <csaba@redhat.com>
  Reviewed-on: http://review.gluster.org/4143
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>

* mgmt/glusterd: Implementation of server-side quorum
  Pranith Kumar K, 2012-11-23 (1 file changed, -5/+26)

  Feature page:
  http://www.gluster.org/community/documentation/index.php/Features/Server-quorum

  Change-Id: I747b222519e71022462343d2c1bcd3626e1f9c86
  BUG: 839595
  Signed-off-by: Pranith Kumar K <pranithk@gluster.com>
  Reviewed-on: http://review.gluster.org/3811
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* glusterd: 'volume set' changes for op-version support
  Kaushal M, 2012-10-31 (1 file changed, -0/+6)

  An op-version check is performed for the given keys during stage. The
  commit phase moves the cluster op-version to the required version if
  needed.

  Change-Id: Id5c387094dbec723df736b2ecdc49ff93c179e0e
  BUG: 814534
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/3780
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* glusterd: volume-start, add-brick and remove-brick to use synctask framework
  Krishnan Parthasarathi, 2012-10-11 (1 file changed, -4/+6)

  - Added volume-id validation to glusterd-syncop code.
  - All daemons are restarted using synctasks in init().
  - glusterd_brick_start has wait/nowait variants to support volume
    commands using the synctask framework and those that aren't.

  Change-Id: Ieec26fe1ea7e5faac88cc7798d93e4cc2b399d34
  BUG: 862834
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/3969
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>

* glusterd: Moved peer rsp handling functions to glusterd-utils
  Krishnan Parthasarathi, 2012-10-11 (1 file changed, -0/+15)

  - Moved inner functions used in conjunction with synctask 'out'.

  Change-Id: I7fbfd9881ea58645c4295a9fa7163ddd15a45d2f
  BUG: 862834
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/4066
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>

* glusterd: glusterd_brick_stop should be race free wrt pmap
  Krishnan Parthasarathi, 2012-10-10 (1 file changed, -2/+4)

  This is important for the effort to make glusterd use the synctask
  framework.

  Change-Id: I0affb10a342df99df8daccfd6eef8fa6dd63928c
  BUG: 862834
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/4057
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>