path: root/xlators/mgmt/glusterd/src/glusterd.c
Commit message (Author, Date, Files changed, Lines -/+)
* glusterd/mgmt_v3 locks: Generalization of the volume wide locks. (Avra Sengupta, 2014-03-07, 1 file, -2/+2)
  Renaming volume locks as mgmt_v3 locks.
  Change-Id: I2a324e2b8e1772d7b165fe96ce8ba5b902c2ed9a
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/7187
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Tested-by: Rajesh Joseph <rjoseph@redhat.com>
* gluster/snapshot: Create missing backend snapshot folder (Rajesh Joseph, 2014-03-07, 1 file, -37/+72)
  Snapshot volume bricks are mounted under the /var/run/gluster/snaps folder. On newer machines /var/run is a symbolic link to /run. In such cases, if we use /var/run there would be a mismatch between the mtab entry for the mount and the folder where we actually mount. This patch therefore resolves the correct folder and also creates it if it is missing.
  Change-Id: I267aa7f3e171b486c5b3bb2a9f88cbd4be0e47ea
  BUG: 1072253
  Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-on: http://review.gluster.org/7140
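
A minimal, self-contained sketch of the approach this commit describes: canonicalize the possibly symlinked run directory before mounting under it, and create the mount folder if it is missing. The helper name and paths below are illustrative assumptions, not the glusterd code.

    #include <errno.h>
    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>

    /* Resolve /var/run (which may be a symlink to /run) and create the
     * snap mount directory beneath it, so the directory we create matches
     * what ends up in the mtab entry for the mount. Hypothetical helper. */
    static int prepare_snap_mount_dir(const char *rundir, char *out, size_t outlen)
    {
        char resolved[PATH_MAX];

        if (!realpath(rundir, resolved))        /* follows the symlink */
            return -1;
        if (snprintf(out, outlen, "%s/gluster", resolved) >= (int)outlen)
            return -1;
        if (mkdir(out, 0755) != 0 && errno != EEXIST)
            return -1;
        if (snprintf(out, outlen, "%s/gluster/snaps", resolved) >= (int)outlen)
            return -1;
        if (mkdir(out, 0755) != 0 && errno != EEXIST)
            return -1;
        return 0;
    }

    int main(void)
    {
        char dir[PATH_MAX];
        if (prepare_snap_mount_dir("/var/run", dir, sizeof(dir)) == 0)
            printf("snapshot bricks would be mounted under %s\n", dir);
        else
            perror("prepare_snap_mount_dir");
        return 0;
    }
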
* glusterd/snapshot: store location for snap driven changes (Vijaikumar M, 2014-03-03, 1 file, -1/+1)
  Currently snapshot volfiles are stored at:
      <workdir>/vols/<volname>/snaps/<snapvol>
  With the snap driven approach we need to store the volfiles at:
      <workdir>/snaps/<snapname>/<snapvol>
  Change-Id: I8efdd5db29833b2b06b64a900cbb4c9b9a5d36b6
  Signed-off-by: Vijaikumar M <vmallika@redhat.com>
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/7006
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Tested-by: Rajesh Joseph <rjoseph@redhat.com>
* glusterd/vol-locks: Dict_ref while adding req_ctx->dict to txn_opinfos (Avra Sengupta, 2014-01-21, 1 file, -2/+2)
  Introducing a wrapper function glusterd_txn_opinfo_init() to initialize the opinfo to be set in the txn_id engine. Removed glusterd_op_fini_ctx(), as the txn opinfo should only be cleared by glusterd_clear_txn_opinfo().
  Change-Id: I17e85a162d6a3bca79941f8603d0c2b579f0d194
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
* mgmt/glusterd: Having a separate list for snapshots. (Sachin Pandit, 2014-01-15, 1 file, -0/+1)
  Creating a separate list for snaps taken, as cluttering snaps in the volume list does not look neat.
  Change-Id: Ida4a183e95e8694b85ebb5a680d06b7d29a460a0
  BUG: 1040947
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
* glusterd/snapshot: Introducing snap-max-hard-limit and snap-max-soft-limit (Avra Sengupta, 2014-01-06, 1 file, -3/+0)
  Note: Manually adding this patch again, as it got missed in a git reset done on the remote development branch.
  Change-Id: I9e81c5ec003c1e1722d0fcb27dd87c365ee43ff4
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
* glusterd/snapshot: Added trace statements and handled snapshot create commit failure. (Avra Sengupta, 2013-11-20, 1 file, -0/+1)
  Also handles empty string (not NULL) in gd_syncop_mgmt_brick_op() and adds "Snapshot" in the operation list used for printing the op during logging.
  Change-Id: Icac9dce6bf1c087ab2aace9953e2af3a0fb81be6
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
* mgmt/glusterd: snapshot config changes (shishir gowda, 2013-11-15, 1 file, -0/+3)
  Also refactored code in glusterd for the create command. Additionally, removed the brick-op func from mgmt_iniate_all_phases.
  Change-Id: Iddcc332009c5716adee7f2b04c93b352fb983446
  Signed-off-by: shishir gowda <sgowda@redhat.com>
* mgmt/glusterd: handle snap volume store and actual volume store separately (Raghavendra Bhat, 2013-11-15, 1 file, -0/+10)
  Change-Id: I8b88fe94d0f9ee1089cafdda037abcf2f7a180ca
  Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
* mgmt/glusterd: snapshot create command (Raghavendra Bhat, 2013-11-15, 1 file, -0/+42)
  This is still a work in progress. As of now, these things are done:
  * Take the snapshot of the backend brick
  * Create the new volume for the snapshot
  * Create the brick and the client volfiles
  * Store the snapshot related info in /var/lib/glusterd
  * Create the snap object representing the snapshot
  TODO: Start the brick processes for the snapshot.
  Change-Id: I26fbb0f8e5cf004d4c1dbca51819bab1cd1bac15
  Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
* mgmt/glusterd: Introduce snapshot infrastructure (shishir gowda, 2013-11-15, 1 file, -0/+9)
  APIs for creating, adding, finding and removing snapshots and consistency groups are provided.
  Change-Id: Ic28da69a075b062aefdf14754c68259ca58bd427
  Signed-off-by: shishir gowda <sgowda@redhat.com>
* mgmt/glusterd: add option to specify a different base-port (Kaleb S. KEITHLEY, 2013-10-31, 1 file, -8/+18)
  This is (arguably) a hack to work around a bug in libvirt, which is not well behaved with respect to using TCP ports in the unreserved space between 49152-65535 (see RFC 6335). Normally glusterd starts and binds to the first available port in the range, usually 49152. libvirt's live migration also tries to use ports in this range, but has no fallback to use (an)other port(s) when the one it wants is already in use.
  Change-Id: Id8fe35c08b6ce4f268d46804bbb6dddab7a6b7bb
  BUG: 1018178
  Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
  Reviewed-on: http://review.gluster.org/6210
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
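
A simplified illustration of the "first available port in the range" behaviour described above, with the start of the range passed in the way a configurable base-port would be. This is a sketch of the idea, not glusterd's port-map code.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Bind the socket to the first free TCP port in [base, 65535].
     * Returns the port bound, or -1 if none is available. */
    static int bind_first_free_port(int sock, int base)
    {
        struct sockaddr_in addr;

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);

        for (int port = base; port <= 65535; port++) {
            addr.sin_port = htons(port);
            if (bind(sock, (struct sockaddr *)&addr, sizeof(addr)) == 0)
                return port;
        }
        return -1;
    }

    int main(void)
    {
        int sock = socket(AF_INET, SOCK_STREAM, 0);
        if (sock < 0) {
            perror("socket");
            return 1;
        }
        /* 49152 mirrors the old default; a higher starting value (what the
         * new option allows) keeps the daemon clear of the ports libvirt
         * live migration tries to grab. */
        int port = bind_first_free_port(sock, 49152);
        printf("bound to port %d\n", port);
        close(sock);
        return 0;
    }
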
* cli,glusterd: Changes to cli-glusterd communication (Kaushal M, 2013-10-17, 1 file, -6/+155)
  Glusterd changes:
  With this patch, glusterd creates a socket file in DATADIR/run/glusterd.socket and listens on it for cli requests. It listens for 2 rpc programs on the socket file:
  - The glusterd cli rpc program, for all cli commands
  - A reduced glusterd handshake program, just for the 'system:: getspec' command
  The location of the socket file can be changed with the glusterd option 'glusterd-sockfile'. To retain compatibility with the '--remote-host' cli option, glusterd also listens for cli requests on port 24007. But, for the sake of security, it listens using a reduced cli rpc program on that port. The reduced rpc program only contains read-only procs used for the 'volume (info|list|status)', 'peer status' and 'system:: getwd' cli commands.

  CLI changes:
  The gluster cli now uses the glusterd socket file for communicating with glusterd by default. A new option '--gluster-sock' has been added to allow specifying the sockfile used to connect. Using the '--remote-host' option will make the cli connect to the given host & port.

  Tests changes:
  cluster.rc has been modified to make use of socket files and use different log files for each glusterd. Some of the tests using cluster.rc have been fixed.

  Change-Id: Iaf24bc22f42f8014a5fa300ce37c7fc9b1b92b53
  BUG: 980754
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/5280
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
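
For illustration, a minimal unix-domain listener in the spirit of the DATADIR/run/glusterd.socket file described above. The path and error handling are simplified assumptions; glusterd's rpc transport layer does much more than this.

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void)
    {
        /* Illustrative path; glusterd derives its socket path from DATADIR
         * and the 'glusterd-sockfile' option. */
        const char *path = "/tmp/glusterd-example.socket";
        struct sockaddr_un addr;
        int sock = socket(AF_UNIX, SOCK_STREAM, 0);

        if (sock < 0) {
            perror("socket");
            return 1;
        }
        memset(&addr, 0, sizeof(addr));
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        unlink(path);                 /* remove a stale socket file, if any */
        if (bind(sock, (struct sockaddr *)&addr, sizeof(addr)) != 0 ||
            listen(sock, 16) != 0) {
            perror("unix socket");
            return 1;
        }
        printf("listening for cli requests on %s\n", path);
        close(sock);
        return 0;
    }
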
* glusterd: Saving geo-rep session details in a more specific path (Venky Shankar, 2013-09-04, 1 file, -4/+4)
  Now saving the session details in the /var/lib/glusterd/geo-replication/<mastervol>_<slaveip>_<slavevol> repo, to distinguish between two master-slave sessions where the slave name is the same across two different clusters.
  Change-Id: I57c93f55cc9bd4fe2bffe579028aaf5e4335b223
  BUG: 991501
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Signed-off-by: Venky Shankar <vshankar@redhat.com>
  Reviewed-on: http://review.gluster.org/5488
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd/cli changes for distributed geo-rep (Avra Sengupta, 2013-07-26, 1 file, -5/+37)
  Commands:
      gluster system:: execute gsec_create
      gluster volume geo-rep <master> <slave-url> create [push-pem] [force]
      gluster volume geo-rep <master> <slave-url> start [force]
      gluster volume geo-rep <master> <slave-url> stop [force]
      gluster volume geo-rep <master> <slave-url> delete
      gluster volume geo-rep <master> <slave-url> config
      gluster volume geo-rep <master> <slave-url> status
  The geo-replication is distributed: the session will be created, and gsyncd will be spawned on all relevant nodes, instead of only one node.

  geo-rep: Collecting status detail related data. Added a persistent store for saving information about TotalFilesSynced, TotalSyncTime and TotalBytesSynced.

  Changes in the status information in the socket:
      Existing (Ex): FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;
      New (Ex): FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;SyncTime=0.69978;TotalSyncTime=2.890044;TotalFilesSynced=6;TotalBytesSynced=143640;
  Persistent details are stored in /var/lib/glusterd/geo-replication/${mastervol}/${eSlave}-detail.status

  Change-Id: I1db7fc13ffca2e415c05200b0109b1254067f111
  BUG: 847839
  Original Author: Avra Sengupta <asengupt@redhat.com>
  Original Author: Venky Shankar <vshankar@redhat.com>
  Original Author: Aravinda VK <avishwan@redhat.com>
  Original Author: Amar Tumballi <amarts@redhat.com>
  Original Author: Csaba Henk <csaba@redhat.com>
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/5132
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
  Tested-by: Vijay Bellur <vbellur@redhat.com>
* gsyncd: distribute the crawling load (Avra Sengupta, 2013-07-26, 1 file, -1/+11)
  * also consume changelog for change detection.
  * Status fixes
  * Use new libgfchangelog done API
  * process (and sync) one changelog at a time
  Change-Id: I24891615bb762e0741b1819ddfdef8802326cb16
  BUG: 847839
  Original Author: Csaba Henk <csaba@redhat.com>
  Original Author: Aravinda VK <avishwan@redhat.com>
  Original Author: Venky Shankar <vshankar@redhat.com>
  Original Author: Amar Tumballi <amarts@redhat.com>
  Original Author: Avra Sengupta <asengupt@redhat.com>
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/5131
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
  Tested-by: Vijay Bellur <vbellur@redhat.com>
* store: move glusterd_store functions from mgmt/glusterd to libglusterfs (Niels de Vos, 2013-06-20, 1 file, -1/+1)
  Making the glusterd_store_* functions re-usable will help with future changes that need to read/write lists of items.
  BUG: 904065
  Change-Id: I99fb8eced76d12d5a254567eccff9790b43d8da3
  Signed-off-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-on: http://review.gluster.org/4676
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Start bricks on glusterd startup, only once (Krishnan Parthasarathi, 2013-05-15, 1 file, -2/+1)
  The restarting of bricks has been deferred until the cluster 'stabilizes' its view of the volumes. Since glusterd_spawn_daemons is executed every time a peer 'joins' the cluster, it may inadvertently restart bricks that were taken offline for, say, maintenance purposes. This fix avoids that.
  Change-Id: Ic2a0a9657eb95c82d03cf5eb893322cf55c44eba
  BUG: 960190
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/4973
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Avoided deadlock in single node cluster, glusterd restart (Krishnan Parthasarathi, 2013-04-16, 1 file, -0/+3)
  In a single node cluster, it is possible to deadlock on the "big lock" while restarting bricks. In glusterd_restart_bricks, we perform a glusterd_brick_connect, where we release the big lock in anticipation that glusterd_brick_rpc_notify could run in the same C stack (and deadlock). So, in the restart code path, we could unlock before we have performed a lock on the big lock. To fix this, we need to take the big lock in the glusterd_launch_synctask 'thread' as well.
  Change-Id: I1abea1ca82b55c784b8a810a8194f254b32b1dcc
  BUG: 948686
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/4837
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* glusterd: big lock - a coarse-grained locking to prevent races (Krishnan Parthasarathi, 2013-04-12, 1 file, -21/+24)
  There are primarily three lists that are part of the glusterd process that are concurrently accessed: namely, priv->volumes, priv->peers and volinfo->bricks_list.

  Big-lock approach
  -----------------
  WHAT IS IT?
  Big lock is a coarse-grained lock which protects all three lists, mentioned above, from racy access.

  HOW DOES IT WORK?
  At any given point in time, glusterd's thread(s) are in execution _iff_ there is a preceding, inbound network event. Of course, the sigwaiter thread and timer thread are exceptions. A network event is an external trigger to glusterd, via the epoll thread, in the form of POLLIN and POLLERR. As long as we take the big-lock at all such entry points and yield it when we are done, we are guaranteed that all the network events, accessing the global lists, are serialised.

  This amounts to holding the big lock at:
  - all the handlers of all the actors in glusterd. (POLLIN)
  - all the cbks in glusterd. (POLLIN)
  - rpc_notify (DISCONNECT event), if we access/modify one of the three lists. (POLLERR)

  In the case of synctask'ized volume operations, we must remember that, if we held the big lock for the entire duration of the handler, we may block other non-synctask rpc actors from executing. For e.g., volume-start would block in PMAP SIGNIN, if done incorrectly. To prevent this, we need to yield the big lock when we yield the synctask, and reacquire it on waking up of the synctask.

  Change-Id: Ib929f9905b55fb6c3fc27fefb497a26dba058e4f
  BUG: 948686
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/4784
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
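
A toy pthread sketch of the coarse-grained locking pattern described above: every handler takes one global lock on entry, drops it around blocking work, and releases it on exit. glusterd itself uses its own synclock/synctask primitives, so treat this only as an outline of the idea.

    #include <pthread.h>
    #include <stdio.h>

    /* Illustrative "big lock": every network-event handler grabs it on
     * entry and releases it on exit, so access to the shared lists
     * (volumes, peers, bricks) is serialised. Not the glusterd code. */
    static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;

    static int shared_list_len;     /* stands in for priv->volumes et al. */

    static void handle_pollin_event(void)
    {
        pthread_mutex_lock(&big_lock);    /* entry point: take the big lock */
        shared_list_len++;                /* touch the shared state safely  */

        /* A long-running step (e.g. waiting on another daemon) must not be
         * done while holding the lock, or other handlers would starve:     */
        pthread_mutex_unlock(&big_lock);  /* yield before blocking ...      */
        /* ... blocking work happens here ... */
        pthread_mutex_lock(&big_lock);    /* ... reacquire after waking up  */

        pthread_mutex_unlock(&big_lock);  /* exit point: release the lock   */
    }

    int main(void)
    {
        handle_pollin_event();
        printf("list length: %d\n", shared_list_len);
        return 0;
    }
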
* mgmt/glusterd: enable valgrind usage even in non DEBUG build (Raghavendra Bhat, 2013-04-09, 1 file, -7/+1)
  * Till now, glusterfs processes were allowed to run in valgrind mode only when built with debug mode enabled.
  Change-Id: I11e07ea2a4da4f82f70cdded6258a22d65d6db64
  BUG: 922877
  Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
  Reviewed-on: http://review.gluster.org/4688
  Reviewed-by: Anand Avati <avati@redhat.com>
  Tested-by: Anand Avati <avati@redhat.com>
* glusterd: add more specific log messages (Bala.FA, 2013-04-02, 1 file, -8/+8)
  Change-Id: I57fbdd83f3098e64886c3dd690a1ae04fc37442d
  BUG: 928648
  Signed-off-by: Bala.FA <barumuga@redhat.com>
  Reviewed-on: http://review.gluster.org/4739
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* geo-rep: retire old style ssh setup (Csaba Henk, 2013-03-14, 1 file, -1/+2)
  Users are still using geo-rep with the old, deprecated, insecure, unsupported ssh setup. Not their fault -- the implementation of the new method had the following characteristics:
  - old method is possible, but with default settings it's not working
  - it can be made operational by fiddling with the "remote-gsyncd" tunable
  - with default settings, an unhelpful, actually misleading error message is produced
  - the UI gave no hint to the changes in the ssh setup
  http://review.gluster.org/4392 tried to fix these; what it accomplished was unrestricted support to the bad practice (by making the default old setup operational). From this on:
  - we disable the old method by reserving the "remote-gsyncd" tunable
  - if the old method is attempted, give a hint what to do
  Change-Id: Icade94725d8d8d2d4c89cab992d4226351637b86
  BUG: 895656
  Signed-off-by: Csaba Henk <csaba@redhat.com>
  Reviewed-on: http://review.gluster.org/4602
  Reviewed-by: Venky Shankar <vshankar@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* geo-rep / gsyncd: Separate log file directory for Mountbroker sessions (Venky Shankar, 2013-02-04, 1 file, -0/+28)
  ... so that a mountbroker session which is initiated b/w master and slave does not use the same log file if it's started after a normal geo-rep session b/w master and slave. This results in EPERM, as the log file is owned by root and the geo-rep slave process (now running as a non privileged user) does not have access to it. Also, having a separate log file directory for mountbroker sessions looks clean.
  NOTE: geo-rep's client mount log file location remains unchanged.
  Change-Id: Ic7a732e250aee5393b9c3f6ebf6dfe2c310b7fe4
  BUG: 893960
  Signed-off-by: Venky Shankar <vshankar@redhat.com>
  Reviewed-on: http://review.gluster.org/4407
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd/cli: Updated the options descriptions for "volume set help" (Avra Sengupta, 2013-01-21, 1 file, -0/+4)
  Change-Id: I0db00b7334bb9707ab48bd661ac03a3ad818d6e4
  BUG: 893458
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/4393
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* core: fixes for gcc's '-pedantic' flag build (Avra Sengupta, 2013-01-21, 1 file, -4/+2)
  * warnings on 'void *' arguments
  * warnings on empty initializations
  * warnings on empty array (array[0])
  Change-Id: Iae440f54cbd59580eb69f3ecaed5a9926c0edf95
  BUG: 875913
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/4219
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: replace obsolete /usr/local reference for remote ssh/gsyncd (Kaleb S. KEITHLEY, 2013-01-18, 1 file, -2/+1)
  See:
  https://bugzilla.redhat.com/show_bug.cgi?id=895656
  https://bugzilla.redhat.com/show_bug.cgi?id=764679 (GLUSTER-2947)
  https://bugzilla.redhat.com/show_bug.cgi?id=764623 (GLUSTER-2891)
  The comments in the bzs are a bit obtuse and/or vague. As near as I can make out, we had, for a while, a "convenience symlink" to or from /usr/local/libexec/gsyncd, which no longer exists. And, lacking any comments in the code, I gather this is some sort of fallback or failsafe logic: if the first, normal attempt to invoke gsyncd fails, then an attempt is made to ssh to the box and invoke it. In any event, there's nothing in /usr/local/..., so it's unquestionably wrong to try to invoke anything there.
  BUG: 895656
  Change-Id: I3b7ac7a049b91ce101b930599294830147cc60ad
  Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
  Reviewed-on: http://review.gluster.org/4392
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Joe Julian <joe.julian.prime@gmail.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: log enhancements for volume start (Krutika Dhananjay, 2013-01-04, 1 file, -0/+2)
  * changed some of the log messages to give as much information as available in case of failure
  * added logs to identify on which machine lock/stage/commit failed
  * added macros to represent error strings to maintain uniformity among error messages for a given kind of error
  * moved error logs wherever possible, from caller to callee, to avoid code duplication
  Change-Id: I0e98d5d3ba086c99240f2fbd642451f175f51942
  BUG: 812356
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-on: http://review.gluster.org/4353
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
  Tested-by: Vijay Bellur <vbellur@redhat.com>
* glusterd, cli: Task id's for async tasks (Kaushal M, 2012-12-19, 1 file, -0/+27)
  This patch introduces task-ids for async tasks like rebalance, remove-brick and replace-brick. An id is generated for each task when it is started and displayed to the user in cli output. The status of running tasks is also included in the output of "volume status" along with its id, so that a user can easily track the progress of an async task.
  Also,
  * added tests for this feature into the regression test suite.
  * added a python script for creating files, 'create-files.py', courtesy Vijaykumar Koppad (vkoppad@redhat.com), into the test suite.
  This patch reverts the revert commit 698deb33d731df6de84da8ae8ee4045e1543a168.
  BUG: 857330
  Change-Id: Id43d7cb629a38f47f733fbc18cb4c5f2f0327c7a
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/4294
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* Revert "glusterd, cli: Task id's for async tasks" (Anand Avati, 2012-12-04, 1 file, -27/+0)
  This reverts commit ed15521d4e5af2b52b78fd33711e7562f5273bc6.
  Strangely, the test scripts are "silently" passing for failures too. Reverting the patch for now.
  Change-Id: I802ec1634c7863dc373cc7dc4a47bd4baa72764e
  Reviewed-on: http://review.gluster.org/4267
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd, cli: Task id's for async tasks (Kaushal M, 2012-12-04, 1 file, -0/+27)
  This patch introduces task-ids for async tasks like rebalance, remove-brick and replace-brick. An id is generated for each task when it is started and displayed to the user in cli output. The status of running tasks is also included in the output of "volume status" along with its id, so that a user can easily track the progress of an async task.
  Also,
  * added tests for this feature into the regression test suite.
  * added a python script for creating files, 'create-files.py', courtesy Vijaykumar Koppad (vkoppad@redhat.com), into the test suite.
  Change-Id: Ib0c0d12e0d6c8f72ace48d303d7ff3102157e876
  BUG: 857330
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/3942
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* mgmt/glusterd: Consider nodesvc to be running after online (Pranith Kumar K, 2012-11-29, 1 file, -1/+1)
  Definition of online in the message below is that the RPC_CLNT_CONNECT event arrives for the nfs/self-heal-daemon process. For automated tests, sometimes the script needs to wait until the self-heal-daemon comes online, so that the relevant commands can be executed. Gluster volume status before this change printed whether the self-heal-daemon is running or not based on the lock availability on the pidfile. But there is a small window where the lock on the pid file is present but the process is still not online. So the commands that were depending on this kept failing in the test script.
  Change-Id: I0e44e18b08d7b653d34fa170c1f187d91c888cd9
  BUG: 858212
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/4236
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* geo-rep / gsyncd,glusterd: do not hardcode socket path (Csaba Henk, 2012-11-28, 1 file, -0/+5)
  ... in gsyncd python code. Indeed, use the configuration mechanism to set it suitably from glusterd.
  Change-Id: I9fe2088b14d28588d1e64fe892740cc5755b8365
  BUG: 868877
  Signed-off-by: Csaba Henk <csaba@redhat.com>
  Reviewed-on: http://review.gluster.org/4143
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd, cli: implement gluster system uuid reset command (Raghavendra Bhat, 2012-11-28, 1 file, -11/+29)
  A commonly faced problem among glusterfs users is: after a fresh installation of glusterfs in a virtual machine, the VM image is cloned to make multiple instances of the server. This breaks glusterd, because right after glusterfs installation, on the first boot, glusterd would have created the node UUID and this gets inherited into the clone. The result is weird behavior at the time of peer probe, where glusterd does not (yet) deal with UUID collisions in a user friendly way. To handle it, the gluster peer reset command is implemented, which upon execution changes the uuid of the local glusterd.
  Change-Id: If207dd2ad93ab94ef1a3253f409c21c442975f87
  BUG: 811493
  Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
  Reviewed-on: http://review.gluster.org/3637
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: Protected conf->xprt_list racy access. (Krishnan Parthasarathi, 2012-11-28, 1 file, -3/+13)
  - epoll on RPCSVC_EVENT_ACCEPT would add the corresponding xprt onto the xprt_list. Concurrently, a synctask thread (volume op) would call into glusterd_fetchspec_notify, which iterates over the xprt_list. Added a mutex to protect such racy access of the list.
  Change-Id: Idc51b4bdb1c814dfab7790e1c899d6977f7640f2
  BUG: 878873
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/4241
  Reviewed-by: Raghavendra G <raghavendra@gluster.com>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
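
A small sketch of the kind of race being closed here: one thread appends transports to a list while another iterates it, so both paths take the same mutex. The structure and function names are illustrative, not glusterd's.

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct xprt {
        int          id;
        struct xprt *next;
    };

    static struct xprt     *xprt_list;
    static pthread_mutex_t  xprt_lock = PTHREAD_MUTEX_INITIALIZER;

    static void xprt_list_add(int id)          /* e.g. on an ACCEPT event */
    {
        struct xprt *x = malloc(sizeof(*x));
        if (!x)
            return;
        x->id = id;
        pthread_mutex_lock(&xprt_lock);
        x->next = xprt_list;                   /* insert under the lock */
        xprt_list = x;
        pthread_mutex_unlock(&xprt_lock);
    }

    static void fetchspec_notify(void)         /* e.g. from a volume op */
    {
        pthread_mutex_lock(&xprt_lock);
        for (struct xprt *x = xprt_list; x; x = x->next)
            printf("notify transport %d\n", x->id);   /* iterate under the lock */
        pthread_mutex_unlock(&xprt_lock);
    }

    int main(void)
    {
        xprt_list_add(1);
        xprt_list_add(2);
        fetchspec_notify();
        return 0;
    }
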
* mgmt/glusterd: Implementation of server-side quorum (Pranith Kumar K, 2012-11-23, 1 file, -0/+41)
  Feature-page: http://www.gluster.org/community/documentation/index.php/Features/Server-quorum
  Change-Id: I747b222519e71022462343d2c1bcd3626e1f9c86
  BUG: 839595
  Signed-off-by: Pranith Kumar K <pranithk@gluster.com>
  Reviewed-on: http://review.gluster.org/3811
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: op-version handshake implementation (Kaushal M, 2012-10-30, 1 file, -39/+32)
  Brings in a new rpc program MGMT_HANDSHAKE, which implements the op-version handshake. This is required for bringing in the op-version feature as described in http://www.gluster.org/community/documentation/index.php/Features/Opversion
  Change-Id: I4333fd2714dbbd3a2a3fca5862cbb3c56615529e
  BUG: 814534
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/3688
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
* glusterd: volume-start, add-brick and remove-brick to use synctask framework (Krishnan Parthasarathi, 2012-10-11, 1 file, -5/+29)
  - Added volume-id validation to glusterd-syncop code.
  - All daemons are restarted using synctasks in init().
  - glusterd_brick_start has wait/nowait variants to support volume commands using the synctask framework and those that aren't.
  Change-Id: Ieec26fe1ea7e5faac88cc7798d93e4cc2b399d34
  BUG: 862834
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/3969
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: Fix compile time warning for gsyncd helper routine (Venky Shankar, 2012-09-20, 1 file, -5/+8)
  Change-Id: I262cc654a3d85ed690446b3875959565600b4bcd
  BUG: 846197
  Signed-off-by: Venky Shankar <vshankar@redhat.com>
  Reviewed-on: http://review.gluster.org/3784
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Csaba Henk <csaba@redhat.com>
* All: License message change (Varun Shastry, 2012-09-13, 1 file, -7/+6)
  License message changed for server-side, dual license GPLv2 and LGPLv3+.
  Change-Id: Ia9e53061b9d2df3b3ef3bc9778dceff77db46a09
  BUG: 852318
  Signed-off-by: Varun Shastry <vshastry@redhat.com>
  Reviewed-on: http://review.gluster.org/3940
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* cli: Added special key "group" for bulk volume set. (Krishnan Parthasarathi, 2012-09-12, 1 file, -29/+37)
  gluster volume set VOLNAME group group_name
  - where group_name is a file under /var/lib/glusterd/groups containing one key, value pair per line as below:
        key1=value1
        key2=value2
        [...]
  - the command sets key1 to value1 and so on.
  Change-Id: Ic4c8dedb98d013b29a74e57f8ee7c1d3573137d2
  BUG: 851237
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/3831
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
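
A short sketch of reading such a group file and splitting each line into a key/value pair. The group name and parsing details are illustrative assumptions, not the actual cli/glusterd parser.

    #include <stdio.h>
    #include <string.h>

    /* Read a "group" file of key=value lines and print each setting that
     * would be applied. */
    int main(void)
    {
        const char *path = "/var/lib/glusterd/groups/example";  /* example group */
        char line[512];
        FILE *fp = fopen(path, "r");

        if (!fp) {
            perror(path);
            return 1;
        }
        while (fgets(line, sizeof(line), fp)) {
            line[strcspn(line, "\r\n")] = '\0';   /* strip the newline    */
            char *eq = strchr(line, '=');
            if (!eq || eq == line)
                continue;                         /* skip malformed lines */
            *eq = '\0';
            printf("would set %s = %s\n", line, eq + 1);
        }
        fclose(fp);
        return 0;
    }
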
* glusterd: fix mountbroker option parsing routine (Csaba Henk, 2012-09-06, 1 file, -8/+2)
  Properly adjust it to the new dict API as of http://review.gluster.org/3829.
  Change-Id: I8f55d2b1d590b15000984f4862c52b3cd226cef8
  BUG: 850917
  Signed-off-by: Csaba Henk <csaba@redhat.com>
  Reviewed-on: http://review.gluster.org/3914
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* libglusterfs/dict: make 'dict_t' an opaque object (Amar Tumballi, 2012-09-06, 1 file, -4/+5)
  * i.e., don't dereference the dict_t pointer; instead use the APIs everywhere
  * other than dict_t, only 'data_t' should be the valid export from dict.h
  * added the 'dict_foreach_fnmatch()' API
  * changed dict_lookup() to use data_t, instead of data_pair_t
  Change-Id: I400bb0dd55519a7c5d2a107e67c8e7a7207228dc
  Signed-off-by: Amar Tumballi <amarts@redhat.com>
  BUG: 850917
  Reviewed-on: http://review.gluster.org/3829
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
* All: License message change (Varun Shastry, 2012-08-28, 1 file, -14/+5)
  The license message is changed to:
      Copyright (c) 2008-2012 Red Hat, Inc. <http://www.redhat.com>
      This file is part of GlusterFS.

      This file is licensed to you under your choice of the GNU Lesser
      General Public License, version 3 or any later version (LGPLv3 or
      later), or the GNU General Public License, version 2 (GPLv2), in all
      cases as published by the Free Software Foundation.
  Change-Id: I07d2b63ed5fbbbd1884f1e74f2dd56013d15b0f4
  BUG: 852318
  Signed-off-by: Varun Shastry <vshastry@redhat.com>
  Reviewed-on: http://review.gluster.org/3858
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* remove useless if-before-free (and free-like) functions (Jim Meyering, 2012-07-13, 1 file, -2/+1)
  See comments in http://bugzilla.redhat.com/839925 for the code to perform this change.
  Signed-off-by: Jim Meyering <meyering@redhat.com>
  BUG: 839925
  Change-Id: I10e4ecff16c3749fe17c2831c516737e08a3205a
  Reviewed-on: http://review.gluster.com/3661
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
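
For context, the pattern this cleanup removes looks like the following; since free(NULL) is defined to be a no-op in C, the guard adds nothing (hypothetical snippet, not lifted from glusterd.c).

    #include <stdlib.h>

    static void before(char *buf)
    {
        if (buf)          /* redundant guard: free(NULL) is already a no-op */
            free(buf);
    }

    static void after(char *buf)
    {
        free(buf);        /* equivalent, and one branch shorter */
    }

    int main(void)
    {
        before(malloc(16));
        after(malloc(16));
        after(NULL);      /* safe by definition (C99 7.20.3.2) */
        return 0;
    }
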
* geo-rep: checkpointing (Csaba Henk, 2012-06-13, 1 file, -0/+7)
  - gluster vol geo-rep M S conf checkpoint <LABEL|now>
    sets a checkpoint with LABEL (the keyword "now" is special; it's rendered to the label "as of <timestamp of current time>") that's used to refer to the checkpoint in the sequel. (Technically, gsyncd makes a note of the xtime of master's root as of setting the checkpoint, called the "checkpoint target".)
  - gluster vol geo-rep M S conf \!checkpoint
    deletes the checkpoint.
  - gluster vol geo-rep M S stat
    if status is OK, and there is a checkpoint configured, the checkpoint info is appended to status (either "not yet reached", or "completed at <timestamp of completion>"). (Technically, the worker runs a thread that monitors / serializes / verifies checkpoint status, and answers checkpoint status requests through a UNIX socket; monitoring boils down to querying the xtime of slave's root and comparing with the target.)
  - gluster vol geo-rep M S conf log-file | xargs grep checkpoint
    displays the checkpoint history. Set, delete and completion events are logged properly.
  Change-Id: I4398e0819f1504e6e496b4209e91a0e156e1a0f8
  BUG: 826512
  Signed-off-by: Csaba Henk <csaba@redhat.com>
  Reviewed-on: http://review.gluster.com/3491
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Venky Shankar <vshankar@redhat.com>
* mgmt/glusterd: fix the infinite loop in lazy uuid generation (Raghavendra Bhat, 2012-06-13, 1 file, -0/+2)
  * This is how the lazy uuid generation leads to an infinite loop of function calls:
    MY_UUID -> glusterd_uuid_init -> glusterd_retrieve_uuid -> MY_UUID
  * Also, while starting glusterd, if the valgrind option is not given in the volfile, then reset the ret variable to 0.
  Change-Id: Ief719f436d8a264a591ee6aefc6da3c0f6c75e8f
  BUG: 811493
  Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
  Reviewed-on: http://review.gluster.com/3564
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Pranith Kumar Karampuri <pranithk@gluster.com>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
  Reviewed-by: Vijay Bellur <vijay@gluster.com>
* glusterd: generate node UUID lazily (Anand Avati, 2012-06-07, 1 file, -15/+9)
  A commonly faced problem among glusterfs users is: after a fresh installation of glusterfs in a virtual machine, the VM image is cloned to make multiple instances of the server. This breaks glusterd, because right after glusterfs installation, on the first boot, glusterd would have created the node UUID and this gets inherited into the clone. The result is weird behavior at the time of peer probe, where glusterd does not (yet) deal with UUID collisions in a user friendly way.

  This patch is for the 'prevention' of the issue. The approach here is to avoid generating a UUID on the first start of glusterd, and instead generate a node UUID only when it is found to be necessary. This naturally avoids the creation of the node UUID on first boot and prevents the issue to a large extent.

  This issue also needs a 'cure' patch, which gives more meaningful error messages to the user and provides CLI to recover from the situations (gluster peer reset?).

  Change-Id: Ieaaeeaf76ed35385844e98a8e23fc3dd8df5a208
  BUG: 811493
  Signed-off-by: Anand Avati <avati@redhat.com>
  Reviewed-on: http://review.gluster.com/3533
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
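
A compact sketch, using libuuid, of the "generate only when needed" behaviour described above: nothing is created at startup, and the first caller that actually needs the node UUID generates and persists it. The file path and helper name are examples, not glusterd's MY_UUID machinery.

    /* Build with: cc lazy_uuid.c -luuid */
    #include <stdio.h>
    #include <uuid/uuid.h>

    static const char *uuid_file = "/tmp/example-node-uuid";   /* illustrative */

    static void get_node_uuid(uuid_t out)
    {
        char buf[37] = {0};
        FILE *fp = fopen(uuid_file, "r");

        if (fp && fgets(buf, sizeof(buf), fp) && uuid_parse(buf, out) == 0) {
            fclose(fp);
            return;                     /* reuse the stored identity */
        }
        if (fp)
            fclose(fp);

        uuid_generate(out);             /* first real use: create it now */
        uuid_unparse(out, buf);
        fp = fopen(uuid_file, "w");
        if (fp) {
            fputs(buf, fp);             /* persist for later restarts */
            fclose(fp);
        }
    }

    int main(void)
    {
        uuid_t id;
        char text[37];

        get_node_uuid(id);
        uuid_unparse(id, text);
        printf("node uuid: %s\n", text);
        return 0;
    }
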
* glusterd: Run post hooks on a different thread (Krishnan Parthasarathi, 2012-05-29, 1 file, -0/+4)
  This change ensures post hooks can 'wait' if need be and _not_ prevent glusterd from being able to run other operations meanwhile. It also ensures that post hook scripts are 'serialized' between transactions, i.e., post hook scripts of txn1 are completed before post hook scripts of txn2 are started, where txn1 happens before txn2.
  Change-Id: Iaeb676737d8c67e7151127c8d1fd8c2891e10aee
  BUG: 806996
  Signed-off-by: Krishnan Parthasarathi <kp@gluster.com>
  Reviewed-on: http://review.gluster.com/3450
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Amar Tumballi <amarts@redhat.com>
  Tested-by: Amar Tumballi <amarts@redhat.com>
  Reviewed-by: Vijay Bellur <vijay@gluster.com>
* common-utils: Simplified mkdir_p interface and put it to use. (Krishnan Parthasarathi, 2012-05-21, 1 file, -4/+4)
  - Simplified the mkdir_p interface.
  - Removed mkdir_if_missing from the codebase.
  - Modified glusterd consumers of mkdir_if_missing to use mkdir_p with allow_symlinks=_gf_true. This implicitly assumes that glusterd is in 'control' of the brick path and glusterd's working dir, in the sense that the symlinks (if any) in any of the above mentioned paths are under the storage administrator's control.
  Change-Id: I7383ad5cff11b123e1e0d2fe6da0c81e03c52ed2
  BUG: 823132
  Signed-off-by: Krishnan Parthasarathi <kp@gluster.com>
  Reviewed-on: http://review.gluster.com/3378
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Avati <avati@redhat.com>
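
A simplified mkdir_p with an allow_symlinks-style flag, in the spirit of the interface described above; this is a sketch, not the libglusterfs implementation.

    #include <errno.h>
    #include <limits.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>

    /* Create every missing component of 'path'. When allow_symlinks is 0,
     * refuse to walk through a component that turns out to be a symlink. */
    static int mkdir_p(const char *path, mode_t mode, int allow_symlinks)
    {
        char buf[PATH_MAX];
        size_t len = strlen(path);

        if (len == 0 || len >= sizeof(buf))
            return -1;
        strcpy(buf, path);

        for (char *p = buf + 1; ; p++) {
            if (*p != '/' && *p != '\0')
                continue;
            char saved = *p;
            *p = '\0';
            if (mkdir(buf, mode) != 0 && errno != EEXIST)
                return -1;
            if (!allow_symlinks) {
                struct stat st;
                if (lstat(buf, &st) != 0 || S_ISLNK(st.st_mode)) {
                    errno = ELOOP;
                    return -1;          /* a symlinked component is rejected */
                }
            }
            *p = saved;
            if (saved == '\0')
                break;
        }
        return 0;
    }

    int main(void)
    {
        if (mkdir_p("/tmp/example/a/b/c", 0755, 1) != 0)
            perror("mkdir_p");
        else
            puts("created /tmp/example/a/b/c");
        return 0;
    }
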