path: root/geo-replication/syncdaemon/gsyncd.py
Commit message (Author, Age, Files, Lines)
* geo-rep: Adhering to the common storage for geo-rep (Kotresh HR, 2015-04-27, 1 file, -2/+3)

    Making geo-rep use the common storage shared by nfs, snapshot and
    geo-rep. The meta volume should be named gluster_shared_storage and
    should be mounted at "/var/run/gluster/shared_storage/". geo-rep
    creates a directory called 'geo-rep' in the meta volume and all the
    lock files are created inside it.

    Change-Id: I82d0bff9be191f75f643606a9a21d53559047ac4
    BUG: 1210344
    Signed-off-by: Kotresh HR <khiremat@redhat.com>
    Reviewed-on: http://review.gluster.org/10196
    Reviewed-by: Aravinda VK <avishwan@redhat.com>
    Tested-by: NetBSD Build System
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
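A minimal sketch of the lock-file layout described above, assuming the shared-storage mount point from the commit message; the helper name and the lock-file naming scheme are illustrative, not the actual gsyncd code:

    import os

    SHARED_STORAGE = "/var/run/gluster/shared_storage"

    def georep_lockfile(mastervol, slavevol, replica_set_id):
        # All geo-rep lock files live under a 'geo-rep' directory on the
        # shared meta volume, one file per replica set (naming is a guess).
        lockdir = os.path.join(SHARED_STORAGE, "geo-rep")
        if not os.path.isdir(lockdir):
            os.makedirs(lockdir)
        return os.path.join(lockdir, "%s_%s_subvol_%s.lock"
                            % (mastervol, slavevol, replica_set_id))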
* geo-rep: Log Rsync performance (Aravinda VK, 2015-04-02, 1 file, -0/+2)

    Introducing a configurable option to log the rsync performance.

    gluster volume geo-replication <MASTERVOL> <SLAVEHOST>::<SLAVEVOL> \
        config log-rsync-performance true

    Default value is False.

    Example log:
    [2015-03-31 16:48:34.572022] I [resource(/bricks/b1):857:rsync] SSH:
    rsync performance: Number of files: 2 (reg: 1, dir: 1), Number of
    regular files transferred: 1, Total file size: 178 bytes, Total
    transferred file size: 178 bytes, Literal data: 178 bytes, Matched
    data: 0 bytes, Total bytes sent: 294, Total bytes received: 32,
    sent 294 bytes received 32 bytes 652.00 bytes/sec

    Change-Id: If11467e29e6ac502fa114bd5742a8434b7084f98
    Signed-off-by: Aravinda VK <avishwan@redhat.com>
    BUG: 764827
    Reviewed-on: http://review.gluster.org/10070
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* geo-rep: Add support for acls (Kotresh HR, 2015-03-30, 1 file, -0/+1)

    This patch adds support for ACLs. When it sees SETXATTR in the
    Changelog, it adds the file to the data queue. rsync/tar+ssh will
    take care of syncing ACLs. User-set ACLs will be synced to the
    Slave. This requires "system.posix_acl_access" to go through when
    client-pid is equal to GF_CLIENT_PID_GSYNCD in the fuse layer.

    A new config interface, sync-acls, is introduced. It can be set
    using geo-rep config (default is True):

    gluster volume geo-replication <VOLUME> <SLAVEHOST>::<SLAVEVOL> \
        config sync-acls false

    Change-Id: I7eb3523fa72b8fed830efc98138891244e830d65
    BUG: 1187021
    Signed-off-by: Kotresh HR <khiremat@redhat.com>
    Reviewed-on: http://review.gluster.org/10001
    Reviewed-by: Aravinda VK <avishwan@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Venky Shankar <vshankar@redhat.com>
    Tested-by: Venky Shankar <vshankar@redhat.com>
* feature/geo-rep: Active Passive Switching logic flock (Kotresh HR, 2015-03-15, 1 file, -0/+4)

    CURRENT DESIGN AND ITS LIMITATIONS:
    -----------------------------------
    Geo-replication syncs changes across geography using changelogs
    captured by the changelog translator. The changelog translator sits
    on the server side just above the posix translator. Hence, in a
    distributed replicated setup, both replica pairs collect changelogs
    w.r.t their bricks. Geo-replication syncs the changes using only one
    brick among the replica pair at a time, calling it "ACTIVE" and the
    other non-syncing brick "PASSIVE".

    Consider the example below of a distributed replicated setup where
    NODE-1 has b1 and its replicated brick b1r is on NODE-2:

        NODE-1    NODE-2
        b1        b1r

    At the beginning, geo-replication chooses to sync changes from
    NODE-1:b1, and NODE-2:b1r will be "PASSIVE". The logic depends on
    the virtual getxattr 'trusted.glusterfs.node-uuid', which always
    returns the first up subvolume, i.e., NODE-1. When NODE-1 goes down,
    the above xattr returns NODE-2 and that is made 'ACTIVE'. But when
    NODE-1 comes back again, the above xattr returns NODE-1 and it is
    made 'ACTIVE' again. So for a brief interval of time, if NODE-2 had
    not finished processing the changelog, both NODE-2 and NODE-1 will
    be ACTIVE, causing the rename race mentioned in the bug.

    SOLUTION:
    ---------
    1. Have a shared replicated storage, a glusterfs management volume
       specific to geo-replication.
    2. Geo-rep creates a file per replica set on the management volume.
    3. An fcntl lock on the above said file is used for synchronization
       between geo-rep workers belonging to the same replica set.
    4. If the management volume is not configured, geo-replication
       falls back to the previous logic of using the first up subvolume.

    Each worker tries to lock the file on the shared storage; whoever
    wins will be ACTIVE. With this we are able to solve the problem,
    but there is an issue when the shared replicated storage goes down
    (when all replicas go down). In that case, the lock state is lost,
    so AFR needs to rebuild the lock state after the brick comes up.

    NOTE:
    -----
    This patch brings in the pre-requisite step of setting up the
    management volume for geo-replication during creation.

    1. Create a mgmt-vol for geo-replication and start it. The
       management volume should be part of the master cluster and is
       recommended to be a three-way replicated volume with each brick
       on a different node for availability.
    2. Create the geo-rep session.
    3. Configure the mgmt-vol created with the geo-replication session
       as follows:

       gluster vol geo-rep <mastervol> slavenode::<slavevol> config \
           meta_volume <meta-vol-name>

    4. Start the geo-rep session.

    Backward Compatibility:
    -----------------------
    If the management volume is not configured, it falls back to the
    previous logic of using the node-uuid virtual xattr. But this is
    not recommended.

    Change-Id: I7319d2289516f534b69edd00c9d0db5a3725661a
    BUG: 1196632
    Signed-off-by: Kotresh HR <khiremat@redhat.com>
    Reviewed-on: http://review.gluster.org/9759
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
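A minimal sketch of the election described in the SOLUTION section above, assuming one file per replica set on the shared meta volume; the function name and error handling are illustrative, not the actual gsyncd implementation:

    import fcntl
    import os

    def try_become_active(lockfile_path):
        # Each worker of a replica set tries a non-blocking exclusive lock
        # on the same file; the one holding the lock is ACTIVE, the rest
        # stay PASSIVE and retry on the next cycle.
        fd = os.open(lockfile_path, os.O_CREAT | os.O_RDWR)
        try:
            fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
            return True, fd        # ACTIVE: keep fd open to hold the lock
        except (IOError, OSError):
            os.close(fd)
            return False, None     # PASSIVE: another worker holds the lock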
* geo-rep: Add support for xattrs (Aravinda VK, 2015-03-02, 1 file, -0/+1)

    This patch adds support for xattrs. When it sees SETXATTR in the
    Changelog, it adds the file to the data queue. rsync/tar+ssh will
    take care of syncing xattrs. User-set xattrs will be synced to the
    Slave.

    A new config interface, sync-xattrs, is introduced. It can be set
    using geo-rep config (default is True):

    gluster volume geo-replication <VOLUME> <SLAVEHOST>::<SLAVEVOL> \
        config sync-xattrs false

    Change-Id: I70626d854a0d616469dd54d61e5ef155ed8b67d8
    BUG: 1196690
    Signed-off-by: Aravinda VK <avishwan@redhat.com>
    Reviewed-on: http://review.gluster.org/9499
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kotresh HR <khiremat@redhat.com>
    Reviewed-by: Saravanakumar Arumugam <sarumuga@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* geo-rep: Archive Changelogs and avoid generating empty XSync changelogs (Aravinda VK, 2015-02-19, 1 file, -0/+2)

    With this patch,
    - Hybrid Crawl will not generate empty Changelogs
    - Changelogs are archived when processed (Hybrid (XSync), History,
      and Changelog Crawl)
    - Passive worker cleans up its processing directory

    BUG: 1169331
    Change-Id: I1383ffaed261cdf50da91b14260b4d43177657d1
    Signed-off-by: Aravinda VK <avishwan@redhat.com>
    Reviewed-on: http://review.gluster.org/9453
    Reviewed-by: Venky Shankar <vshankar@redhat.com>
    Tested-by: Venky Shankar <vshankar@redhat.com>
* geo-rep: Making replica failover check interval configurable (Aravinda VK, 2014-06-11, 1 file, -0/+2)

    The replica failover check interval was hardcoded to 60 sec. Now
    this option is made configurable and defaults to 1 sec.

    To change the default value:

    gluster volume geo-replication <MASTERVOL> \
        <SLAVEHOST>::<SLAVEVOL> config replica_failover_interval 15

    Change-Id: Iada1b80d510452dcfedebd8a21bebd62394b0597
    BUG: 1066410
    Signed-off-by: Aravinda VK <avishwan@redhat.com>
    Reviewed-on: http://review.gluster.org/8003
    Reviewed-by: Venky Shankar <vshankar@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Tested-by: Venky Shankar <vshankar@redhat.com>
* gsyncd / geo-rep: FSH recommended log locations (Venky Shankar, 2014-06-10, 1 file, -1/+8)

    Upgrading "working_dir" on the fly is still a bit unclean (though it
    works), as config upgrade currently does not support expanding "old"
    values using configuration variables.

    Change-Id: I44ed65c281f2e0ce3b6b467addc5c1c88ac674e7
    BUG: 1077516
    Signed-off-by: Venky Shankar <vshankar@redhat.com>
    Signed-off-by: Kotresh H R <khiremat@redhat.com>
    Signed-off-by: Aravinda VK <avishwan@redhat.com>
    Signed-off-by: Ajeet Jha <ajha@redhat.com>
    Reviewed-on: http://review.gluster.org/7375
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* feature/geo-rep: Fix to retain pause state of gsyncd on restart. (Kotresh H R, 2014-06-05, 1 file, -0/+1)

    A new gsyncd option '--pause-on-start' is introduced. When a node
    reboots and the status is paused, gsyncd is started with this
    option. After gsyncd spawns the worker and the agent, the worker
    sends SIGSTOP to the negative pid of the monitor to enter pause
    mode.

    Change-Id: I5aad82c9a9fc8c243f384940b77d25e26e520d6d
    BUG: 1101410
    Signed-off-by: Kotresh H R <khiremat@redhat.com>
    Reviewed-on: http://review.gluster.org/7885
    Reviewed-by: Aravinda VK <avishwan@redhat.com>
    Reviewed-by: Venky Shankar <vshankar@redhat.com>
    Tested-by: Venky Shankar <vshankar@redhat.com>
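A minimal sketch of the signalling described above; the function names are illustrative, not the actual gsyncd code. Sending a signal to the negative of the monitor's pid delivers it to the monitor's whole process group:

    import os
    import signal

    def pause_session(monitor_pid):
        # SIGSTOP to -pid stops every process in the monitor's process
        # group, putting the session into pause mode.
        os.kill(-monitor_pid, signal.SIGSTOP)

    def resume_session(monitor_pid):
        os.kill(-monitor_pid, signal.SIGCONT)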
* geo-rep: Pause and Resume feature for geo-replication (Aravinda VK, 2014-05-09, 1 file, -0/+13)

    Changelog consumption/processing now happens in a separate process
    group from the monitor. When the monitor process group gets SIGSTOP,
    all worker processes, ssh and rsync are paused, except the changelog
    processing. When it gets SIGCONT it resumes its operation.

    The changelog agent runs as a RepceServer; the geo-rep worker
    communicates with the changelog agent using a RepceClient.

    Change-Id: I35c333e4d8b13d03a7808aed601960eef23cfa04
    BUG: 1093602
    Signed-off-by: Venky Shankar <vshankar@redhat.com>
    Signed-off-by: Aravinda VK <avishwan@redhat.com>
    Reviewed-on: http://review.gluster.org/7322
* geo-rep: code pep8/flake8 fixes (Aravinda VK, 2014-04-07, 1 file, -107/+185)

    pep8 is a style guide for python.
    http://legacy.python.org/dev/peps/pep-0008/

    pep8 can be installed using `pip install pep8`.
    Usage: `pep8 <python file>`. For example, `pep8 master.py` will
    display all the coding standard errors.

    flake8 is used to identify unused imports and other issues in code.

    pip install flake8
    cd $GLUSTER_REPO/geo-replication/
    flake8 syncdaemon

    Updated license headers of each source file.

    Change-Id: I01c7d0a6091d21bfa48720e9fb5624b77fa3db4a
    Signed-off-by: Aravinda VK <avishwan@redhat.com>
    Reviewed-on: http://review.gluster.org/7311
    Reviewed-by: Kotresh HR <khiremat@redhat.com>
    Reviewed-by: Prashanth Pai <ppai@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* geo-rep: Config file upgrade (Aravinda VK, 2014-02-10, 1 file, -1/+3)

    When an old config file is used with the new geo-rep, config items
    like 'georep_session_working_dir' are missing from it. With this
    patch geo-rep sets the default value for the new items.

    The following config options are supported:
    - georep_session_working_dir
    - gluster_params
    - ssh_command_tar

    BUG: 1036539
    Change-Id: I389c62e749f3b567f9ecf96d4b41367ef962c025
    Signed-off-by: Aravinda VK <avishwan@redhat.com>
    Reviewed-on: http://review.gluster.org/6934
    Reviewed-by: Venky Shankar <vshankar@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
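A minimal sketch of the upgrade step described above, assuming a ConfigParser-style config file; the option names come from the commit message, while the section name and default values are only placeholders, not the real gsyncd defaults:

    try:
        from configparser import ConfigParser   # Python 3
    except ImportError:
        from ConfigParser import ConfigParser   # Python 2

    # Values below are illustrative placeholders, not the real defaults.
    NEW_ITEM_DEFAULTS = {
        "georep_session_working_dir": "/var/lib/glusterd/geo-replication/",
        "gluster_params": "aux-gfid-mount",
        "ssh_command_tar": "ssh -oPasswordAuthentication=no",
    }

    def upgrade_config(path, section="global"):
        cp = ConfigParser()
        cp.read(path)
        if not cp.has_section(section):
            cp.add_section(section)
        for key, value in NEW_ITEM_DEFAULTS.items():
            # Fill in defaults only for items missing from the old file.
            if not cp.has_option(section, key):
                cp.set(section, key, value)
        with open(path, "w") as f:
            cp.write(f)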
* gsyncd / geo-rep: geo-replication fixes (Ajeet Jha, 2013-12-12, 1 file, -0/+10)

    -> "threaded" hybrid crawl.
    -> Enabling metadata synchronization.
    -> Handling EINVAL/ESTALE gracefully while syncing metadata.
    -> Improvements to changelog crawl code.
    -> Initial crawl changelog generation format.
    -> No gsyncd restart when checkpoint updated.
    -> Fix symlink handling in hybrid crawl.
    -> Slave's xtime key is 'stime'.
    -> tar+ssh as data synchronization.
    -> Instead of 'raise', just log at warning level for xtime missing
       cases.
    -> Fix for JSON object load failure.
    -> Get new config value after config value reset.
    -> Skip already processed changelogs.
    -> Saving status of each individual worker thread.
    -> GFID fetch on slave for purges.
    -> Add tar ssh keys and config options.
    -> Fix nlink count when using backend.
    -> Include "data" operation for hardlink.
    -> Use changelog time prefix as slave's time.
    -> Process changelogs in parallel.

    Change-Id: I09fcbb2e2e418149a6d8435abd2ac6b2f015bb06
    BUG: 1036539
    Signed-off-by: Ajeet Jha <ajha@redhat.com>
    Reviewed-on: http://review.gluster.org/6404
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* geo-rep: logrotate: Logrotate handling (Aravinda VK, 2013-10-03, 1 file, -2/+19)

    In the existing geo-rep, logrotate was handled via SIGSTOP and
    SIGCONT, and gsyncd was failing to start again after SIGSTOP.

    The new approach uses WatchedFileHandler in logging, which tracks
    log file changes and logrotate. It reopens the log file if logrotate
    is triggered or if the same log file is updated from another
    process.

    As per the python doc
    (http://docs.python.org/2/library/logging.handlers.html):
    The WatchedFileHandler class, located in the logging.handlers
    module, is a FileHandler which watches the file it is logging to.
    If the file changes, it is closed and reopened using the file name.
    A file change can happen because of usage of programs such as
    newsyslog and logrotate which perform log file rotation. This
    handler, intended for use under Unix/Linux, watches the file to see
    if it has changed since the last emit. (A file is deemed to have
    changed if its device or inode have changed.) If the file has
    changed, the old file stream is closed, and the file opened to get
    a new stream.

    Change-Id: I30f65eb1e9778b12943d6e43b60a50344a7885c6
    BUG: 1012776
    Signed-off-by: Aravinda VK <avishwan@redhat.com>
    Reviewed-on: http://review.gluster.org/5968
    Reviewed-by: Amar Tumballi <amarts@redhat.com>
    Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
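A minimal sketch of a WatchedFileHandler-based setup along the lines described above; the log path and format string are illustrative, not the exact ones gsyncd uses:

    import logging
    from logging.handlers import WatchedFileHandler

    def setup_logging(logfile):
        # WatchedFileHandler reopens the file when its device or inode
        # changes, so rotation by an external logrotate is picked up
        # without restarting the daemon.
        handler = WatchedFileHandler(logfile)
        handler.setFormatter(logging.Formatter(
            "[%(asctime)s] %(levelname)s [%(module)s:%(lineno)s] %(message)s"))
        root = logging.getLogger()
        root.addHandler(handler)
        root.setLevel(logging.INFO)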
* glusterd: Saving geo-rep session details in a more specific path (Venky Shankar, 2013-09-04, 1 file, -0/+2)

    The session details are now saved in the
    /var/lib/glusterd/geo-replication/<mastervol>_<slaveip>_<slavevol>
    repo to distinguish between two master-slave sessions where the
    slave volume name is the same across two different clusters.

    Change-Id: I57c93f55cc9bd4fe2bffe579028aaf5e4335b223
    BUG: 991501
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Signed-off-by: Venky Shankar <vshankar@redhat.com>
    Reviewed-on: http://review.gluster.org/5488
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
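A minimal sketch of the directory naming described above; glusterd itself is C code, so this Python helper and its name are purely illustrative:

    import os

    GEOREP_DIR = "/var/lib/glusterd/geo-replication"

    def session_workdir(mastervol, slaveip, slavevol):
        # <mastervol>_<slaveip>_<slavevol> keeps two sessions apart even
        # when the slave volume name repeats across clusters.
        return os.path.join(GEOREP_DIR,
                            "%s_%s_%s" % (mastervol, slaveip, slavevol))

    # e.g. session_workdir("gvol", "10.0.0.2", "svol")
    # -> /var/lib/glusterd/geo-replication/gvol_10.0.0.2_svol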
* glusterd/cli changes for distributed geo-rep (Avra Sengupta, 2013-07-26, 1 file, -3/+4)

    Commands:
    gluster system:: execute gsec_create
    gluster volume geo-rep <master> <slave-url> create [push-pem] [force]
    gluster volume geo-rep <master> <slave-url> start [force]
    gluster volume geo-rep <master> <slave-url> stop [force]
    gluster volume geo-rep <master> <slave-url> delete
    gluster volume geo-rep <master> <slave-url> config
    gluster volume geo-rep <master> <slave-url> status

    The geo-replication is distributed. The session will be created, and
    gsyncd will be spawned on all relevant nodes, instead of only one
    node.

    geo-rep: Collecting status detail related data

    Added persistent store for saving information about TotalFilesSynced,
    TotalSyncTime, TotalBytesSynced.

    Changes in the status information in socket:
    Existing (Ex):
    FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;
    New (Ex):
    FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;SyncTime=0.69978;
    TotalSyncTime=2.890044;TotalFilesSynced=6;TotalBytesSynced=143640;

    Persistent details stored in
    /var/lib/glusterd/geo-replication/${mastervol}/${eSlave}-detail.status

    Change-Id: I1db7fc13ffca2e415c05200b0109b1254067f111
    BUG: 847839
    Original Author: Avra Sengupta <asengupt@redhat.com>
    Original Author: Venky Shankar <vshankar@redhat.com>
    Original Author: Aravinda VK <avishwan@redhat.com>
    Original Author: Amar Tumballi <amarts@redhat.com>
    Original Author: Csaba Henk <csaba@redhat.com>
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/5132
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    Tested-by: Vijay Bellur <vbellur@redhat.com>
* gsyncd: distribute the crawling load (Avra Sengupta, 2013-07-26, 1 file, -15/+110)

    * also consume changelog for change detection.
    * Status fixes
    * Use new libgfchangelog done API
    * process (and sync) one changelog at a time

    Change-Id: I24891615bb762e0741b1819ddfdef8802326cb16
    BUG: 847839
    Original Author: Csaba Henk <csaba@redhat.com>
    Original Author: Aravinda VK <avishwan@redhat.com>
    Original Author: Venky Shankar <vshankar@redhat.com>
    Original Author: Amar Tumballi <amarts@redhat.com>
    Original Author: Avra Sengupta <asengupt@redhat.com>
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/5131
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    Tested-by: Vijay Bellur <vbellur@redhat.com>
* move 'xlators/marker/utils/' to 'geo-replication/' directory (Avra Sengupta, 2013-07-22, 1 file, -0/+419)

    Change-Id: Ibd0faefecc15b6713eda28bc96794ae58aff45aa
    BUG: 847839
    Original Author: Amar Tumballi <amarts@redhat.com>
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/5133
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>