path: root/rpc/rpc-lib
Commit log (each entry: subject — author, date; files changed, lines -/+)
* glusterd/geo-rep: more glusterd and cli fixes for geo-rep. (Ajeet Jha, 2013-12-12; 1 file changed, -0/+17)

    -> handle option validation cases in reset case.
    -> Creating valid conf path when glusterd restarts.
    -> Reading the gsyncd worker thread status and displaying it.
    -> Displaying status-detail per worker.
    -> Fetch checkpoint info in geo-rep status.
    -> use-tarssh value validation added.

    misc: misc geo-rep fixes based on cluster, logrotate etc.

    -> cluster/dht: fix 'stime' getxattr getting overwritten.
    -> cluster/afr: return max of 'stime' values in subvol.
    -> geo-rep-logrotate: Sending SIGHUP to geo-rep auxiliary.
    -> cluster/dht: fix convoluted logic while aggregating.
    -> cluster/*: fix 'stime' min/max fetch logic.

    Change-Id: I811acea0bbd6194797a3e55d89295d1ea021ac85
    BUG: 1036552
    Signed-off-by: Ajeet Jha <ajha@redhat.com>
    Reviewed-on: http://review.gluster.org/6405
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Amar Tumballi <amarts@gmail.com>
    Reviewed-by: Anand Avati <avati@redhat.com>

* features/quota: Improvements to quota (Raghavendra G, 2013-11-26; 1 file changed, -0/+11)

    * Two stages of quota enforcement are done: soft and hard quota.
      Upon reaching the soft quota limit on a directory, it logs/alerts
      in the quota daemon log (i.e. DEFAULT_LOG_DIR/quotad.log), and no
      more writes are allowed after the hard quota limit. After reaching
      the soft limit, the daemon alerts the user/admin repeatedly for
      every 'alert-time', which is configurable.

    * The quota enforcer is moved to the server side. It takes care of
      enforcing quota. Since the enforcer doesn't have the cluster view,
      it relies on another service called the quota-aggregator. The
      aggregator, on query, can return the size of a directory based on
      the cluster view.

      The enforcer is always loaded in the server graph and is bypassed
      if the feature is not enabled.

      Options specific to the enforcer:
      server-quota - Specifies whether the feature is on/off. It is used
      to bypass quota if turned off.
      deem-statfs - If set to on, it takes quota limits into
      consideration while estimating fs size (df command). The algorithm
      followed is:
      i.   Adjust statvfs based on the limit configured on root.
      ii.  If a limit is set on the inode passed, use size/limits on
           that inode to populate statvfs. Otherwise, use size/limits
           configured on root.
      iii. Upon statvfs, update the ctx->size on the inode.
      iv.  Don't let DHT aggregate; instead take the maximum of the
           usages from the subvols of DHT, since each of them contains
           the complete information.

      The enforcer also makes use of gfid-to-path conversion
      functionality to work correctly when a client like nfs
      predominantly relies on nameless lookups.

    * The Quota Aggregator acts as a thin client to provide the cluster
      view. It is a lightweight *gluster client* process with no mount
      point, started upon enabling quota or restarting the volume. This
      is a single process run on each brick, which can answer queries on
      all volumes in the cluster. Its volfile is stored in
      GLUSTERD_DEFAULT_WORKING_DIR/quotad/quotad.vol.

    Credits:
    Raghavendra Bhat <rabhat@redhat.com>
    Varun Shastry <vshastry@redhat.com>
    Shishir Gowda <sgowda@redhat.com>
    Kruthika Dhananjay <kdhananj@redhat.com>
    Brian Foster <bfoster@redhat.com>
    Krishnan Parthasarathi <kparthas@redhat.com>

    Change-Id: Id1cb25b414951da34c665a55f77385d482e0f9de
    BUG: 969461
    Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-on: http://review.gluster.org/5952
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>

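For illustration, a minimal sketch (not the quota xlator's actual code) of the deem-statfs adjustment in steps i and ii above; the limit/used inputs in bytes are assumptions:

    #include <sys/statvfs.h>
    #include <stdint.h>

    /* Clamp the reported fs size to the quota limit and derive free
     * space from the tracked usage. */
    static void
    quota_adjust_statfs (struct statvfs *buf, uint64_t limit, uint64_t used)
    {
            uint64_t lim_blocks  = limit / buf->f_bsize;
            uint64_t used_blocks = used / buf->f_bsize;

            if (lim_blocks < buf->f_blocks) {
                    buf->f_blocks = lim_blocks;
                    buf->f_bfree  = (used_blocks < lim_blocks) ?
                                    (lim_blocks - used_blocks) : 0;
                    buf->f_bavail = buf->f_bfree;
            }
    }
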
* gNFS: More clean up for Gluster NFS (Santosh Kumar Pradhan, 2013-11-25; 2 files changed, -39/+69)

    1) Fix the typo in NFS default ACL. The typo was introduced as part
       of the fix to BZ 1009210, i.e. http://review.gluster.org/5980.
       The user ACL xattr structure was passed to the default ACL xattr.

    2) Clean up NFS code to avoid an unnecessary SEGV in
       rpcsvc_drc_reconfigure(), which was not validating svc->drc. Add
       a routine rpcsvc_drc_deinit() to handle the clean up of DRC
       specific data structures. For init(), use rpcsvc_drc_init().

    3) nfs_init_state() was returning the wrong value even if the
       registration with the portmapper failed, causing the NFS server
       process to hang around. As a result it used to get a SEGV during
       rpcsvc_drc_reconfigure().

    4) Clean up memfactor usage across nfs.c and nfs3.c.

    Change-Id: I5cea26cb68dd8a822ec0ae104952f67fe63fa703
    BUG: 1009210
    Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
    Reviewed-on: http://review.gluster.org/6329
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* gNFS: RFE for NFS connection behavior (Santosh Kumar Pradhan, 2013-11-14; 7 files changed, -39/+269)

    Implement reconfigure() for the NFS xlator so that volume set/reset
    won't restart the NFS server process. A few options cannot be
    reconfigured dynamically, e.g. nfs.mem-factor and nfs.port, which
    need NFS to be restarted.

    Change-Id: Ic586fd55b7933c0a3175708d8c41ed0475d74a1c
    BUG: 1027409
    Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
    Reviewed-on: http://review.gluster.org/6236
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-by: Anand Avati <avati@redhat.com>

* bd_map: Remove bd_map xlator (M. Mohan Kumar, 2013-11-13; 1 file changed, -10/+0)

    Remove bd_map xlator and CLI related changes.

    Change-Id: If7086205df1907127c1a1fa4ba603f1c48421d09
    BUG: 1028672
    Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
    Reviewed-on: http://review.gluster.org/5747
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>

* glusterfs: zerofill support (M. Mohan Kumar, 2013-11-10; 1 file changed, -0/+1)

    Add support for a new ZEROFILL fop. Zerofill writes zeroes to a file
    in the specified range. This fop will be useful when a whole file
    needs to be initialized with zero (could be useful for zero-filled
    VM disk image provisioning or during scrubbing of VM disk images).

    The client/application can issue this FOP for zeroing out. The
    Gluster server will zero out the required range of bytes, i.e.
    server-offloaded zeroing. In the absence of this fop, the
    client/application has to repetitively issue write (zero) fops to
    the server, which is a very inefficient method because of the
    overheads involved in RPC calls and acknowledgements.

    WRITESAME is a SCSI T10 command that takes a block of data as input
    and writes the same data to other blocks, and this write is handled
    completely within the storage and hence is known as offload. Linux
    now has support for the SCSI WRITESAME command, which is exposed to
    the user in the form of the BLKZEROOUT ioctl. The BD xlator can
    exploit the BLKZEROOUT ioctl to implement this fop. Thus zeroing out
    operations can be completely offloaded to the storage device, making
    it highly efficient.

    The fop takes two arguments, offset and size. It zeroes out 'size'
    number of bytes in an opened file starting from the 'offset'
    position.

    This patch adds zerofill support to the following areas:
    - libglusterfs
    - io-stats
    - performance/md-cache, open-behind
    - quota
    - cluster/afr, dht, stripe
    - rpc/xdr
    - protocol/client, server
    - io-threads
    - marker
    - storage/posix
    - libgfapi

    Client applications can exploit this fop by using glfs_zerofill
    introduced in libgfapi. FUSE support for this fop has not been added
    as there is no system call for it.

    Changes from previous version 3:
    * Removed redundant memory failure log messages

    Changes from previous version 2:
    * Rebased and fixed build error

    Changes from previous version 1:
    * Rebased for latest master

    TODO:
    * Add zerofill support to trace xlator
    * Expose zerofill capability as part of gluster volume info

    Here is a performance comparison of server-offloaded zerofill vs
    zeroing out using repeated writes:

    [root@llmvm02 remote]# time ./offloaded aakash-test log 20
    real    3m34.155s
    user    0m0.018s
    sys     0m0.040s

    [root@llmvm02 remote]# time ./manually aakash-test log 20
    real    4m23.043s
    user    0m2.197s
    sys     0m14.457s

    [root@llmvm02 remote]# time ./offloaded aakash-test log 25
    real    4m28.363s
    user    0m0.021s
    sys     0m0.025s

    [root@llmvm02 remote]# time ./manually aakash-test log 25
    real    5m34.278s
    user    0m2.957s
    sys     0m18.808s

    The argument 'log' is a file which we want to set for logging
    purposes, and the third argument is the size in GB. As we can see,
    there is a performance improvement of around 20% with this fop.

    Change-Id: I081159f5f7edde0ddb78169fb4c21c776ec91a18
    BUG: 1028673
    Signed-off-by: Aakash Lal Das <aakash@linux.vnet.ibm.com>
    Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
    Reviewed-on: http://review.gluster.org/5327
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>

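For reference, a minimal standalone sketch (not the BD xlator code) of the BLKZEROOUT offload mechanism named above:

    #include <stdint.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>

    int
    zerofill_offloaded (const char *dev, uint64_t offset, uint64_t len)
    {
            uint64_t range[2] = { offset, len };
            int      fd       = open (dev, O_WRONLY);

            if (fd < 0)
                    return -1;

            /* The kernel zeroes the range inside the block device;
             * no zero-filled payload crosses the wire. */
            if (ioctl (fd, BLKZEROOUT, range) < 0) {
                    close (fd);
                    return -1;
            }
            return close (fd);
    }
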
* rpcsvc: implement per-client RPC throttling (Anand Avati, 2013-10-28; 5 files changed, -0/+87)

    Implement a limit on the total number of outstanding RPC requests
    from a given client. Once the limit is reached, the client socket is
    removed from POLL-IN event polling.

    Change-Id: I8071b8c89b78d02e830e6af5a540308199d6bdcd
    BUG: 1008301
    Signed-off-by: Anand Avati <avati@redhat.com>
    Reviewed-on: http://review.gluster.org/6114
    Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>

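An illustrative sketch of the throttling idea, with the transport hooks stubbed out (these stubs are assumptions, not the actual rpc-transport API):

    #include <stdio.h>

    static void poll_in_disable (void) { printf ("POLL-IN off\n"); } /* stub */
    static void poll_in_enable  (void) { printf ("POLL-IN on\n");  } /* stub */

    typedef struct {
            int outstanding;  /* requests received but not yet replied to */
            int limit;        /* configured per-client ceiling */
    } client_throttle;

    static void
    request_received (client_throttle *t)
    {
            if (++t->outstanding == t->limit)
                    poll_in_disable (); /* stop reading from this socket */
    }

    static void
    reply_submitted (client_throttle *t)
    {
            if (t->outstanding-- == t->limit)
                    poll_in_enable ();  /* resume reading */
    }
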
* rpc: add remote peer's hostname to call_bail log msgs (Krishnan Parthasarathi, 2013-10-17; 1 file changed, -3/+4)

    Change-Id: I982cf7619463983c04b401d70a76635991d072d2
    Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-on: http://review.gluster.org/6091
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Harshavardhana <harsha@harshavardhana.net>

* libglusterfs: Add monotonic clocking counter for timer thread (Harshavardhana, 2013-10-15; 1 file changed, -11/+12)

    gettimeofday() returns the current wall clock time and timezone.
    Using these functions in order to measure the passage of time (how
    long an operation took) therefore seems like a no-brainer. This time
    suffers from some limitations:

    a. It has a low resolution: "high-performance" timing, by
       definition, requires clock resolutions into the microseconds or
       better.

    b. It can jump forwards and backwards in time: computer clocks all
       tick at slightly different rates, which causes the time to drift.
       Most systems have NTP enabled, which periodically adjusts the
       system clock to keep it in sync with "actual" time. The
       adjustment can cause the clock to suddenly jump forward
       (artificially inflating your timing numbers) or jump backwards
       (causing your timing calculations to go negative or hugely
       positive). In such cases the timer thread could go into an
       infinite loop.

    From 'man gettimeofday':
    ----------
    The time returned by gettimeofday() is affected by discontinuous
    jumps in the system time (e.g., if the system administrator manually
    changes the system time). If you need a monotonically increasing
    clock, see clock_gettime(2).
    ----------

    Rationale:

    For calculating interval timing for the timer thread, all that's
    needed is a clock that acts as a simple counter incrementing at a
    stable rate. This is necessary to avoid the jumps caused by using
    "wall time"; this counter must be monotonic and can never "tick"
    backwards, ever.

    Change-Id: I701d31e71a85a73d21a6c5cd15583e7a5a645eeb
    BUG: 1017993
    Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
    Reviewed-on: http://review.gluster.org/6070
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>

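A small self-contained example of the monotonic interval timing the message argues for; CLOCK_MONOTONIC is immune to NTP steps and manual clock changes, unlike gettimeofday():

    #include <stdio.h>
    #include <time.h>

    int
    main (void)
    {
            struct timespec start, end;

            clock_gettime (CLOCK_MONOTONIC, &start);
            /* ... operation being timed ... */
            clock_gettime (CLOCK_MONOTONIC, &end);

            double elapsed = (end.tv_sec - start.tv_sec)
                           + (end.tv_nsec - start.tv_nsec) / 1e9;
            printf ("took %.9f seconds\n", elapsed);
            return 0;
    }
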
* cluster/afr: [Feature] Command implementation to get heal-count (Venkatesh Somyajulu, 2013-10-14; 1 file changed, -0/+2)

    Currently, to know the number of files to be healed, the user either
    has to go to the backend and check the number of entries present in
    the indices/xattrop directory, or can give the "gluster volume heal
    vol-name info" command. But if a volume consists of a large number
    of bricks, going to each backend and counting the number of entries
    is a time-consuming task; and with the latter approach, if the
    number of entries in the indices/xattrop directory is very large, it
    will consume time.

    So as a feature, a new command is implemented.

    Command 1: gluster volume heal vn statistics heal-count
    This command will get the number of entries present in every brick
    of a volume. The output displays only the entries count.

    Command 2: gluster volume heal vn statistics heal-count replica
               192.168.122.1:/home/user/brickname
    Here we are concerned with just one replica, so providing any one
    brick of a replica will get the number of entries to be healed for
    that replica only.

    Example: Replicate volume with replica count 2.

    Backend status:
    --------------
    [root@dhcp-0-17 xattrop]# ls -lia | wc -l
    1918

    NOTE: Out of 1918, 2 entries are <xattrop-gfid> dummy entries, so
    the actual number of entries to be healed is 1916.

    [root@dhcp-0-17 xattrop]# pwd
    /home/user/2ty/.glusterfs/indices/xattrop

    Command output:
    --------------
    Gathering count of entries to be healed on volume volume3 has been successful

    Brick 192.168.122.1:/home/user/22iu
    Status: Brick is Not connected
    Entries count is not available

    Brick 192.168.122.1:/home/user/2ty
    Number of entries: 1916

    Change-Id: I72452f3de50502dc898076ec74d434d9e77fd290
    BUG: 1015990
    Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
    Reviewed-on: http://review.gluster.org/6044
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>

* cluster/afr: Implementation of command "gluster volume heal vn statistics" (Venkatesh Somyajulu, 2013-10-14; 1 file changed, -1/+2)

    The "gluster volume heal volumename statistics" command gives the
    summary of the afr crawl done based on the entries present in the
    xattrop directory. Whenever afr crawls are attempted, the beginning
    time of the crawl, end time of the crawl, number of files healed,
    heal-failed count and number of files in split-brain are shown along
    with the type of the crawl. If a crawl is already in progress then
    it will give the number of files healed, heal-failed count and
    number of files in split-brain from the beginning of the crawl, and
    instead of the end time of the crawl, a "CRAWL IN PROGRESS" message
    will be shown.

    Output format:
    command: "gluster volume heal volume-name statistics"
    Output:
    Gathering afr crawl statistics on volume volume-name has been
    successful
    ------------------------------------------------
    Crawl statistics for brick no 0
    Hostname of brick 192.168.122.248

    Starting time of crawl: Wed Jul 10 15:52:38 2013
    Ending time of crawl: Wed Jul 10 15:52:38 2013
    Type of crawl: INDEX
    No. of entries healed: 0
    No. of entries in split-brain: 0
    No. of heal failed entries: 0

    Starting time of crawl: Wed Jul 10 15:52:38 2013
    Ending time of crawl: Wed Jul 10 15:52:38 2013
    Type of crawl: INDEX
    No. of entries healed: 0
    No. of entries in split-brain: 0
    No. of heal failed entries: 0

    ------------------------------------------------
    Crawl statistics for brick no 1
    Hostname of brick 192.168.122.1

    Starting time of crawl: Wed Jul 10 15:52:42 2013
    Ending time of crawl: Wed Jul 10 15:52:42 2013
    Type of crawl: INDEX
    No. of entries healed: 0
    No. of entries in split-brain: 0
    No. of heal failed entries: 0

    Starting time of crawl: Wed Jul 10 15:52:42 2013
    Ending time of crawl: Wed Jul 10 15:52:42 2013
    Type of crawl: INDEX
    No. of entries healed: 0
    No. of entries in split-brain: 0
    No. of heal failed entries: 0
    --------------------------------------------------

    Change-Id: I10bf9d10b005741db9973fb1352e0dd59ed99aa9
    BUG: 949400
    Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
    Reviewed-on: http://review.gluster.org/4790
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>

* gNFS: Incorrect NFS ACL encoding for XFS (Santosh Kumar Pradhan, 2013-09-29; 1 file changed, -2/+1)

    Problem:
    Incorrect NFS ACL encoding causes "system.posix_acl_default"
    setxattr failure on bricks on an XFS file system. XFS (potentially
    others?) doesn't understand when the 0x10 prefix is added to the ACL
    type field for default ACLs (which the Linux NFS client adds), which
    causes setfacl()->setxattr() to fail silently. The NFS client adds
    NFS_ACL_DEFAULT (0x1000) for the default ACL.

    FIX:
    Mask the prefix (added by the NFS client) off, so the setfacl is not
    rejected when it hits the FS.

    Original patch by: "Richard Wareing"

    Change-Id: I17ad27d84f030cdea8396eb667ee031f0d41b396
    BUG: 1009210
    Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
    Reviewed-on: http://review.gluster.org/5980
    Reviewed-by: Amar Tumballi <amarts@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>

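A minimal sketch of the masking fix described above (the constant value is taken from the commit message; this is not the exact gluster code):

    #include <stdint.h>

    #define NFS_ACL_DEFAULT 0x1000  /* prefix added by the Linux NFS client */

    static inline uint32_t
    nfs_acl_type_for_fs (uint32_t acl_type)
    {
            /* XFS rejects ACL types carrying the NFS-client-only prefix,
             * so strip it before handing the xattr to the brick fs. */
            return acl_type & ~NFS_ACL_DEFAULT;
    }
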
* rpcsvc: allocate large auxgid list on demand (Anand Avati, 2013-09-17; 5 files changed, -1/+41)

    For rpc requests having a large aux group list, allocate the large
    list on demand. Else use a small static array by default. Without
    this patch, glusterfsd allocates 140+ MB of resident memory just to
    get started and initialized.

    Change-Id: I3a07212b0076079cff67cdde18926e8f3b196258
    Signed-off-by: Anand Avati <avati@redhat.com>
    BUG: 953694
    Reviewed-on: http://review.gluster.org/5927
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Amar Tumballi <amarts@redhat.com>

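A sketch of the on-demand allocation pattern the message describes: a small inline array for the common case, heap allocation only for requests with many auxiliary gids. The constant and field names are illustrative, not the actual rpcsvc ones:

    #include <stdlib.h>
    #include <sys/types.h>

    #define SMALL_GROUP_COUNT 128

    typedef struct {
            gid_t  gids_small[SMALL_GROUP_COUNT]; /* static, no allocation */
            gid_t *gids_large;                    /* used only when needed */
            gid_t *gids;                          /* whichever is active */
            int    gid_count;
    } auth_gids;

    static int
    auth_gids_init (auth_gids *a, int count)
    {
            a->gid_count = count;
            if (count <= SMALL_GROUP_COUNT) {
                    a->gids = a->gids_small;
            } else {
                    a->gids_large = calloc (count, sizeof (gid_t));
                    if (!a->gids_large)
                            return -1;
                    a->gids = a->gids_large;
            }
            return 0;
    }
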
* rpc: fix typo which refers glibc macro (Anand Avati, 2013-08-23; 1 file changed, -1/+1)

    A typo which read MAX_AUTH_BYTES instead of GF_MAX_AUTH_BYTES was
    picking the value 400 instead of the larger 2048. This causes
    failures when the number of aux group ids is large.

    Change-Id: Idb8d59aee2690fd53e24c2e09f58a16fe387ef27
    BUG: 1000131
    Signed-off-by: Anand Avati <avati@redhat.com>
    Reviewed-on: http://review.gluster.org/5695
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Amar Tumballi <amarts@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* libglusterfs/client_t: client_t implementation, phase 1 (Kaleb S. KEITHLEY, 2013-07-29; 1 file changed, -2/+2)

    Implementation of client_t.

    The feature page for client_t is at
    http://www.gluster.org/community/documentation/index.php/Planning34/client_t

    In addition to adding libglusterfs/client_t.[ch], it also
    extracts/moves the locktable functionality from
    xlators/protocol/server to libglusterfs, where it is used; thus it
    may now be shared by other xlators too.

    This patch is large as it is. Hooking up the state dump is left to
    do in phase 2 of this patch set.

    (N.B. this change/patch-set supersedes previous change 3689, which
    was corrupted during a rebase. That change will be abandoned.)

    BUG: 849630
    Change-Id: I1433743190630a6d8119a72b81439c0c4c990340
    Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: http://review.gluster.org/3957
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Amar Tumballi <amarts@redhat.com>

* glusterd/cli changes for distributed geo-rep (Avra Sengupta, 2013-07-26; 1 file changed, -0/+2)

    Commands:
    gluster system:: execute gsec_create
    gluster volume geo-rep <master> <slave-url> create [push-pem] [force]
    gluster volume geo-rep <master> <slave-url> start [force]
    gluster volume geo-rep <master> <slave-url> stop [force]
    gluster volume geo-rep <master> <slave-url> delete
    gluster volume geo-rep <master> <slave-url> config
    gluster volume geo-rep <master> <slave-url> status

    The geo-replication is distributed. The session will be created, and
    gsyncd will be spawned on all relevant nodes, instead of only one
    node.

    geo-rep: Collecting status detail related data

    Added persistent store for saving information about
    TotalFilesSynced, TotalSyncTime, TotalBytesSynced.

    Changes in the status information in socket:
    Existing (Ex):
    FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;
    New (Ex):
    FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;SyncTime=0.69978;
    TotalSyncTime=2.890044;TotalFilesSynced=6;TotalBytesSynced=143640;

    Persistent details stored in
    /var/lib/glusterd/geo-replication/${mastervol}/${eSlave}-detail.status

    Change-Id: I1db7fc13ffca2e415c05200b0109b1254067f111
    BUG: 847839
    Original Author: Avra Sengupta <asengupt@redhat.com>
    Original Author: Venky Shankar <vshankar@redhat.com>
    Original Author: Aravinda VK <avishwan@redhat.com>
    Original Author: Amar Tumballi <amarts@redhat.com>
    Original Author: Csaba Henk <csaba@redhat.com>
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/5132
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    Tested-by: Vijay Bellur <vbellur@redhat.com>

* nfs/auth: reject mounts if getaddrinfo fails (Rajesh Amaravathi, 2013-06-26; 1 file changed, -3/+48)

    When nfs.addr-namelookup is turned on, if the getaddrinfo call fails
    while authenticating the client's ip/hostname, the mount request is
    denied.

    Change-Id: I744f1c6b9c7aae91b9363bba6c6987b42f7f0cc9
    BUG: 947055
    Signed-off-by: Rajesh Amaravathi <rajesh@redhat.com>
    Reviewed-on: http://review.gluster.org/5143
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>

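A self-contained sketch of the check described above: resolve the client's hostname with getaddrinfo() and treat any failure as an authentication error (the deny/allow return convention here is illustrative):

    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>

    static int
    auth_by_hostname (const char *client_hostname)
    {
            struct addrinfo hints, *res = NULL;
            int             ret;

            memset (&hints, 0, sizeof (hints));
            hints.ai_family = AF_UNSPEC;

            ret = getaddrinfo (client_hostname, NULL, &hints, &res);
            if (ret != 0) {
                    fprintf (stderr, "lookup of %s failed: %s\n",
                             client_hostname, gai_strerror (ret));
                    return -1;   /* deny the mount */
            }

            freeaddrinfo (res);
            return 0;            /* proceed with further auth checks */
    }
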
* rpc: duplicate request cache for nfs (Rajesh Amaravathi, 2013-06-21; 7 files changed, -25/+1053)

    The duplicate request cache provides a mechanism for detecting
    duplicate rpc requests from clients. DRC caches replies and, on
    duplicate requests, sends the cached reply instead of re-processing
    the request.

    Change-Id: I3d62a6c4aa86c92bf61f1038ca62a1a46bf1c303
    BUG: 847624
    Signed-off-by: Rajesh Amaravathi <rajesh@redhat.com>
    Reviewed-on: http://review.gluster.org/4049
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>

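An illustrative sketch of the DRC idea: key the cache on the client address plus RPC XID and replay the stored reply on a hit. The structures here are hypothetical, not the actual rpc-lib ones:

    #include <string.h>
    #include <stdint.h>

    #define DRC_SIZE 1024

    typedef struct {
            char     client[64];   /* client address, as a string */
            uint32_t xid;          /* RPC transaction id */
            void    *reply;        /* cached wire-format reply */
            size_t   reply_len;
    } drc_entry;

    static drc_entry drc_cache[DRC_SIZE];

    static drc_entry *
    drc_lookup (const char *client, uint32_t xid)
    {
            drc_entry *e = &drc_cache[xid % DRC_SIZE];

            if (e->reply && e->xid == xid &&
                strncmp (e->client, client, sizeof (e->client)) == 0)
                    return e;      /* duplicate: resend e->reply */
            return NULL;           /* new request: process and cache */
    }
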
* protocol/rpc: move latest added procedures to the end of the array (Niels de Vos, 2013-06-17; 1 file changed, -2/+2)

    While looking at the newly introduced procedures FALLOCATE and
    DISCARD, it seems that these were added with already existing
    procedure numbers. This makes the protocol incompatible with
    existing roll-outs.

    It is very confusing when new procedures are added somewhere in the
    middle of the array. This will cause the numbers of existing
    procedures to change. It is much preferred to add new procedures at
    the end of the array.

    This change not only corrects the enum that generates the procedure
    numbers, but also the ordering in the client and server fops-arrays
    for clarity.

    Correcting this greatly simplifies adding support for these new
    procedures in Wireshark and will prevent confusion for the people
    reading network traces (with or without Wireshark).

    Change-Id: Ib9e7978531d016c7230d756b855cb94cb0793b0f
    BUG: 974976
    Signed-off-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-on: http://review.gluster.org/5215
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Brian Foster <bfoster@redhat.com>
    Reviewed-by: Anand Avati <avati@redhat.com>

* rpc: Cleanup rpc object in TRANSPORT_CLEANUP event (Krishnan Parthasarathi, 2013-06-15; 4 files changed, -26/+43)

    The rpc_transport object should be alive as long as the rpc_clnt
    object is alive. To ensure this, on rpc_clnt's last unref, we clean
    up the corresponding rpc_transport object and complete the rpc_clnt
    cleanup later, in a bottom-up fashion.

    Introduced rpc_clnt_is_disabled, to allow higher layers to
    differentiate between the 'final'[1] disconnect triggered from upper
    layers and a normal disconnect. This differentiation helps in
    cleaning up resources, at higher layers, in a race-free manner.

    [1] - 'final' here means that the rpc and the associated connection
    are not going to be used anymore. e.g. glusterd_brick_disconnect on
    volume-stop.

    Change-Id: I2ecf891a36e3b02cd9eacca964e659525d1bbc6e
    BUG: 962619
    Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-on: http://review.gluster.org/5107
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>

* glusterfs: discard (hole punch) support (Brian Foster, 2013-06-13; 1 file changed, -0/+1)

    Add support for the DISCARD file operation. Discard punches a hole
    in a file in the provided range. Block de-allocation is implemented
    via fallocate() (as requested via fuse and passed on to the brick
    fs), but a separate fop is created within gluster to emphasize the
    fact that discard changes file data (the discarded region is
    replaced with zeroes) and must invalidate caches where appropriate.

    BUG: 963678
    Change-Id: I34633a0bfff2187afeab4292a15f3cc9adf261af
    Signed-off-by: Brian Foster <bfoster@redhat.com>
    Reviewed-on: http://review.gluster.org/5090
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>

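For reference, the brick-side mechanism the message names, hole punching via fallocate(2), in a minimal standalone form (FALLOC_FL_PUNCH_HOLE must be combined with FALLOC_FL_KEEP_SIZE; reads from the punched range then return zeroes):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <linux/falloc.h>

    int
    punch_hole (int fd, off_t offset, off_t len)
    {
            return fallocate (fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                              offset, len);
    }
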
* gluster: add fallocate fop support (Brian Foster, 2013-06-13; 1 file changed, -0/+1)

    Implement support for the fallocate file operation. fallocate
    allocates blocks for a particular inode such that future writes to
    the associated region of the file are guaranteed not to fail with
    ENOSPC.

    This patch adds fallocate support to the following areas:
    - libglusterfs
    - mount/fuse
    - io-stats
    - performance/md-cache, open-behind
    - quota
    - cluster/afr, dht, stripe
    - rpc/xdr
    - protocol/client, server
    - io-threads
    - marker
    - storage/posix
    - libgfapi

    BUG: 949242
    Change-Id: Ice8e61351f9d6115c5df68768bc844abbf0ce8bd
    Signed-off-by: Brian Foster <bfoster@redhat.com>
    Reviewed-on: http://review.gluster.org/4969
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>

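As with discard above, a one-line illustration of the likely brick-side call (assuming storage/posix maps the fop to fallocate(2); mode 0 requests plain preallocation):

    #define _GNU_SOURCE
    #include <fcntl.h>

    int
    preallocate (int fd, off_t offset, off_t len)
    {
            /* reserve blocks so writes in [offset, offset+len)
             * cannot fail with ENOSPC */
            return fallocate (fd, 0, offset, len);
    }
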
* glusterd: Add a cmd for getting uuid of local node (Krishnan Parthasarathi, 2013-06-10; 1 file changed, -0/+1)

    Usage: gluster system:: uuid get

    This is needed since we generate the uuid of a node in a lazy
    manner, i.e., we generate a uuid for the node only on the first
    volume or peer operation, when the node needs an external identity.
    With this command, we can force[1] the uuid generation, without a
    volume or peer operation performed.

    [1]: Querying for uuid (or uuid get) forces the uuid to come into
    existence.

    Change-Id: I62c8b6754117756aa4d773dd48af4ddeb1a1d878
    BUG: 971661
    Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-on: http://review.gluster.org/5175
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>

* rpc-transport: Moved unix socket options function to rpc-transport (Krishnan Parthasarathi, 2013-05-16; 4 files changed, -60/+61)

    This change removes the asymmetry in the 'layer' (read rpc,
    transport etc.) in which transport options were being filled for
    inet and unix sockets.

    Change-Id: Iaa080691fd5e4c3baedffa97e9c3f16642c1fc12
    BUG: 955919
    Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-on: http://review.gluster.org/4850
    Reviewed-by: Raghavendra G <raghavendra@gluster.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>

* rpcsvc: fix dangerous setting of pointer on free'd structure (Anand Avati, 2013-05-13; 1 file changed, -1/+0)

    The current code sets @req->hdr_iobuf = NULL _after_ calling
    actor_fn() on @req. Calling actor_fn() takes away all guarantees of
    whether @req is still a valid object or destroyed. Unfortunately,
    most of the time the object is allocated from a mem-pool, and a
    mem_put() still keeps the arena allocated (no crash). However, once
    the mem-pool is full and allocation falls back to malloc()/free(),
    the code actually becomes dangerous. This resulted in random crashes
    when the system load is high (when there were sufficient outstanding
    calls that the @rpc pool got full).

    Change-Id: I4398c717aa0e2c5f06733212b64dd79e7b2a4136
    BUG: 884452
    Signed-off-by: Anand Avati <avati@redhat.com>
    Reviewed-on: http://review.gluster.org/4990
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>

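A distilled illustration of the bug class described above: once a callback may free its argument, the caller must not touch the object afterwards (actor_fn and the request type are stand-ins, not the actual rpcsvc code):

    #include <stddef.h>

    struct request { void *hdr_iobuf; };

    static void
    handle (struct request *req, void (*actor_fn) (struct request *))
    {
            req->hdr_iobuf = NULL;  /* FIX: clear our reference first ... */
            actor_fn (req);         /* ... because the actor may free req; */
                                    /* touching req after this call would  */
                                    /* be a use-after-free.                */
    }
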
* rpc-lib: fix printf args when printing XID (Michael Brown, 2013-05-06; 2 files changed, -7/+7)

    * Prior to this change, the XID is sometimes logged with the wrong
      format string
    * Incorrect (0x%ux): generates output of "XID: 0x1920499352x"
    * Correct (0x%x): generates output of "XID: 0x72787e98"

    Change-Id: Id60b673a4356a4815cdb67303612181ac5624fe3
    BUG: 960153
    Signed-off-by: Michael Brown <michael@netdirect.ca>
    Reviewed-on: http://review.gluster.org/4949
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>

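Why "0x%ux" is wrong: %u prints the XID in decimal and the trailing 'x' is emitted literally, so 0x72787e98 (= 1920499352 decimal) comes out as "0x1920499352x":

    #include <stdio.h>

    int
    main (void)
    {
            unsigned int xid = 0x72787e98;

            printf ("XID: 0x%ux\n", xid);  /* wrong: XID: 0x1920499352x */
            printf ("XID: 0x%x\n",  xid);  /* right: XID: 0x72787e98 */
            return 0;
    }
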
* Revert "glusterd: Fix spurious wakeups in glusterd syncops"Krishnan Parthasarathi2013-05-042-38/+7
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | This reverts commit efa154bb0a4cac34d5a9610ec25d38eebe495f22. -- Following is Avati's analysis (edited) from gerrit -- The claim of the patch (being reverted) is that it in some cases cbkfn is missed. This is wrong analysis. cbk_fn is _always_ called. The patch treats ret > 0 as a "missed cbk". ret > 0 only means socket submission was not complete, and is queued to submit asynchronously when POLLOUT is raised. This is sufficient to guarantee that cbkfn is going to be called (either the socket errors or submission succeeds and reply eventually arrives). This commit also removes spurious barrier_wake(s). call backs are guaranteed to be called even if the transport is disconnected. This means, a 'wake' would be called if rpc_clnt_submit is called. Also, we count both successful and failed operations in a particular batch of operations for the synctask_barrier_wait. So, calling synctask_barrier_wake on failure of rpc_clnt_submit (say, due to network failure) would result in a spurious wake. Change-Id: I7d508c2a54b74a65b82f097742206bc777afc53a BUG: 948686 Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-on: http://review.gluster.org/4922 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: Fixed spurious wakeups in glusterd syncops (Krishnan Parthasarathi, 2013-04-12; 2 files changed, -7/+38)

    glusterd syncops perform a barrier_wake whenever rpc_clnt_submit
    returns -1. This is based on the wrong assumption that the cbkfn
    wasn't called. This would result in one more wakeup than there ought
    to be.

    Change-Id: I591e67c267f0e26d1145bf8fb5feeb2c13a751a1
    BUG: 948686
    Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-on: http://review.gluster.org/4802
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>

* rpc-lib: Move defaulting to socket message to debug level (Raghavendra Talur, 2013-04-02; 1 file changed, -1/+1)

    Problem:
    For every gluster cli operation from the command line, the rpc init
    process is required. During the init process we print the "no
    transport type set, defaulting to socket" message at WARNING level,
    which is not necessary.

    Solution:
    Moved the log level to DEBUG.

    Change-Id: I73f4644264368b0f6c11a77ef66595018784ce79
    BUG: 928204
    Signed-off-by: Raghavendra Talur <rtalur@redhat.com>
    Reviewed-on: http://review.gluster.org/4727
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>

* rpc/nfs: cleanup legacy code of general options (Rajesh Amaravathi, 2013-04-02; 3 files changed, -452/+132)

    Removing the code which handles "general" options. Since it is no
    longer possible to set general options which apply for all volumes
    by default, this was redundant.

    This cleanup of general options code also solves a bug wherein, with
    nfs.addr-namelookup on, nfs.rpc-auth-reject wouldn't work on ip
    addresses.

    Change-Id: Iba066e32f9a0255287c322ef85ad1d04b325d739
    BUG: 921072
    Signed-off-by: Rajesh Amaravathi <rajesh@redhat.com>
    Reviewed-on: http://review.gluster.org/4691
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>

* rpc: disable root-squash dynamically upon volume set command (Raghavendra Bhat, 2013-04-01; 1 file changed, -2/+7)

    Change-Id: I2ba9ca339ffbe07cb74833165a46a941225b623d
    BUG: 927616
    Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
    Reviewed-on: http://review.gluster.org/4722
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* rpc-transport: fix glusterd crash when rdma.so missing (Rajesh Amaravathi, 2013-03-21; 1 file changed, -2/+4)

    Add checks before trying to delete vol_opt from the list and free
    it.

    Change-Id: I2858f58518394beb8f74fa477be81d7bdd38304f
    BUG: 924215
    Signed-off-by: Rajesh Amaravathi <rajesh@redhat.com>
    Reviewed-on: http://review.gluster.org/4704
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>

* rpc: before freeing the volume options object, delete it from the list (Raghavendra Bhat, 2013-03-20; 1 file changed, -6/+6)

    * Suppose there is an xlator option which is considered by the
      xlator only if the source was built with debug mode enabled (the
      only example in the current code base is the run-with-valgrind
      option for glusterd); then giving that option would make the
      process crash if the source was not built with debug mode enabled.

      Reason:
      In rpc, after getting the options symbol dynamically, it was
      stored in the newly allocated volume options structure, and the
      structure's list head was added to the xlator's volume_options
      list. But while freeing the structure the list was not deleted.
      Thus when the list was traversed, the already freed structure was
      accessed, leading to a segfault.

    Change-Id: I3e9e51dd2099e34b206199eae7ba44d9d88a86ad
    BUG: 922877
    Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
    Reviewed-on: http://review.gluster.org/4687
    Reviewed-by: Amar Tumballi <amarts@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>

* rpc: change dict key for fqdn (Rajesh Amaravathi, 2013-02-18; 1 file changed, -2/+2)

    Changed the key from "client.fqdn", which could be wrongly construed
    as belonging to protocol/client, to "fqdn".

    Change-Id: Ib5f4a875a00b99cd903a29da19bafeb70baaab4e
    BUG: 906119
    Signed-off-by: Rajesh Amaravathi <rajesh@redhat.com>
    Reviewed-on: http://review.gluster.org/4536
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>

* rpc: bring in root-squashing behavior in rpc (Raghavendra Bhat, 2013-02-17; 4 files changed, -2/+44)

    * Requests coming in as root are converted to nfsnobody.
    * With open-behind some acl checks won't happen and nfsnobody can
      read the file "whose owner is root and other users do not have
      permission to read the file". This is because open-behind does not
      send the open to the brick and sends success to the application;
      thus the acl related tests on the file won't happen, which would
      have prevented the file from being opened.

    Change-Id: I73afbfd904f0beb3a2ebe807b938ac2fecd4976b
    BUG: 887145
    Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
    Reviewed-on: http://review.gluster.org/4516
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>

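A minimal sketch of the squashing rule described above: map uid/gid 0 in an incoming request to the anonymous nfsnobody ids (65534 is the conventional value, used here as an assumption; the actual ids are not spelled out in this message):

    #include <sys/types.h>

    #define NFSNOBODY_UID 65534
    #define NFSNOBODY_GID 65534

    static void
    root_squash (uid_t *uid, gid_t *gid)
    {
            if (*uid == 0)
                    *uid = NFSNOBODY_UID;
            if (*gid == 0)
                    *gid = NFSNOBODY_GID;
    }
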
* rpc: get hostnames of client to allow FQDN based authentication (Rajesh Amaravathi, 2013-02-06; 1 file changed, -0/+20)

    If FQDNs are used to authenticate clients, then from this commit
    forth, the client ip (v4, v6) is reverse looked up using getnameinfo
    to get a hostname associated with it, if any, thereby making
    FQDN-based rpc authentication possible.

    Change-Id: I4c5241e7079a2560de79ca15f611e65c0b858f9b
    BUG: 903553
    Signed-off-by: Rajesh Amaravathi <rajesh@redhat.com>
    Reviewed-on: http://review.gluster.org/4439
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>

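A self-contained example of the reverse lookup described above: getnameinfo() turns the client's socket address into a hostname, which can then be matched against FQDN-based auth rules (the address here is just a placeholder):

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <netdb.h>

    int
    main (void)
    {
            struct sockaddr_in sa;
            char               host[NI_MAXHOST];

            memset (&sa, 0, sizeof (sa));
            sa.sin_family = AF_INET;
            inet_pton (AF_INET, "8.8.8.8", &sa.sin_addr);

            /* NI_NAMEREQD: fail instead of returning the numeric address */
            if (getnameinfo ((struct sockaddr *) &sa, sizeof (sa),
                             host, sizeof (host), NULL, 0, NI_NAMEREQD) == 0)
                    printf ("client resolves to %s\n", host);
            else
                    printf ("no hostname for this address\n");
            return 0;
    }
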
* nfs/nlm: use req's uid and gid for open_and_resume (Rajesh Amaravathi, 2013-02-06; 1 file changed, -2/+0)

    Previously, NLM was setting frame->root->{uid,gid} to root by
    default. This causes permission problems with root squashing for
    lock calls. Now, we obtain the uid and gid from the rpc request.
    Duplicate #defines are also removed from rpcsvc.h.

    Change-Id: I5d6c87aed8d04aab2619bb913408048c0a02d1e7
    BUG: 906884
    Signed-off-by: Rajesh Amaravathi <rajesh@redhat.com>
    Reviewed-on: http://review.gluster.org/4466
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>

* rpcsvc/dump: return correct return value (Raghavendra G, 2013-02-06; 1 file changed, -0/+1)

    Fix e8d09b9ab9 resulted in rpcsvc_dump returning large non-zero
    values to the caller in case of success. This patch fixes that issue
    by returning 0.

    Change-Id: I594703dada74da17b33c98b073627a3378d170c5
    BUG: 903113
    Signed-off-by: Raghavendra G <raghavendra@gluster.com>
    Reviewed-on: http://review.gluster.org/4477
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>

* glusterd: Use client-op-versions during "volume set" (Kaushal M, 2013-02-06; 1 file changed, -0/+5)

    The supported op-versions of the client and the name of the
    requested volume are saved during server_getspec(). These are used
    during the staging of volume set. If the option being set is not
    supported by any of the clients which currently have the volume
    mounted, then set will fail.

    Change-Id: I4e6b60b274d5200508762dc0204cfa848a6c0aa4
    BUG: 907311
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/4424
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Amar Tumballi <amarts@redhat.com>
    Reviewed-by: Anand Avati <avati@redhat.com>

* rpcsvc: Fix memory corruption caused by rpcsvc_dump returning non-zero (Raghavendra G, 2013-02-05; 1 file changed, -10/+8)

    The convention followed is that any actor should return a non-zero
    value only if it has not attempted to send the reply back. If an
    actor returns non-zero, rpcsvc_handle_rpc_call tries to send an
    error reply. Since rpcsvc_submit_generic frees the rpc_req, it is
    wrong to invoke it more than once on the same rpc_req.

    When the transport is not connected, rpcsvc_dump used to pass the
    non-zero value it got from the transport to rpcsvc, resulting in
    memory corruption. Hence this patch makes rpcsvc_dump return 0.

    Change-Id: I1b6f28969ee546c44d193d3d33debccb65585b69
    BUG: 903113
    Signed-off-by: Raghavendra G <raghavendra@gluster.com>
    Reviewed-on: http://review.gluster.org/4183
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>

* "gcc -pedantic": made 'inline' functions as 'static inline' functionsAmar Tumballi2013-01-233-7/+7
| | | | | | | | | | | | for passing the build with -pedantic flag Change-Id: I80fd9528321e4c6ea5bec32bf5cdc54cc4e4f65e BUG: 875913 Signed-off-by: Amar Tumballi <amarts@redhat.com> Reviewed-on: http://review.gluster.org/4186 Reviewed-by: Jeff Darcy <jdarcy@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* core: fixes for gcc's '-pedantic' flag build (Avra Sengupta, 2013-01-21; 2 files changed, -4/+4)

    * warnings on 'void *' arguments
    * warnings on empty initializations
    * warnings on empty arrays (array[0])

    Change-Id: Iae440f54cbd59580eb69f3ecaed5a9926c0edf95
    BUG: 875913
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/4219
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>

* rpcsvc: Add an extra log message (Kaushal M, 2012-12-10; 1 file changed, -0/+4)

    Adds an extra log message in rpcsvc_check_and_reply_error(). Also
    makes a small change in the test script.

    Change-Id: I2b686e6fa86529cc4fbf0066df64057a9784b849
    BUG: 884452
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/4285
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>

* rpcsvc: do an 'iobuf_ref()' on buffer while synctaskizing actor (Kaushal M, 2012-12-09; 2 files changed, -13/+23)

    Starting rpc actors using synctask causes rpcsvc to return before
    the actor actually finishes its actions. This would cause the buffer
    to be unreffed by socket and possibly reused, before the actor used
    it, leading to failures in the actor.

    This patch makes rpcsvc take a ref on the buffer when synctaskizing
    the actor and store it in the request structure, and then later
    unref it when the request structure is destroyed. This makes sure
    that the buffer is not reused before the actor has finished.

    Change-Id: I8f81e1fef8f3719db220471d2d8ffb8916958956
    BUG: 884452
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/4277
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>

* Revert "glusterd: add "volume label" command"Anand Avati2012-12-041-1/+0
| | | | | | | | | | | This reverts commit dad16a51ba7e6b1c57529423c57257dbce97ee93 Test script causing "silent" failures during execution. Change-Id: I26dbb8ed22256071cb415cc3aff572ef8372600e Reviewed-on: http://review.gluster.org/4268 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: add "volume label" commandJeff Darcy2012-12-041-0/+1
| | | | | | | | | | | | | | | | | This command is necessary when the local disk/filesystem containing a brick is unexpectedly lost and then recreated. Since 961bc80c, trying to start the brick will fail because the trusted.glusterfs.volume-id xattr is missing, and if we can't start it then we can't replace-brick or self-heal so we're stuck in a permanently degraded state. This command provides a way to label the empty brick with the proper volume ID so that further repair actions become possible. Change-Id: I1c1e5273a018b7a6b8d0852daf111ddc3fddfdc2 BUG: 860297 Signed-off-by: Jeff Darcy <jdarcy@redhat.com> Reviewed-on: http://review.gluster.org/4259 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Anand Avati <avati@redhat.com>
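For context, a sketch of what "labeling" a brick amounts to: writing the volume's uuid into the trusted.glusterfs.volume-id xattr on the brick root (shown here as a raw setxattr; the actual command does this through glusterd, and this helper is an illustration, not the patch's code):

    #include <stdint.h>
    #include <sys/xattr.h>

    int
    label_brick (const char *brick_path, const uint8_t volume_id[16])
    {
            /* the xattr glusterd checks before starting a brick */
            return setxattr (brick_path, "trusted.glusterfs.volume-id",
                             volume_id, 16, 0);
    }
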
* core: remove ref/unref while unwinding frames (Rajesh Amaravathi, 2012-11-30; 1 file changed, -4/+0)

    Change-Id: Ib196ffdf8122a9510cc7c5953303a6e730091302
    BUG: 853373
    Signed-off-by: Rajesh Amaravathi <rajesh@redhat.com>
    Reviewed-on: http://review.gluster.org/4062
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>

* BD Backend: CLI to create a full/linked clone of an image (M. Mohan Kumar, 2012-11-29; 1 file changed, -0/+2)

    A new CLI command added to support cloning/snapshotting of an LV
    device.

    Syntax is:
    $ gluster bd clone <volname>:<vg>/<lv> <newlv>
    $ gluster bd snapshot <volname>:<vg>/<lv> <snap_lv> <size>

    BUG: 805138
    Change-Id: Idc2ac14525a3998329c742bf85a06326cac8cd54
    Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
    Reviewed-on: http://review.gluster.org/3719
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* BD Backend: CLI commands to create/delete image (M. Mohan Kumar, 2012-11-29; 1 file changed, -0/+8)

    CLI commands added to create/delete an LV device.

    The following command creates an lv in a given vg:
    $ gluster bd create <volname>:<vgname>/<lvname> <size>

    The following command deletes an lv in a given vg:
    $ gluster bd delete <volname>:<vgname>/<lvname>

    BUG: 805138
    Change-Id: Ie4e100eca14e2ee32cf2bb4dd064b17230d673bf
    Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
    Reviewed-on: http://review.gluster.org/3718
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* glusterd, cli: implement gluster system uuid reset command (Raghavendra Bhat, 2012-11-28; 1 file changed, -0/+1)

    A commonly faced problem among glusterfs users is: after a fresh
    installation of glusterfs in a virtual machine, the VM image is
    cloned to make multiple instances of the server. This breaks
    glusterd, because right after glusterfs installation, on the first
    boot, glusterd would have created the node UUID, and this gets
    inherited into the clone. The result is weird behavior at the time
    of peer probe, where glusterd does not (yet) deal with UUID
    collisions in a user-friendly way. To handle it, the gluster peer
    reset command is implemented, which upon execution changes the uuid
    of the local glusterd.

    Change-Id: If207dd2ad93ab94ef1a3253f409c21c442975f87
    BUG: 811493
    Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
    Reviewed-on: http://review.gluster.org/3637
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>