path: root/xlators/protocol/server/src/server-helpers.h
Commit message (Author, Age, Files, Lines changed)
* gluster: GlusterFS Volume Snapshot Feature (Avra Sengupta, 2014-04-11, 1 file, -0/+15)

    This is the initial patch for the Snapshot feature. The current patch
    includes the following features:
        * Snapshot create
        * Snapshot delete
        * Snapshot restore
        * Snapshot list
        * Snapshot info
        * Snapshot status
        * Snapshot config

    Change-Id: I2f46920c0d61c515f6a60e0f8b46fff886d9f6a9
    BUG: 1061685
    Signed-off-by: shishir gowda <sgowda@redhat.com>
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Signed-off-by: Vijaikumar M <vmallika@redhat.com>
    Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
    Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
    Signed-off-by: Joseph Fernandes <josferna@redhat.com>
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/7128
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* protocol/server: do not do root-squashing for trusted clients (Raghavendra Bhat, 2014-02-10, 1 file, -0/+2)

    * As of now, clients mounting within the storage pool using that
      machine's ip/hostname are trusted clients (i.e. clients local to
      glusterd).
    * Be careful when the request itself comes in as nfsnobody (e.g. posix
      tests). So move the squashing part to protocol/server, when it
      creates a new frame for the request, instead of the auth part of the
      rpc layer.
    * For nfs servers, do root-squashing without checking whether it is a
      trusted client, as all the nfs servers run within the storage pool
      and hence are trusted clients for the bricks.
    * Provide one more mount option which says whether root-squash should
      or should not happen. This value is given priority only for trusted
      clients. For non-trusted clients, the volume option takes priority.
      But for trusted clients, if root-squash should not happen, they have
      to be mounted with the root-squash=no option. (This is done because
      blocking root-squashing for trusted clients by default would cause
      problems for smb and UFO clients, whose requests have to be squashed
      if the option is enabled.)
    * For geo-replication and defrag clients, do not do root-squashing.
    * Introduce a new option in open-behind for doing a read after a
      successful open.

    Change-Id: I8a8359840313dffc34824f3ea80a9c48375067f0
    BUG: 954057
    Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
    Reviewed-on: http://review.gluster.org/4863
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>

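A minimal sketch of the idea described in that commit message, assuming hypothetical names (maybe_root_squash, SQUASH_UID/GID and the creds struct are illustrative, not the actual GlusterFS symbols): when protocol/server builds the call frame for a request, root credentials are rewritten only when the client is not trusted.

    /* Hedged sketch: squash root credentials while creating the call
     * frame for a request, instead of in the rpc auth layer.  All names
     * are illustrative, not the real GlusterFS symbols. */

    #include <sys/types.h>

    #define SQUASH_UID 65534   /* nfsnobody */
    #define SQUASH_GID 65534

    struct call_frame_creds {
        uid_t uid;
        gid_t gid;
    };

    static void
    maybe_root_squash(struct call_frame_creds *creds,
                      int root_squash_enabled, int client_is_trusted)
    {
        /* Trusted clients (mounted from within the storage pool) are left
         * alone unless they explicitly asked to be squashed. */
        if (!root_squash_enabled || client_is_trusted)
            return;

        /* Only root needs rewriting; requests already arriving as
         * nfsnobody pass through untouched. */
        if (creds->uid == 0)
            creds->uid = SQUASH_UID;
        if (creds->gid == 0)
            creds->gid = SQUASH_GID;
    }
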
* client_t: phase 2, refactor server_ctx and locks_ctx out (Kaleb S. KEITHLEY, 2013-10-31, 1 file, -2/+2)

    Remove server_ctx and locks_ctx from client_ctx and store them as
    discrete entities in the scratch_ctx. Hooking up the state dump will be
    done in phase 3.

    BUG: 849630
    Change-Id: I94cea328326db236cdfdf306cb381e4d58f58d4c
    Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: http://review.gluster.org/5678
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>

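A rough sketch of the "discrete entities in the scratch_ctx" idea, under stated assumptions: the table below is a simplified per-client context keyed by xlator pointer, so server_ctx and locks_ctx live as separate slots rather than as fields embedded in client_ctx. scratch_ctx_set/get are hypothetical names, not the real client_t API.

    /* Hedged sketch of a per-client scratch context, one slot per
     * xlator.  Illustrative only. */

    #include <stddef.h>

    #define SCRATCH_SLOTS 16

    struct scratch_ctx {
        void *keys[SCRATCH_SLOTS];    /* owning xlator, used as key */
        void *values[SCRATCH_SLOTS];  /* per-xlator private data    */
    };

    static int
    scratch_ctx_set(struct scratch_ctx *ctx, void *xlator, void *value)
    {
        for (int i = 0; i < SCRATCH_SLOTS; i++) {
            if (ctx->keys[i] == NULL || ctx->keys[i] == xlator) {
                ctx->keys[i] = xlator;
                ctx->values[i] = value;
                return 0;
            }
        }
        return -1;  /* table full */
    }

    static void *
    scratch_ctx_get(struct scratch_ctx *ctx, void *xlator)
    {
        for (int i = 0; i < SCRATCH_SLOTS; i++)
            if (ctx->keys[i] == xlator)
                return ctx->values[i];
        return NULL;
    }
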
* libglusterfs/client_t client_t implementation, phase 1 (Kaleb S. KEITHLEY, 2013-07-29, 1 file, -32/+6)

    Implementation of client_t. The feature page for client_t is at
    http://www.gluster.org/community/documentation/index.php/Planning34/client_t

    In addition to adding libglusterfs/client_t.[ch], it also extracts/moves
    the locktable functionality from xlators/protocol/server to
    libglusterfs, where it is used; thus it may now be shared by other
    xlators too.

    This patch is large as it is. Hooking up the state dump is left to do
    in phase 2 of this patch set.

    (N.B. this change/patch-set supersedes previous change 3689, which was
    corrupted during a rebase. That change will be abandoned.)

    BUG: 849630
    Change-Id: I1433743190630a6d8119a72b81439c0c4c990340
    Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: http://review.gluster.org/3957
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Amar Tumballi <amarts@redhat.com>

* license: xlators/protocol/server dual license GPLv2 and LGPLv3+ (Kaleb S. KEITHLEY, 2013-04-12, 1 file, -14/+5)

    BUG: 951549
    Change-Id: I3de5bd86d4238a60a0a85ba2e15d9c131969b210
    Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: http://review.gluster.org/4816
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>

* protocol/server: Avoid race in add/del locker, connection_cleanup (Pranith Kumar K, 2012-03-17, 1 file, -2/+2)

    The conn->ltable address keeps changing every time
    server_connection_cleanup is called, i.e. a new ltable is created on
    each call. Here is the race that happened:

      thread-1: add_locker is called with conn->ltable; call this ltable
                address lt1.
      thread-2: connection cleanup is called and do_lock_table_cleanup is
                triggered for lt1; the locker lists are splice_inited
                under lt1->lock.
      thread-1: adds the locker (call it l1) under lt1->lock.
      thread-2: GF_FREE(lt1) happens in do_lock_table_cleanup.

    The locker l1 that was added just before lt1 was freed will never be
    cleared in subsequent server_connection_cleanups, as no reference to
    the locker exists any more. The stale lock remains in the locks xlator
    even though the transport on which it was issued is destroyed.

    Change-Id: I0a02f16c703d1e7598b083aa1057cda9624eb3fe
    BUG: 787601
    Signed-off-by: Pranith Kumar K <pranithk@gluster.com>
    Reviewed-on: http://review.gluster.com/2957
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
    Reviewed-by: Amar Tumballi <amarts@redhat.com>
    Reviewed-by: Anand Avati <avati@redhat.com>

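One way to close this race, sketched with hypothetical names (not necessarily how the actual patch does it): guard the ltable pointer itself with a connection-level lock, so a locker is either added to the old table before cleanup swaps it out (and is freed with it) or lands in the fresh table.

    /* Hedged sketch of the add_locker / connection_cleanup race fix.
     * conn->lock guards the ltable pointer; cleanup swaps in a new table
     * under the same lock before releasing the old one.  Illustrative
     * only. */

    #include <pthread.h>
    #include <stdlib.h>

    struct lock_table { int placeholder; /* ... locker lists ... */ };

    struct connection {
        pthread_mutex_t    lock;     /* guards the ltable pointer */
        struct lock_table *ltable;
    };

    static void
    add_locker(struct connection *conn /*, locker arguments */)
    {
        pthread_mutex_lock(&conn->lock);
        /* enqueue the locker on conn->ltable here */
        pthread_mutex_unlock(&conn->lock);
    }

    static struct lock_table *
    connection_cleanup_swap(struct connection *conn)
    {
        struct lock_table *old = NULL;

        pthread_mutex_lock(&conn->lock);
        old = conn->ltable;
        conn->ltable = calloc(1, sizeof(*conn->ltable));
        pthread_mutex_unlock(&conn->lock);

        return old;   /* caller releases old's lockers, then frees it */
    }
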
* protocol/server: Make conn object ref-counted (Pranith Kumar K, 2012-03-01, 1 file, -1/+1)

    Change-Id: I992a7f8a75edfe7d75afaa1abe0ad45e8f351c8b
    BUG: 796581
    Signed-off-by: Pranith Kumar K <pranithk@gluster.com>
    Reviewed-on: http://review.gluster.com/2806
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vijay@gluster.com>

* protocol/client,server: fcntl lock self healing (Mohammed Junaid, 2012-02-20, 1 file, -0/+6)

    Currently (without this patch), on a disconnect the server cleans up
    the transport, which in turn closes the fds and releases the locks
    acquired on those fds by that client. On a reconnect, the client just
    reopens the fds but does not reacquire the locks. The application that
    had previously acquired the locks still assumes it owns them, while
    they might have been granted to other clients (if they request them) by
    the server, leading to data corruption.

    This patch allows the client to reacquire the fcntl locks (held on the
    fds) during the client-server handshake.

    * The server identifies the client via process-uuid-xl (a combination
      of uuid and client-protocol name, assumed to be unique) and the
      lk-version number.
    * The client maintains a list of (process-uuid-xl, lk-version) pairs
      for each accepted connection. On a connect, the server traverses the
      list for a matching pair; if a matching pair is not found, the server
      returns lk-version with value 0, else it returns the lk-version it
      has in store.
    * On a disconnect, the server and client enter a grace period. On
      completion of the grace period, the client bumps up its lk-version
      number (which means it will reacquire the locks the next time) and
      the server will destroy the connection. If reconnection happens
      within the grace period, the server will find the matching
      (process-uuid-xl, lk-version) pair in its list, which guarantees that
      the fds and their corresponding locks are still valid for this
      client.

    Configurable options. To set the grace timeout:

        option server.grace-timeout value
        option client.grace-timeout value

    To enable or disable lk-heal:

        option lk-heal [on|off]

    The gluster volume set command can be used to set these options.

    Change-Id: Id677ef1087b300d649f278b8b2aa0d94eae85ed2
    BUG: 795386
    Signed-off-by: Mohammed Junaid <junaid@redhat.com>
    Reviewed-on: http://review.gluster.com/2766
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vijay@gluster.com>

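A minimal sketch of the server-side handshake check described above, with hypothetical types and names (client_lk_entry and server_lookup_lk_version are illustrative): look up the connecting client's (process-uuid-xl, lk-version) pair; return 0 if it is unknown, so the client knows it must reacquire its locks, otherwise return the stored version.

    /* Hedged sketch of the lk-version lookup done during the
     * client-server handshake.  Names are illustrative. */

    #include <stdint.h>
    #include <string.h>

    struct client_lk_entry {
        char                    process_uuid_xl[256]; /* uuid + client xlator name */
        uint32_t                lk_version;
        struct client_lk_entry *next;
    };

    /* Returns the stored lk-version for a reconnecting client, or 0 if no
     * entry exists (grace period expired, or a brand-new client), which
     * tells the client to reacquire its fcntl locks. */
    static uint32_t
    server_lookup_lk_version(struct client_lk_entry *list,
                             const char *process_uuid_xl)
    {
        for (; list != NULL; list = list->next) {
            if (strcmp(list->process_uuid_xl, process_uuid_xl) == 0)
                return list->lk_version;
        }
        return 0;
    }
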
* core: change lk-owner as a 1k buffer (Amar Tumballi, 2012-01-24, 1 file, -2/+2)

    So NLM can send the lk-owner field directly to the locks translators.
    As part of the same effort, also enabled sending a maximum of 500 aux
    gids over the protocol.

    Change-Id: I87c2514392748416f7ffe21d5154faad2e413969
    Signed-off-by: Amar Tumballi <amar@gluster.com>
    BUG: 767229
    Reviewed-on: http://review.gluster.com/779
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@gluster.com>

* Change Copyright current year (Pranith Kumar K, 2011-08-10, 1 file, -1/+1)

    Change-Id: I2d10f2be44f518f496427f257988f1858e888084
    BUG: 3348
    Reviewed-on: http://review.gluster.com/200
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@gluster.com>

* LICENSE: s/GNU Affero General Public/GNU General Public/ (Pranith Kumar K, 2011-08-06, 1 file, -3/+3)

    Change-Id: I3914467611e573cccee0d22df93920cf1b2eb79f
    BUG: 3348
    Reviewed-on: http://review.gluster.com/182
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@gluster.com>

* protocol/server: White space cleanup and NULL check validations. (Mohammed Junaid Ahmed, 2011-03-17, 1 file, -12/+12)

    Signed-off-by: Junaid <junaid@gluster.com>
    Signed-off-by: Amar Tumballi <amar@gluster.com>
    Signed-off-by: Vijay Bellur <vijay@dev.gluster.com>
    BUG: 2346 (Log message enhancements in GlusterFS - phase 1)
    URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=2346

* protocol/server: Distinguishing the locks based on the type of fop, like inodelk and entrylk (Mohammed Junaid Ahmed, 2011-01-24, 1 file, -2/+4)

    Currently, the protocol server considers entrylk to be held only on
    directories and inodelk only on files, and thus when a client unmounts
    itself while holding locks, it fails to free entrylk locks held on
    files and inodelk locks held on directories.

    Signed-off-by: Junaid <junaid@gluster.com>
    Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
    BUG: 2221 (Failed to free Inodelk locks on directories when the client
    holding the locks was unmounted before releasing the locks held.)
    URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=2221

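A sketch of the idea behind this fix, under stated assumptions (the enum, struct and callback names are hypothetical): record whether each held lock came from an inodelk or an entrylk fop, and release it by that tag during cleanup rather than guessing from whether the object is a file or a directory.

    /* Hedged sketch: tag every tracked lock with the fop that created
     * it, so cleanup also frees entrylk locks held on files and inodelk
     * locks held on directories.  Illustrative names only. */

    enum server_lock_type {
        SERVER_LOCK_INODELK,
        SERVER_LOCK_ENTRYLK,
    };

    struct server_lock {
        enum server_lock_type type;   /* set from the originating fop */
        void                 *object; /* inode the lock is held on    */
        struct server_lock   *next;
    };

    static void
    cleanup_client_locks(struct server_lock *head,
                         void (*release_inodelk)(struct server_lock *),
                         void (*release_entrylk)(struct server_lock *))
    {
        for (struct server_lock *l = head; l != NULL; l = l->next) {
            /* Dispatch on the recorded lock type, not on whether the
             * object happens to be a file or a directory. */
            if (l->type == SERVER_LOCK_INODELK)
                release_inodelk(l);
            else
                release_entrylk(l);
        }
    }
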
* Change GNU GPL to GNU AGPL (Pranith K, 2010-10-04, 1 file, -3/+3)

    Signed-off-by: Pranith Kumar K <pranithk@gluster.com>
    Signed-off-by: Vijay Bellur <vijay@dev.gluster.com>
    BUG: 1388 ()
    URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=1388

* protocol/server: features/locks: Fix nonblocking entrylks, inodelks in locks and server (Pavan Sondur, 2010-08-22, 1 file, -2/+3)

    Signed-off-by: Pavan Vilas Sondur <pavan@gluster.com>
    Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
    BUG: 960 ()
    URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=960

* minor fixes in rpc + protocol (Amar Tumballi, 2010-07-01, 1 file, -9/+0)

    * proper use of mem_acct_init in client.c/server.c
    * fentrylk_resume to be called instead of finodelk_resume in
      server_fentrylk().
    * handle the case of xdr decoding failure on the server by sending the
      proper error reply to the client, so there is no missing frame.
    * removed unwanted functions from server-helpers.c

    Signed-off-by: Amar Tumballi <amar@gluster.com>
    Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
    BUG: 875 (Implement a new protocol to provide proper backward/forward
    compatibility)
    URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=875

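The XDR-failure point in the list above is the kind of thing the following sketch illustrates (decode_fop_args, server_reply_error and the surrounding types are hypothetical stand-ins, not GlusterFS APIs): if decoding the request arguments fails, answer with an error instead of silently dropping the call, so the client never ends up waiting on a missing frame.

    /* Hedged sketch: reply with an error on XDR decode failure instead
     * of dropping the request.  Illustrative names only. */

    #include <errno.h>

    struct rpc_request;                        /* opaque request handle */
    struct fop_args { int fop; /* ... */ };    /* decoded fop arguments */

    extern int  decode_fop_args(struct rpc_request *req, struct fop_args *args);
    extern void server_reply_error(struct rpc_request *req, int op_errno);
    extern int  server_dispatch_fop(struct rpc_request *req, struct fop_args *args);

    static int
    server_handle_request(struct rpc_request *req)
    {
        struct fop_args args;

        if (decode_fop_args(req, &args) < 0) {
            /* Send a proper error reply so the client unwinds its frame
             * instead of waiting forever. */
            server_reply_error(req, EINVAL);
            return -1;
        }

        return server_dispatch_fop(req, &args);
    }
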
* minor improvements in protocol (Amar Tumballi, 2010-06-24, 1 file, -0/+5)

    * rpc_clnt_submit() now takes 'cbkfn' as an argument.
    * readdir xdr now uses the dirent structure directly instead of the
      'opaque' buffer through which it was serializing / unserializing the
      dirent structure.
    * the 'gfs_id' field (currently used for debugging) is properly updated

    Signed-off-by: Amar Tumballi <amar@gluster.com>
    Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
    BUG: 875 (Implement a new protocol to provide proper backward/forward
    compatibility)
    URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=875

* rpc protocol (Amar Tumballi, 2010-06-21, 1 file, -0/+89)

    Signed-off-by: Amar Tumballi <amar@gluster.com>
    Signed-off-by: Raghavendra G <raghavendra@gluster.com>
    Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
    BUG: 875 (Implement a new protocol to provide proper backward/forward
    compatibility)
    URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=875

* renamed xlator/protocol to xlator/protocol/legacy (Amar Tumballi, 2010-06-21, 1 file, -72/+0)

    Signed-off-by: Amar Tumballi <amar@gluster.com>
    Signed-off-by: Raghavendra G <raghavendra@gluster.com>
    Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
    BUG: 875 (Implement a new protocol to provide proper backward/forward
    compatibility)
    URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=875

* iatt: changes across the codebase (Anand V. Avati, 2010-03-16, 1 file, -1/+1)

    - libglusterfs
      -- call-stub
      -- inode
      -- protocol
    - libglusterfsclient
    - cluster/replicate
    - cluster/{dht,nufa,switch}
    - cluster/unify
    - cluster/HA
    - cluster/map
    - cluster/stripe
    - debug/error-gen
    - debug/trace
    - debug/io-stats
    - encryption/rot-13
    - features/filter
    - features/locks
    - features/path-converter
    - features/quota
    - features/trash
    - mount/fuse
    - performance/io-threads
    - performance/io-cache
    - performance/quick-read
    - performance/read-ahead
    - performance/stat-prefetch
    - performance/symlink-cache
    - performance/write-behind
    - protocol/client
    - protocol/server
    - storage-posix

    Signed-off-by: Anand V. Avati <avati@blackhole.gluster.com>
    Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
    BUG: 361 (GlusterFS 3.0 should work on Mac OS/X)
    URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=361

* Revert "Server backend storage hang should not cause the mount point to hang." (Harshavardhana Ranganath, 2010-01-26, 1 file, -6/+0)

    This reverts commit a0b148ea4e2a0163548eeb89b7580be4adbb8070.

    Signed-off-by: Harshavardhana <harsha@gluster.com>
    Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
    BUG: 272 (Server backend storage hang should not cause the mount point
    to hang)
    URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=272

* Server backend storage hang should not cause the mount point to hang. (Anand Avati, 2010-01-23, 1 file, -0/+6)

    Submitted-by: Krishna Srinivas <krishna@gluster.com>
    NOTE: fixed compilation issues in posix.c introduced while merging.

    storage/posix polls for the FS/kernel being functional by issuing a
    statvfs() call. If the statvfs call does not complete before the timer
    expires, storage/posix will send CHILD_DOWN to the upper translator.
    Ultimately this will cause protocol/server to disconnect all connected
    clients and also clean up the data structures. Hence, if a soft lockup
    or other kernel bug causes the backend FS to hang, the clients will not
    be hung.

    Signed-off-by: Krishna Srinivas <krishna@gluster.com>
    Signed-off-by: Anand V. Avati <avati@blackhole.gluster.com>
    Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
    BUG: 272 (Server backend storage hang should not cause the mount point
    to hang)
    URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=272

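A sketch of the watchdog idea, assuming hypothetical names (backend_poller, backend_check and notify_child_down stand in for the real timer and CHILD_DOWN notification code): one thread periodically runs statvfs() on the brick and stamps a "last healthy" time; a checker treats the backend as hung once that stamp is older than the allowed interval.

    /* Hedged sketch of polling the backend FS with statvfs() and raising
     * CHILD_DOWN when it stops responding.  Illustrative names only. */

    #include <pthread.h>
    #include <sys/statvfs.h>
    #include <time.h>
    #include <unistd.h>

    static time_t          last_ok;   /* time of last successful statvfs() */
    static pthread_mutex_t ok_lock = PTHREAD_MUTEX_INITIALIZER;

    extern void notify_child_down(void);   /* stand-in for CHILD_DOWN */

    /* Poller: runs statvfs() in its own thread so a hung backend blocks
     * only this thread, not the fop path. */
    static void *
    backend_poller(void *brick_path)
    {
        struct statvfs buf;

        for (;;) {
            if (statvfs((const char *)brick_path, &buf) == 0) {
                pthread_mutex_lock(&ok_lock);
                last_ok = time(NULL);
                pthread_mutex_unlock(&ok_lock);
            }
            sleep(5);
        }
        return NULL;
    }

    /* Checker: declares the backend hung if the last successful poll is
     * older than the allowed interval (e.g. 30 seconds). */
    static void
    backend_check(int max_age_secs)
    {
        time_t stamp;

        pthread_mutex_lock(&ok_lock);
        stamp = last_ok;
        pthread_mutex_unlock(&ok_lock);

        if (time(NULL) - stamp > max_age_secs)
            notify_child_down();
    }
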
* protocol/server: enhance trace logging (Anand Avati, 2009-11-29, 1 file, -1/+1)

    Add logging of the fop name and callid number, and make logging more
    friendly.

    Signed-off-by: Anand V. Avati <avati@blackhole.gluster.com>
    Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
    BUG: 315 (generation number support)
    URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=315

* protocol/server: Introduce option trace to log requests and replies in normal log (Pavan Sondur, 2009-11-26, 1 file, -0/+3)

    Signed-off-by: Pavan Vilas Sondur <pavan@gluster.com>
    Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
    BUG: 315 (generation number support)
    URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=315

* protocol/server: generation number and dentry resolution (Anand Avati, 2009-10-20, 1 file, -8/+0)

    - handle generation number in protocol
    - rewrite server dentry resolution code for inode cache miss

    Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
    BUG: 315 (generation number support)
    URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=315

* Changed occurrences of Z Research to Gluster. (Vijay Bellur, 2009-10-07, 1 file, -1/+1)

    Signed-off-by: Anand V. Avati <avati@dev.gluster.com>

* update protocol/server with new readv writev prototypes (Anand V. Avati, 2009-04-12, 1 file, -2/+1)

    Signed-off-by: Anand V. Avati <avati@amp.gluster.com>

* server-protocol - reimplement connection cleanup to happen in 2 phases (Raghavendra G, 2009-04-03, 1 file, -0/+1)

    - The first phase, which happens when POLLERR is received on the
      transport, releases all locks and flushes all open fds.
    - The second phase, which happens when both transports of the
      connection are destroyed, destroys the containers such as the lock
      table and fd table along with the connection.
    - The first phase clears up any references to the transport held by
      translators such as posix-locks (in the form of blocked locks),
      paving the way for the second phase.

    Signed-off-by: Anand V. Avati <avati@amp.gluster.com>

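Roughly, the two phases could look like the sketch below (all names are hypothetical stand-ins for the real protocol/server structures): phase one runs on POLLERR and only releases locks and flushes fds, leaving the tables in place; phase two runs when the last of the connection's transports goes away and destroys the containers themselves.

    /* Hedged sketch of two-phase connection cleanup.  Illustrative only. */

    struct server_connection {
        int   transport_count;   /* live transports on this connection */
        void *lock_table;
        void *fd_table;
    };

    extern void release_all_locks(void *lock_table);
    extern void flush_open_fds(void *fd_table);
    extern void destroy_lock_table(void *lock_table);
    extern void destroy_fd_table(void *fd_table);

    /* Phase 1: on POLLERR, drop locks and flush fds so nothing (e.g. a
     * blocked lock in posix-locks) still references the dying transport. */
    static void
    connection_cleanup_phase1(struct server_connection *conn)
    {
        release_all_locks(conn->lock_table);
        flush_open_fds(conn->fd_table);
    }

    /* Phase 2: once both transports of the connection are gone, destroy
     * the containers along with the connection itself. */
    static void
    connection_cleanup_phase2(struct server_connection *conn)
    {
        if (--conn->transport_count > 0)
            return;

        destroy_lock_table(conn->lock_table);
        destroy_fd_table(conn->fd_table);
        /* free conn here */
    }
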
* Add extra 'volume' parameter to inodelk/entrylk calls (Vikas Gorur, 2009-03-12, 1 file, -2/+2)

    Signed-off-by: Anand V. Avati <avati@amp.gluster.com>

* updated copyright header to extend copyright upto 2009 (Basavanagowda Kanur, 2009-02-26, 1 file, -1/+1)

    Updated copyright header to include 2009.

    Signed-off-by: Anand V. Avati <avati@amp.gluster.com>

* Added all files (Vikas Gorur, 2009-02-18, 1 file, -0/+77)