Change-Id: I6f5d8140a06f3c1b2d196849299f8d483028d33b
In a mixed-mode cluster involving 4.0 and older 3.x bricks, if the
clients are newer, the iatt encoded in the dictionary can be in the
older iatt format, which a newer client will map incorrectly onto the
newer structure.
This causes failures in FOPs that depend on this iatt for some
functionality (seen as mkdir operations failing with EIO when DHT
issues its internal setxattr call).
The fix is to convert the iatt in the dict based on which RPC version
is used to communicate with the server (see the sketch following this
entry). In other words, this is the reverse of the change in commit
"b966c7790e".
Tested using a mixed-mode cluster (i.e., bricks running 3.12 and 4.0)
with a mixed set of clients (3.12 and 4.0).
No regression test is provided, as validating this requires a
mixed-mode cluster.
Change-Id: I454e54651ca836b9f7c28f45f51d5956106aefa9
BUG: 1554053
Signed-off-by: ShyamsundarR <srangana@redhat.com>
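
A minimal sketch of the conversion described in the entry above,
assuming simplified stand-in structures (old_wire_iatt, new_iatt) and a
peer_is_old_rpc flag in place of the real GlusterFS types and version
checks:

    #include <stdint.h>
    #include <string.h>

    /* Stand-in for the iatt layout a 3.x brick encodes into the dict. */
    struct old_wire_iatt {
        uint64_t ia_ino;
        uint32_t ia_uid;
        uint32_t ia_gid;
        uint64_t ia_size;
        uint32_t ia_atime;
        uint32_t ia_mtime;
    };

    /* Stand-in for the newer iatt layout a 4.0 client expects. */
    struct new_iatt {
        uint64_t ia_ino;
        uint64_t ia_flags;   /* newer field, absent from the old wire format */
        uint32_t ia_uid;
        uint32_t ia_gid;
        uint64_t ia_size;
        int64_t  ia_atime;
        int64_t  ia_mtime;
    };

    /* Decode the dict value according to the peer's RPC version instead of
     * blindly casting the blob to the newer structure. */
    static int
    unserialize_dict_iatt(const void *blob, size_t len, int peer_is_old_rpc,
                          struct new_iatt *out)
    {
        memset(out, 0, sizeof(*out));

        if (peer_is_old_rpc) {
            struct old_wire_iatt old;

            if (len < sizeof(old))
                return -1;
            memcpy(&old, blob, sizeof(old));
            out->ia_ino   = old.ia_ino;   /* widen/convert field by field */
            out->ia_uid   = old.ia_uid;
            out->ia_gid   = old.ia_gid;
            out->ia_size  = old.ia_size;
            out->ia_atime = old.ia_atime;
            out->ia_mtime = old.ia_mtime;
            return 0;
        }

        if (len < sizeof(*out))
            return -1;
        memcpy(out, blob, sizeof(*out));  /* already in the newer format */
        return 0;
    }

As the entry describes, the real change presumably keys this decision
off the RPC version negotiated with the server at handshake time.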
Added iatt conversion to an older format when dealing with older RPC
versions. This enables iatt structure conformance when dealing with
older clients.
This helps fix rolling upgrades from 3.x versions to 4.0 of Gluster by
sending the right iatt in the dictionary when DHT requests it.
Change-Id: Ieaf925f81f8c7798a8fba1e90a59fa9dec82856c
BUG: 1544699
Signed-off-by: ShyamsundarR <srangana@redhat.com>
Change-Id: I27f5e1e34fe3eac96c7dd88e90753fb5d3d14550
BUG: 1272030
Signed-off-by: Anoop C S <anoopcs@redhat.com>
This patchset changes some major things in XDR, mainly:
* Naming: instead of gfs3/gfs4, settle on the gfx_ prefix for XDR
structures.
* Add iattx as a separate structure, along with conversion methods.
* The *_rsp structures are changed and reduced in number (i.e., there
is no need for a separate structure when it is similar to another
response).
* Use proper XDR methods for sending a dict on the wire (see the
sketch following this entry).
With the change of the XDR structures, changes are also needed outside
the protocol xlator layer to handle them properly, mainly because the
abstraction was broken to support 0-copy RDMA with payload for the
write and read FOPs. That made the transport layer aware of the XDR
payload, so when the XDR payload structure changed, the transport
layer needed to know about the change as well.
Updates #384
Change-Id: I1448fbe9deab0a1b06cb8351f2f37488cefe461f
Signed-off-by: Amar Tumballi <amarts@redhat.com>
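
As an illustration of the last bullet above (sending a dict on the wire
with proper XDR methods rather than as a privately serialized opaque
blob), here is a hedged sketch using the standard Sun RPC XDR routines;
the gfx_dict and gfx_dict_pair names are assumptions for illustration,
not the actual generated GlusterFS XDR types.

    #include <rpc/rpc.h>
    #include <rpc/xdr.h>

    /* Illustrative wire form of a dict: an explicit array of key/value pairs. */
    typedef struct {
        char  *key;
        u_int  value_len;
        char  *value;
    } gfx_dict_pair;

    typedef struct {
        u_int          count;
        gfx_dict_pair *pairs;
    } gfx_dict;

    static bool_t
    xdr_gfx_dict_pair(XDR *xdrs, gfx_dict_pair *p)
    {
        if (!xdr_string(xdrs, &p->key, ~0u))                   /* key as XDR string  */
            return FALSE;
        return xdr_bytes(xdrs, &p->value, &p->value_len, ~0u); /* value as XDR opaque */
    }

    static bool_t
    xdr_gfx_dict(XDR *xdrs, gfx_dict *d)
    {
        /* Encode/decode the whole dict as a counted XDR array of pairs. */
        return xdr_array(xdrs, (caddr_t *)&d->pairs, &d->count, ~0u,
                         sizeof(gfx_dict_pair), (xdrproc_t)xdr_gfx_dict_pair);
    }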
The most common pattern, both in our code and elsewhere, is this:
struct _xyz {
...
};
typedef struct _xyz xyz_t;
The exceptions to this pattern - especially call_frame/call_stack -
have been slowing down code navigation for years. By converging on a
single pattern, navigating from xyz_t in code to the actual definition
of struct _xyz (i.e., without having to visit the typedef first) might
even be automatable.
Change-Id: I0e5dd1f51f98e000173c62ef4ddc5b21d9ec44ed
Signed-off-by: Jeff Darcy <jdarcy@fb.com>
Reviewed-on: https://review.gluster.org/17650
Smoke: Gluster Build System <jenkins@build.gluster.org>
Tested-by: Jeff Darcy <jeff@pl.atyp.us>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Change-Id: I808fd5f9f002a35bff94d310c5d61a781e49570b
BUG: 1360169
Signed-off-by: Anuradha Talur <atalur@redhat.com>
Reviewed-on: http://review.gluster.org/15010
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Change-Id: I981258afa527337dd2ad33eecba7fc8084238e6d
BUG: 1303829
Signed-off-by: Anuradha Talur <atalur@redhat.com>
Reviewed-on: http://review.gluster.org/14137
Smoke: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Change-Id: Ie38198db990f133fe163ba160cdf647e34f83f4f
BUG: 1326085
Signed-off-by: Susant Palai <spalai@redhat.com>
Reviewed-on: http://review.gluster.org/13994
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
* Xlators like quota, marker, and posix_acl can cause problems if
their inode-ctx is not created. Sometimes these xlators may not get a
lookup on the root inode, for example in these cases:
1) the client may not send a lookup on the root inode (like an NSR
leader);
2) if the xlators on one of the bricks are not up while the client is
sending the lookup, that brick can miss the lookup.
It is always better to make sure that there is at least one lookup on
root, so send a first lookup when the inode table is created (see the
sketch following this entry).
* When sending the lookup on root, a new inode would otherwise be
created; we need to use itable->root instead.
Change-Id: Iff2eeaa1a89795328833a7761789ef588f11218f
BUG: 1320818
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/13837
Smoke: Gluster Build System <jenkins@build.gluster.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
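
A minimal sketch of the approach described in this entry, using
stand-in types (inode_table, loc) and a stub send_lookup in place of
actually winding the LOOKUP fop; none of these names are the real
GlusterFS symbols.

    struct inode;                      /* opaque stand-in */

    struct inode_table {
        struct inode *root;            /* the pre-created root inode */
    };

    struct loc {
        const char   *path;
        struct inode *inode;
    };

    /* Stub standing in for winding a LOOKUP fop down the xlator graph. */
    static int
    send_lookup(struct loc *loc)
    {
        (void)loc;
        return 0;
    }

    /* Issue exactly one lookup on "/" as soon as the inode table is created,
     * so xlators such as quota/marker/posix_acl get their root inode-ctx even
     * if no client ever sends that first lookup. */
    static int
    first_lookup_on_root(struct inode_table *itable)
    {
        struct loc root_loc = {
            .path  = "/",
            .inode = itable->root,     /* reuse itable->root, do not create a new inode */
        };

        return send_lookup(&root_loc);
    }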
As suggested during the code review of Bug 1200262, GF_CBK_UPCALL has
been modified to be exclusively GF_CBK_CACHE_INVALIDATION. Thus, for
any new upcall event, a new CBK procedure will be added. Also made
changes to store upcall data separately based on the upcall event type
received (see the sketch following this entry).
BUG: 1200262
Change-Id: I0f5e53d6f5ece16aecb514a0a426dca40fa1c755
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
Reviewed-on: http://review.gluster.org/10049
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
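
A hedged sketch of the per-event split described above: one callback
procedure per upcall event, with the payload stored per event type
rather than as a single blob. The enum values and struct layouts are
illustrative, not the actual GF_CBK_* definitions.

    #include <stdint.h>

    enum upcall_event_type {
        UPCALL_EVENT_NULL = 0,
        UPCALL_CACHE_INVALIDATION,     /* the only event today; any new event
                                          gets its own CBK procedure and entry */
    };

    struct cache_invalidation_data {
        char     gfid[16];
        uint32_t flags;                /* which attributes/pages to invalidate */
    };

    /* Payload stored separately per event type instead of one generic blob. */
    struct upcall_entry {
        enum upcall_event_type type;
        union {
            struct cache_invalidation_data ci;
            /* payloads for future upcall events go here */
        } u;
    };

    static void
    handle_upcall(struct upcall_entry *up)
    {
        switch (up->type) {
        case UPCALL_CACHE_INVALIDATION:
            /* drop cached attributes/data for up->u.ci.gfid */
            break;
        default:
            break;
        }
    }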
As a new barrier translator has been introduced, we no longer require
the old barrier code; hence it is cleaned up here.
Change-Id: Ieedca6f33a746898f0d2332fda1f1d4c86fff98f
BUG: 1061685
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/7577
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
This is the initial patch for the Snapshot feature. The current patch
includes the following features:
* Snapshot create
* Snapshot delete
* Snapshot restore
* Snapshot list
* Snapshot info
* Snapshot status
* Snapshot config
Change-Id: I2f46920c0d61c515f6a60e0f8b46fff886d9f6a9
BUG: 1061685
Signed-off-by: shishir gowda <sgowda@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Signed-off-by: Vijaikumar M <vmallika@redhat.com>
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
Signed-off-by: Joseph Fernandes <josferna@redhat.com>
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/7128
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* As of now, clients mounting within the storage pool using that
machine's IP/hostname are trusted clients (i.e., clients local to the
glusterd).
* Be careful when the request itself comes in as nfsnobody (e.g.,
POSIX tests). So move the squashing part to protocol/server, when it
creates a new frame for the request, instead of the auth part of the
RPC layer (see the sketch following this entry).
* For NFS servers, do root-squashing without checking whether it is a
trusted client, as all the NFS servers run within the storage pool and
hence are trusted clients for the bricks.
* Provide one more mount option which says whether root-squashing
should or should not happen. This value is given priority only for
trusted clients; for non-trusted clients, the volume option takes
priority. So trusted clients that should not be squashed have to be
mounted with the root-squash=no option. (This is done because blocking
root-squashing for trusted clients by default would cause problems for
SMB and UFO clients, whose requests have to be squashed if the option
is enabled.)
* For geo-replication and defrag clients, do not do root-squashing.
* Introduce a new option in open-behind for doing a read after a
successful open.
Change-Id: I8a8359840313dffc34824f3ea80a9c48375067f0
BUG: 954057
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/4863
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
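
A hedged sketch of the squashing decision as it would run when
protocol/server builds a new frame for a request. The flag names,
option plumbing, and the nfsnobody IDs are simplified assumptions, not
the real code.

    #include <sys/types.h>

    #define NFS_NOBODY_UID 65534
    #define NFS_NOBODY_GID 65534

    struct req_ids {
        uid_t uid;
        gid_t gid;
        int   is_trusted_client;     /* mounted from within the storage pool */
        int   is_internal_client;    /* geo-replication, defrag, ... */
        int   is_gnfs_client;        /* request arriving via the in-pool NFS server */
        int   mount_no_root_squash;  /* trusted client mounted with root-squash=no */
    };

    static void
    maybe_squash_root(struct req_ids *ids, int volume_root_squash_enabled)
    {
        int squash;

        if (!volume_root_squash_enabled || ids->is_internal_client)
            return;                  /* geo-rep/defrag are never squashed */

        if (ids->is_gnfs_client)
            squash = 1;              /* NFS servers are squashed regardless of trust */
        else if (ids->is_trusted_client)
            squash = !ids->mount_no_root_squash;  /* mount option wins for trusted */
        else
            squash = 1;              /* volume option wins for everyone else */

        if (squash) {
            if (ids->uid == 0)
                ids->uid = NFS_NOBODY_UID;
            if (ids->gid == 0)
                ids->gid = NFS_NOBODY_GID;
        }
    }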
Remove server_ctx and locks_ctx from client_ctx directly and store
them as discrete entities in the scratch_ctx.
Hooking up the state dump will be done in phase 3.
BUG: 849630
Change-Id: I94cea328326db236cdfdf306cb381e4d58f58d4c
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
Reviewed-on: http://review.gluster.org/5678
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Implementation of client_t
The feature page for client_t is at
http://www.gluster.org/community/documentation/index.php/Planning34/client_t
In addition to adding libglusterfs/client_t.[ch], it also
extracts/moves the lock table functionality from
xlators/protocol/server to libglusterfs, where it is now used; thus it
may also be shared by other xlators.
This patch is large as it is; hooking up the state dump is left to
phase 2 of this patch set.
(N.B. this change/patch-set supersedes previous change 3689, which was
corrupted during a rebase. That change will be abandoned.)
BUG: 849630
Change-Id: I1433743190630a6d8119a72b81439c0c4c990340
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
Reviewed-on: http://review.gluster.org/3957
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
BUG: 951549
Change-Id: I3de5bd86d4238a60a0a85ba2e15d9c131969b210
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
Reviewed-on: http://review.gluster.org/4816
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
The address of conn->ltable keeps changing: server_connection_cleanup
creates a new ltable every time it is called.
Here is the race that happened:
--------------------------------+------------------------------------
thread-1                        | thread-2
--------------------------------+------------------------------------
add_locker is called with       |
conn->ltable; let's call this   |
ltable address lt1              |
                                | connection cleanup is called and
                                | do_lock_table_cleanup is triggered
                                | for lt1; the locker lists are
                                | splice_inited under lt1->lock
the locker is added to lt1      |
under lt1->lock (call it l1)    |
                                | GF_FREE(lt1) happens in
                                | do_lock_table_cleanup
--------------------------------+------------------------------------
The locker l1 that was added just before lt1 was freed will never be
cleared by subsequent server_connection_cleanup calls, as no reference
to the locker remains. The stale lock stays in the locks xlator even
though the transport on which it was issued has been destroyed.
Change-Id: I0a02f16c703d1e7598b083aa1057cda9624eb3fe
BUG: 787601
Signed-off-by: Pranith Kumar K <pranithk@gluster.com>
Reviewed-on: http://review.gluster.com/2957
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Change-Id: I992a7f8a75edfe7d75afaa1abe0ad45e8f351c8b
BUG: 796581
Signed-off-by: Pranith Kumar K <pranithk@gluster.com>
Reviewed-on: http://review.gluster.com/2806
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>
Currently (without this patch), on a disconnect the server cleans up
the transport, which in turn closes the fds and releases the locks
acquired on those fds by that client. On a reconnect, the client just
reopens the fds but does not reacquire the locks. The application that
had previously acquired the locks is still under the assumption that
it owns them, while they might have been granted to other clients (if
they request them) by the server, leading to data corruption.
This patch allows the client to reacquire the fcntl locks (held on the
fds) during the client-server handshake.
* The server identifies the client via process-uuid-xl (a combination
of uuid and client-protocol name, assumed to be unique) and the
lk-version number.
* The server maintains a list of (process-uuid-xl, lk-version) pairs,
one for each accepted connection. On a connect, the server traverses
the list for a matching pair; if a matching pair is not found, the
server returns an lk-version of 0, else it returns the lk-version it
has in store (see the sketch following this entry).
* On a disconnect, the server and client enter a grace period. On
completion of the grace period, the client bumps up its lk-version
number (which means it will reacquire the locks the next time) and the
server will destroy the connection. If reconnection happens within the
grace period, the server will find the matching (process-uuid-xl,
lk-version) pair in its list, which guarantees that the fds and their
corresponding locks are still valid for this client.
Configurable options:
To set the grace timeout:
    option server.grace-timeout <value>
    option client.grace-timeout <value>
To enable or disable lk-heal:
    option lk-heal [on|off]
The gluster volume set command can be used to configure these options.
Change-Id: Id677ef1087b300d649f278b8b2aa0d94eae85ed2
BUG: 795386
Signed-off-by: Mohammed Junaid <junaid@redhat.com>
Reviewed-on: http://review.gluster.com/2766
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>
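
A hedged sketch of the (process-uuid-xl, lk-version) matching the
server performs at handshake time, as described in the bullets above;
the list handling and names are illustrative stand-ins, not the real
GlusterFS handshake code.

    #include <stdint.h>
    #include <string.h>

    struct client_entry {
        char                 process_uuid_xl[256]; /* uuid + client-protocol name */
        uint32_t             lk_version;
        struct client_entry *next;
    };

    /* On connect: if the pair is found, the fds and locks from the previous
     * connection are still valid, so return the stored lk-version; otherwise
     * return 0 so the client knows it must reopen fds and reacquire its
     * fcntl locks. */
    static uint32_t
    lookup_lk_version(struct client_entry *head, const char *process_uuid_xl,
                      uint32_t client_lk_version)
    {
        struct client_entry *e;

        for (e = head; e != NULL; e = e->next) {
            if (strcmp(e->process_uuid_xl, process_uuid_xl) == 0 &&
                e->lk_version == client_lk_version)
                return e->lk_version;
        }
        return 0;
    }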
so NLM can send the lk-owner field directly to the locks translators;
as part of the same effort, sending a maximum of 500 aux GIDs over the
protocol has also been enabled.
Change-Id: I87c2514392748416f7ffe21d5154faad2e413969
Signed-off-by: Amar Tumballi <amar@gluster.com>
BUG: 767229
Reviewed-on: http://review.gluster.com/779
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@gluster.com>
Change-Id: I2d10f2be44f518f496427f257988f1858e888084
BUG: 3348
Reviewed-on: http://review.gluster.com/200
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@gluster.com>
Change-Id: I3914467611e573cccee0d22df93920cf1b2eb79f
BUG: 3348
Reviewed-on: http://review.gluster.com/182
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@gluster.com>
Signed-off-by: Junaid <junaid@gluster.com>
Signed-off-by: Amar Tumballi <amar@gluster.com>
Signed-off-by: Vijay Bellur <vijay@dev.gluster.com>
BUG: 2346 (Log message enhancements in GlusterFS - phase 1)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=2346
inodelk and entrylk.
Currently, the protocol server considers entrylk to be held only on
directories and inodelk only on files. Thus, when a client unmounts
while holding locks, the server fails to free entrylk locks held on
files and inodelk locks held on directories.
Signed-off-by: Junaid <junaid@gluster.com>
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 2221 (Failed to free Inodlk locks on directories when the client holding the locks was unmounted before releasing the locks held.)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=2221
Signed-off-by: Pranith Kumar K <pranithk@gluster.com>
Signed-off-by: Vijay Bellur <vijay@dev.gluster.com>
BUG: 1388 ()
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=1388
and server.
Signed-off-by: Pavan Vilas Sondur <pavan@gluster.com>
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 960 ()
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=960
* Proper use of mem_acct_init in client.c/server.c.
* fentrylk_resume is now called instead of finodelk_resume in
server_fentrylk().
* Handle the case of an XDR decoding failure on the server by sending
a proper error reply to the client, so there is no missing frame.
* Removed unwanted functions from server-helpers.c.
Signed-off-by: Amar Tumballi <amar@gluster.com>
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 875 (Implement a new protocol to provide proper backward/forward compatibility)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=875
* rpc_clnt_submit() now takes 'cbkfn' as an argument.
* The readdir XDR now uses the dirent structure directly instead of an
'opaque' buffer through which the dirent structure was previously
serialized/unserialized.
* The 'gfs_id' field (currently used for debugging) is properly
updated.
Signed-off-by: Amar Tumballi <amar@gluster.com>
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 875 (Implement a new protocol to provide proper backward/forward compatibility)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=875
Signed-off-by: Amar Tumballi <amar@gluster.com>
Signed-off-by: Raghavendra G <raghavendra@gluster.com>
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 875 (Implement a new protocol to provide proper backward/forward compatibility)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=875
Signed-off-by: Amar Tumballi <amar@gluster.com>
Signed-off-by: Raghavendra G <raghavendra@gluster.com>
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 875 (Implement a new protocol to provide proper backward/forward compatibility)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=875
- libglusterfs
-- call-stub
-- inode
-- protocol
- libglusterfsclient
- cluster/replicate
- cluster/{dht,nufa,switch}
- cluster/unify
- cluster/HA
- cluster/map
- cluster/stripe
- debug/error-gen
- debug/trace
- debug/io-stats
- encryption/rot-13
- features/filter
- features/locks
- features/path-converter
- features/quota
- features/trash
- mount/fuse
- performance/io-threads
- performance/io-cache
- performance/quick-read
- performance/read-ahead
- performance/stat-prefetch
- performance/symlink-cache
- performance/write-behind
- protocol/client
- protocol/server
- storage-posix
Signed-off-by: Anand V. Avati <avati@blackhole.gluster.com>
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 361 (GlusterFS 3.0 should work on Mac OS/X)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=361
This reverts commit a0b148ea4e2a0163548eeb89b7580be4adbb8070.
Signed-off-by: Harshavardhana <harsha@gluster.com>
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 272 (Server backend storage hang should not cause the mount point to hang)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=272
Submitted-by: Krishna Srinivas <krishna@gluster.com>
NOTE: fixed compilation issues in posix.c introduced while merging.
storage/posix polls for the FS/kernel being functional by issuing a
statvfs() call. If statvfs does not return before the timer expires,
storage/posix sends CHILD_DOWN to the upper translator (see the sketch
following this entry). Ultimately this causes protocol/server to
disconnect all connected clients and also clean up the data
structures. Hence, if a soft lockup or another kernel bug causes the
backend FS to hang, the clients will not hang.
Signed-off-by: Krishna Srinivas <krishna@gluster.com>
Signed-off-by: Anand V. Avati <avati@blackhole.gluster.com>
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 272 (Server backend storage hang should not cause the mount point to hang)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=272
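
A minimal, self-contained sketch of the polling described above: one
thread probes the backend with statvfs(), and a watchdog declares the
brick down if the probe stops returning. The interval, timeout, brick
path, and the CHILD_DOWN notification below are assumptions for
illustration, not the actual storage/posix implementation.

    #include <pthread.h>
    #include <stdio.h>
    #include <sys/statvfs.h>
    #include <time.h>
    #include <unistd.h>

    #define PROBE_INTERVAL 10   /* seconds between statvfs() probes */
    #define PROBE_TIMEOUT  30   /* declare the backend hung after this long */

    static volatile time_t last_ok;

    static void *
    prober(void *path)
    {
        struct statvfs buf;

        for (;;) {
            if (statvfs((const char *)path, &buf) == 0)
                last_ok = time(NULL);          /* probe returned: backend alive */
            sleep(PROBE_INTERVAL);
        }
        return NULL;
    }

    static void *
    watchdog(void *unused)
    {
        (void)unused;
        for (;;) {
            if (time(NULL) - last_ok > PROBE_TIMEOUT)
                fprintf(stderr, "backend FS hung; would send CHILD_DOWN here\n");
            sleep(PROBE_INTERVAL);
        }
        return NULL;
    }

    int
    main(void)
    {
        pthread_t t1, t2;

        last_ok = time(NULL);
        pthread_create(&t1, NULL, prober, "/export/brick");  /* hypothetical path */
        pthread_create(&t2, NULL, watchdog, NULL);
        pthread_join(t1, NULL);
        return 0;
    }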
Add logging of the fop name and callid number, and make logging more
friendly.
Signed-off-by: Anand V. Avati <avati@blackhole.gluster.com>
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 315 (generation number support)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=315
normal log.
Signed-off-by: Pavan Vilas Sondur <pavan@gluster.com>
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 315 (generation number support)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=315
- handle generation number in protocol
- rewrite server dentry resolution code for inode cache miss
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 315 (generation number support)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=315
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
Signed-off-by: Anand V. Avati <avati@amp.gluster.com>
- The first phase, which happens when POLLERR is received on a
transport, releases all locks and flushes all open fds.
- The second phase, which happens when both transports of the
connection are destroyed, destroys containers such as the lock table
and fd table along with the connection itself.
- The first phase clears up any references to the transport held by
translators like posix-locks (in the form of blocked locks), paving
the way for the second phase (see the sketch following this entry).
Signed-off-by: Anand V. Avati <avati@amp.gluster.com>
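
A hedged sketch of the two phases described above, with stand-in types
and stub helpers; none of these names are the actual GlusterFS symbols.

    #include <stdlib.h>

    struct lock_table;
    struct fd_table;

    struct connection {
        int                transports_alive;  /* a connection spans two transports */
        struct lock_table *ltable;
        struct fd_table   *fdtable;
    };

    /* Stubs standing in for the real lock-release and fd-flush paths. */
    static void release_all_locks(struct lock_table *lt) { (void)lt; }
    static void flush_all_open_fds(struct fd_table *ft)  { (void)ft; }

    /* Phase 1: on POLLERR on a transport. Releasing locks and flushing fds
     * drops the references translators like posix-locks hold on the
     * transport (e.g. blocked locks), clearing the way for phase 2. */
    static void
    connection_cleanup_phase1(struct connection *conn)
    {
        release_all_locks(conn->ltable);
        flush_all_open_fds(conn->fdtable);
    }

    /* Phase 2: only once both transports of the connection are gone,
     * destroy the containers themselves along with the connection. */
    static void
    connection_cleanup_phase2(struct connection *conn)
    {
        if (conn->transports_alive > 0)
            return;
        free(conn->ltable);
        free(conn->fdtable);
        free(conn);
    }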
Signed-off-by: Anand V. Avati <avati@amp.gluster.com>
updated copyright header to include 2009.
Signed-off-by: Anand V. Avati <avati@amp.gluster.com>