Commit message log
|
|
|
Change-Id: I6f5d8140a06f3c1b2d196849299f8d483028d33b
The "connecting" state is not used anywhere really.
It's only being set and printed. So remove it.
Change-Id: I11fc8b0bdcda5a812d065543aa447d39957d3b38
fixes: bz#1583583
Signed-off-by: Michael Adam <obnox@samba.org>
An earlier commit set conf->connected just after the rpc layer sends the
RPC_CLNT_CONNECT event. However, success of the socket-level connection
doesn't indicate that the brick stack is ready to receive fops, as
a handshake has to be done b/w client and server after the
RPC_CLNT_CONNECT event. Any fop sent to the brick in the window between
* protocol/client receiving the RPC_CLNT_CONNECT event
* protocol/client receiving a successful setvolume response
can end up accessing an uninitialized brick stack. So, set
conf->connected only after a successful SETVOLUME.
Change-Id: I139a03d2da6b0d95a0d68391fcf54b00e749decf
fixes: bz#1583937
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
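A minimal sketch of the ordering described above (illustrative only, not the actual protocol/client code; the struct and function names are simplified stand-ins):

    /* hypothetical, simplified stand-in for the protocol/client state */
    struct client_conf {
            int connected;
    };

    void send_setvolume_request(struct client_conf *conf) { (void)conf; }

    /* RPC_CLNT_CONNECT: the socket is up, but the brick stack may not be
     * initialized yet, so conf->connected is NOT set here; the handshake
     * (SETVOLUME) is started instead. */
    void on_rpc_connect(struct client_conf *conf)
    {
            send_setvolume_request(conf);
    }

    /* Only a successful SETVOLUME reply means the brick stack is ready
     * to receive fops. */
    void on_setvolume_reply(struct client_conf *conf, int op_ret)
    {
            if (op_ret == 0)
                    conf->connected = 1;
    }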
There was an issue where some accesses to the saved_fds list were
protected by the wrong mutex (lock instead of fd_lock).
Additionally, the retrieval of fdctx from the fd's context and any
checks done on it are now also protected by fd_lock, to avoid
fdctx becoming outdated just after retrieving it.
Change-Id: If2910508bcb7d1ff23debb30291391f00903a6fe
BUG: 1553129
Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
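A minimal sketch of the locking rule described above (illustrative only; the names fd_lock, saved_fds and fdctx follow the commit text, while the surrounding types are simplified):

    #include <pthread.h>
    #include <stddef.h>

    struct clnt_fd_ctx {
            struct clnt_fd_ctx *next;      /* membership in saved_fds */
            int                 remote_fd;
    };

    struct clnt_conf {
            pthread_mutex_t     fd_lock;   /* protects saved_fds and each fdctx */
            struct clnt_fd_ctx *saved_fds;
    };

    /* Both the list walk and the per-fdctx checks happen under fd_lock, so a
     * concurrent release cannot make the fdctx stale between lookup and use. */
    int get_remote_fd(struct clnt_conf *conf, struct clnt_fd_ctx *wanted)
    {
            int remote_fd = -1;
            struct clnt_fd_ctx *i;

            pthread_mutex_lock(&conf->fd_lock);
            for (i = conf->saved_fds; i != NULL; i = i->next) {
                    if (i == wanted) {
                            remote_fd = i->remote_fd;
                            break;
                    }
            }
            pthread_mutex_unlock(&conf->fd_lock);

            return remote_fd;
    }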
Updates: #242
Change-Id: I767e574a26e922760a7130bd209c178d74e8cf69
Signed-off-by: Poornima G <pgurusid@redhat.com>
Change-Id: I27f5e1e34fe3eac96c7dd88e90753fb5d3d14550
BUG: 1272030
Signed-off-by: Anoop C S <anoopcs@redhat.com>
With this patchset, some major things are changed in XDR, mainly:
* Naming: instead of gfs3/gfs4, settle on the gfx_ prefix for xdr structures
* add iattx as a separate structure, and add conversion methods
* the *_rsp structures are now changed, and are also reduced in number
(i.e., there is no need for different structures if one is similar to another response).
* use proper XDR methods for sending a dict on the wire.
Also, with the change of the xdr structures, there are changes needed
outside of the xlator protocol layer to handle these properly, mainly
because the abstraction was broken to support 0-copy RDMA with payload
for the write and read FOPs. This made the transport layer aware of the xdr
payload; hence, with the change of the xdr payload structure, the transport layer
needed to know about the change.
Updates #384
Change-Id: I1448fbe9deab0a1b06cb8351f2f37488cefe461f
Signed-off-by: Amar Tumballi <amarts@redhat.com>
Current use of a per-client mutex to protect fdctx introduces lock
contention when there are dozens of file operations active.
Use a finer-grained spinlock to reduce contention, and move the
retrieval of fdctx out of the lock.
Change-Id: Iea3e2eb481e76a5d73a582ba81529180c5b88248
BUG: 1519598
Signed-off-by: Zhang Huan <zhanghuan@open-fs.com>
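A rough illustration of the direction of the change (not the actual patch): a spinlock is cheap for the short critical sections involved, and only the small state update stays inside it; names are simplified stand-ins.

    #include <pthread.h>

    struct clnt_fd_ctx {
            int remote_fd;
    };

    struct clnt_conf {
            pthread_spinlock_t fd_lock;    /* was: a per-client pthread_mutex_t */
    };

    int conf_init(struct clnt_conf *conf)
    {
            return pthread_spin_init(&conf->fd_lock, PTHREAD_PROCESS_PRIVATE);
    }

    void mark_fd_bad(struct clnt_conf *conf, struct clnt_fd_ctx *fdctx)
    {
            /* fdctx was already retrieved outside the lock; only the tiny
             * update is done inside the critical section */
            pthread_spin_lock(&conf->fd_lock);
            fdctx->remote_fd = -1;
            pthread_spin_unlock(&conf->fd_lock);
    }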
This patch fixes the race between fd re-open code and fd release code,
both of which free the fd context due to a race in certain variable
checks as explained below:
1. client process (shd in the case of this BZ) sends an opendir to its
children (client xlators) which send the fop to the bricks to get a valid fd.
2. Client xlator loses connection to the brick. fdctx->remotefd is -1
3. Client re-establishes connection. After handshake, it reopens the dir
and sets fdctx->remotefd to a valid fd in client3_3_reopendir_cbk().
4. Meanwhile, shd sends a fd unref after it is done with the opendir.
This triggers a releasedir (since fd->refcount becomes 0).
5. client3_3_releasedir() sees that fdctx->remotefd is a valid number
(i.e. not -1), sets fdctx->released=1 and calls client_fdctx_destroy().
6. As a continuation of step 3, client_reopen_done() is called by
client3_3_reopendir_cbk(), which sees that fdctx->released==1 and
again calls client_fdctx_destroy().
Depending on when step 5 does GF_FREE(fdctx), we may crash at any place in
step 6 in client3_3_reopendir_cbk() when it tries to access
fdctx->{whatever}.
Change-Id: Ia50873d11763e084e41d2a1f4d53715438e5e947
BUG: 1418629
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: https://review.gluster.org/16521
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
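One generic way to close this kind of double-free window (a sketch of the pattern only, not the fix that was actually merged) is to let both paths record completion under one lock and have whichever path finishes last do the single free:

    #include <pthread.h>
    #include <stdlib.h>

    struct fd_ctx {
            pthread_mutex_t lock;
            int             released;      /* releasedir path has run */
            int             reopen_done;   /* reopendir callback path has run */
    };

    /* Assumes both paths eventually run for a given fd_ctx; the path that
     * observes the other's flag already set performs the only free(). */
    void release_path(struct fd_ctx *fdctx)
    {
            int do_free;

            pthread_mutex_lock(&fdctx->lock);
            fdctx->released = 1;
            do_free = fdctx->reopen_done;
            pthread_mutex_unlock(&fdctx->lock);

            if (do_free)
                    free(fdctx);
    }

    void reopen_done_path(struct fd_ctx *fdctx)
    {
            int do_free;

            pthread_mutex_lock(&fdctx->lock);
            fdctx->reopen_done = 1;
            do_free = fdctx->released;
            pthread_mutex_unlock(&fdctx->lock);

            if (do_free)
                    free(fdctx);
    }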
Change-Id: I808fd5f9f002a35bff94d310c5d61a781e49570b
BUG: 1360169
Signed-off-by: Anuradha Talur <atalur@redhat.com>
Reviewed-on: http://review.gluster.org/15010
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Commit 45fa52d7 added some massive violations of our 80-column rule.
This commit fixes them.
Change-Id: I7bb588b99dbbff4f0b9874ef10f200f73ac1b627
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: http://review.gluster.org/14192
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anuradha Talur <atalur@redhat.com>
Change-Id: I60fe2d59c454095febce4c0fbef87a2dad9636e4
BUG: 1326085
Signed-off-by: Susant Palai <spalai@redhat.com>
Reviewed-on: http://review.gluster.org/14013
Smoke: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
|
Change-Id: Ie38198db990f133fe163ba160cdf647e34f83f4f
BUG: 1326085
Signed-off-by: Susant Palai <spalai@redhat.com>
Reviewed-on: http://review.gluster.org/13994
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
|
Change-Id: Iade71daf3bc70e60833d693ac55151c9cf691381
BUG: 1303829
Signed-off-by: Anuradha Talur <atalur@redhat.com>
Reviewed-on: http://review.gluster.org/14114
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
These changes are made to accommodate compound fops.
The new functions that are added pack
the arguments required to perform the fops.
These will be used both by normal fops and compound ones.
Change-Id: I44d9cef8ff1d33aa2f5661689c8e9386d87b2007
BUG: 1303829
Signed-off-by: Anuradha Talur <atalur@redhat.com>
Reviewed-on: http://review.gluster.org/13963
Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Change-Id: I64c361d3e4ae86d57dc18bb887758d044c861237
BUG: 1319992
Signed-off-by: Poornima G <pgurusid@redhat.com>
Reviewed-on: http://review.gluster.org/11597
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Problem:
Currently on a successful connection between protocol
server and client, the protocol client initiates a
CHILD_UP event in the client stack. At this point in
time, only the connection between server and client is
established, and there is no guarantee that the server
side stack is ready to serve requests.
It works fine now, as most server side translators do not
depend on any other factors before being able
to serve requests, and hence they are up by the time
the client stack translators receive the CHILD_UP (initiated
by the client handshake).
The gap here is exposed when certain server side translators,
like NSR-Server for example, have a couple of protocol clients
as their children (connecting them to other bricks), and they
can't really serve requests till a quorum of their children is
up. Hence these translators should defer sending CHILD_UP
till they have enough children up, and the same needs to be
propagated to the client stack translators.
Fix:
Maintain a child_up variable in both the protocol client
and protocol server translators. The protocol server should
update this value based on the CHILD_UP and CHILD_DOWN
events it receives from the translators below it. On receiving
such an event it should forward that event to the client.
The protocol client on receiving such an event should forward
it up the client stack, thereby letting the client translators
correctly know that the server is up and ready to serve.
The clients connecting later (long after a server has initialized
and processed its CHILD_UP events) will receive a child_up status
as part of the handshake, and based on the status of the server's
child_up, can either propagate a CHILD_UP event or defer it.
Change-Id: I0807141e62118d8de9d9cde57a53a607be44a0e0
BUG: 1312845
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/13549
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
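A highly simplified sketch of the idea (assumed names, not the real server-side code): the protocol server tracks CHILD_UP/CHILD_DOWN from the translators below it, and that state is what gets forwarded to clients and reported in the handshake.

    enum { EV_CHILD_UP, EV_CHILD_DOWN };

    struct server_state {
            int children_total;
            int children_up;
            int child_up;      /* forwarded to clients / reported in handshake */
    };

    void server_notify(struct server_state *s, int event)
    {
            if (event == EV_CHILD_UP)
                    s->children_up++;
            else if (event == EV_CHILD_DOWN && s->children_up > 0)
                    s->children_up--;

            /* a quorum-based translator could use a threshold here instead */
            s->child_up = (s->children_up == s->children_total);

            /* on any change, forward CHILD_UP/CHILD_DOWN to connected clients;
             * late-connecting clients read s->child_up from the handshake reply */
    }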
Network protocol extensions for the seek() FOP. The format is based on
the SEEK procedure in NFSv4.2.
Change-Id: I060768a8a4b9b1c80f4a24c0f17d630f7f028690
BUG: 1220173
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/11482
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
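From an application's point of view, the new FOP is what allows sparse-file aware seeks to work over Gluster; the visible semantics mirror plain lseek(2) with SEEK_DATA/SEEK_HOLE, as in this small standalone Linux example (it illustrates the behaviour only, not the wire format added here):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
            if (argc < 2)
                    return 1;

            int fd = open(argv[1], O_RDONLY);
            if (fd < 0)
                    return 1;

            /* first data region and first hole at or after offset 0 */
            off_t data = lseek(fd, 0, SEEK_DATA);
            off_t hole = lseek(fd, 0, SEEK_HOLE);

            printf("first data at %lld, first hole at %lld\n",
                   (long long)data, (long long)hole);
            close(fd);
            return 0;
    }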
Normally GF_EVENT_CHILD_UP is dispatched after client
handshake. But we have some dead code in client_rpc_notify
which is assumed to do the same on receiving RPC_CLNT_CONNECT.
This dispatch is based on a condition whether "disable-handshake"
is enabled or not. Since we require a client handshake every time
we have a connect, this check for "disable-handshake" is invalid
and no longer required. Moreover, this option is never handled
in any of the translators.
Change-Id: Ic862d6ac08cd3b18cf231f50140cd00e84e52ca0
BUG: 1227667
Signed-off-by: Anoop C S <anoopcs@redhat.com>
Reviewed-on: http://review.gluster.org/12170
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
When grace-timer is initialized via server/client init,
the default or reconfigured value for grace-timeout is
displayed incorrectly in both server and client logs.
This is because we use gf_time_fmt() to format this
grace-timeout value with gf_timefmt_s as the time format
as shown below:
gf_time_fmt (timestr, sizeof timestr, conf->grace_ts.tv_sec,
gf_timefmt_s);
The gf_timefmt_s format is a wrapper for the %s format specification
used in the strftime library call, which populates the number
of seconds since the Epoch [1970-01-01 00:00:00 +0000 (UTC)].
But this particular format is dependent on the timezone
[1970-01-01 05:30:00 +0530 (IST)] and is thus displayed incorrectly
in the logs.
Example:
For IST with default grace-timeout value 10, it is displayed
as -19790 which is calculated as follows,
1970-01-01 00:00:10 - 1970-01-01 05:30:00 = -19790 seconds.
Change-Id: I1bdf5d12b2167323f86f0ca52a37ffb316b3f0a2
BUG: 1227667
Signed-off-by: Anoop C S <anoopcs@redhat.com>
Reviewed-on: http://review.gluster.org/11930
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
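The -19790 in the example can be reproduced with a few lines of standalone glibc code (a demo of the mis-formatting only, not the gf_time_fmt implementation): a 10-second interval is broken down as if it were a UTC timestamp and then re-encoded through the local timezone by the "%s" conversion.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
            time_t grace = 10;                 /* a duration, not a point in time */
            struct tm tm;
            char buf[64];

            setenv("TZ", "Asia/Kolkata", 1);   /* IST, UTC+05:30 */
            tzset();

            gmtime_r(&grace, &tm);             /* 1970-01-01 00:00:10 */
            /* glibc's "%s" runs mktime(), which treats tm as local time */
            strftime(buf, sizeof(buf), "%s", &tm);

            printf("%s\n", buf);               /* prints -19790 (10 - 19800) */
            return 0;
    }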
Change-Id: I9bf2ca08fef969e566a64475d0f7a16d37e66eeb
BUG: 1194640
Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Reviewed-on: http://review.gluster.org/10042
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
position in the graph rather than relative (local) to a particular
translator.
Encoding the volume in this way allows a single translator to manage
which brick is currently being scanned for directory entries. Using a
single translator minimizes allocated bits in the d_off. It also allows
multiple DHT translators in the same graph to have a common frame of
reference (the graph position) for which brick is being read. Multiple
DHT translators are needed for the Tiering feature.
The fix builds off a previous change (9332) which removed subvolume
encoding from AFR. The fix makes an equivalent change to the EC
translator.
More background can be found in fix 9332 and gluster-dev discussions [1].
DHT and AFR/EC are responsible (as before) for choosing which brick to
enumerate directory entries in over the readdir lifecycle.
The client translator receiving the readdir fop encodes the d_off. It
is referred to as the "leaf node" in the graph and corresponds to the
brick being scanned.
When DHT decodes the d_off, it translates the leaf node to a local
subvolume, which represents the next node in the graph leading to
the brick.
Tracking of leaf nodes is done in common utility functions. Leaf nodes
counts and positional information are updated on a graph switch.
[1] www.gluster.org/pipermail/gluster-devel/2015-January/043592.html
Change-Id: Iaf0ea86d7046b1ceadbad69d88707b243077ebc8
BUG: 1190734
Signed-off-by: Dan Lambright <dlambrig@redhat.com>
Reviewed-on: http://review.gluster.org/9688
Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Vijay Bellur <vbellur@redhat.com>
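A toy illustration of this kind of d_off encoding (the bit layout is assumed for the example and is not the actual GlusterFS assignment): a few high bits of the 64-bit offset carry the leaf-node (brick) position in the graph, and the remaining bits carry the brick-local offset.

    #include <stdint.h>
    #include <stdio.h>

    #define LEAF_BITS  8                      /* assumed width, for illustration */
    #define LEAF_SHIFT (64 - LEAF_BITS)
    #define OFF_MASK   ((UINT64_C(1) << LEAF_SHIFT) - 1)

    /* encode: done where the readdir reply is generated (the leaf/client side) */
    uint64_t d_off_encode(uint64_t local_off, unsigned leaf_id)
    {
            return ((uint64_t)leaf_id << LEAF_SHIFT) | (local_off & OFF_MASK);
    }

    /* decode: done by DHT/AFR/EC to find which brick to continue reading from */
    void d_off_decode(uint64_t d_off, uint64_t *local_off, unsigned *leaf_id)
    {
            *leaf_id   = (unsigned)(d_off >> LEAF_SHIFT);
            *local_off = d_off & OFF_MASK;
    }

    int main(void)
    {
            uint64_t off;
            unsigned leaf;

            d_off_decode(d_off_encode(12345, 3), &off, &leaf);
            printf("leaf=%u local_off=%llu\n", leaf, (unsigned long long)off);
            return 0;
    }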
This fix is required for glfs_fini to be able to perform fini on client
xlators in a graph. We are deferring freeing of client xlator's private
until all RPC related resources are destroyed. This guarantees that
client xlator would free RPC related resources provided its private
structures are still accessible via its this pointer.
'Weak' property: If there are no epoll threads executing after calling
fini() on a client xlator, then all its RPC related resources are
guaranteed to be freed. We can now free the corresponding 'this'
pointer.
Change-Id: Ie00b14dda096ac128e1c37e0032f07d17fd701ce
BUG: 1093594
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/9680
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
... from all bricks in the volume
This patch is important in the context of MT epoll. With MT epoll,
notification events from client xlators could reach cluster xlators like
afr, dht, ec, stripe etc. in different orders.
For example, in a distributed replicate volume of 2 bricks, namely Brick1
and Brick2, the following network events are observed by a mount
process.
- connection to Brick1 is broken.
- connection to Brick1 has been restored.
- connection to Brick2 is broken.
- connection to Brick2 has been restored.
Without establishing a total ordering of events, we can't guarantee that
cluster xlators like afr, dht perceive them in the same order. While we
would expect afr (say) to perceive it as only one of Brick1 and Brick2
going down at any given time, it is possible for the notification of
Brick2 going offline to race with the notification of Brick1 coming back
online.
Change-Id: I78f5a52bfb05593335d0e9ad53ebfff98995593d
BUG: 1104462
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/9591
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Add the ability to configure the number of event threads
for various gluster services.
Currently with the multi thread epoll patch, it is possible
to have more than one thread waiting on socket activity and
processing the same. This thread count is currently static,
which this commit makes dynamic.
The current services which use IO path, i.e brick processes,
any client process (nfs, FUSE, gfapi, heal,
rebalance, etc.a), gain 2 set parameters to control the number
of threads that are processing events. These settings are,
- client.event-threads <n>
- server.event-threads <n>
The client setting affects the client graph consumers, and the
server setting affects the brick processes. These are processed
and inited/reconfigured using the client/server protocol xlators.
Other services (say glusterd) would need to extend similar
configuration settings to take advantage of multi threaded event
processing.
At present glusterd is not enabled with this commit, as it does not
stand to gain from this multi-threading (as I understand it).
Change-Id: Id8422fc57a9f95a135158eb6477ccf9d3c9ea4d9
BUG: 1104462
Signed-off-by: Shyam <srangana@redhat.com>
Reviewed-on: http://review.gluster.org/9488
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
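Usage follows the normal volume-set syntax, along the lines of (the thread counts here are only illustrative values):
    gluster volume set <VOLNAME> client.event-threads 4
    gluster volume set <VOLNAME> server.event-threads 4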
Fixes 46 defects marked as "Dereference after NULL check" errors
in coverity scan for client xlator.
Change-Id: I0b4c991a3995ce74d7885fc5470ec7f5c589b411
BUG: 789278
Signed-off-by: Vijay Bellur <vbellur@redhat.com>
Reviewed-on: http://review.gluster.org/9287
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
For an rdma-only type volume, client connection establishment
with the server takes more than three seconds. This is because a
tcp,rdma type volume will have 2 ports, one for tcp and
one for rdma; the tcp port is stored with the brickname and the rdma
port is stored as "brickname.rdma" during pmap_signin.
During the handshake when trying to get the brick port
for rdma clients, since we are not aware of server
transport type, we will append '.rdma' with brick name.
So for tcp,rdma volume there will be an entry with
'.rdma', but it will fail for rdma type only volume.
So we will try again, this time without appending '.rdma'
using a flag variable need_different_port, and it will succeed,
but the reconnection happens only after 3 seconds.
In this patch, for an rdma-only type volume,
we will append '.rdma' during the pmap_signin. So during the
handshake we will get the correct port on the first try itself.
Since we don't need to retry, we can remove the
need_different_port flag variable.
Change-Id: Ie8e3a7f532d4104829dbe995e99b35e95571466c
BUG: 1153569
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/8934
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
The new volume option 'server.manage-gids' can be enabled in
environments where a user belongs to more than the current absolute
maximum of 93 groups. This option triggers the following behavior:
1. The AUTH_GLUSTERFS structure sent by GlusterFS clients (fuse, nfs or
libgfapi) will contain only one (1) auxiliary group, instead of
a full list. This reduces network usage and prevents problems in
encoding the AUTH_GLUSTERFS structure which should fit in 400 bytes.
2. The single group in the RPC Calls received by the server is replaced
by resolving the groups server-side. Permission checks and similar in
lower xlators are applied against the full list of groups that the
user belongs to, and not the single auxiliary group that the client
sent.
Change-Id: I9e540de13e3022f8b63ff893ecba511129a47b91
BUG: 1053579
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/7501
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-by: Anand Avati <avati@redhat.com>
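The option is set with the usual volume-set syntax, for example:
    gluster volume set <VOLNAME> server.manage-gids on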
Problem:
It was observed that in some cases client disconnects
and re-connects before server xlator could detect that a
disconnect happened. So it still uses previous fdtable and ltable.
But it can so happen that in between disconnect and re-connect
an 'unlock' fop may fail because the fds are marked 'bad' in the client
xlator upon disconnect. Due to this, stale locks remain on the brick,
which leads to hangs/self-heals not happening, etc.
For the exact bug RCA please look at
https://bugzilla.redhat.com/show_bug.cgi?id=1049932#c0
Fix:
When lk-heal is not enabled make sure connection-id is different for
every setvolume. This will make sure that a previous connection's
resources are not re-used in server xlator.
Change-Id: Id844aaa76dfcf2740db72533bca53c23b2fe5549
BUG: 1049932
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/6669
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
gettimeofday() returns the current wall clock time and timezone.
Using these functions in order to measure the passage of time
(how long an operation took) therefore seems like a no-brainer.
This approach suffers from some limitations:
a. They have a low resolution: “High-performance” timing by
definition, requires clock resolutions into the microseconds
or better.
b. They can jump forwards and backwards in time: Computer
clocks all tick at slightly different rates, which causes
the time to drift. Most systems have NTP enabled which
periodically adjusts the system clock to keep them in sync
with “actual” time. The adjustment can cause the clock to
suddenly jump forward (artificially inflating your timing
numbers) or jump backwards (causing your timing calculations
to go negative or hugely positive). In such cases the timer
thread could go into an infinite loop.
From 'man gettimeofday':
----------
..
..
The time returned by gettimeofday() is affected by discontinuous
jumps in the system time (e.g., if the system administrator manually
changes the system time). If you need a monotonically increasing
clock, see clock_gettime(2).
..
..
----------
Rationale:
For calculating interval timing for the timer thread, all that's
needed is a clock that acts as a simple counter incrementing
at a stable rate.
To avoid the jumps which are caused by using
"wall time", this counter must be monotonic, i.e. one that can never
"tick" backwards, ever.
Change-Id: I701d31e71a85a73d21a6c5cd15583e7a5a645eeb
BUG: 1017993
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-on: http://review.gluster.org/6070
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
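The replacement the man page excerpt points to is CLOCK_MONOTONIC; a minimal standalone interval-timing sketch (not the actual timer-thread code):

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
            struct timespec start, end;

            /* CLOCK_MONOTONIC never jumps backwards, unlike gettimeofday() */
            clock_gettime(CLOCK_MONOTONIC, &start);

            sleep(1);                     /* stand-in for the work being timed */

            clock_gettime(CLOCK_MONOTONIC, &end);

            double elapsed = (end.tv_sec - start.tv_sec) +
                             (end.tv_nsec - start.tv_nsec) / 1e9;
            printf("elapsed: %.3f s\n", elapsed);
            return 0;
    }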
Problem: Currently when gluster volume start force is executed, client process
will talk to glusterd to get the port of the brick. But if the brick's path
is not available, it cannot return the brick's port. So the client process will
keep connecting and disconnecting from glusterd for the port query, which
is ultimately responsible for excessive logging of disconnect messages.
Fix: Message will be logged just once at INFO level after the first
disconnect from glusterd. Afterwards "disconnect" messages will be
logged in DEBUG mode.
Change-Id: I2b787f3820b5da45e090c562e5698fcfe24a02cd
BUG: 959969
Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
Reviewed-on: http://review.gluster.org/4953
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
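The "once at INFO, then at DEBUG" behaviour amounts to a latched per-connection flag around the log call; a generic sketch (not the gluster logging API):

    #include <stdio.h>

    struct conn {
            int disconnect_logged;    /* has the disconnect been reported yet? */
    };

    void on_disconnect(struct conn *c, const char *peer)
    {
            if (!c->disconnect_logged) {
                    fprintf(stderr, "INFO: disconnected from %s\n", peer);
                    c->disconnect_logged = 1;   /* later messages drop to DEBUG */
            } else {
                    /* DEBUG-level message would go here */
            }
    }

    /* a successful (re)connect would reset disconnect_logged to 0 */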
If the brick is taken down and the hard disk is replaced
and the brick is brought back up, the re-opens of the open-fds
will fail because the file is not present on the brick.
Re-opens are not attempted even if the files are re-created by
self-heal until the brick is brought down after the files are
re-created and brought back up. This is a problem with a VM-store
in a replica-setup. Until the fd is re-opened the writes will
never happen on the brick where the hard-disk is replaced.
To handle this situation gracefully, client xlator is enhanced
to perform finodelk, fxattrop, writev, readv using anonymous fds
if the file is yet to be re-opened. If the fop succeeds then client
xlator attempts re-open.
Change-Id: I1cc6d1bbf8227cd996868ab2ed0a57fb05e00017
BUG: 821056
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/4358
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Change-Id: I01caa1b51570359e6e3ffe1ffb7279cbdb0b0c64
BUG: 821056
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/4357
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
with the option, the idea is that all client-side caching will be disabled,
whereas on the server side process, the fd will be treated as a regular
fd, thus helping performance.
"gluster volume set <VOLNAME> remote-dio enable" would set
this option in client protocol volumes.
Change-Id: Id2255a167137f8fee20849513e3011274dc829b4
Signed-off-by: Amar Tumballi <amarts@redhat.com>
BUG: 845213
Reviewed-on: http://review.gluster.org/4206
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Currently the disconnect after a portmap query is treated like an
ordinary disconnect and the reconnection attempt (in this case, to
the brick) is attempted only after 3 secs. This results in a delay
which is unnecessary.
Mark the disconnection happening because of a successful portmap
query as needing a 'quick reconnect' to avoid the delay for this
special case.
Change-Id: I43c8292ff0c30858d883ff3569a3761acbf2f5eb
BUG: 860220
Signed-off-by: Anand Avati <avati@redhat.com>
Reviewed-on: http://review.gluster.org/3994
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
RCA
The bug is observed in 3.2.x because posix xlator changes
the uid/gid of file as per frame->root-uid/gid if O_CREAT flag
is set in open fop. Posix does not do this in 3.3.x so that
bug does not appear anymore but this issue exposed the actual
bug in client xlator re-open. Re-open of a file on re-connection
should not be performed with the same flags as were used at the time of
open/create/opendir. Imagine a case where a file is opened with
O_TRUNC|O_RDWR and some data is written to it; now if the brick
goes down and comes back, the file will be truncated.
When I tested this case, the file is not truncated because locks
xlator resets O_TRUNC unconditionally.
Client xlator re-open bug and locks xlator bug cancel each other.
Fix
Reset O_CREAT|O_TRUNC|O_EXCL flags in re-open.
Locks xlator should not reset O_TRUNC.
Additional changes
Removed wbflags as it is not assigned at all.
Testcases
Automated go program is at:
https://bugzilla.redhat.com/show_bug.cgi?id=807976#c2
Change-Id: I0080344fdda2e62e7c976c35a5bf5f1fa8838891
BUG: 807976
Signed-off-by: Pranith Kumar K <pranithk@gluster.com>
Reviewed-on: http://review.gluster.com/3582
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
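The core of the described fix is a simple mask applied to the saved open flags on the re-open path; a sketch of the idea using the standard open(2) flags (the surrounding code is simplified):

    #include <fcntl.h>

    /* Replaying O_CREAT, O_TRUNC or O_EXCL on a post-reconnect re-open could
     * truncate data or fail spuriously, so strip them and keep only the access
     * mode and the remaining flags from the original open. */
    int reopen_flags(int original_flags)
    {
            return original_flags & ~(O_CREAT | O_TRUNC | O_EXCL);
    }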
This is needed when the fresh lookup triggers self-heal, as the gfid
won't be present in the inode yet. A similar situation happens with
Rebalance, as it does not perform inode_link.
Added similar fix for re-opendir.
Removed inode from fdctx and removed some duplication of code.
Change-Id: Ic94e5738c8585ed86801d2eed9ddab1015246710
BUG: 826080
Signed-off-by: Pranith Kumar K <pranithk@gluster.com>
Reviewed-on: http://review.gluster.com/3517
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Copy args->loc to local->loc in client3_1_getxattr(). This prevents logs with
"(null) (--)" in client3_1_getxattr_cbk().
Also save args->name in local->name and print it in the log as well.
Change-Id: I1bfd00c6bbbe9f617744af7acd2f07ceafaadb3a
BUG: 812199
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.com/3336
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Note that the license was not changed in any of the following:
.../argp-standalone/...
.../booster/...
.../cli/...
.../contrib/...
.../extras/...
.../glusterfsd/...
.../glusterfs-hadoop/...
.../mod_clusterfs/...
.../scheduler/...
.../swift/...
The license was not changed in any of the non-building xlators. The
license was not changed in any of the xlators that seemed — to me — to
be clearly server-side only, e.g. protocol/server
Note too that copyright was changed along with the license; I did
not change the copyright in files where the license did not change.
If you find any errors or omissions, please don't hesitate to let me know.
The complete list of files with the license change is:
libglusterfs/src/byte-order.h
libglusterfs/src/call-stub.c
libglusterfs/src/call-stub.h
libglusterfs/src/checksum.c
libglusterfs/src/checksum.h
libglusterfs/src/circ-buff.c
libglusterfs/src/circ-buff.h
libglusterfs/src/common-utils.c
libglusterfs/src/common-utils.h
libglusterfs/src/compat-errno.c
libglusterfs/src/compat-errno.h
libglusterfs/src/compat.c
libglusterfs/src/compat.h
libglusterfs/src/daemon.c
libglusterfs/src/daemon.h
libglusterfs/src/defaults.c
libglusterfs/src/defaults.h
libglusterfs/src/dict.c
libglusterfs/src/dict.h
libglusterfs/src/event-history.c
libglusterfs/src/event-history.h
libglusterfs/src/event.c
libglusterfs/src/event.h
libglusterfs/src/fd-lk.c
libglusterfs/src/fd-lk.h
libglusterfs/src/fd.c
libglusterfs/src/fd.h
libglusterfs/src/gf-dirent.c
libglusterfs/src/gf-dirent.h
libglusterfs/src/globals.c
libglusterfs/src/globals.h
libglusterfs/src/glusterfs.h
libglusterfs/src/graph-print.c
libglusterfs/src/graph-utils.h
libglusterfs/src/graph.c
libglusterfs/src/hashfn.c
libglusterfs/src/hashfn.h
libglusterfs/src/iatt.h
libglusterfs/src/inode.c
libglusterfs/src/inode.h
libglusterfs/src/iobuf.c
libglusterfs/src/iobuf.h
libglusterfs/src/latency.c
libglusterfs/src/latency.h
libglusterfs/src/list.h
libglusterfs/src/lkowner.h
libglusterfs/src/locking.h
libglusterfs/src/logging.c
libglusterfs/src/logging.h
libglusterfs/src/mem-pool.c
libglusterfs/src/mem-pool.h
libglusterfs/src/mem-types.h
libglusterfs/src/options.c
libglusterfs/src/options.h
libglusterfs/src/rbthash.c
libglusterfs/src/rbthash.h
libglusterfs/src/run.c
libglusterfs/src/run.h
libglusterfs/src/scheduler.c
libglusterfs/src/scheduler.h
libglusterfs/src/stack.c
libglusterfs/src/stack.h
libglusterfs/src/statedump.c
libglusterfs/src/statedump.h
libglusterfs/src/syncop.c
libglusterfs/src/syncop.h
libglusterfs/src/syscall.c
libglusterfs/src/syscall.h
libglusterfs/src/timer.c
libglusterfs/src/timer.h
libglusterfs/src/trie.c
libglusterfs/src/trie.h
libglusterfs/src/xlator.c
libglusterfs/src/xlator.h
libglusterfsclient/src/libglusterfsclient-dentry.c
libglusterfsclient/src/libglusterfsclient-internals.h
libglusterfsclient/src/libglusterfsclient.c
libglusterfsclient/src/libglusterfsclient.h
rpc/rpc-lib/src/auth-glusterfs.c
rpc/rpc-lib/src/auth-null.c
rpc/rpc-lib/src/auth-unix.c
rpc/rpc-lib/src/protocol-common.h
rpc/rpc-lib/src/rpc-clnt.c
rpc/rpc-lib/src/rpc-clnt.h
rpc/rpc-lib/src/rpc-transport.c
rpc/rpc-lib/src/rpc-transport.h
rpc/rpc-lib/src/rpcsvc-auth.c
rpc/rpc-lib/src/rpcsvc-common.h
rpc/rpc-lib/src/rpcsvc.c
rpc/rpc-lib/src/rpcsvc.h
rpc/rpc-lib/src/xdr-common.h
rpc/rpc-lib/src/xdr-rpc.c
rpc/rpc-lib/src/xdr-rpc.h
rpc/rpc-lib/src/xdr-rpcclnt.c
rpc/rpc-lib/src/xdr-rpcclnt.h
rpc/rpc-transport/rdma/src/name.c
rpc/rpc-transport/rdma/src/name.h
rpc/rpc-transport/rdma/src/rdma.c
rpc/rpc-transport/rdma/src/rdma.h
rpc/rpc-transport/socket/src/name.c
rpc/rpc-transport/socket/src/name.h
rpc/rpc-transport/socket/src/socket.c
rpc/rpc-transport/socket/src/socket.h
xlators/cluster/afr/src/afr-common.c
xlators/cluster/afr/src/afr-dir-read.c
xlators/cluster/afr/src/afr-dir-read.h
xlators/cluster/afr/src/afr-dir-write.c
xlators/cluster/afr/src/afr-dir-write.h
xlators/cluster/afr/src/afr-inode-read.c
xlators/cluster/afr/src/afr-inode-read.h
xlators/cluster/afr/src/afr-inode-write.c
xlators/cluster/afr/src/afr-inode-write.h
xlators/cluster/afr/src/afr-lk-common.c
xlators/cluster/afr/src/afr-mem-types.h
xlators/cluster/afr/src/afr-open.c
xlators/cluster/afr/src/afr-self-heal-algorithm.c
xlators/cluster/afr/src/afr-self-heal-algorithm.h
xlators/cluster/afr/src/afr-self-heal-common.c
xlators/cluster/afr/src/afr-self-heal-common.h
xlators/cluster/afr/src/afr-self-heal-data.c
xlators/cluster/afr/src/afr-self-heal-entry.c
xlators/cluster/afr/src/afr-self-heal-metadata.c
xlators/cluster/afr/src/afr-self-heal.h
xlators/cluster/afr/src/afr-self-heald.c
xlators/cluster/afr/src/afr-self-heald.h
xlators/cluster/afr/src/afr-transaction.c
xlators/cluster/afr/src/afr-transaction.h
xlators/cluster/afr/src/afr.c
xlators/cluster/afr/src/afr.h
xlators/cluster/afr/src/pump.c
xlators/cluster/afr/src/pump.h
xlators/cluster/dht/src/dht-common.c
xlators/cluster/dht/src/dht-common.h
xlators/cluster/dht/src/dht-diskusage.c
xlators/cluster/dht/src/dht-hashfn.c
xlators/cluster/dht/src/dht-helper.c
xlators/cluster/dht/src/dht-inode-read.c
xlators/cluster/dht/src/dht-inode-write.c
xlators/cluster/dht/src/dht-layout.c
xlators/cluster/dht/src/dht-linkfile.c
xlators/cluster/dht/src/dht-mem-types.h
xlators/cluster/dht/src/dht-rebalance.c
xlators/cluster/dht/src/dht-rename.c
xlators/cluster/dht/src/dht-selfheal.c
xlators/cluster/dht/src/dht.c
xlators/cluster/dht/src/nufa.c
xlators/cluster/dht/src/switch.c
xlators/cluster/stripe/src/stripe-helpers.c
xlators/cluster/stripe/src/stripe-mem-types.h
xlators/cluster/stripe/src/stripe.c
xlators/cluster/stripe/src/stripe.h
xlators/features/index/src/index-mem-types.h ¹
xlators/features/index/src/index.c ¹
xlators/features/index/src/index.h ¹
xlators/performance/io-cache/src/io-cache.c
xlators/performance/io-cache/src/io-cache.h
xlators/performance/io-cache/src/ioc-inode.c
xlators/performance/io-cache/src/ioc-mem-types.h
xlators/performance/io-cache/src/page.c
xlators/performance/io-threads/src/io-threads.c
xlators/performance/io-threads/src/io-threads.h
xlators/performance/io-threads/src/iot-mem-types.h
xlators/performance/md-cache/src/md-cache-mem-types.h
xlators/performance/md-cache/src/md-cache.c
xlators/performance/quick-read/src/quick-read-mem-types.h
xlators/performance/quick-read/src/quick-read.c
xlators/performance/quick-read/src/quick-read.h
xlators/performance/read-ahead/src/page.c
xlators/performance/read-ahead/src/read-ahead-mem-types.h
xlators/performance/read-ahead/src/read-ahead.c
xlators/performance/read-ahead/src/read-ahead.h
xlators/performance/symlink-cache/src/symlink-cache.c
xlators/performance/write-behind/src/write-behind-mem-types.h
xlators/performance/write-behind/src/write-behind.c
xlators/protocol/auth/addr/src/addr.c ¹
xlators/protocol/auth/login/src/login.c ¹
xlators/protocol/client/src/client-callback.c
xlators/protocol/client/src/client-handshake.c
xlators/protocol/client/src/client-helpers.c
xlators/protocol/client/src/client-lk.c
xlators/protocol/client/src/client-mem-types.h
xlators/protocol/client/src/client.c
xlators/protocol/client/src/client.h
xlators/protocol/client/src/client3_1-fops.c
¹ Copyright only, license reverted to original
Change-Id: If560e826c61b6b26f8b9af7bed6e4bcbaeba31a8
BUG: 820551
Signed-off-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Reviewed-on: http://review.gluster.com/3304
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>
with this change, the xlator APIs will have a dictionary as extra
argument, which is passed between all the layers. This can be
utilized for overloading in some of the operations.
Change-Id: I58a8186b3ef647650280e63f3e5e9b9de7827b40
Signed-off-by: Amar Tumballi <amarts@redhat.com>
BUG: 782265
Reviewed-on: http://review.gluster.com/2960
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
During reopening of fd's and reacquiring of locks on the fd (after a
reconnect), a release on a fd on which reacquiring of locks is in progress
will free up fdctx. This patch keeps fdctx valid while the reacquiring
of locks is in progress.
Change-Id: I0464c751a5aa986abac0b72b48b261fceeba24e8
BUG: 795386
Signed-off-by: Mohammed Junaid <junaid@redhat.com>
Reviewed-on: http://review.gluster.com/2937
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>
A server send-reply failure should not trigger server connection cleanup, because
if a reconnection happens within the grace-timeout the connection object is
reused. We must clean up only on grace-timeout.
Change-Id: I7d171a863382646ff392031c2b845fe4f0d3d5dc
BUG: 803365
Signed-off-by: Mohammed Junaid <junaid@redhat.com>
Reviewed-on: http://review.gluster.com/2947
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>
which does appropriate cleanup before unwinding.
Signed-off-by: Raghavendra G <raghavendra@gluster.com>
Change-Id: Ic49d6e21c5fc56e747afec35be2bebbbbd2a6583
BUG: 767359
Reviewed-on: http://review.gluster.com/2897
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
The 'last_sent' and 'last_recieved' variables were not used anymore
after the switch to the RPC layer. Hence they are removed from the code.
Change-Id: I1ba74d47f909406ebde43476ccfed724e6c7e77f
Signed-off-by: Amar Tumballi <amarts@redhat.com>
BUG: 801721
Reviewed-on: http://review.gluster.com/2916
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>
A grace timer is registered on a disconnect, but a reconnect timer sends a
connect request every 3 seconds, and if the server is down, the client protocol
receives a disconnect and a grace timer will be registered which on timeout will
increase the lk-version value. It's enough to register the grace timer once after
the disconnect and later just ignore other pseudo disconnects.
Change-Id: I36a153aa86b350d87fe50d014ee0297f558a7fb6
BUG: 795386
Signed-off-by: Mohammed Junaid <junaid@redhat.com>
Reviewed-on: http://review.gluster.com/2906
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>
- Added a brief explanation as to why we can't use gf_log
when in statedump.
- Removed gf_log messages from client_priv_dump since
it can cause a 'deadlock' - See statedump.c for explanation
- Added try-lock based accessors for fd_lk_list for dump purposes.
Change-Id: I1d755a4ef2c568acf22fb8c4ab0a33a4f5fd07b4
BUG: 789858
Signed-off-by: Krishnan Parthasarathi <kp@gluster.com>
Reviewed-on: http://review.gluster.com/2882
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>
Till now, send and receive buffer window sizes for sockets
were set to a default glusterfs-specific value.
Linux's default window sizes have been found to be better
w.r.t. performance, and hence we no longer set them to any
default value.
However, if one wishes, there's the new configuration option:
network.tcp-window-size <sane_size>
which takes a size value (int or human readable) and will set
the window size of sockets for both clients and servers.
NFS clients will also be updated with the same.
Change-Id: I841479bbaea791b01086c42f58401ed297ff16ea
BUG: 795635
Signed-off-by: Rajesh Amaravathi <rajesh@redhat.com>
Reviewed-on: http://review.gluster.com/2821
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>
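For example (the size is an illustrative value, roughly 1 MiB; as noted above, human-readable sizes are also accepted):
    gluster volume set <VOLNAME> network.tcp-window-size 1048576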
Change-Id: I4644e944bad4d240d16de47786b9fa277333dba4
BUG: 767862
Signed-off-by: Raghavendra G <raghavendra@gluster.com>
Signed-off-by: Amar Tumballi <amarts@redhat.com>
Reviewed-on: http://review.gluster.com/2735
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>
Currently (without this patch), on a disconnect the server cleans up
the transport, which in turn closes the fd's and releases the locks acquired on
those fd's by that client. On a reconnect, the client just reopens the fd's but
doesn't reacquire the locks. The application that had previously acquired
the locks is still under the assumption that it is the owner of those locks,
which might have been granted to other clients (if they request) by the server,
leading to data corruption.
This patch allows the client to reacquire the fcntl locks (held on the fd's)
during client-server handshake.
* The server identifies the client via process-uuid-xl (which is a combination
of uuid and client-protocol name, it is assumed to be unique) and lk-version
number.
* The client maintains a list of process-uuid-xl, lk-version pair for each
accepted connection. On a connect, the server traverses the list for a
matching pair; if a matching pair is not found, the server returns
lk-version with value 0, else it returns the lk-version it has in store.
* On a disconnect, the server and client enter grace period, and on the
completion of the grace period, the client bumps up its lk-version number
(which means it will reacquire the locks the next time) and the server will
destroy the connection. If reconnection happens within the grace period, the
server will find the matching (process-uuid-xl, lk-version) pair in its list,
which guarantees that the fd's and their corresponding locks are still valid
for this client.
Configurable options:
To set grace-timeout, the following options are
option server.grace-timeout value
option client.grace-timeout value
To enable or disable the lk-heal,
option lk-heal [on|off]
The gluster volume set command can be used to configure these options.
Change-Id: Id677ef1087b300d649f278b8b2aa0d94eae85ed2
BUG: 795386
Signed-off-by: Mohammed Junaid <junaid@redhat.com>
Reviewed-on: http://review.gluster.com/2766
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>
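For example, using the option names given above (values illustrative; the exact volume-set keys may differ between releases):
    gluster volume set <VOLNAME> server.grace-timeout 30
    gluster volume set <VOLNAME> client.grace-timeout 30
    gluster volume set <VOLNAME> lk-heal on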
call to server
For calls with remote_fd set to -1, the client xlator sends the call to the
server, which results in the server not resolving it and thus the fd being NULL. When the locks
xlator then tries to get the inode context using the fd, it segfaults. To avoid
this, unwind the call in the client xlator if the remote_fd is -1.
Change-Id: Ic34a49fdf1012dd371f4b194703c0be74f29bda2
BUG: 784187
Signed-off-by: Raghavendra Bhat <raghavendrabhat@gluster.com>
Reviewed-on: http://review.gluster.com/2684
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kp@gluster.com>
Reviewed-by: Anand Avati <avati@gluster.com>
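The shape of the fix is an early local unwind in the client fop path; a simplified sketch (the error code and helper are assumptions, not the literal patch):

    #include <errno.h>

    /* Returns 0 if the call may be sent to the server, or -1 (with *op_errno
     * set) if it must be unwound locally because the fd was never opened on
     * the brick and remote_fd is still -1. */
    int check_remote_fd(long remote_fd, int *op_errno)
    {
            if (remote_fd == -1) {
                    *op_errno = EBADF;
                    return -1;
            }
            return 0;
    }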