| Commit message | Author | Age | Files | Lines |
-----------------------------------------------------------------
command: gluster volume status <volname/all> client-list
output:
Client connections for volume v1
Name count
----- ------
fuse 2
tierd 1
total clients for volume v1 : 3
-----------------------------------------------------------------
Client connections for volume v2
Name count
----- ------
tierd 1
fuse.gsync 1
total clients for volume v2 : 2
-----------------------------------------------------------------
Updates: #178
Change-Id: I0ff2579d6adf57cc0d3bd0161a2ec6ac6c4747c0
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Reviewed-on: https://review.gluster.org/18095
Smoke: Gluster Build System <jenkins@build.gluster.org>
Tested-by: hari gowtham <hari.gowtham005@gmail.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
-----------------------------------------------------------------
We are storing the entire volfile and using this to check for
volfile changes. With brick multiplexing there will be many
graphs per process, which will increase the memory footprint
of the process. So instead of storing the entire graph we can
store a sha256 hash and compare the hashes to see whether a
volfile change happened or not.
Also, with brick multiplexing, a direct comparison of the
volfile is not correct. There are two problems.
Problem 1:
We are currently storing one single graph (the last
updated volfile), whereas what we need is the entire
graph with all attached bricks.
If we fix this issue, we have a second problem.
Problem 2:
With multiplexing we have a graph that contains multiple
bricks. But what we are checking as part of the reconfigure
is comparing the entire graph with one single graph,
which will always fail.
Solution:
We create a list in glusterfs_ctx_t that stores the sha256 hash
of each individual brick graph. When a graph change happens
we compare the stored hash with the current hash. If the
hashes match, there is no need to reconfigure; otherwise we
first do the reconfigure and then update the stored hash.
For now, gfapi has not been changed this way: when a gfapi
volfile fetch or reconfigure happens, we still store the
entire graph in memory and compare it directly.
This is fine, because libgfapi will not load brick graphs.
But changing libgfapi the same way would make the code similar
in both glusterfsd-mgmt and api, and would also save some
memory.
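A minimal sketch of the comparison, assuming OpenSSL's SHA256() and an
illustrative per-brick record (the real list hangs off glusterfs_ctx_t):
#include <openssl/sha.h>
#include <string.h>

/* illustrative per-brick entry; the actual structure differs */
struct brick_graph_entry {
    char          brick_name[256];
    unsigned char volfile_sha256[SHA256_DIGEST_LENGTH];
};

/* 1 if the brick's volfile changed since the stored hash was taken */
static int
volfile_changed(const struct brick_graph_entry *e,
                const char *volfile, size_t len)
{
    unsigned char digest[SHA256_DIGEST_LENGTH];

    SHA256((const unsigned char *)volfile, len, digest);
    return memcmp(e->volfile_sha256, digest, SHA256_DIGEST_LENGTH) != 0;
    /* caller: if changed, reconfigure first, then store the new digest */
}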
Change-Id: I9df917a771a52b95622ab8f63af34ec390163a77
BUG: 1467986
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: https://review.gluster.org/17709
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
-----------------------------------------------------------------
stopped any volume
Problem: With brick multiplexing enabled, if any volume is down and we
then try to mount a running volume, the mount command hangs.
Solution: With brick multiplexing enabled, the server shares one data
structure, server_conf, for all associated subvolumes. When any
subvolume goes down in an ungraceful manner (e.g. its brick directory
is removed), the posix xlator sends a GF_EVENT_CHILD_DOWN event to its
parent xlators and the server notify path updates child_up to false in
server_conf. When a client then tries to communicate with the server
through the mount, it checks conf->child_up, finds it FALSE, and throws
the message "translator are not yet ready".
This patch updates server_conf to store the child_up status per xlator.
Another important correction in this patch is the cleanup of
server-side xlator threads after the volume is stopped.
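Conceptually, the change replaces the single shared flag with a
per-brick status record, roughly like this sketch (names are
illustrative, not the actual server_conf fields):
#include <stdbool.h>
#include <string.h>

/* one status record per attached brick xlator, instead of a single
 * shared conf->child_up flag for the whole multiplexed process */
struct child_status {
    struct child_status *next;
    char                 name[256];   /* brick xlator name */
    bool                 child_up;
};

static bool
brick_is_up(const struct child_status *head, const char *brick)
{
    for (; head != NULL; head = head->next)
        if (strcmp(head->name, brick) == 0)
            return head->child_up;
    return false;   /* unknown brick: treat as not ready */
}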
BUG: 1453977
Change-Id: Ic54da3f01881b7c9429ce92cc569236eb1d43e0d
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Reviewed-on: https://review.gluster.org/17356
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
-----------------------------------------------------------------
With brick multiplexing, there is a high possibility that attach and
detach requests are processed in parallel, so to avoid concurrent
updates to the same graph list, a mutex lock is required.
Credits : Rafi (rkavunga@redhat.com) for the RCA of this issue
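The fix boils down to serialising the graph-list update, along these
lines (a sketch with illustrative names, not the actual patch):
#include <pthread.h>

static pthread_mutex_t graph_list_lock = PTHREAD_MUTEX_INITIALIZER;

/* both the attach and the detach handlers call this around any update
 * of the shared graph list, so concurrent requests cannot corrupt it */
static void
update_graph_list_locked(void (*update)(void *), void *data)
{
    pthread_mutex_lock(&graph_list_lock);
    update(data);                     /* add or remove a brick graph */
    pthread_mutex_unlock(&graph_list_lock);
}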
Change-Id: Ic8e6d1708655c8a143c5a3690968dfa572a32a9c
BUG: 1454865
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: https://review.gluster.org/17374
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
-----------------------------------------------------------------
With brick mux enabled, we'd need to detach a particular brick if the
underlying backend has gone bad. This patch addresses the same.
Change-Id: Icfd469c7407cd2d21d02e4906375ec770afeacc3
BUG: 1450630
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: https://review.gluster.org/17287
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
-----------------------------------------------------------------
With brick multiplexing enabled, if a brick instance is attached to a
process then a PARENT_UP event is needed so that it reaches all the way
down to the posix layer, and a CHILD_UP event is then sent back from
posix to all the children.
Change-Id: Ic341086adb3bbbde0342af518e1b273dd2f669b9
BUG: 1447389
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: https://review.gluster.org/17225
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
-----------------------------------------------------------------
reading the entire rpc message from the wire
Currently the socket is added back for future events after the higher
layers (rpc, xlators etc.) have processed the message. If message
processing involves a significant delay (as in writev replies processed
by Erasure Coding), performance takes a hit. Hence this patch modifies
transport/socket to add the socket back for polling of events
immediately after reading the entire rpc message, but before
notification to higher layers.
credits: Thanks to "Kotresh Hiremath Ravishankar"
<khiremat@redhat.com> for assistance in fixing a regression in
bitrot caused by this patch.
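The reordering, sketched with a plain epoll loop (illustrative; the
real change lives in the socket transport and event layer):
#include <sys/epoll.h>

/* after the full RPC record has been read from 'fd', re-arm the socket
 * for the next event *before* handing the message to the upper layers,
 * so slow reply processing does not keep the socket out of the poll set */
static void
rearm_then_notify(int epfd, int fd, void (*notify)(void *), void *msg)
{
    struct epoll_event ev = { .events = EPOLLIN | EPOLLONESHOT,
                              .data.fd = fd };

    epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &ev);   /* add the socket back first */
    notify(msg);                               /* then process the message */
}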
Change-Id: I04b6b9d0b51a1cfb86ecac3c3d87a5f388cf5800
BUG: 1448364
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-on: https://review.gluster.org/15036
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
-----------------------------------------------------------------
exit()/_exit():
Only the least significant 8 bits, i.e. (err & 255), are available
to the waiting parent process when _exit() or exit() is called with an
integer exit status. If this number is negative, the parent process
does not readily get the value it is actually looking for.
For example: EADDRINUSE is 98, and if the exit status code is set to
-98, the waiting parent process will see 158 (= -98 & 255) as the exit
status.
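The truncation is easy to demonstrate with standard POSIX calls
(nothing Gluster-specific; EADDRINUSE is 98 on Linux):
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int
main(void)
{
    int status = 0;

    if (fork() == 0)
        exit(-EADDRINUSE);            /* -98: looks meaningful, but... */
    wait(&status);
    /* prints 158, i.e. (-98 & 255); the sign and the errno value are lost */
    printf("child exit status: %d\n", WEXITSTATUS(status));
    return 0;
}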
BUG: 1193929
Change-Id: Idc6b0f40c2332e087e584b4b40cbf0d29168c9cd
Signed-off-by: Prashanth Pai <ppai@redhat.com>
Reviewed-on: https://review.gluster.org/16200
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
-----------------------------------------------------------------
It's possible (though unlikely) that we could get a brick-attach
request while we're not ready to process it (ctx->active not set yet).
Add code to guard against this possibility, and return appropriate
error indicators.
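The guard is a simple early-exit check, roughly as below (illustrative
types; the real check is against ctx->active in the attach handler):
#include <stddef.h>

/* stand-ins for glusterfs_ctx_t and its 'active' graph pointer */
struct graph;
struct ctx { struct graph *active; };

static int
handle_attach(struct ctx *ctx)
{
    if (ctx == NULL || ctx->active == NULL)
        return -1;   /* not ready for attach requests; send an error reply */
    /* ... proceed with attaching the brick to the existing graph ... */
    return 0;
}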
Change-Id: Icb3bc52ce749258a3f03cbbbdf4c2320c5c541a0
BUG: 1430860
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: https://review.gluster.org/16883
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
-----------------------------------------------------------------
Change-Id: I32be91e9db361a45454d6340a4c4ac2d0d7efffc
BUG: 1430042
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: https://review.gluster.org/16866
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Prashanth Pai <ppai@redhat.com>
-----------------------------------------------------------------
Avoid logging Success in the event of failure especially when errno has
no meaningful value w.r.t. the failure. In this case the errno is set to
zero when there's indeed a failure at the RPC level.
Change-Id: If2cc81aa1e590023ed22892dacbef7cac213e591
BUG: 1426032
Signed-off-by: Milind Changire <mchangir@redhat.com>
Reviewed-on: https://review.gluster.org/16730
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: N Balachandran <nbalacha@redhat.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
-----------------------------------------------------------------
This commit changes the following:
1. In glusterfs_handle_terminate, send out individual pmap signout
requests to glusterd for every brick.
2. Add another parameter to glusterfs_mgmt_pmap_signout function to
pass the brickname that needs to be removed from the pmap registry.
3. Make sure pmap_registry_search doesn't break out of the loop
iterating over the list of bricks per port if the first brick entry
corresponding to a port is whitespaced out (sketched after this list).
4. Make sure the pmap registry entries are removed for other
daemons like snapd.
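Point 3 amounts to not treating a blanked-out entry as the end of the
per-port brick list; a sketch with a hypothetical layout:
#include <string.h>

#define BRICKS_PER_PORT 32   /* hypothetical bound */

/* return nonzero if 'brick' is still registered on this port; entries
 * whitespaced out on signout are skipped instead of ending the scan */
static int
port_has_brick(char *bricknames[BRICKS_PER_PORT], const char *brick)
{
    int i;

    for (i = 0; i < BRICKS_PER_PORT; i++) {
        if (bricknames[i] == NULL || bricknames[i][0] == '\0' ||
            bricknames[i][0] == ' ')
            continue;            /* previously this ended the search */
        if (strcmp(bricknames[i], brick) == 0)
            return 1;
    }
    return 0;
}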
Change-Id: I69949874435b02699e5708dab811777ccb297174
BUG: 1421590
Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
Reviewed-on: https://review.gluster.org/16689
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Gaurav Yadav <gyadav@redhat.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
-----------------------------------------------------------------
A change introduced to move emancipation after the signin process
caused this regression, where the second signin always returned the
size of the process xdr.
This patch fixes the issue.
Change-Id: Ic924c82abe6932a93abe37df1fd2d1285a77ed0a
BUG: 1386578
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: https://review.gluster.org/15687
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
-----------------------------------------------------------------
This patch adds support for multiple brick translator stacks running
in a single brick server process. This reduces our per-brick memory usage by
approximately 3x, and our appetite for TCP ports even more. It also creates
potential to avoid process/thread thrashing, and to improve QoS by scheduling
more carefully across the bricks, but realizing that potential will require
further work.
Multiplexing is controlled by the "cluster.brick-multiplex" global option. By
default it's off, and bricks are started in separate processes as before. If
multiplexing is enabled, then *compatible* bricks (mostly those with the same
transport options) will be started in the same process.
Change-Id: I45059454e51d6f4cbb29a4953359c09a408695cb
BUG: 1385758
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: https://review.gluster.org/14763
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
-----------------------------------------------------------------
With this, we will be able to trigger statedumps on remote Gluster
clients, mainly targeted at applications using libgfapi.
Design:
The SIGUSR signal is the most common way of taking a statedump in
Gluster, but it cannot be used for libgfapi based processes, as the
process loading the library might have already consumed the SIGUSR
signal. Hence we go the command way.
One has to issue a Gluster command to initiate a statedump on the
libgfapi based client. The command takes the hostname and PID as
arguments. All the glusterds in the cluster check whether they are
connected to the specified hostname, and send an RPC request to all
the connected clients from that hostname (via the mgmt connection).
URL: http://review.gluster.org/16357
Change-Id: Icbe4d2f026b32a2c7d5535e1bfb2cdaaff042e91
BUG: 1169302
Signed-off-by: Poornima G <pgurusid@redhat.com>
[ndevos: minor fixes and split patch in smaller pieces]
Reviewed-on: https://review.gluster.org/9228
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Tested-by: Niels de Vos <ndevos@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Samikshan Bairagya <samikshan@gmail.com>
-----------------------------------------------------------------
Change-Id: Iab86a3c4970e54c22d3170e68708e0ea432a8ea4
BUG: 1401921
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/16043
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
-----------------------------------------------------------------
Problem: When glusterd is down, continuous mgmt_rpc_notify error
messages are logged in the volume mount log every 3 seconds, which
consumes disk space.
Solution: To reduce the frequency of these error messages, use
GF_LOG_OCCASIONALLY.
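The idea behind GF_LOG_OCCASIONALLY is simply to log every Nth
occurrence of a repeating condition instead of every retry; a generic
sketch (the actual libglusterfs macro may differ in detail):
#include <stdio.h>

/* log only one out of every 42 occurrences of a repeating condition */
#define LOG_OCCASIONALLY(counter, msg)                 \
    do {                                               \
        if ((counter)++ % 42 == 0)                     \
            fprintf(stderr, "%s\n", (msg));            \
    } while (0)

static int reconnect_log_count;

static void
on_reconnect_failure(void)
{
    LOG_OCCASIONALLY(reconnect_log_count,
                     "failed to connect to glusterd, retrying");
}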
BUG: 1388877
Change-Id: I6cf24c6ddd9ab380afd058bc0ecd556d664332b1
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Reviewed-on: http://review.gluster.org/15732
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
-----------------------------------------------------------------
http://review.gluster.org/14085 fixes a/the "leak" - via the
generated rpc/xdr headers - of pragmas that mask these warnings.
However 14085 won't pass the smoke test until all the warnings are
fixed.
Change-Id: Ib37642dc8d35dd1065398fc53c97de65869d5681
BUG: 1369124
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
Reviewed-on: http://review.gluster.org/15239
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Prashanth Pai <ppai@redhat.com>
-----------------------------------------------------------------
The bitrot scrubber takes 'hourly/daily/biweekly/monthly'
as the values for 'scrub-frequency'. There is no way
to schedule the scrubbing when the admin wants it.
Ondemand scrubbing brings in the new option 'ondemand'
with which the admin can start scrubbing ondemand.
It starts the scrubbing immediately.
Ondemand scrubbing succeeds only if the scrubber
is in the 'Active (Idle)' state (waiting for its next frequency
cycle to start scrubbing). It is not entertained when
the scrubber is 'Paused' or already running.
Here is the command line syntax.
gluster volume bitrot <vol name> scrub ondemand
Change-Id: I84c28904367eed827a7dae8d6a535c14b28e9f4d
BUG: 1366195
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/15111
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
-----------------------------------------------------------------
Change-Id: Icf5afaee8b7c704aecab7f8a8a1df9f1bc9288ce
BUG: 1360401
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: http://review.gluster.org/15016
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
-----------------------------------------------------------------
We use signin, but not signup. Having both just introduces confusion.
The proc number has been retained to avoid changes to the numbering of
other procs, and the mapping to a name has similarly been retained as a
placeholder, but the code and structure definitions have been removed.
Change-Id: I60f64f3b5d71ba6ed6862b36a38f90a9c8271c9f
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: http://review.gluster.org/14792
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
-----------------------------------------------------------------
Change-Id: I58e1fe7f5edf0abb5732432291ff677e81429b79
BUG: 1333317
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/14253
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
-----------------------------------------------------------------
Problem:
Currently, say we have a 10 node gluster volume, mounted using
Node 1 (N1) as the volfile server and the rest as backup volfile servers
$ mount -t glusterfs -obackup-volfile-servers=<N2>:<N3>:...:<N10> <N1>:/vol /mnt
If N1 goes down we will still be able to access the same mount point,
but the problem is that if we add or remove bricks of the volume
whose volfile server is down (in our case N1), that info will not be
passed to the client, because the connection between glusterfs and
glusterd (of N1) is broken, due to which we cannot store files on the
newly added bricks until N1 comes back.
Solution:
If N1 goes down, iterate through the nodes specified in the
backup-volfile-servers list and try to establish the connection between
glusterfs and glusterd, so that we don't really have to wait
until N1 comes back to store files on bricks that were
successfully added while N1 was down.
Change-Id: I653c9f081a84667630608091bc243ffc3859d5cd
BUG: 1289916
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Reviewed-on: http://review.gluster.org/13002
Tested-by: Prasanna Kumar Kalever <pkalever@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Poornima G <pgurusid@redhat.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
-----------------------------------------------------------------
Problem:
Glusterd does not work using the ipv6 transport. The idea is that with a proper glusterd.vol configuration,
1. glusterd needs to listen on the default port (24007) as an IPv6 TCP listener.
2. Volume creation/deletion/mounting/add-bricks/delete-bricks/peer-probe
need to work using ipv6 addresses.
3. Bricks need to listen on ipv6 addresses.
All of the above functionality is needed to say that glusterd supports the ipv6 transport, and it is currently broken.
Fix:
When the "option transport.address-family inet6" option is present in the glusterd.vol
file, glusterd creates its listeners using ipv6 sockets only, and the same information is
saved inside the brick volfiles used by the glusterfsd brick processes when they start.
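On the listener side this essentially means asking for AF_INET6
sockets when the option is set, along the lines of this plain
getaddrinfo() sketch (not the actual transport code):
#include <netdb.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* create a TCP listener on the glusterd port, honouring
 * "option transport.address-family inet6" by forcing AF_INET6 */
static int
make_listener(int use_inet6)
{
    struct addrinfo hints, *res = NULL;
    int fd = -1;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family   = use_inet6 ? AF_INET6 : AF_INET;
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags    = AI_PASSIVE;

    if (getaddrinfo(NULL, "24007", &hints, &res) != 0)
        return -1;
    fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd >= 0 && (bind(fd, res->ai_addr, res->ai_addrlen) != 0 ||
                    listen(fd, 128) != 0)) {
        close(fd);
        fd = -1;
    }
    freeaddrinfo(res);
    return fd;
}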
Tests Run:
Regression tests using ./run-tests.sh
IPv4: Ran manually till tests/basic/rpm.t .
IPv6: (Need to add the above mentioned config and also add an entry for "hostname ::1" in /etc/hosts)
Started failing at ./tests/basic/glusterd/arbiter-volume-probe.t and ran successfully till here
Unit Tests using Ipv6
peer probe
add-bricks
remove-bricks
create volume
replace-bricks
start volume
stop volume
delete volume
Change-Id: Iebc96e6cce748b5924ce5da17b0114600ec70a6e
BUG: 1117886
Signed-off-by: Nithin D <nithind1988@yahoo.in>
Reviewed-on: http://review.gluster.org/11988
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
-----------------------------------------------------------------
Rebalance destroys the frame immediately after sending a status
notification, so by the time its callback runs the frame is corrupted.
Rebalance crashes when this corrupted frame is accessed. To avoid
this we must destroy the frame only after the callback has completed.
Change-Id: If383017a61f09275256e51c44a1efa28feace87b
BUG: 1300152
Signed-off-by: Sakshi <sabansal@redhat.com>
Reviewed-on: http://review.gluster.org/13262
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
-----------------------------------------------------------------
Problem:
If index heal is launched when some of the bricks are down, glustershd of that
node sends a -1 op_ret to glusterd which eventually propagates it to the CLI.
Also, glusterd sometimes sends an err_str and sometimes not (depending on the
failure happening in the brick-op phase or commit-op phase). So the message that
gets displayed varies in each case:
"Launching heal operation to perform index self heal on volume testvol has been
unsuccessful"
(OR)
"Commit failed on <host>. Please check log file for details."
Fix:
1. Modify afr_xl_op() to return -1 even if index healing of at least
one brick fails.
2. Ignore glusterd's error string in gf_cli_heal_volume_cbk and print a more
meaningful message.
The patch also fixes a bug in glusterfs_handle_translator_op() where if we
encounter an error in notify of one xlator, we break out of the loop instead of
sending the notify to other xlators.
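The loop fix is the usual 'record the error but keep going' pattern,
sketched here with illustrative types (the real loop walks the
translators addressed by the brick-op):
#include <stddef.h>

struct xl { struct xl *next; int (*notify)(struct xl *xl, int op); };

/* notify every xlator in the list; remember that something failed,
 * but do not stop at the first failure */
static int
notify_all(struct xl *head, int op)
{
    int ret = 0;

    for (; head != NULL; head = head->next) {
        if (head->notify(head, op) < 0)
            ret = -1;       /* previously: break, skipping the rest */
    }
    return ret;
}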
Change-Id: I957f6c4b4d0a45453ffd5488e425cab5a3e0acca
BUG: 1302291
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/13303
Reviewed-by: Anuradha Talur <atalur@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
-----------------------------------------------------------------
glusterfsd fails if glusterd is bound to a specific IP address.
This patch helps glusterfsd get the volfile using a Unix domain socket:
glusterfs -s <unix socket path> --volfile-server-transport unix
--volfile-id <volume-name> <mount-point>
The patch checks whether volfile-server-transport is of type "unix";
if it is, it uses rpc_transport_unix_options_build to get the volfile.
Change-Id: I81b881e7ac5a3a4f2ac83c789c385cf547f0d53e
BUG: 1279484
Signed-off-by: Mohamed Ashiq <mliyazud@redhat.com>
Signed-off-by: Humble Devassy Chirammal <hchiramm@redhat.com>
Reviewed-on: http://review.gluster.org/12556
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
-----------------------------------------------------------------
CLI command for bitrot scrub status will be :
gluster volume bitrot <volname> scrub status
Above command will show the statistics of the bitrot scrubber.
Upon execution of this command it will show some common
scrubber tunable values for volume <VOLNAME>, followed by
the scrubber statistics of the individual nodes.
Sample output for a single node:
Volume name : <VOLNAME>
State of scrub: Active
Scrub frequency: biweekly
Bitrot error log location: /var/log/glusterfs/bitd.log
Scrubber error log location: /var/log/glusterfs/scrub.log
=========================================================
Node name:
Number of Scrubbed files:
Number of Unsigned files:
Last completed scrub time:
Duration of last scrub:
Error count:
=========================================================
This is just infrastructure. The list of bad files, last scrub
time and error count values will be taken care of by
http://review.gluster.org/#/c/12503/ and
http://review.gluster.org/#/c/12654/ patches.
Change-Id: I3ed3c7057c9d0c894233f4079a7f185d90c202d1
BUG: 1207627
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/10231
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
-----------------------------------------------------------------
various xlators and other components are invoking system calls
directly instead of using the libglusterfs/syscall.[ch] wrappers.
If the system call wrappers are not used, there should be a comment
in the source explaining why the wrapper isn't used.
Change-Id: I1f47820534c890a00b452fa61f7438eb2b3f667c
BUG: 1267967
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
Reviewed-on: http://review.gluster.org/12276
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
-----------------------------------------------------------------
Change-Id: I9c71ae264665b7bba609c7f86cf42a52a6b47260
BUG: 1269696
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/12311
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
-----------------------------------------------------------------
When new bricks are added in the middle of an on-going
fop like 'rm', the volfile changes without waiting for
the newly added bricks to get a port. Fops are sent to all
bricks and may fail on some with ENOTCONN, as these bricks
may not have a port yet.
This patch ensures that the volfile change happens only
after all the bricks have a port.
Change-Id: I7ed2413475f80d0cc8849fed33036ade8d75a191
BUG: 1233151
Signed-off-by: Sakshi <sabansal@redhat.com>
Reviewed-on: http://review.gluster.org/11342
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Atin Mukherjee <amukherj@redhat.com>
-----------------------------------------------------------------
The @owner argument tells RPC layer the xlator that owns
the connection and to which xlator THIS needs be set during
network notifications like CONNECT and DISCONNECT.
Code paths that originate from the head of a (volume) graph and use
STACK_WIND ensure that the RPC local endpoint has the right xlator saved
in the frame of the call (callback pair). This guarantees that the
callback is executed in the right xlator context.
The client handshake process, which includes fetching brick ports from
glusterd and setting the lk-version on the brick for the session, doesn't
have the correct xlator set in its frames. The problem lies with RPC
notifications. It doesn't have the provision to set THIS with the xlator
that is registered with the corresponding RPC programs. e.g,
RPC_CLNT_CONNECT event received by protocol/client doesn't have THIS set
to its xlator. This implies, call(-callbacks) originating from this
thread don't have the right xlator set too.
The fix would be to save the xlator registered with the RPC connection
during rpc_clnt_new. e.g, protocol/client's xlator would be saved with
the RPC connection that it 'owns'. RPC notifications such as CONNECT,
DISCONNECT, etc inherit THIS from the RPC connection's xlator.
Change-Id: I9dea2c35378c511d800ef58f7fa2ea5552f2c409
BUG: 1235582
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/11436
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
-----------------------------------------------------------------
Instead of including config.h in each file, have it included from the
compiler command line (via the -include option).
When a .c file tests for a certain #define and config.h was not
included, incorrect assumptions were made. With this change, that can
not happen again.
BUG: 1222319
Change-Id: I4f9097b8740b81ecfe8b218d52ca50361f74cb64
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/10808
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
-----------------------------------------------------------------
The command gluster volume status <VOLNAME> should show the status of
the bitrot and scrubber daemons and their pid information.
Along with displaying bitrot and scrubber daemon information in the
gluster volume status command, there should be commands to show their
individual status separately.
The commands to show the individual status of the bitrot and scrubber
daemons are the following:
command to show only bitd daemon information will be
gluster volume status <VOLNAME> bitd
command to show only scrubber daemon information
gluster volume status <VOLNAME> scrub
Change-Id: Id86aae1156c8c599347c98e2a538f294d37376e4
BUG: 1209752
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/10175
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Kaushal M <kaushal@redhat.com>
-----------------------------------------------------------------
For an rdma-only volume, client connection establishment
with the server takes more than three seconds. A tcp,rdma
type volume has 2 ports, one for tcp and one for rdma; the
tcp port is stored under the brick name and the rdma port
is stored as "brickname.rdma" during pmap_signin.
During the handshake, when trying to get the brick port
for rdma clients, since we are not aware of the server
transport type, we append '.rdma' to the brick name.
So for a tcp,rdma volume there will be an entry with
'.rdma', but the lookup fails for an rdma-only volume.
We then try again, this time without appending '.rdma',
using a flag variable need_different_port, and it succeeds,
but the reconnection happens only after 3 seconds.
With this patch, for an rdma-only volume
we append '.rdma' during pmap_signin. So during the
handshake we get the correct port on the first try.
Since we don't need to retry, we can remove the
need_different_port flag variable.
Change-Id: Ie8e3a7f532d4104829dbe995e99b35e95571466c
BUG: 1153569
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/8934
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
-----------------------------------------------------------------
When we try to mount a tcp,rdma volume as rdma
transport using the FUSE protocol, the mount will
hang if the brick is down. When we kill a process,
the signal is received by the glusterfsd process and
it calls pmap_signout only with the port listening on tcp.
In the case of tcp,rdma there are two ports,
and the port which is listening for rdma is never
signed out.
So the mount process tries to connect to a port
which is not open and keeps trying to connect.
This patch calls pmap_signout for the rdma port as well,
so when the mount tries to get the brick port, it will fail.
Change-Id: I23676f65f96eb90b69b76478f7a21412a6aba70f
BUG: 1143886
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/8762
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
-----------------------------------------------------------------
While accessing the procedures of a given RPC program in
rpcsvc_get_program_vector_sizer(), boundary conditions were not
checked, which could cause a buffer overflow and subsequently a SEGV.
Make sure rpcsvc_actor_t arrays have numactors number of actors.
FIX:
Validate the RPC procedure number before fetching the actor.
Special Thanks to: Murray Ketchion, Grant Byers
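The fix is an explicit bounds check before indexing the actor table,
e.g. (illustrative structures, not the actual rpcsvc types):
#include <stddef.h>

struct actor { int (*fn)(void *req); };
struct program { struct actor *actors; unsigned int numactors; };

/* validate the wire-supplied procedure number before using it as an
 * index, instead of reading past the end of the actor array */
static struct actor *
get_actor(struct program *prog, unsigned int procnum)
{
    if (prog == NULL || prog->actors == NULL || procnum >= prog->numactors)
        return NULL;
    return &prog->actors[procnum];
}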
Change-Id: I8b5abd406d47fab8fca65b3beb73cdfe8cd85b72
BUG: 1096020
Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
Reviewed-on: http://review.gluster.org/7726
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
-----------------------------------------------------------------
When a brick is started, and the glusterfsd process requests
for volfile, the brick_name is sent in the req dict. In
glusterd, after fetching the spec the brick_name is looked
up in the missed_snap_list, and any missing snap creates on
the same brick are performed. After this, the glusterd
responds back with the specfile.
Also collate brick data from the nodes hosting the bricks
during restore. In case the data is absent, the local node's
data is used. This is needed to ensure that, during a restore,
we collect the information created when a missed snap create
is performed.
Change-Id: I47cefdeba96f2702be810965734cf0fac61d3d2d
BUG: 1061685
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/7551
Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
-----------------------------------------------------------------
Changelog barriers unlink, rename, rmdir fops on barrier 'on'
notification from glusterfsd mgmt layer and unbarriers the
same on barrier 'off' notification during snapshot.
Please see the following link for more details.
http://www.gluster.org/community/documentation/index.php/Changelog_Design_changes_for_snapshot
Signed-off-by: Kotresh H R <khiremat@redhat.com>
Change-Id: Iea9c62fafc86242f9404e03679b1941aa9c88c9a
Signed-off-by: Kotresh H R <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/7415
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Varun Shastry <vshastry@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
-----------------------------------------------------------------
This patch introduces a new 'barrier' brick-op which will be used to
activate/deactivate barriering on the bricks. This includes
barriering in the barrier xlator and in the changelog xlator. All the
required code has been added, including a bricks select function, a
payload builder and a brick-op handler.
Change-Id: I91d9d77f691c2e89823f7dc4e84900ec40dc4dd2
BUG: 1060002
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/6943
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
-----------------------------------------------------------------
This is the initial patch for the Snapshot feature. Current patch
includes following features:
* Snapshot create
* Snapshot delete
* Snapshot restore
* Snapshot list
* Snapshot info
* Snapshot status
* Snapshot config
Change-Id: I2f46920c0d61c515f6a60e0f8b46fff886d9f6a9
BUG: 1061685
Signed-off-by: shishir gowda <sgowda@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Signed-off-by: Vijaikumar M <vmallika@redhat.com>
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
Signed-off-by: Joseph Fernandes <josferna@redhat.com>
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/7128
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
-----------------------------------------------------------------
BUG: 789278
CID: 1124805
Change-Id: I5926a4e59790f142d65f034726c9c369c2ad7808
Signed-off-by: Jose A. Rivera <jarrpa@redhat.com>
Reviewed-on: http://review.gluster.org/6909
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
-----------------------------------------------------------------
1. Check the frame before calling STACK_DESTROY (frame->root).
Signed-off-by: surabhi <sbhaloth@redhat.com>
Change-Id: I21d27a8b4e556c00cd123afe8512e010a1a1f80d
BUG: 789278
Signed-off-by: surabhi <sbhaloth@redhat.com>
Reviewed-on: http://review.gluster.org/6749
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
-----------------------------------------------------------------
Issue:
1. In "unlink (export_path)" "export_path" might contain an arbitrary value left from earlier
computations.
2. In "(msg[0] != '\0')" msg might contain an arbitrary value
Change-Id: Icca8f557fd6b5e046dff1d5a84a72061975868d0
BUG: 789278
Signed-off-by: Lalatendu Mohanty <lmohanty@redhat.com>
Reviewed-on: http://review.gluster.org/6701
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
-----------------------------------------------------------------
E.g. In glusterfs_volfile_fetch(), req.xdata.xdata_val is allocated
in dict_allocate_and_serialize() but not freed after mgmt_submit_request().
A survey of dict_allocate_and_serialize/_submit_request in
glusterfsd-mgmt.c shows no consistent pattern of freeing the xdata_val
and also the dict, which is a little disturbing. (Yes, clearly not
every place this occurs needs to be freed the same way.)
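The leak pattern and its fix, reduced to the essentials (a sketch with
stand-ins for dict_allocate_and_serialize() and mgmt_submit_request()):
#include <stdlib.h>
#include <string.h>

struct request { char *xdata_val; size_t xdata_len; };

/* stand-in: allocates req->xdata_val, as the dict serializer does */
static int serialize(struct request *req)
{
    req->xdata_val = strdup("serialized-dict");
    req->xdata_len = req->xdata_val ? strlen(req->xdata_val) : 0;
    return req->xdata_val ? 0 : -1;
}

/* stand-in: pretend the buffer was copied onto the wire */
static int submit(const struct request *req)
{
    (void)req;
    return 0;
}

static int
fetch_volfile(void)
{
    struct request req = { NULL, 0 };
    int ret;

    if (serialize(&req) != 0)
        return -1;
    ret = submit(&req);
    free(req.xdata_val);   /* the serialized buffer is no longer needed */
    return ret;
}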
Change-Id: Ic306d60b157e97c822a562bfdf21896e40db632a
BUG: 1036102
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
Reviewed-on: http://review.gluster.org/6363
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
-----------------------------------------------------------------
re-work.
Following are the cli commands that are new/re-worked:
======================================================
volume quota <VOLNAME> {enable|disable|list [<path> ...]|remove <path>| default-soft-limit <percent>} |
volume quota <VOLNAME> {limit-usage <path> <size> [<percent>]} |
volume quota <VOLNAME> {alert-time|soft-timeout|hard-timeout} {<time>}
volume status [all | <VOLNAME> [nfs|shd|<BRICK>|quotad]] [detail|clients|mem|inode|fd|callpool]
volume statedump <VOLNAME> [nfs|quotad] [all|mem|iobuf|callpool|priv|fd|inode|history]
glusterd changes:
=================
* Quota limits are now set as extended attributes by glusterd from
the aux mount created by the cli.
* The gfids of the directories on which quota limits are set
for a given volume are stored in
/var/lib/glusterd/vols/<volname>/quota.conf file in binary format,
and whose cksum and version is stored in
/var/lib/glusterd/vols/<volname>/quota.cksum.
Original-author: Krutika Dhananjay <kdhananj@redhat.com>
Original-author: Krishnan Parthasarathi <kparthas@redhat.com>
BUG: 969461
Change-Id: If32bba36c67f9c2a30417af9c6389045b2b7c13b
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-on: http://review.gluster.org/6003
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
-----------------------------------------------------------------
Remove bd_map xlator and CLI related changes.
Change-Id: If7086205df1907127c1a1fa4ba603f1c48421d09
BUG: 1028672
Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
Reviewed-on: http://review.gluster.org/5747
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
-----------------------------------------------------------------
* The new and the old graphs which are constructed whenever there is
a volfile change (either a reconfigure of the existing graph or
creation of a new graph) for comparison should be freed. Otherwise
frequent graph changes will lead to a huge memory leak.
Change-Id: I4faddb1aa9393b34cd2de6732e537a60f600026a
BUG: 948178
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/5388
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
-----------------------------------------------------------------
config service availability for clients.
Backupvolfile server as it stands is slow and prone to errors
with mount script and its combination with RRDNS. Instead in
theory it should use all the available nodes in 'trusted pool'
by default (Right now we don't have a mechanism in place for
this)
Nevertheless this patch provides a scenario where a list of
volfile servers can be provided on the command line as shown below
-----------------------------------------------------------------
$ glusterfs -s server1 .. -s serverN --volfile-id=<volname> \
<mount_point>
-----------------------------------------------------------------
OR
-----------------------------------------------------------------
$ mount -t glusterfs -obackup-volfile-servers=<server2>: \
<server3>:...:<serverN> <server1>:/<volname> <mount_point>
-----------------------------------------------------------------
Here ':' is used as a separator for mount script parsing
Now these will be remembered and recursively attempted for
fetching vol-file until exhausted. This would ensure that the
clients get 'volume' configs in a consistent manner avoiding the
need to poll through RRDNS.
Change-Id: If808bb8a52e6034c61574cdae3ac4e7e83513a40
BUG: 986429
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-on: http://review.gluster.org/5400
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
-----------------------------------------------------------------
The callback structures in protocol/client and in glusterfsd and
gfapi used the same name for the actor table - gluster_cbk_actors.
CBKs are required only for the management connection, and the
actors of protocol/client are NOP functions. This supposed-to-be
NOP function dispatch table actually ends up pointing to
the actor table of glusterfsd or gfapi.
These functions, even though set wrongly, are not even expected
to be called through the protocol/client callback path. Glusterd,
however, sends the FETCHSPEC (and other) notify callbacks to *all*
connected clients unconditionally, and there is a small period
of time when protocol/client is connected to glusterd for the
PORTMAP query. If the FETCHSPEC callback notify is issued in
this window of time, we end up calling the wrong actor on the
client side, resulting in a crash.
Change-Id: I605ff7df64c7faf4607369bbf275aedec28e1778
BUG: 1004091
Signed-off-by: Anand Avati <avati@redhat.com>
Reviewed-on: http://review.gluster.org/5767
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
|