The Coverity ignore directive does not work if the comment is
split across lines (or has an empty line at the end).
This can be seen in this report:
https://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2018-08-06-b982e09f/html/1/384glusterfsd-mgmt.c.html#error
In other places the same pattern has kept Coverity from
flagging the same call, but not here.
Updates: bz#789278
Change-Id: Ic35ff0fc91d0a42904630728ef7c18215aa277f3
Signed-off-by: ShyamsundarR <srangana@redhat.com>
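
A minimal illustration of the single-line annotation pattern Coverity expects,
placed immediately above the flagged call; the checker tag and the call shown
here are assumptions for illustration, not taken from the patch itself:

    #include <stdlib.h>

    int open_secure_tmp(char *tmpl)
    {
        /* coverity[secure_temp] mkstemp uses a private template and mode 0600 */
        int fd = mkstemp(tmpl);
        return fd;
    }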

Two pending SECURE_TEMP issues still exist in the Coverity
reports; this patch fixes them.
In both instances (where the functions actually seem to be
duplicates of each other) a FILE * was needed rather than an
fd. The same pattern was applied in both places as in other
parts of the code: mkstemp is used and a FILE * is later
created from the resulting fd.
Coverity report: https://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2018-07-30-4d3c62e7/html/
Issues numbered: 382, 383 (named SECURE_TEMP)
Further, tmpfile was added to the blacklist in symbol-check.sh
so that future code changes do not start using it.
Also corrected the shellcheck errors in symbol-check.sh that
surfaced while updating it.
Updates: bz#789278
Change-Id: I1d572a16ca5b5df2f597aeaa5f454fad34c8296e
Signed-off-by: ShyamsundarR <srangana@redhat.com>
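
A minimal sketch of the mkstemp()-then-fdopen() pattern referred to above;
the template path is an assumption for illustration:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    FILE *open_tmp_stream(void)
    {
        char tmpl[] = "/tmp/glusterfs-XXXXXX";
        int fd = mkstemp(tmpl);       /* created with O_EXCL and mode 0600 */
        if (fd < 0)
            return NULL;
        FILE *fp = fdopen(fd, "w+");  /* wrap the fd where a FILE * is needed */
        if (!fp)
            close(fd);
        return fp;
    }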

Newer FreeBSD versions (noticed with 10.3-RELEASE) provide an event.h
file that on occasion gets included instead of the libglusterfs one.
When this happens, 'struct event_pool' will not be defined and building
will fail with errors like:
autoscale-threads.c:18:55: error: incomplete definition of type 'struct event_pool'
int thread_count = pool->eventthreadcount;
~~~~^
autoscale-threads.c:17:16: note: forward declaration of 'struct event_pool'
struct event_pool *pool = ctx->event_pool;
^
This problem is caused by 'pkg-config --cflags uuid' that adds
/usr/local/include to the GF_CPPFLAGS. The use of libuuid is preferred
so that the contrib/uuid/ directory can be removed.
By renaming event.h to gf-event.h there is no conflict between the
different event.h files anymore and compiling on FreeBSD works without
issues.
Change-Id: Ie69f6b8a4f8f8e9630d39a86693eb74674f0f763
Updates: bz#1607319
Signed-off-by: Niels de Vos <ndevos@redhat.com>

Problem: In a brick mux scenario glusterd is sometimes unable
to start/attach a brick, yet gluster v status shows the brick
as already running.
Solution:
1) To make sure the brick is running, check for the brick_path in
   /proc/<pid>/fd; if the brick path is held open by the brick
   process, the brick stack has come up, otherwise it has not.
2) Before starting/attaching a brick, check whether the brick is
   mounted.
3) While printing volume status, check whether the brick is held
   open by any brick process.
Test: The following procedure was used to test this:
1) Set up a brick mux environment on a VM.
2) Put a breakpoint in gdb in posix_health_check_thread_proc, at
   the point where the GF_EVENT_CHILD_DOWN event is notified.
3) Forcefully unmount any one brick path.
4) Check gluster v status; it shows N/A for the brick.
5) Try to start the volume with the force option; glusterd throws
   the message "No device available for mount brick".
6) Mount the brick_root path.
7) Try to start the volume with the force option again.
8) The down brick is started successfully.
Change-Id: I91898dad21d082ebddd12aa0d1f7f0ed012bdf69
fixes: bz#1595320
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
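
A rough sketch (not the patch's implementation) of how a caller can check
/proc/<pid>/fd to see whether a brick process holds the brick path open;
the function name is illustrative:

    #include <dirent.h>
    #include <limits.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Return 1 if any fd of process `pid` resolves to a path under brick_path. */
    int brick_path_is_open(pid_t pid, const char *brick_path)
    {
        char fd_dir[64], link_path[PATH_MAX], target[PATH_MAX];
        struct dirent *entry;
        int found = 0;

        snprintf(fd_dir, sizeof(fd_dir), "/proc/%d/fd", (int)pid);
        DIR *dir = opendir(fd_dir);
        if (!dir)
            return 0;
        while ((entry = readdir(dir)) != NULL) {
            ssize_t len;
            snprintf(link_path, sizeof(link_path), "%s/%s", fd_dir, entry->d_name);
            len = readlink(link_path, target, sizeof(target) - 1);
            if (len < 0)
                continue;
            target[len] = '\0';
            if (strncmp(target, brick_path, strlen(brick_path)) == 0) {
                found = 1;
                break;
            }
        }
        closedir(dir);
        return found;
    }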

Please review; it's not always just the comments that were fixed.
Of course, I had to revert all the calls to creat() that had been
changed to create() ...
Only compile-tested!
Change-Id: I7d02e82d9766e272a7fd9cc68e51901d69e5aab5
updates: bz#1193929
Signed-off-by: Yaniv Kaul <ykaul@redhat.com>

Problem: In a race condition, active->first, which is supposed to be
filled in, is NULL, and dereferencing it crashes.
Backtrace:
Core was generated by `/usr/sbin/glusterfsd -s bxts470192.eu.rabonet.com --volfile-id prod_xvavol.bxts'.
Program terminated with signal 11, Segmentation fault.
1029 any = active->first;
(gdb) bt
Change-Id: Ia6291865319a9456b8b01a5251be2679c4985b7c
fixes: bz#1600451
Signed-off-by: Hari Gowtham <hgowtham@redhat.com>

Problem:
If glustershd gets restarted by glusterd due to a node reboot, a volume
start force, or anything else that changes the shd graph (add/remove
brick), and index heal is launched via the CLI, there is a chance that
shd receives this IPC before the graph is fully active. When it then
accesses glusterfsd_ctx->active, it crashes.
Fix:
Since glusterd does not really wait for the daemons it spawned to be
fully initialized and can send the request as soon as rpc initialization has
succeeded, we just handle it at shd. If glusterfs_graph_activate() is
not yet done in shd but glusterd sends GD_OP_HEAL_VOLUME to shd,
we fail the request.
Change-Id: If6cc07bc5455c4ba03458a36c28b63664496b17d
fixes: bz#1596513
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
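
A minimal sketch of the kind of guard the fix describes; the handler name
and return value are illustrative assumptions, not the actual glustershd code:

    #include "glusterfs.h"   /* assumes the libglusterfs headers (glusterfs_ctx_t) */

    static int
    handle_heal_request(glusterfs_ctx_t *ctx)
    {
        if (!ctx || !ctx->active) {
            /* glusterfs_graph_activate() has not finished: fail the request */
            return -1;
        }
        /* ... proceed with the index heal ... */
        return 0;
    }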

The state management of "connected" in rpc is ad hoc as far as
responsibility goes. Note that there is nothing wrong with the
functionality itself. The rpc layer manages this state in the
disconnect codepath and has exposed an api for consumers to manage
it. Note that the rpc layer never sets "connected" to true by
itself, which forces consumers to use this api to get a working
rpc connection. The situation is best captured by a comment in the
code from Jeff Darcy in glusterfsd/src/gf-attach.c:
-/*
- * In a sane world, the generic RPC layer would be capable of tracking
- * connection status by itself, with no help from us. It might invoke our
- * callback if we had registered one, but only to provide information. Sadly,
- * we don't live in that world. Instead, the callback *must* exist and *must*
- * call rpc_clnt_{set,unset}_connected, because that's the only way those
- * fields get set (with RPC both above and below us on the stack). If we don't
- * do that, then rpc_clnt_submit doesn't think we're connected even when we
- * are. It calls the socket code to reconnect, but the socket code tracks this
- * stuff in a sane way so it knows we're connected and returns EINPROGRESS.
- * Then we're stuck, connected but unable to use the connection. To make it
- * work, we define and register this trivial callback.
- */
Also, consumers of rpc learn about the state of the connection only
through the notifications sent by rpc-clnt. So consumers have no
extra information with which to manage the state, and letting them
manage it is counter-intuitive. This patch cleans that up and
instead moves the responsibility for managing this state into the
rpc layer itself.
Change-Id: I31e641a60795fc480ca753917f4b2579f1e05094
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
Fixes: bz#1585585
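
A simplified sketch of the direction the patch takes: the rpc-clnt notify
path updates conn->connected itself instead of relying on consumers calling
rpc_clnt_{set,unset}_connected(). The names follow rpc-clnt, but the body is
illustrative, not the actual change:

    #include <pthread.h>
    #include "rpc-clnt.h"   /* assumes the libgfrpc headers */

    static int
    rpc_clnt_handle_transport_event(struct rpc_clnt_connection *conn, int event)
    {
        pthread_mutex_lock(&conn->lock);
        {
            if (event == RPC_TRANSPORT_CONNECT)
                conn->connected = 1;        /* set by the rpc layer itself */
            else if (event == RPC_TRANSPORT_DISCONNECT)
                conn->connected = 0;
        }
        pthread_mutex_unlock(&conn->lock);
        /* consumers are still notified, but no longer flip the flag themselves */
        return 0;
    }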

Earlier, glusterfs never assumed that someone would start it with the
right arguments; brick processes were expected to be spawned by a
management layer, and glusterfs just assumed its role based on the
volfile. Other than the volfile, no other arguments should technically
be mandatory for glusterfs to work. With this patch, that assumption
holds true.
Updates: github issue #352
A note on why this particular issue matters for this basic sanity:
as per the design of thin-arbiter/tie-breaker, it can be started
independently on any machine, without needing glusterd. So, similar
to 'glusterd', we should be able to spawn a process with any translator
without options, a volume id, etc.
fixes: bz#1569399
Change-Id: I5c0650fe0bfde35ad94ccba60e63f6cdcd1ae5ff
Signed-off-by: Amar Tumballi <amarts@redhat.com>

Problem: There's a race between the glusterfs_handle_terminate()
response sent to glusterd from the last brick of the process and the
socket disconnect event that occurs after the brick process is
killed.
Solution: When it is the last brick for the brick process, instead of
sending GLUSTERD_BRICK_TERMINATE to the brick process, glusterd will
kill the process (the same as we do in the non-brick-multiplexing case).
A test case is added for https://bugzilla.redhat.com/show_bug.cgi?id=1549996
Change-Id: If94958cd7649ea48d09d6af7803a0f9437a85503
fixes: bz#1545048
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>

- Added the kernel-writeback-cache command line and xlator
  option for requesting utilisation of the kernel's writeback
  cache in FUSE_INIT (see [1]).
- Added the attr-times-granularity command line and xlator
  option via which the granularity of the {a,m,c}time in
  stat (attr) data that we support can be indicated to the
  kernel. This is a means to avoid divergence of the attr
  times between kernel and userspace that could occur with
  writeback-cache, while still maintaining the maximum time
  precision the FUSE server is capable of (see [2]).
- Handled the FATTR_CTIME flag in FUSE_SETATTR, which
  indicates the presence of ctime in the setattr payload.
  Currently we cannot associate arbitrary ctimes with files
  on the backend, so we just touch them to update their
  ctimes to the current time. Having ctimes in the setattr
  payload is also a side effect of the writeback cache
  (see [3] and [4]).
[1]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=4d99ff8,
"fuse: Turn writeback cache on"
[2]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=e27c9d3,
"fuse: fuse: add time_gran to INIT_OUT"
[3]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=1e18bda,
"fuse: add .write_inode"
[4]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=ab9e13f,
"fuse: allow ctime flushing to userspace"
Updates: #435
Change-Id: Id174c8e0c815c4456c35f8c53e41a6a507d91855
Signed-off-by: Csaba Henk <csaba@redhat.com>
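
As a rough illustration of the kernel interface involved (not the fuse
xlator code itself), a FUSE server opts into the writeback cache and
advertises its time granularity in the FUSE_INIT reply; the constants come
from <linux/fuse.h> and the helper name is assumed:

    #include <linux/fuse.h>
    #include <string.h>

    void fill_init_out(const struct fuse_init_in *in, struct fuse_init_out *out)
    {
        memset(out, 0, sizeof(*out));
        out->major = FUSE_KERNEL_VERSION;
        out->minor = FUSE_KERNEL_MINOR_VERSION;
        if (in->flags & FUSE_WRITEBACK_CACHE)
            out->flags |= FUSE_WRITEBACK_CACHE;  /* opt in only if offered */
        out->time_gran = 1;                      /* nanosecond attr time granularity */
    }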

In glusterfs_handle_terminate, all bricks getting detached need to
initiate a pmap_signout.
Change-Id: Iacbd6fcd49215fe6a5210df7dfed1260fde9179a
Fixes: bz#1570011
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>

Problem: The glusterd2 build fails due to undefined symbols
(xlator_mem_cleanup, glusterfsd_ctx) in server.so.
Solution: To resolve this, the following two changes were made:
1) Move the xlator_mem_cleanup code from glusterfsd-mgmt.c
   to xlator.c so that it is part of libglusterfs.so.
2) Replace glusterfsd_ctx with this->ctx, because the symbol
   glusterfsd_ctx is not part of server.so.
BUG: 1544090
Change-Id: Ie5e6fba9ed458931d08eb0948d450aa962424ae5
fixes: bz#1544090
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>

glusterd2 needs the following options, some of which are provided by the
gluster CLI today:
--print-xlatordir
--print-statedumpdir
--print-logdir
However, the CLI package need not be present on the machine running
glusterd2. This change adds the above CLI options to glusterfsd binary
which glusterd2 depends on.
Reverts 9a1ae47c8d60836ae0628a04a153f28c1085c0e8
Related changes:
https://review.gluster.org/#/c/19882/
https://github.com/gluster/glusterd2/pull/663
Updates: bz#1193929
Change-Id: I18c123b0d3350d2bd4f2400783e3b94e402a4e29
Signed-off-by: Prashanth Pai <ppai@redhat.com>

Problem: Sometimes the brick process crashes while stopping a brick
when brick mux is enabled.
Solution: The brick process was crashing because the rpc connection
was not cleaned up properly while brick mux is enabled. With this
patch, after sending the GF_EVENT_CLEANUP notification to the xlator
(server), we wait for all rpc client connections of that specific
xlator to be destroyed. Once the rpc connections for all clients
associated with that brick are destroyed in server_rpc_notify,
xlator_mem_cleanup is called for the brick xlator as well as all of
its child xlators. To avoid races at cleanup time, two new flags are
introduced in each xlator: cleanup_starting and call_cleanup.
BUG: 1544090
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Note: All test cases were run in a separate build (https://review.gluster.org/#/c/19700/)
with the same patch and with brick mux forcibly enabled; all test cases
passed.
Change-Id: Ic4ab9c128df282d146cf1135640281fcb31997bf
updates: bz#1544090
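
A rough sketch of how a pair of flags like cleanup_starting/call_cleanup can
serialize cleanup; the field and function names mirror the commit text, but
the logic shown is only illustrative:

    #include "xlator.h"   /* assumes the libglusterfs headers */

    /* Only the first caller that flips cleanup_starting proceeds to clean up. */
    void maybe_cleanup_xlator(xlator_t *this)
    {
        if (!this)
            return;
        if (__sync_lock_test_and_set(&this->cleanup_starting, 1))
            return;                   /* someone else already started cleanup */
        this->call_cleanup = 1;       /* tell in-flight code paths to back off */
        xlator_mem_cleanup(this);     /* frees the xlator and its children */
    }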

Problem:
The values for inode/fd were populated from the ctx received
from the server xlator.
Without brickmux, every brick of a volume belonged to its own
brick process, so searching the server xlator and populating
from it worked.
With brickmux, a number of bricks can be confined to a single
process. These bricks can be from different volumes too (if
we use the max-bricks-per-process option).
If they are from different volumes, using the server xlator
to populate the status causes problems.
Fix:
Use the brick to validate and populate the inode/fd status.
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Change-Id: I2543fa5397ea095f8338b518460037bba3dfdbfd
fixes: bz#1566067

xlator_notify does not pass along the extra (variadic) arguments that
come into the function, so the XLATOR_NOTIFY macro should be used
instead to pass those extra arguments through to the notify function.
BUG: 1567881
fixes: bz#1567881
Change-Id: Ic15b6c446638cbacf3149693147a754219037c47
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
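
For background, a generic illustration (not the actual gluster macro or its
signature) of why a plain function cannot forward its variadic arguments
while a macro forwards them verbatim; all names here are made up:

    #include <stdio.h>

    static int notify_impl(int event, void *data, ...)
    {
        (void)data;
        printf("event %d\n", event);
        return 0;
    }

    /* A wrapper function has no portable way to re-forward its "..."
     * arguments, so anything after `data` is silently dropped: */
    static int notify_wrapper(int event, void *data, ...)
    {
        return notify_impl(event, data);
    }

    /* A wrapper macro pastes the extra arguments through unchanged: */
    #define NOTIFY_WRAPPER(event, data, ...) notify_impl(event, data, ##__VA_ARGS__)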

Usage: Use 'reader-thread-count=<NUM>' as a command-line option to
set the thread count at the time of mounting the volume.
The next task is to make these threads auto-scale based on the load,
instead of having the user remount the volume every time to change
the thread count.
Updates #412
Change-Id: I94aa1505e5ae6a133683d473e0e4e0edd139b76b
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>

This reverts commit a60fc2ddc03134fb23c5ed5c0bcb195e1649416b.
This commit was causing multiple tests to time out when brick
multiplexing is enabled. With further debugging, it's found that even
though the volume stop transaction is converted into mgmt_v3 to allow
the remote nodes to follow the synctask framework to process the command,
there are other callers of glusterd_brick_stop () which are not synctask
based.
Change-Id: I7aee687abc6bfeaa70c7447031f55ed4ccd64693
updates: bz#1545048

Problem: There's a race between the last glusterfs_handle_terminate()
response sent to glusterd and the kill that happens immediately if the
terminated brick is the last brick.
Solution: When it is the last brick for the brick process, instead of
glusterfsd killing itself, glusterd will kill the process in the brick
multiplexing case. The gf_attach utility is also changed accordingly.
Change-Id: I386c19ca592536daa71294a13d9fc89a26d7e8c0
fixes: bz#1545048
BUG: 1545048
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>

So far the --direct-io-mode option has been presented
as being Boolean valued. That is, however, not exact,
as a third behavior is chosen if the option is not
specified.
We now accept the "auto" value as an explicit choice
for the default heuristics, and indicate in the
descriptions of the option (which occur in commandline
help and in the glusterfs / mount.glusterfs man pages)
that auto is the default.
The default heuristics was briefly described in the
commandline help. We are getting rid of that, because:
- it's not the right place to provide such details;
- there is no guarantee of keeping the current heuristics
so it might go out of sync with reality;
- that is already the case to some degree, because the
description did not take into account that the default
heuristics varies between platforms (on Mac, it's just
"off"), and that xlators can also prescribe direct I/O
for the file of their choice (see change
I3fe3312cd96baa4eecfe1247ab7255b4f455f049).
Change-Id: Ia83479c0c67fe66b7fc2e0e8db5b7792d9f44b28
Signed-off-by: Csaba Henk <csaba@redhat.com>

Problem: TLS verification fails when using an intermediate CA
if mgmt SSL is enabled.
Solution: There are two main causes of the TLS verification failure:
1) the ssl api to set cert_depth is not called;
2) the current code does not allow setting the certificate depth
   while mgmt SSL is enabled.
After applying this patch, to set the certificate depth the user
needs to set the parameter option transport.socket.ssl-cert-depth <depth>
in /var/lib/glusterd/secure-access instead of in
/etc/glusterfs/glusterd.vol. At the time secure_mgmt is set in ctx,
we check the value of cert-depth and save it in ctx. If the user does
not provide any value for cert-depth, the default value of 1 is used.
BUG: 1555154
Change-Id: I89e9a9e1026e37efb5c20f9ec62b1989ef644f35
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
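
For reference, the OpenSSL call involved in honouring such a depth setting
looks roughly like this (a sketch; the surrounding option handling is
illustrative, not the socket transport code):

    #include <openssl/ssl.h>

    void apply_cert_depth(SSL_CTX *ssl_ctx, int configured_depth)
    {
        int depth = (configured_depth > 0) ? configured_depth : 1;  /* default 1 */
        SSL_CTX_set_verify_depth(ssl_ctx, depth);
    }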

We have the following undefined symbol errors from protocol/server.so:
glusterfs_mgmt_pmap_signout
glusterfs_autoscale_threads
See https://review.gluster.org/19225 (bz#1532238)
and https://review.gluster.org/19657 (bz#1550895)
(why are there two different bzs for the same bug?)
IMO this is a cleaner solution, i.e. moving the above two functions
to libgfrpc (.../rpc/rpc-lib/...).
I would also, for (foolish) consistency's sake, like to see
glusterfs_mgmt_pmap_signin() moved from glusterfsd to libgfrpc as
well.
This works on f28/rawhide, with its new, more restrictive run-time
link semantics. The smoke and regression tests on earlier fedora and
centos will confirm that it works on those platforms too.
Change-Id: I9cfbd1cc15e7ebd9fc31b56ac791287fa2c584de
BUG: 1550895
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>

Problem: When stopping the volume while brick multiplex is
enabled, memory is not cleaned up in all server-side xlators.
Solution: To clean up memory for all server-side xlators, call fini
in glusterfs_handle_terminate after sending the GF_EVENT_CLEANUP
notification to the top xlator.
BUG: 1544090
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Note: All test cases were run in a separate build (https://review.gluster.org/19574)
with the same patch and with brick mux forcibly enabled; all test cases
passed.
Change-Id: Ia10dc7f2605aa50f2b90b3fe4eb380ba9299e2fc

Scale rpcsvc_request_handler threads to match the scaling of event
handler threads.
Please refer to https://bugzilla.redhat.com/show_bug.cgi?id=1467614#c51
for a discussion about why we need multi-threaded rpcsvc request
handlers.
Change-Id: Ib6838fb8b928e15602a3d36fd66b7ba08999430b
Signed-off-by: Milind Changire <mchangir@redhat.com>

There still remain some code paths where cleanup is required while
brick mux is on. I will upload a new patch after resolving all of these
code paths.
This reverts commit b313d97faa766443a7f8128b6e19f3d2f1b267dd.
BUG: 1544090
Change-Id: I26ef1d29061092bd9a409c8933d5488e968ed90e
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>

With Gluster 4.0 we will not provide the server components for EL6 and
older. At some point Gluster 4.x will get GlusterD2, which requires
Golang tools in the distribution. EL6 does not contain these at the
moment.
With this change, it is possible to `./configure --without-server`,
which prevents building glusterd and the xlators for the bricks. RPM
builds can pass `--without server`, and the glusterfs-server
sub-package will not be created.
Change-Id: I97f5ccf9f2c76e60d9af83915fc59fae57ad6d25
BUG: 1074947
Signed-off-by: Niels de Vos <ndevos@redhat.com>

Clients will request a list of volfile servers from glusterd2 by
setting an (optional) flag in the GETSPEC RPC call. glusterd2 will
check for the presence of this flag and accordingly return a list of
glusterd2 servers in the GETSPEC RPC reply. Currently, this list
contains only servers which have bricks belonging to the volume.
See:
https://github.com/gluster/glusterd2/issues/382
https://github.com/gluster/glusterfs/issues/351
Updates #351
Change-Id: I0eee3d0bf25a87627e562380ef73063926a16b81
Signed-off-by: Prashanth Pai <ppai@redhat.com>

Problem: When stopping the volume while brick multiplex is
enabled, memory is not cleaned up in all server-side xlators.
Solution: To clean up memory for all server-side xlators, call fini
in glusterfs_handle_terminate after sending the GF_EVENT_CLEANUP
notification to the top xlator.
BUG: 1544090
Change-Id: Ifa1525e25b697371276158705026b421b4f81140
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>

This reverts commit 56e5fdae74845dfec0ff7ad0c8fee77695d36ad5.
Change-Id: Ia62cee5440bbe8e23f5da9cff692d792091d544a
Signed-off-by: Milind Changire <mchangir@redhat.com>

This patch creates a new way of defining message IDs that is easier
and less error-prone, because it doesn't require so many manual changes
each time a new component is defined or a new message is created.
Change-Id: I71ba8af9ac068f5add7e74f316a2478bc991c67b
Signed-off-by: Xavier Hernandez <jahernan@redhat.com>

The patch attempts to use the epoll infrastructure for handling SSL
connections as well, instead of the socket_poller() thread function.
This essentially makes the priv->own_thread flag redundant.
SSL_connect()/SSL_accept() is now non-blocking, which has done away
with the localised poll() in ssl_do(); ssl_do() has been updated
appropriately.
own_thread, and consequently the socket_poller() thread for SSL
processing, is now deprecated.
Added a timeout to test whether the self-heal daemon is up and
running, as per Ravi's suggestion.
Change-Id: If2b5d7b4fd19e321cb289e08d49a718d2161aafe
Signed-off-by: Milind Changire <mchangir@redhat.com>
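
A generic sketch of driving a non-blocking SSL_accept()/SSL_connect() from an
event loop (illustrative only, not the socket.c change itself):

    #include <openssl/ssl.h>

    /* Returns 1 when the handshake completed, 0 if it should be retried once
     * the socket becomes readable/writable again, -1 on a real error. */
    int try_ssl_handshake(SSL *ssl, int is_server)
    {
        int ret = is_server ? SSL_accept(ssl) : SSL_connect(ssl);
        if (ret == 1)
            return 1;                      /* handshake done */

        switch (SSL_get_error(ssl, ret)) {
        case SSL_ERROR_WANT_READ:
        case SSL_ERROR_WANT_WRITE:
            return 0;                      /* re-arm in epoll and retry later */
        default:
            return -1;                     /* genuine failure */
        }
    }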

Specify ctx in gf_log_set_loglevel, instead of getting it from a
thread-specific variable.
Change-Id: I498f826e8e32231235a6b0005026a27c327727fd
BUG: 1521213
Signed-off-by: Zhang Huan <zhanghuan@open-fs.com>

* Introduce xlator methods to allow dumping of metrics
* Separate options to get the metrics dumped in a path
Updates #168
Change-Id: I7df80df33b71d6f449f03c2332665b4a45f6ddf2
Signed-off-by: Amar Tumballi <amarts@redhat.com>

Issue: The pid is written into glusterd.pid by the child process
instead of the parent process while forking.
Fix: After fork() returns the child pid to the parent process, the
parent falls under the default case of the switch statement, so call
glusterfs_pidfile_update() in the default case instead of at the
postfork label.
Change-Id: I41b616c140592bf117601bc451dfd8b934a5b640
BUG: 1509340
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
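
A generic daemonizing sketch of the idea above: the parent, which is the
process that receives the child's pid from fork(), records it. The helper
names here are illustrative, not glusterfsd's own:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <unistd.h>

    static void pidfile_update(const char *path, pid_t pid)
    {
        FILE *fp = fopen(path, "w");
        if (fp) {
            fprintf(fp, "%d\n", (int)pid);
            fclose(fp);
        }
    }

    int daemonize(const char *pidfile)
    {
        pid_t pid = fork();
        switch (pid) {
        case -1:
            return -1;                     /* fork failed */
        case 0:
            break;                         /* child: continue as the daemon */
        default:
            pidfile_update(pidfile, pid);  /* parent knows the child's pid */
            _exit(EXIT_SUCCESS);
        }
        return 0;
    }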

Coverity ID: 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417,
418, 419, 423, 424, 425, 426, 427, 428, 429, 436, 437, 438, 439,
440, 441, 442, 443
Issue: Event include_recursion
Removed redundant, recursive includes from the files.
Change-Id: I920776b1fa089a2d4917ca722d0075a9239911a7
BUG: 789278
Signed-off-by: Girjesh Rajoria <grajoria@redhat.com>

Ensure that the fop program is the first in the program list
so that a minimal amount of time is spent searching for the
program in the most frequently needed use case.
Change-Id: I45c3dcdbf39ec90ba39d914432d13a2ace00a5ee
BUG: 1509647
Signed-off-by: Milind Changire <mchangir@redhat.com>

Problem: When control reaches "out", any of iobref, iobuf, or frame can
be NULL. For iobref and iobuf, iobref_unref() and iobuf_unref() are
called respectively, and those functions use GF_VALIDATE_OR_GOTO(), so
there is no NULL pointer dereference. But for frame,
STACK_DESTROY(frame->root) is called without a NULL check, causing a
NULL pointer dereference.
Fix: Add a NULL check so that STACK_DESTROY(frame->root) is called
only when frame is not NULL.
Change-Id: I3a6684c11fb7b694b81d6ad4fec3bced5562ad88
BUG: 1503394
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
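
The shape of the fix described above, as a fragment (assumes the libglusterfs
stack.h and iobuf.h declarations; not the exact glusterfsd code):

    out:
        iobref_unref(iobref);           /* validates its argument internally */
        iobuf_unref(iobuf);             /* validates its argument internally */
        if (frame)                      /* the added NULL check */
            STACK_DESTROY(frame->root);
        return ret;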

Updates: #242
BUG: 1428063
Change-Id: Iaaf2edf99b2ecc75f6d30762c752a6d445c1c826
Signed-off-by: Poornima G <pgurusid@redhat.com>

Issue: Event unterminated_case: The case for value
"ARGP_LOCALTIME_LOGGING_KEY" is not terminated by a
'break' statement.
Solution: A break statement is added in the fallthrough case.
Change-Id: Ie44d1a291afaa0e9fb9ef2aa45368b9401a8bb82
BUG: 789278
Signed-off-by: Subha sree Mohankumar <smohanku@redhat.com>
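
An illustration of this class of fix (the case label comes from the commit;
the statement body is an assumption for illustration only):

    case ARGP_LOCALTIME_LOGGING_KEY:
        cmd_args->localtime_logging = 1;   /* assumed body */
        break;                             /* previously fell through */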

The header file globals.h recursively includes itself
(globals.h -> xlator.h -> stack.h -> globals.h).
We find the source files which include the header file
globals.h and remove the inclusion line.
I used "git grep -l stack.h | xargs git grep globals.h --"
to find the files, and removed the header file from all of them
except libglusterfs/src/xlator.h and libglusterfs/src/Makefile.am.
When I try to remove the header file from libglusterfs/src/xlator.h
I get some errors, and in libglusterfs/src/Makefile.am it is
required for building RPMs.
Change-Id: I537218c09ade6d7ea51717768b26563a247daf60
BUG: 789278
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>

Issue: Event unreachable at line number 1111 in glusterfsd/src/glusterfsd.c.
There was a statement in the outer if block after the break statement.
Ideally the break statement should be inside the inner if block so
that the statement does not become unreachable. I moved the break
inside the inner if block.
Change-Id: Id4917305333e1638f35b3f2fb59ac42e62a12d02
BUG: 789278
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>

... and disable it by default.
This is because having it disabled seems to improve performance.
This could be due to the lock contention between the different epoll
threads on the circular buffer lock in the fop cbks just before they
write their responses to /dev/fuse.
Just to provide some data: in an ovirt-gluster hyperconverged
environment, I saw an increase of 12K IOPS with event-history
disabled for a random read workload.
Usage:
mount -t glusterfs -o event-history=on $HOSTNAME:$VOLNAME $MOUNTPOINT
OR
glusterfs --event-history=on --volfile-server=$HOSTNAME --volfile-id=$VOLNAME $MOUNTPOINT
Change-Id: Ia533788d309c78688a315dc8cd04d30fad9e9485
BUG: 1467614
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>

command: gluster volume status <volname/all> client-list
output:
Client connections for volume v1
Name          count
-----         ------
fuse          2
tierd         1
total clients for volume v1 : 3
-----------------------------------------------------------------
Client connections for volume v2
Name          count
-----         ------
tierd         1
fuse.gsync    1
total clients for volume v2 : 2
-----------------------------------------------------------------
Updates: #178
Change-Id: I0ff2579d6adf57cc0d3bd0161a2ec6ac6c4747c0
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Reviewed-on: https://review.gluster.org/18095
Smoke: Gluster Build System <jenkins@build.gluster.org>
Tested-by: hari gowtham <hari.gowtham005@gmail.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

Problem: Currently we can't identify which process is running and
how many instances of it are available.
Fix: Name the process when it is spawned, send the name to the
server, and save it in the client_t.
The processes that abide by this change from this patch are:
1) fuse mount,
2) rebalance,
3) selfheal,
4) tier,
5) quota,
6) snapshot,
7) brick,
8) gfapi (by default; gfapi.<processname> if a processname is found).
Note: fuse gets the process name native-fuse-client by default.
If the user gives a name for the fuse mount and spawns it, the name
will be of the form --process-name native-fuse-client.<name_specified>.
This can be used by processes like the aux mount done by quota,
geo-rep, etc. by adding another option to the aux mount, e.g. " -o
process-name=gsync_mount".
Updates: #178
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Change-Id: Ie4d02257216839338043737691753bab9a974d5e
Reviewed-on: https://review.gluster.org/17957
Smoke: Gluster Build System <jenkins@build.gluster.org>
Tested-by: hari gowtham <hari.gowtham005@gmail.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Aravinda VK <avishwan@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>

Changes:
1. Take the subdir mount option in the client (mount.glusterfs / glusterfsd).
2. Pass the subdir mount to server-handshake (from client-handshake).
3. Handle the subdir-mount directory's lookup in server-first-lookup and
   resolve all fops accordingly with the proper gfid of the subdir.
4. Change the auth/addr module to handle multiple subdir entries in the
   option, with valid parsing.
How to use the feature:
`# mount -t glusterfs $hostname:/$volname/$subdir /$mount_point`
Or
`# mount -t glusterfs $hostname:/$volname -osubdir_mount=$subdir /$mount_point`
Option can be set like:
`# gluster volume set <volname> auth.allow "/subdir1(192.168.1.*),/(192.168.10.*),/subdir2(192.168.8.*)"`
Updates #175
Change-Id: I7ea57f76ddbe6c3862cfe02e13f89e8a39719e11
Signed-off-by: Amar Tumballi <amarts@redhat.com>
Reviewed-on: https://review.gluster.org/17141
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

Despite the fact that appliances generally use UTC, some
users really want log entries in localtime.
fixes gluster/glusterfs#272
feature page: https://review.gluster.org/17807
Change-Id: I5fbf2c3eedd9eb128fb3f851dd67b2f4081c8bba
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
Reviewed-on: https://review.gluster.org/16911
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Samikshan Bairagya <samikshan@gmail.com>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
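
A generic sketch of what honouring such a preference looks like when
formatting a log timestamp (illustrative, not the libglusterfs logging code):

    #include <stdio.h>
    #include <time.h>

    void format_log_time(time_t when, int use_localtime, char *buf, size_t len)
    {
        struct tm tm;

        if (use_localtime)
            localtime_r(&when, &tm);   /* log entries in local time */
        else
            gmtime_r(&when, &tm);      /* default: UTC */

        strftime(buf, len, "%Y-%m-%d %H:%M:%S", &tm);
    }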

It is not possible to call pthread_key_delete for the pool_key that is
initialized in the constructor for the memory pools. This makes it
difficult to do a full cleanup of all the resources in mem_pools_fini().
For this, the initialization of pool_key should be moved to
mem_pools_init().
However, the glusterfsd binary has a rather complex initialization
procedure. The memory pools need to get initialized partially to get
mem_get() functionality working. But the pool_sweeper thread can get
killed in case it is started before glusterfsd daemonizes.
In order to solve this, mem_pools_init() is split into two pieces:
1. mem_pools_init_early() for initializing the basic structures
2. mem_pools_init_late() to start the pool_sweeper thread
With the split of mem_pools_init(), and placing the pthread_key_create()
in mem_pools_init_early(), it is now possible to correctly cleanup the
pool_key with pthread_key_delete() in mem_pools_fini().
It seems that there was no memory pool initialization in the CLI. This
has been added as well now. Without it, the CLI will not be able to call
mem_get() successfully which results in a hang of the process.
Change-Id: I1de0153dfe600fd79eac7468cc070e4bd35e71dd
BUG: 1470170
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: https://review.gluster.org/17779
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
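
A condensed sketch of the split described above; the function and variable
names follow the commit text, but the bodies are illustrative:

    #include <pthread.h>

    static pthread_key_t pool_key;
    static pthread_t     sweeper_thread;

    static void *pool_sweeper(void *arg)   /* placeholder sweeper loop */
    {
        (void)arg;
        return NULL;
    }

    /* Early: safe before daemonizing; only sets up data structures. */
    void mem_pools_init_early(void)
    {
        pthread_key_create(&pool_key, NULL);
    }

    /* Late: called after daemonizing, so the sweeper thread is not lost. */
    void mem_pools_init_late(void)
    {
        pthread_create(&sweeper_thread, NULL, pool_sweeper, NULL);
    }

    void mem_pools_fini(void)
    {
        pthread_key_delete(pool_key);      /* full cleanup is now possible */
    }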

Set names on threads at creation time for easier
debugging.
Output of top -H -p <PID-OF-GLUSTERFSD>
Before:
19773 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19774 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19775 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19776 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19777 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19778 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19779 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19780 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19781 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19782 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19783 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19784 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19785 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.01 glusterfsd
19786 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.01 glusterfsd
19787 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.01 glusterfsd
19789 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19790 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
25178 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
5398 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
7881 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
After:
19773 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19774 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glustertimer
19775 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19776 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glustermemsweep
19777 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glustersproc0
19778 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glustersproc1
19779 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterepoll0
19780 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusteridxwrker
19781 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusteriotwr0
19782 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterbrssign
19783 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterbrswrker
19784 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterclogecon
19785 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.01 glusterclogd0
19786 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.01 glusterclogd1
19787 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.01 glusterclogd2
19789 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterposixjan
19790 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterposixfsy
25178 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterepoll1
5398 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterepoll2
7881 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterposixhc
Change-Id: Id5f333755c1ba168a2ffaa4fce6e71c375e10703
BUG: 1254002
Updates: #271
Signed-off-by: Raghavendra Talur <rtalur@redhat.com>
Reviewed-on: https://review.gluster.org/11926
Reviewed-by: Prashanth Pai <ppai@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
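
A minimal sketch of how a thread gets such a name on Linux (glibc's
pthread_setname_np limits names to 15 characters plus the terminating NUL):

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg)
    {
        (void)arg;
        /* ... thread body ... */
        return NULL;
    }

    int spawn_named_thread(const char *name)
    {
        pthread_t tid;
        if (pthread_create(&tid, NULL, worker, NULL) != 0)
            return -1;
        if (pthread_setname_np(tid, name) != 0)   /* visible in top -H */
            fprintf(stderr, "could not set thread name %s\n", name);
        return pthread_join(tid, NULL);
    }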

We are storing the entire volfile and using it to check for
volfile changes. With brick multiplexing there will be a lot
of graphs per process, which will increase the memory footprint
of the process. So instead of storing the entire graph we can
use a sha256 hash of it and compare hashes to see whether the
volfile changed or not.
Also, with brick multiplexing, the direct comparison of the
volfile is not correct. There are two problems.
Problem 1:
We are currently storing one single graph (the last
updated volfile), whereas what we need is the entire
graph with all attached bricks.
If we fix this issue, we have a second problem.
Problem 2:
With multiplexing we have a graph that contains multiple
bricks. But what we are checking as part of the reconfigure
is a comparison of the entire graph with one single graph,
which will always fail.
Solution:
We create a list in glusterfs_ctx_t that stores the sha256 hash
of each individual brick graph. When a graph change happens,
we compare the stored hash with the current hash. If the
hashes match, there is no need to reconfigure. Otherwise we
first do the reconfigure and then update the hash.
For now, gfapi has not been changed this way. That is, when a
gfapi volfile fetch or reconfigure happens, we still store the
entire graph in memory and compare it.
This is fine, because libgfapi will not load brick graphs.
But changing libgfapi would make the code similar in
both glusterfsd-mgmt and api. It would also help reduce some
memory.
Change-Id: I9df917a771a52b95622ab8f63af34ec390163a77
BUG: 1467986
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: https://review.gluster.org/17709
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
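
A small sketch of the checksum-and-compare idea using OpenSSL's SHA256
(illustrative; the actual patch keeps per-brick checksums in a list hanging
off glusterfs_ctx_t):

    #include <openssl/sha.h>
    #include <string.h>

    /* Returns 1 if the new volfile differs from the stored checksum and
     * updates the stored checksum in that case; returns 0 otherwise. */
    int volfile_changed(const char *volfile, size_t len,
                        unsigned char stored[SHA256_DIGEST_LENGTH])
    {
        unsigned char sum[SHA256_DIGEST_LENGTH];

        SHA256((const unsigned char *)volfile, len, sum);
        if (memcmp(sum, stored, SHA256_DIGEST_LENGTH) == 0)
            return 0;                               /* same graph: skip reconfigure */

        memcpy(stored, sum, SHA256_DIGEST_LENGTH);  /* remember the new graph */
        return 1;                                   /* caller should reconfigure */
    }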