Commit messages
Problem: Although the tier code was removed, the tier-related
is_tier_enabled option was not handled for upgrades.
Because the option was missing from the info file, a checksum
mismatch occurs during upgrade, which results in peer rejections.
Fix: use the op-version check and always record is_tier_enabled.
It becomes a dummy key, but future upgrades will work correctly.
NOTE: Keeping the key only from 3.10 to 7 would cause issues when
upgrading from 5 to 8, or in any such upgrade that skips the version
where the key is handled.
Change-Id: I9951e2b74f16e58e884e746c34dcf53e559c7143
fixes: bz#1714973
Signed-off-by: hari gowtham <hgowtham@redhat.com>
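To illustrate the approach, here is a minimal sketch (hypothetical helper
name and op-version constant, not the actual glusterd store code): once the
cluster op-version allows it, the key is written unconditionally, so the
info-file checksum matches on every peer even though the key is now a dummy.

    #include <stdio.h>

    /* Hypothetical sketch: the op-version value and helper name are
     * illustrative, not the real glusterd store API. */
    #define OP_VERSION_TIER_KEY 31000  /* assumed version that introduced the key */

    static int
    write_tier_key(FILE *fp, int cluster_op_version, int is_tier_enabled)
    {
        if (cluster_op_version < OP_VERSION_TIER_KEY)
            return 0;   /* old clusters never wrote the key */
        /* Always write the (dummy) key so every peer computes the same
         * checksum for the volume info file during an upgrade. */
        return (fprintf(fp, "is_tier_enabled=%d\n", is_tier_enabled) < 0) ? -1 : 0;
    }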
The storage.reserve-size option takes a size as input instead of
a percentage. If set, storage.reserve-size is given priority over
storage.reserve. The default value of this option is 0.
fixes: bz#1651445
Change-Id: I7a7342c68e436e8bf65bd39c567512ee04abbcea
Signed-off-by: Sheetal Pamecha <sheetal.pamecha08@gmail.com>
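As a rough sketch of the precedence described above (names and types are
illustrative, not the actual posix/glusterd code), a non-zero reserve-size
wins over the percentage-based reserve:

    #include <stdint.h>

    /* Sketch only: compute the reserved space for a brick.  A non-zero
     * reserve_size (absolute size) takes priority over reserve_pct
     * (percentage); both 0 means nothing is reserved. */
    static uint64_t
    reserved_bytes(uint64_t total_bytes, uint64_t reserve_size,
                   uint32_t reserve_pct)
    {
        if (reserve_size > 0)
            return reserve_size;                  /* storage.reserve-size wins */
        return (total_bytes / 100) * reserve_pct; /* fall back to storage.reserve */
    }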
updates: bz#1193929
Change-Id: Ieb5e35d454498bc389972f9f15fe46b640f1b97d
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
Problem: When a high number of volumes (around 2000) is configured,
glusterd hits a bottleneck during handshake while copying the
dictionary.
Solution: To avoid the bottleneck, serialize the dictionary instead
of copying key-value pairs one by one.
Change-Id: I9fb332f432e4f915bc3af8dcab38bed26bda2b9a
fixes: bz#1711297
Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
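A generic illustration of the idea, assuming a simple key/value table
rather than the real glusterfs dict API: flatten all pairs into one
contiguous buffer in a single pass instead of copying them into a second
dictionary one by one.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Illustrative only -- not the glusterfs dict API. */
    struct kv { const char *key; const char *val; };

    /* Serialize all pairs as "key=value\n" into one malloc'd buffer;
     * the caller sends (and frees) the single blob. */
    static char *
    serialize_pairs(const struct kv *pairs, size_t n, size_t *out_len)
    {
        size_t len = 0;
        for (size_t i = 0; i < n; i++)
            len += strlen(pairs[i].key) + strlen(pairs[i].val) + 2;

        char *buf = malloc(len + 1);
        if (!buf)
            return NULL;

        char *p = buf;
        for (size_t i = 0; i < n; i++)
            p += sprintf(p, "%s=%s\n", pairs[i].key, pairs[i].val);

        *out_len = len;
        return buf;
    }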
Traditionally, every svc manager executes a process stop followed by
a start each time it is called. That is not required for shd, because
the attach request implemented in shd multiplexing is smart enough to
check whether a detach is required before attaching the graph. So
there is no need to send an explicit detach request if we are sure
the next call is an attach request.
Change-Id: I9157c8dcaffdac038f73286bcf5646a3f1d3d8ec
fixes: bz#1710054
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
While restarting a glusterd process, when we had a stale pid we were
doing a simple kill. Instead we can use glusterd_proc_stop, because
it has more logging, plus a force kill in case there is any problem
with kill signal handling.
Change-Id: I4a2dadc210a7a65762dd714e809899510622b7ec
updates: bz#1710054
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
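A simplified sketch of why a stop helper beats a bare kill: it logs what it
is doing and escalates to SIGKILL if the process ignores SIGTERM. The helper
below is illustrative only; glusterd_proc_stop itself has a different
signature.

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Illustrative stop helper for a stale pid: log, send SIGTERM, and
     * fall back to SIGKILL if the process is still alive after a grace
     * period.  Not the real glusterd_proc_stop. */
    static int
    proc_stop(pid_t pid, int grace_seconds)
    {
        fprintf(stderr, "stopping stale process %d\n", (int)pid);
        if (kill(pid, SIGTERM) != 0)
            return -1;

        for (int i = 0; i < grace_seconds; i++) {
            sleep(1);
            if (kill(pid, 0) != 0)
                return 0;       /* process is already gone */
        }

        fprintf(stderr, "pid %d ignored SIGTERM, sending SIGKILL\n", (int)pid);
        return kill(pid, SIGKILL);
    }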
glusterd_svcs_stop should call the individual wrapper function to
stop a daemon rather than calling glusterd_svc_stop. For example,
for shd it should call glusterd_shdsvc_stop instead of the basic API
function, because the individual functions for each daemon may
perform daemon-specific operations in their wrappers.
Change-Id: Ie6d40590251ad470ef3901d1141ab7b22c3498f5
fixes: bz#1712741
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Problem: "gluster v status" is hung in heterogenous cluster
when issued from a non-upgraded node.
Cause: commit 34e010d64 fixes the txn-opinfo mem leak
in op-sm framework by not setting the txn-opinfo if some
conditions are true. When vol status is issued from a
non-upgraded node, command is hanging in its upgraded peer
as the upgraded node setting the txn-opinfo based on new
conditions where as non-upgraded nodes are following diff
conditions.
Fix: Add an op-version check, so that all the nodes follow
same set of conditions to set txn-opinfo.
fixes: bz#1710159
Change-Id: Ie1f353212c5931ddd1b728d2e6949dfe6225c4ab
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
1401590: Deadcode
updates: bz#789278
Change-Id: I3aa1d3aa9769e6990f74b6a53e288e788173c5e0
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
Problem: In commit ac70f66c5805e10b3a1072bd467918730c0aeeb4 I
missed one condition for populating the volume dictionary in
multiple threads while brick_multiplex is enabled. Because of
that, glusterd does not send the volume dictionary for all
volumes to peers.
Solution: Update the condition in the code, and also update the
test case to avoid the issue.
Change-Id: I06522dbdfee4f7e995d9cc7b7098fdf35340dc52
fixes: bz#1711250
Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
The handler functions now point to dummy functions.
The switch-case handling for tier has also been moved to the
default case to avoid issues if tier is reintroduced.
The tier changes in DHT remain as they are.
updates: bz#1693692
Change-Id: I80d80c9a3eb862b4440a36b31ae82b2e9d92e4dc
Signed-off-by: Hari Gowtham <hgowtham@redhat.com>
This patch fixes the following CID's:
* 1124829
* 1274075
* 1274083
* 1274128
* 1274135
* 1274141
* 1274143
* 1274197
* 1274205
* 1274210
* 1274211
* 1288801
* 1398629
Change-Id: Ia7c86cfab3245b20777ffa296e1a59748040f558
Updates: bz#789278
Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
CID: 1401345 - Unused value
updates: bz#789278
Change-Id: I6b8f2611151ce0174042384b7632019c312ebae3
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
We only need to calculate and write the checksum in the
!is_quota_conf case. Align the code accordingly.
Also, use a smaller buffer (only a few characters are written).
Change-Id: I40c83ce10447df77ff9975d314d768ec2c0087c2
updates: bz#1193929
Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
ISSUE: gluster volume stop succeeds even if quorum is not met.
Fix: Add GD_OP_STOP_VOLUME to gluster_validate_quorum in
glusterd_mgmt_v3_pre_validate ().
The quorum check was missed when the volume stop command was
ported from synctask to mgmt_v3.
Change-Id: I7a634ad89ec2e286ea262d7952061efad5360042
fixes: bz#1690753
Signed-off-by: Vishal Pandey <vpandey@redhat.com>
At the time of a glusterd restart, while doing a handshake there is
a possibility that multiple shd managers get executed. Because of
this, multiple shd processes may get spawned during a glusterd
restart.
Change-Id: Ie20798441e07d7d7a93b7d38dfb924cea178a920
fixes: bz#1707081
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
volume get all all | grep <key> and volume get <volname> all | grep <key>
dump two different output values for cluster.brick-multiplex and
cluster.server-quorum-ratio.
Fixes: bz#1707700
Change-Id: Id131734e0502aa514b84768cf67fce3c22364eae
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Similar to https://review.gluster.org/#/c/glusterfs/+/22652/ ,
reduce some of the work by using smaller buffers and less
conversion of parameters when snprintf()'ing them.
On the way, remove some clang warnings, mainly on dead assignment.
Change-Id: Ie51e6d6f14df6b2ccbebba314cf937af08839741
updates: bz#1193929
Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
updates: bz#1193929
Change-Id: Idad745d5869c92e6bed71842f14bc1a3362ca4bd
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
Instead of saving each key-value pair separately, which is slow
(especially as we fflush() after each one!), store them all as one
string and write them all together.
Implements https://github.com/gluster/glusterfs/issues/629
Change-Id: Ie77a272446b0b6785584b710a4fdd9c613dd9578
updates: bz#1193929
Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
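A minimal illustration of the batching idea in plain C (not the gf_store
API): accumulate the key=value lines in one buffer and issue a single
write, instead of an fprintf() plus fflush() per key.

    #include <stdio.h>

    /* Sketch: batch several "key=value" lines into one buffer and write
     * them with a single fwrite(), rather than flushing after each one. */
    static int
    store_all(FILE *fp, const char *const keys[], const char *const vals[],
              size_t n)
    {
        char buf[4096];
        size_t used = 0;

        for (size_t i = 0; i < n; i++) {
            int w = snprintf(buf + used, sizeof(buf) - used, "%s=%s\n",
                             keys[i], vals[i]);
            if (w < 0 || (size_t)w >= sizeof(buf) - used)
                return -1;      /* sketch: no spill handling */
            used += (size_t)w;
        }
        return fwrite(buf, 1, used, fp) == used ? 0 : -1;
    }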
CID: 1382403 (CHECKED_RETURN)
Updates: bz#789278
Change-Id: I4c57b93fd3d14c524ff8519ed876f029834de306
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Coverity reported that GF_FREE(req_ctx) could be called twice on the
same req_ctx.
Change-Id: I9120686e5920de8c27688e10de0db6aa26292064
CID: 1401115
Updates: bz#789278
Signed-off-by: Niels de Vos <ndevos@redhat.com>
1. Use small arrays, 32 or 64 bytes should suffice.
2. Do not repeat the pattern of
snprintf '%s.%d', prefix, count
over and over.
Change-Id: Ief6de78b766d9a07acb6256fc4830f4f3cfba7c9
updates: bz#1193929
Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
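A small sketch of the two points, with hypothetical key names: build the
"prefix.count" stem once into a small stack buffer and append only the
per-field suffixes, instead of repeating the full snprintf for every key.

    #include <stdio.h>

    /* Illustrative only: the key names are made up. */
    static void
    build_brick_keys(const char *prefix, int count)
    {
        char stem[64];          /* a small buffer is plenty for these keys */
        int len = snprintf(stem, sizeof(stem), "%s.%d", prefix, count);
        if (len < 0 || (size_t)len >= sizeof(stem))
            return;

        char key[96];
        snprintf(key, sizeof(key), "%s.path", stem);     /* e.g. brick2.path */
        puts(key);
        snprintf(key, sizeof(key), "%s.hostname", stem); /* e.g. brick2.hostname */
        puts(key);
    }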
Along with fixing a few defects, add the required annotations for the
defects that are marked ignore/false positive/intentional as per the
Coverity defect sheet. This should avoid the per-component graph
showing many defects as open on the Coverity GlusterFS web page.
Updates: bz#789278
Change-Id: I19461dc3603a3bd8f88866a1ab3db43d783af8e4
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Problem: Currently glusterd spawns bulkvoldict threads in a brick_mux
environment even when the number of volumes is less than the
configured glusterd.vol_count_per_thread.
Solution: Correct the logic used to spawn bulkvoldict threads
1) Calculate endindex only when the total thread count is non-zero
2) Update the end index correctly so that the right range is passed
to each bulkvoldict thread
Fixes: bz#1704252
Change-Id: I1def847fbdd6a605e7687bfc4e42b706bf0eb70b
Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
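A sketch of the corrected partitioning (illustrative names; the real code
walks glusterd's volume list): volumes are split into per_thread-sized
chunks, the last chunk's end index absorbs the remainder, and no worker is
planned when the volume count is below the threshold.

    #include <stdio.h>

    /* Illustrative only: plan which volume range each bulkvoldict-style
     * worker would handle. */
    static void
    plan_chunks(int total_volumes, int per_thread)
    {
        if (per_thread <= 0 || total_volumes < per_thread) {
            printf("no worker threads: %d volumes handled inline\n",
                   total_volumes);
            return;
        }

        int threads = total_volumes / per_thread;
        for (int t = 0; t < threads; t++) {
            int start = t * per_thread + 1;
            /* the last worker's end index must absorb the remainder */
            int end = (t == threads - 1) ? total_volumes : (t + 1) * per_thread;
            printf("thread %d: volumes %d..%d\n", t, start, end);
        }
    }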
anymore
updates: bz#1693692
Change-Id: Id5932b11e115ca6da1c2bfff7ae1460787109e06
Signed-off-by: Amar Tumballi <amarts@redhat.com>
Problem: statedump is not capturing information related to glusterd.
Solution: statedump is not capturing glusterd info because
trav->dumpops is NULL in gf_proc_dump_single_xlator_info (),
where trav is the glusterd xlator object. trav->dumpops is NULL
because we missed defining dumpops in the xlator_api of glusterd.
Defining dumpops in the xlator_api of glusterd fixes the issue.
fixes: bz#1703629
Change-Id: If85429ecb1ef580aced8d5b88d09fc15258bfc4c
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
Change-Id: Icbe53e78e9c4f6699c7a26a806ef4b14b39f5019
updates: bz#1642168
Signed-off-by: Anuradha Talur <atalur@commvault.com>
1400775 - USE_AFTER_FREE
1400742 - Missing Unlock
1400736 - CHECKED_RETURN
1398470 - Missing Unlock
Missing Unlock is the tricky one: we had an annotation added, but
Coverity still continued to complain. Added pthread_mutex_unlock to
clean up the lock before destroying it, to see if it makes Coverity
happy.
Updates: bz#789278
Change-Id: I1d892612a17f805144d96c1b15004a85a1639414
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
...during volume create if the cluster op-version is >=GD_OP_VERSION_7_0.
This option itself was introduced in GD_OP_VERSION_4_0_0 via commit 6daa65356.
We missed enabling it by default for new volume creates in that commit.
If we are to do it now safely, we need to use op version
GD_OP_VERSION_7_0 and target it for release-7.
fixes: bz#1702303
Change-Id: I7c6d4a8abe0816367e7069cb5cad01744f04858f
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Addresses the following:
* CID 1124776: Resource leaks (RESOURCE_LEAK) - Variable "aa" going out
of scope leaks the storage it points to in glusterd-volgen.c
* Bunch of CHECKED_RETURN defects in the callers of synctask_barrier_init
* CID 1400755: Error handling issues (CHECKED_RETURN) - Calling
"gf_is_service_running" without checking return value in
xlators/mgmt/glusterd/src/glusterd-shd-svc.c: 671 in
glusterd_shdsvc_stop()
* CID 1400745: Memory - illegal accesses (USE_AFTER_FREE) - Dereferencing
freed pointer "volinfo" in /xlators/mgmt/glusterd/src/glusterd-shd-svc.c: 460 in glusterd_shdsvc_start()
* CID 1400742: Program hangs (LOCK) - adding annotation to fix this
false positive
Updates: bz#789278
Change-Id: I02f16e7eeb8c5cf72f7d0b29d00df4f03b3718b3
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
When svc attach executes to multiplex a daemon, we have to keep a
ref on the volinfo until it finishes execution, because if the
attach is an async call, a parallel volume delete can end up
freeing the volinfo.
Change-Id: Ibc02b89557baaed2f63db63d7fb1a7480444ae0d
fixes: bz#1702185
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
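The protection can be illustrated with a tiny refcount sketch (hypothetical
types, atomicity omitted; not the actual glusterd volinfo API): the attach
path takes a reference before issuing the async request and drops it in the
completion callback, so a parallel volume delete only frees the object once
the attach is done with it.

    #include <stdlib.h>

    /* Illustrative only -- the real glusterd refcounting is locked/atomic. */
    struct volinfo {
        int refcount;
        /* ... volume state ... */
    };

    static struct volinfo *
    volinfo_ref(struct volinfo *v)
    {
        v->refcount++;
        return v;
    }

    static void
    volinfo_unref(struct volinfo *v)
    {
        if (--v->refcount == 0)
            free(v);
    }

    static void
    attach_done_cbk(struct volinfo *v)
    {
        /* ... handle the attach reply ... */
        volinfo_unref(v);       /* drop the ref taken before the async call */
    }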
Before calling strtok_r, a check for a NULL pointer is necessary to
avoid dereferencing a NULL pointer.
CID:1398617
CID:1274074
Change-Id: I34956c6e04af1faa22d550e6474909ecd36f5d6c
updates: bz#789278
Signed-off-by: rishubhjain <rishubhjain47@gmail.com>
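The guard in question, in minimal standalone form:

    #include <stdio.h>
    #include <string.h>

    /* Minimal form of the fix: refuse a NULL string before handing it
     * to strtok_r, which would otherwise dereference NULL. */
    static void
    print_tokens(char *line)
    {
        if (!line)
            return;             /* the missing NULL check */

        char *saveptr = NULL;
        for (char *tok = strtok_r(line, " ", &saveptr); tok != NULL;
             tok = strtok_r(NULL, " ", &saveptr))
            puts(tok);
    }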
Commit efbf8ab wasn't handling all the scenarios of toggling the
ctime option correctly, and moreover a stray '!' had completely
tossed up the logic.
Fixes: bz#1697907
Change-Id: If12e2f69045e59878992ee2cd0518cc0eabcce0d
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
It was hardcoded, and with a wrong value at that.
Fixes: bz#1699339
Change-Id: Ibabe2424a0d35e172a9259bd8849c9bb7cebff1e
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Problem: At the time of handshaking, glusterd populates volume
data in a dictionary. When more than 1500 volumes are configured,
glusterd takes more than 10 min to generate the data. Because this
takes so long, RPC requests time out and RPC starts bailing out
call frames.
Solution: To optimize the code, the changes below were made
1) Spawn multiple threads to populate volume data in bulk
in separate dictionaries, and introduce an option
glusterd.brick-dict-thread-count to configure the number of
threads used to populate volume data.
2) Populate tier data only when the volume type is tier.
3) Compare snap data only when snap_count is non-zero.
Fixes: bz#1699339
Change-Id: I38dc71970c049217f9d1a06fc0aaf4c26eab18f5
Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
CID 1400475: Null pointer dereferences (FORWARD_NULL)
CID 1400474: Null pointer dereferences (FORWARD_NULL)
CID 1400471: Code maintainability issues (UNUSED_VALUE)
CID 1400470: Null pointer dereferences (FORWARD_NULL)
CID 1400469: Memory - illegal accesses (USE_AFTER_FREE)
CID 1400467: Code maintainability issues (UNUSED_VALUE)
Change-Id: I0ca1c733be335c6e5844f44850f8066626ac40d4
updates: bz#789278
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Problem: commit c34e4161f3cb6539ec83a9020f3d27eb4759a975 set the
log-level per xlator during reconfigure only for a brick process,
not for the client process.
Solution: 1) Change the per-xlator log-level only if brick_mux is
enabled. To make brick multiplexing known, introduce a brick_mux
flag in ctx->cmd_args.
Note: There are two other changes done with this patch
1) Ignore the client-log-level option when attaching a brick to an
already running brick if brick_mux is enabled
2) Add a log message that prints the pid of the running process to
make debugging easier
Change-Id: I39e85de778e150d0685cd9a79425ce8b4783f9c9
Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
Fixes: bz#1696046
The values are per volume, and are not going to change
while processing its bricks, as far as I can understand the code.
Fetch them and store them outside the loop.
updates: bz#1193929
Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
Change-Id: I2bc263f92f9141ea26a9dfb8265225f38307cbac
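A generic illustration of the hoisting described above (hypothetical types,
not the actual glusterd structures):

    /* Illustrative only. */
    struct volume { int replica_count; int brick_count; };

    static int
    process_bricks(const struct volume *vol)
    {
        /* Values that depend only on the volume are fetched once,
         * outside the per-brick loop, instead of on every iteration. */
        int replica_count = vol->replica_count;

        for (int i = 0; i < vol->brick_count; i++) {
            /* per-brick work uses the cached replica_count */
            (void)replica_count;
        }
        return 0;
    }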
As the same functionality is covered in glusterd_volinfo_find
Updates: bz#1193929
Change-Id: I2308c5fa9b2ca9edaa95f172d0bd914103808c36
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
When a gluster node in the trusted storage pool has failed
due to hardware issues, the volume delete operation fails
saying "Not all peers are up" and peer detach for the failed
node fails saying "Brick(s) with peer <peer_ip> exists
in cluster".
The idea here is to use either the replace-brick or remove-brick
command to remove all the bricks hosted by the failed node and
then re-attempt the peer detach. This change adds that hint to
the peer detach error message.
fixes: bz#1697866
Change-Id: I0c58887479d31db603ad8d6535ea9d547880ccc8
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
This patch contains the following changes:
1) Store ID info will now be stored in the inode ctx.
2) Added a new readv type where the read is made directly
from the remote store. This choice is made by a volume set
operation.
3) cs_forget() was missing. Added it.
Change-Id: Ie3232b3d7ffb5313a03f011b0553b19793eedfa2
fixes: bz#1642168
Signed-off-by: Anuradha Talur <atalur@commvault.com>
1) The placement of the cloudsync xlator has been changed to
make it the shard xlator's child: if cloudsync has to work with
shard in the graph, it needs to be a child of shard.
Change-Id: Ib55424fdcb7ce8edae9f19b8a6e3d3ba86c1f0c4
fixes: bz#1642168
Signed-off-by: Anuradha Talur <atalur@commvault.com>
Problem: the glusterfs build throws the error "undefined
reference to `dlclose'" on RHEL 6.
Solution: Add the LIB_DL link in Makefile.am to resolve this.
Fixes: bz#1696512
Change-Id: I58019ca9e29d569d8e6df282b8ab178ad540843b
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Since ctime is a client-side feature, we can't blindly load the
ctime xlator into the client graph if it's explicitly turned off;
that would result in a backward-compatibility issue where an old
client can't mount a volume configured on a server which has the
ctime feature.
Fixes: bz#1697907
Change-Id: I6ae7b96d056073aa6746de9a449cf319786d45cc
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Its value is not going to change within the loop, as far as I can
understand the code.
Fetch and store it outside the loop.
Change-Id: I6327c23212dceec6006349421ef185495892dd8a
updates: bz#1193929
Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
A pattern like the following was found in multiple places, where
both glusterd_check_volume_exists and glusterd_volinfo_find do the
same job. We only need one of them, not both. In a scaled
environment with many volumes this is a bottleneck, since the
volume list is iterated twice to find a volume!

    exists = glusterd_check_volume_exists(volname);
    ret = glusterd_volinfo_find(volname, &volinfo);
    if ((ret) || (!exists)) {

Credits: ykaul@redhat.com for finding this out
Updates: bz#1193929
Change-Id: Ie116fe5c93e261a2bddd267c28ccb20a2884a36f
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
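A self-contained sketch of the consolidated lookup (hypothetical types and
signature, not the real glusterd_volinfo_find): a single pass over the
volume list answers both "does it exist?" and "give me the volinfo", so the
separate exists-check pass can be dropped.

    #include <stddef.h>
    #include <string.h>

    /* Illustrative only. */
    struct volinfo { const char *volname; };

    static int
    volinfo_find(struct volinfo *vols, size_t n, const char *name,
                 struct volinfo **out)
    {
        for (size_t i = 0; i < n; i++) {
            if (strcmp(vols[i].volname, name) == 0) {
                *out = &vols[i];
                return 0;       /* found: implies "exists" as well */
            }
        }
        *out = NULL;
        return -1;              /* not found */
    }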
Set the pointer to NULL after GF_FREE() and check the pointer value
before calling GF_FREE() to avoid referencing memory after it has
been freed.
CID: 1398622
Change-Id: Iba0d8879abccf5923a69132a207d53bb94551417
updates: bz#789278
Signed-off-by: rishubhjain <rishubhjain47@gmail.com>
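In minimal form, with plain free() standing in for GF_FREE(), the
discipline looks like this:

    #include <stdlib.h>

    /* Only free a non-NULL pointer, and reset it to NULL afterwards so a
     * later stray access is an obvious NULL dereference rather than a
     * use-after-free.  free() stands in for GF_FREE() here. */
    #define FREE_AND_NULL(ptr)      \
        do {                        \
            if (ptr) {              \
                free(ptr);          \
                (ptr) = NULL;       \
            }                       \
        } while (0)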
We have 'sdfs-sanity.t', which covers at least 90% of the functions
and 70% of the lines in the translator. But the recent change that
disabled the translator because of its performance impact meant even
that test no longer exercised it.
updates: bz#1693692
Change-Id: I0ebcb307c4ab48a6e59ded27bf39f72ce2304ebc
Signed-off-by: Amar Tumballi <amarts@redhat.com>
Problem:
The shd daemon is per node, which means it creates a graph with all
volumes on it. While this is great for utilizing resources, it is
not so good in terms of performance and manageability, because
self-heal daemons don't have the capability to automatically
reconfigure their graphs. So each time any configuration change
happens to a (replicate/disperse) volume, we need to restart shd to
bring the change into the graph. Because of this, all ongoing heals
for all other volumes have to be stopped in the middle and
restarted all over again.
Solution:
This change makes shd a per-volume daemon, so that a graph is
generated for each volume. When we want to start/reconfigure shd for
a volume, we first search for an existing shd running on the node;
if there is none, we start a new process. If a daemon is already
running for shd, then we simply detach the graph for the volume and
reattach the updated graph for the volume. This won't touch any
ongoing operations for any other volumes on the shd daemon.
Example of an shd graph when it is per volume:

    graph
           -----------------------
           |    debug-iostat     |
           -----------------------
             /        |        \
            /         |         \
      ---------   ---------   ---------
      | AFR-1 |   | AFR-2 |   | AFR-3 |
      ---------   ---------   ---------

A running shd daemon with 3 volumes will look like:

    graph
           -----------------------
           |    debug-iostat     |
           -----------------------
             /        |        \
            /         |         \
     ------------  ------------  ------------
     | volume-1 |  | volume-2 |  | volume-3 |
     ------------  ------------  ------------
Change-Id: Idcb2698be3eeb95beaac47125565c93370afbd99
fixes: bz#1659708
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>