Problem: When a high number of volumes (around 2000) is configured,
glusterd hits a bottleneck during handshake while copying
the dictionary.
Solution: To avoid the bottleneck, serialize the dictionary instead
of copying key-value pairs one by one.
Change-Id: I9fb332f432e4f915bc3af8dcab38bed26bda2b9a
fixes: bz#1711297
Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
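A minimal sketch of the idea in plain C (toy key-value type, not the glusterd dict API): serializing the whole dictionary in one pass replaces thousands of per-key copies with a single buffer that can be handed over as-is.

    /* Sketch only: a toy dictionary serialized in one pass instead of
     * copying every key/value pair into a new dictionary. Types and
     * helpers are hypothetical, not the glusterd dict API. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct kv { const char *key; const char *value; };

    /* Serialize all pairs as "key=value\n" into a single buffer. */
    static char *serialize_all(const struct kv *kvs, size_t n, size_t *out_len)
    {
        size_t len = 0;
        for (size_t i = 0; i < n; i++)
            len += strlen(kvs[i].key) + strlen(kvs[i].value) + 2; /* '=' + '\n' */

        char *buf = malloc(len + 1);
        if (!buf)
            return NULL;

        char *p = buf;
        for (size_t i = 0; i < n; i++)
            p += sprintf(p, "%s=%s\n", kvs[i].key, kvs[i].value);

        *out_len = len;
        return buf;
    }

    int main(void)
    {
        struct kv vols[] = { { "volume1.type", "replicate" },
                             { "volume1.brick-count", "3" } };
        size_t len = 0;
        char *blob = serialize_all(vols, 2, &len);
        if (blob) {
            fwrite(blob, 1, len, stdout); /* one write instead of many copies */
            free(blob);
        }
        return 0;
    }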
Traditionally, every svc manager executes a process stop followed
by a start each time it is called. But that is not
required for shd, because the attach request implemented in the shd
multiplexing has the intelligence to check whether a detach is required
prior to attaching the graph. So there is no need to send an explicit
detach request if we are sure that the next call is an attach request.
Change-Id: I9157c8dcaffdac038f73286bcf5646a3f1d3d8ec
fixes: bz#1710054
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
While restarting a glusterd process, when we have a stale pid
we were doing a simple kill. Instead we can use glusterd_proc_stop,
because it has more logging plus a force kill in case there is
any problem with kill signal handling.
Change-Id: I4a2dadc210a7a65762dd714e809899510622b7ec
updates: bz#1710054
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
glusterd_svcs_stop should call the individual wrapper function to stop a
daemon rather than calling glusterd_svc_stop directly. For example, for shd
it should call glusterd_shdsvc_stop instead of the basic API
function, because the individual wrapper for each daemon
may perform daemon-specific operations.
Change-Id: Ie6d40590251ad470ef3901d1141ab7b22c3498f5
fixes: bz#1712741
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Problem: "gluster v status" is hung in heterogenous cluster
when issued from a non-upgraded node.
Cause: commit 34e010d64 fixes the txn-opinfo mem leak
in op-sm framework by not setting the txn-opinfo if some
conditions are true. When vol status is issued from a
non-upgraded node, command is hanging in its upgraded peer
as the upgraded node setting the txn-opinfo based on new
conditions where as non-upgraded nodes are following diff
conditions.
Fix: Add an op-version check, so that all the nodes follow
same set of conditions to set txn-opinfo.
fixes: bz#1710159
Change-Id: Ie1f353212c5931ddd1b728d2e6949dfe6225c4ab
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
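A minimal sketch of how such op-version gating typically looks (names and threshold are hypothetical, not the actual glusterd code): the new behaviour is only taken when the whole cluster is new enough to agree on it.

    /* Sketch only: gate a behaviour change on the cluster op-version so all
     * peers take the same branch. Names are hypothetical, not glusterd's. */
    #include <stdbool.h>
    #include <stdio.h>

    #define OP_VERSION_NEW_BEHAVIOUR 70000 /* hypothetical threshold */

    static int cluster_op_version = 60000; /* would come from cluster state */

    static bool should_skip_txn_opinfo(bool is_completion_phase)
    {
        /* Only skip setting txn-opinfo when every node is new enough to
         * agree on this optimization; otherwise fall back to old rules. */
        if (cluster_op_version < OP_VERSION_NEW_BEHAVIOUR)
            return false;
        return is_completion_phase;
    }

    int main(void)
    {
        printf("skip txn-opinfo: %d\n", should_skip_txn_opinfo(true));
        return 0;
    }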
At the moment a new stack doesn't populate frame->root->unique in all cases.
This makes it difficult to debug hung frames by examining successive state
dumps. Fuse and server xlators populate it whenever they can, but other
xlators cannot assign 'unique' when they need to create a new frame/stack,
because they don't know which 'unique' values the fuse/server xlators have
already used. What we need is for 'unique' to be correct: if a stack with the
same unique is present in successive statedumps, the same operation is still
in progress. This makes the 'finding hung frames' part of debugging hung
frames easier.
fixes bz#1714098
Change-Id: I3e9a8f6b4111e260106c48a2ac3a41ef29361b9e
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
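A minimal sketch of one way to make 'unique' correct (an atomic, process-wide counter; names are hypothetical, not the libglusterfs call-stack API):

    /* Sketch only: assign every new stack a process-wide, monotonically
     * increasing id with an atomic counter, so successive statedumps can
     * tell "same operation still in progress" from "new operation".
     * Names are hypothetical, not the libglusterfs API. */
    #include <inttypes.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static _Atomic uint64_t next_unique = 1;

    struct call_root { uint64_t unique; };

    static void root_init(struct call_root *root)
    {
        /* fetch_add guarantees distinct values even with many threads */
        root->unique = atomic_fetch_add(&next_unique, 1);
    }

    int main(void)
    {
        struct call_root a, b;
        root_init(&a);
        root_init(&b);
        printf("unique ids: %" PRIu64 ", %" PRIu64 "\n", a.unique, b.unique);
        return 0;
    }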
1401590: Deadcode
updates: bz#789278
Change-Id: I3aa1d3aa9769e6990f74b6a53e288e788173c5e0
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
After basic analysis, found that these methods were not being
used at all.
updates: bz#1693692
Change-Id: If9cfa1ab189e6e7b56230c4e1d8e11f9694a9a65
Signed-off-by: Amar Tumballi <amarts@redhat.com>
Problem: In commit ac70f66c5805e10b3a1072bd467918730c0aeeb4 I
missed one condition for populating the volume dictionary in
multiple threads while brick multiplexing is enabled. Because
of that, glusterd is not sending the volume dictionary for
all volumes to its peers.
Solution: Update the condition in the code, and update the test case
as well, to avoid the issue.
Change-Id: I06522dbdfee4f7e995d9cc7b7098fdf35340dc52
fixes: bz#1711250
Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
The handler functions are pointed to dummy functions.
The switch-case handling for tier has also been pointed to the
default case, to avoid issues if tier is reintroduced.
The tier changes in DHT remain as they are.
updates: bz#1693692
Change-Id: I80d80c9a3eb862b4440a36b31ae82b2e9d92e4dc
Signed-off-by: Hari Gowtham <hgowtham@redhat.com>
This patch fixes the following CIDs:
* 1124829
* 1274075
* 1274083
* 1274128
* 1274135
* 1274141
* 1274143
* 1274197
* 1274205
* 1274210
* 1274211
* 1288801
* 1398629
Change-Id: Ia7c86cfab3245b20777ffa296e1a59748040f558
Updates: bz#789278
Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
EC was ignoring lock contention notifications received while a lock was
being acquired. When a lock is partially acquired (some bricks have
granted the lock but some others have not yet), we can receive notifications
from the bricks that have granted it. These should be honored, since we may
not receive more notifications after that.
Since EC was ignoring them, once the lock was acquired it was not
released until the eager-lock timeout, causing unnecessary delays on
other clients.
This fix takes into consideration the notifications received before
the full lock acquisition has completed. After that, the lock will
be released as soon as possible.
Fixes: bz#1708156
Change-Id: I2a306dbdb29fb557dcab7788a258bd75d826cc12
Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
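A minimal sketch of the idea (hypothetical structures, not the EC xlator code): remember a contention notification that arrives mid-acquisition and act on it once the lock is fully acquired.

    /* Sketch only: remember contention notifications that arrive while a
     * lock is only partially acquired, and release early once acquisition
     * completes. Structures and names are hypothetical, not the EC xlator. */
    #include <stdbool.h>
    #include <stdio.h>

    struct ec_lock {
        int granted;        /* bricks that granted the lock so far */
        int total;          /* bricks we asked */
        bool contended;     /* someone else is waiting for this lock */
    };

    static void on_contention_notify(struct ec_lock *lk)
    {
        /* Record the notification even if acquisition is still in flight. */
        lk->contended = true;
    }

    static void on_brick_granted(struct ec_lock *lk)
    {
        if (++lk->granted < lk->total)
            return; /* still acquiring */
        if (lk->contended)
            printf("lock fully acquired but contended: release ASAP\n");
        else
            printf("lock fully acquired: keep for eager-lock reuse\n");
    }

    int main(void)
    {
        struct ec_lock lk = { 0, 3, false };
        on_brick_granted(&lk);
        on_contention_notify(&lk); /* arrives mid-acquisition */
        on_brick_granted(&lk);
        on_brick_granted(&lk);
        return 0;
    }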
CID: 1401345 - Unused value
updates: bz#789278
Change-Id: I6b8f2611151ce0174042384b7632019c312ebae3
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
We only need to calculate and write the checksum in the
!is_quota_conf case.
Align the code accordingly.
Also, use a smaller buffer (we only write a few characters).
Change-Id: I40c83ce10447df77ff9975d314d768ec2c0087c2
updates: bz#1193929
Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
A rebalance process currently only looks up the files
that it is supposed to migrate. This can cause issues
when lookup-optimize is enabled, as the directory layout can be
updated with the commit hash before all files are looked up.
This is especially problematic if one of the rebalance processes
fails to complete, as clients will try to access files whose
linkto files might not have been created.
Each process will now look up every file in the directory it is
processing.
Pros: files are less likely to be inaccessible.
Cons: more lookup requests are sent to the bricks, with a potential
performance hit.
Note: this does not handle races such as a layout being updated on disk
just as the create fop is sent by the client.
Change-Id: I22b55846effc08d3b827c3af9335229335f67fb8
fixes: bz#1711764
Signed-off-by: N Balachandran <nbalacha@redhat.com>
updates: bz#1712322
Change-Id: I120a1d23506f9ebcf88c7ea2f2eff4978a61cf4a
Signed-off-by: Susant Palai <spalai@redhat.com>
During a graph cleanup we first send a PARENT_DOWN and wait for
a child down to ultimately free the xlator and the graph.
In the ec xlator, we clean up the threads when we get a PARENT_DOWN event.
But a racing event like CHILD_UP, or an event xl_op, may trigger healing
threads after the thread cleanup.
So there is a chance that the threads might access a freed private variable.
Change-Id: I252d10181bb67b95900c903d479de707a8489532
fixes: bz#1703948
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
In function "afr_selfheal_entry_granular", after completing the
heal we are not destroying the frame. This will lead to crash.
when we execute statedump operation, where it tried to access
xlator object. If this xlator object is freed as part of the
graph destroy this will lead to an invalid memory access
Change-Id: I0a5e78e704ef257c3ac0087eab2c310e78fbe36d
fixes: bz#1708926
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
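A minimal sketch of the pattern being fixed (generic types, not the AFR frame API): the completion callback must destroy the frame it owns once the heal is done.

    /* Sketch only: a completion callback that destroys the frame it owns
     * once the operation is done, so no later statedump can walk a frame
     * whose xlator may already be gone. Types are hypothetical. */
    #include <stdio.h>
    #include <stdlib.h>

    struct frame {
        void *local;             /* per-operation state */
    };

    static struct frame *frame_create(void)
    {
        return calloc(1, sizeof(struct frame));
    }

    static void frame_destroy(struct frame *f)
    {
        free(f->local);
        free(f);
    }

    static int heal_done(struct frame *f, int op_ret)
    {
        printf("heal finished, op_ret=%d\n", op_ret);
        frame_destroy(f); /* the missing step: release the frame here */
        return 0;
    }

    int main(void)
    {
        struct frame *f = frame_create();
        heal_done(f, 0);
        return 0;
    }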
Consider the following case -
1. A file gets FALLOCATE'd such that more than "shard-lru-limit" shards
are created.
2. The file is then deleted.
The unique thing about FALLOCATE is that, unlike WRITE, all of the
participant shards are resolved, created and fallocated in a single
batch. In this case, after the first "shard-lru-limit" shards are
resolved and added to the lru list, some of the existing shards in the
lru list need to be evicted as part of resolving the remaining shards.
These evicted shards are inode_unlink()d as part of eviction. Once the
fop gets to the actual FALLOCATE stage, the lru'd-out shards get added
to the fsync list.
Two things to note at this point:
i. the lru'd-out shards are only part of the fsync list, so each holds 1 ref
on the base shard;
ii. the more recently used shards are part of both the fsync and lru lists,
so each of these holds 2 refs on the base inode - one for being
part of the fsync list, and the other for being part of the lru list.
FALLOCATE completes successfully, then this very file is deleted and
background shard deletion is launched. Here is where the ref counts get
mismatched. First, as part of the inode_resolve()s during deletion, the
lru'd-out inodes return NULL, because they have been inode_unlink()'d by
now, so these inodes need to be freshly looked up. But as part of linking
them in lookup_cbk (precisely in shard_link_block_inode()), inode_link()
returns the lru'd-out inode object, and its inode ctx is still valid, with
ctx->base_inode still set from the last time it was added to the list.
However, shard_common_lookup_shards_cbk() passes NULL in place of the base
pointer to __shard_update_shards_inode_list(). This means that, when the
lru'd-out inode is added back to the lru list, the base inode is not ref'd
since the pointer is NULL.
Yet after unlinking this shard, during shard_unlink_block_inode(),
ctx->base_inode is accessible and is unref'd because the shard was found to
be part of the LRU list, although the matching ref never happened. At some
point this causes the base_inode refcount to reach 0, so it is destroyed and
released while some of its associated shards are still being unlinked in
parallel, and the client crashes whenever it is accessed next.
The fix is to pass the base shard correctly, if available, in
shard_link_block_inode(). The patch also fixes the ret value check in
tests/bugs/shard/shard-fallocate.c.
Change-Id: Ibd0bc4c6952367608e10701473cbad3947d7559f
Updates: bz#1696136
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
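A minimal sketch of the invariant the fix restores (hypothetical types, not the shard xlator code): every list insertion takes a reference on the base inode, so the unref taken on removal always has a matching ref.

    /* Sketch only: keep every list membership paired with exactly one
     * reference on the base object, so the unref taken on removal always
     * has a matching ref taken on insertion. Names are hypothetical. */
    #include <assert.h>
    #include <stdio.h>

    struct base_inode { int refcount; };

    static void base_ref(struct base_inode *b)   { b->refcount++; }
    static void base_unref(struct base_inode *b) { assert(b->refcount > 0); b->refcount--; }

    struct shard { struct base_inode *base; int on_lru; };

    static void lru_add(struct shard *s, struct base_inode *base)
    {
        s->base = base;
        s->on_lru = 1;
        base_ref(base);      /* ref taken when joining the list ... */
    }

    static void lru_remove(struct shard *s)
    {
        if (!s->on_lru)
            return;
        s->on_lru = 0;
        base_unref(s->base); /* ... matches the unref taken when leaving it */
    }

    int main(void)
    {
        struct base_inode base = { 1 }; /* caller's own reference */
        struct shard s = { 0 };
        lru_add(&s, &base);
        lru_remove(&s);
        printf("base refcount back to %d\n", base.refcount);
        return 0;
    }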
We were not properly cleaning up self-heal daemon resources
during ec fini. With shd multiplexing, it is absolutely
necessary to clean up all the resources during ec fini.
Change-Id: Iae4f1bce7d8c2e1da51ac568700a51088f3cc7f2
fixes: bz#1703948
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
ISSUE: gluster volume stop succeeds even if quorum is not met.
Fix: Add GD_OP_STOP_VOLUME to gluster_validate_quorum in
glusterd_mgmt_v3_pre_validate(). The quorum check was missed when
the volume stop command was ported from synctask to mgmt_v3.
Change-Id: I7a634ad89ec2e286ea262d7952061efad5360042
fixes: bz#1690753
Signed-off-by: Vishal Pandey <vpandey@redhat.com>
At the time of a glusterd restart, while doing a handshake,
there is a possibility that multiple shd managers get
executed. Because of this, there is a chance that multiple
shd processes get spawned during a glusterd restart.
Change-Id: Ie20798441e07d7d7a93b7d38dfb924cea178a920
fixes: bz#1707081
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
- Pass the fop state instead of the afr local to
afr_ta_dom_lock_check_and_release().
- Avoid afr_lock_release_synctask() being called simultaneously from the
notify code path and the transaction (post-op) code path due to races.
- Check whether the post-op on the TA is valid based on event_gen checks.
- Invalidate in-memory information when we get a TA child down.
Note: This patch addresses some pending review comments of commit
053b1309dc8fbc05fcde5223e734da9f694cf5cc
(https://review.gluster.org/#/c/glusterfs/+/20095/)
fixes: bz#1698449
Change-Id: I2ccd7e1b53362f9f3fed8680aecb23b5011eb18c
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
"volume get all all | grep <key>" and "volume get <volname> all | grep <key>"
dump two different values for cluster.brick-multiplex and
cluster.server-quorum-ratio.
Fixes: bz#1707700
Change-Id: Id131734e0502aa514b84768cf67fce3c22364eae
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Similar to https://review.gluster.org/#/c/glusterfs/+/22652/,
reduce some of the work by using smaller buffers and fewer
conversions of parameters when snprintf()'ing them.
On the way, remove some clang warnings, mainly about dead assignments.
Change-Id: Ie51e6d6f14df6b2ccbebba314cf937af08839741
updates: bz#1193929
Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
updates: bz#1193929
Change-Id: Idad745d5869c92e6bed71842f14bc1a3362ca4bd
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
I was working on a blog about troubleshooting AFR issues and wanted to copy
the messages logged by self-heal for the blog. I then realized that AFR-v2
does not log anything *before* attempting a data heal, although it does log
it for metadata and entry heals:
I [MSGID: 108026] [afr-self-heal-entry.c:883:afr_selfheal_entry_do]
0-testvol-replicate-0: performing entry selfheal on
d120c0cf-6e87-454b-965b-0d83a4c752bb
I [MSGID: 108026] [afr-self-heal-common.c:1741:afr_log_selfheal]
0-testvol-replicate-0: Completed entry selfheal on
d120c0cf-6e87-454b-965b-0d83a4c752bb. sources=[0] 2 sinks=1
I [MSGID: 108026] [afr-self-heal-common.c:1741:afr_log_selfheal]
0-testvol-replicate-0: Completed data selfheal on
a9b5f183-21eb-4fb3-a342-287d3a7dddc5. sources=[0] 2 sinks=1
I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do]
0-testvol-replicate-0: performing metadata selfheal on
a9b5f183-21eb-4fb3-a342-287d3a7dddc5
I [MSGID: 108026] [afr-self-heal-common.c:1741:afr_log_selfheal]
0-testvol-replicate-0: Completed metadata selfheal on
a9b5f183-21eb-4fb3-a342-287d3a7dddc5. sources=[0] 2 sinks=1
This patch adds that message. Now there is a 'performing' and a corresponding
'Completed' message for every type of heal.
fixes: bz#1707746
Change-Id: I0b954cf1e17b48280aefa76640b5119b92133d61
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Instead of saving each key-value pair separately, which is slow
(especially as we fflush() after each one!), store them all as one
string and write them all together.
Implements https://github.com/gluster/glusterfs/issues/629
Change-Id: Ie77a272446b0b6785584b710a4fdd9c613dd9578
updates: bz#1193929
Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
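A minimal sketch of the batching idea (made-up keys and a plain FILE*, not the glusterd store API): build one buffer and issue a single write instead of a write-and-flush per key.

    /* Sketch only: build all key=value lines in one buffer and write them
     * with a single call, instead of one fprintf()+fflush() per key.
     * File handle and keys are made up for the example. */
    #include <stdio.h>

    struct kv { const char *key; const char *value; };

    static int store_all(FILE *fp, const struct kv *kvs, size_t n)
    {
        char buf[4096];
        size_t used = 0;

        for (size_t i = 0; i < n; i++) {
            int w = snprintf(buf + used, sizeof(buf) - used, "%s=%s\n",
                             kvs[i].key, kvs[i].value);
            if (w < 0 || (size_t)w >= sizeof(buf) - used)
                return -1; /* buffer too small for this sketch */
            used += (size_t)w;
        }
        /* one write (and at most one flush) for the whole batch */
        return fwrite(buf, 1, used, fp) == used ? 0 : -1;
    }

    int main(void)
    {
        const struct kv info[] = { { "type", "2" }, { "count", "3" } };
        return store_all(stdout, info, 2);
    }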
Problem: If any custom xattrs are set on a directory before
a brick is added, the xattrs are not healed on the directory
after adding the brick.
Solution: The xattrs are not healed because dht_selfheal_dir_mkdir_lookup_cbk
checks the value of MDS, and if the MDS value is not negative the
selfheal code path does not take a reference on the MDS xattrs. Change the
condition to take a reference on the MDS xattr so that custom xattrs are
populated on the newly added brick.
Updates: bz#1702299
Change-Id: Id14beedb98cce6928055f294e1594b22132e811c
Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
Fixed the coverity error "Unchecked return value (CHECKED_RETURN)" by
checking the return value and logging an error message if
afr_set_pending_dict fails.
updates: bz#789278
Change-Id: Iab7da6b4f3cd0622b95b8e1c412b007a330467e5
Signed-off-by: Rinku Kothiya <rkothiya@redhat.com>
There is a race in the way O_DIRECT writes are handled. Assume two
overlapping write requests w1 and w2.
* w1 is issued and is in wb_inode->wip queue as the response is still
pending from bricks. Also wb_request_unref in wb_do_winds is not yet
invoked.
list_for_each_entry_safe (req, tmp, tasks, winds) {
list_del_init (&req->winds);
if (req->op_ret == -1) {
call_unwind_error_keep_stub (req->stub, req->op_ret,
req->op_errno);
} else {
call_resume_keep_stub (req->stub);
}
wb_request_unref (req);
}
* w2 is issued and wb_process_queue is invoked. w2 is not picked up
for winding as w1 is still in wb_inode->wip. w1 is added to todo
list and wb_writev for w2 returns.
* response to w1 is received and invokes wb_request_unref. Assume
wb_request_unref in wb_do_winds (see point 1) is not invoked
yet. Since there is one more refcount, wb_request_unref in
wb_writev_cbk of w1 doesn't remove w1 from wip.
* wb_process_queue is invoked as part of wb_writev_cbk of w1. But, it
fails to wind w2 as w1 is still in wip.
* wb_request_unref is invoked on w1 as part of wb_do_winds. w1 is
removed from all queues, including wip.
* After this point there is no invocation of wb_process_queue unless
a new request is issued from the application, causing w2 to hang till
the next request.
This bug is similar to bz 1626780 and bz 1379655.
Change-Id: Iaa47437613591699d4c8ad18bc0b32de6affcc31
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
Fixes: bz#1705865
CID: 1382403 (CHECKED_RETURN)
Updates: bz#789278
Change-Id: I4c57b93fd3d14c524ff8519ed876f029834de306
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Right now the timeout is hard-coded; fix it by using heal-timeout.
fixes: bz#1703020
Change-Id: I0d154e7807f9dba7efc3896805559bbfaa7af2ad
Signed-off-by: Kinglong Mee <kinglongmee@gmail.com>
... by holding delta_blocks in a 64-bit int as opposed to a 32-bit int.
Change-Id: I2c1ddab17457f45e27428575ad16fa678fd6c0eb
updates: bz#1705884
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
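A minimal sketch of why the width matters (illustrative numbers, not taken from the shard xlator):

    /* Sketch only: accumulating block deltas for a large sharded file in a
     * 32-bit counter overflows; a 64-bit counter does not. Numbers are
     * illustrative, not taken from the shard xlator. */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* e.g. deleting ~3 TiB worth of 512-byte blocks */
        int64_t blocks_per_shard = 131072;   /* 64 MiB shard / 512 */
        int64_t shard_count = 50000;

        int32_t delta32 = (int32_t)(blocks_per_shard * shard_count); /* wraps */
        int64_t delta64 = blocks_per_shard * shard_count;            /* exact */

        printf("32-bit delta: %" PRId32 " (overflowed)\n", delta32);
        printf("64-bit delta: %" PRId64 "\n", delta64);
        return 0;
    }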
Coverity reported that GF_FREE(req_ctx) could be called twice on req_ctx.
Change-Id: I9120686e5920de8c27688e10de0db6aa26292064
CID: 1401115
Updates: bz#789278
Signed-off-by: Niels de Vos <ndevos@redhat.com>
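A minimal sketch of a common way to make an accidental second free harmless (the macro here is hypothetical; GF_FREE itself does more, e.g. memory accounting):

    /* Sketch only: setting a pointer to NULL immediately after freeing it
     * makes an accidental second free harmless, since free(NULL) is a no-op.
     * The helper macro is hypothetical, not GF_FREE itself. */
    #include <stdlib.h>

    #define FREE_AND_NULL(p)  do { free(p); (p) = NULL; } while (0)

    struct req_ctx { char *data; };

    int main(void)
    {
        struct req_ctx *ctx = calloc(1, sizeof(*ctx));

        FREE_AND_NULL(ctx);   /* first (real) free */
        FREE_AND_NULL(ctx);   /* would be a double free without the NULLing */
        return 0;
    }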
Problem:
Doing the re-open with O_TRUNC truncates the fragment even when it is not
needed, requiring extra heals.
Fix:
Don't use O_TRUNC at the time of re-open.
fixes bz#1706603
Change-Id: Idc6408968efaad897b95a5a52481c66e843d3fb8
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
1. Use small arrays, 32 or 64 bytes should suffice.
2. Do not repeat the pattern of
snprintf '%s.%d', prefix, count
over and over.
Change-Id: Ief6de78b766d9a07acb6256fc4830f4f3cfba7c9
updates: bz#1193929
Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
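A minimal sketch of point 2 (made-up names): format the '%s.%d' key in one small helper with a small stack buffer instead of repeating the snprintf at each call site.

    /* Sketch only: a tiny helper that builds the "<prefix>.<index>" key once,
     * into a small stack buffer, instead of repeating the snprintf pattern
     * at every call site. Names are made up for the example. */
    #include <stdio.h>

    static const char *make_key(char *buf, size_t len, const char *prefix, int idx)
    {
        snprintf(buf, len, "%s.%d", prefix, idx);
        return buf;
    }

    int main(void)
    {
        char key[64]; /* a small buffer is enough for these keys */

        for (int i = 0; i < 3; i++)
            printf("%s\n", make_key(key, sizeof(key), "brick", i));
        return 0;
    }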
Along with fixing a few defects, add the required annotations for the defects
that are marked ignore/false positive/intentional as per the coverity defect
sheet. This should avoid the per-component graph showing many defects as open
on the coverity glusterfs web page.
Updates: bz#789278
Change-Id: I19461dc3603a3bd8f88866a1ab3db43d783af8e4
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Problem: Currently glusterd spawns the bulkvoldict thread in a brick-mux
environment even when the number of volumes is less than the configured
glusterd.vol_count_per_thread.
Solution: Correct the logic that spawns the bulkvoldict threads:
1) Calculate endindex only while the total thread count is non-zero.
2) Update the end index correctly to pass the right index range to each
bulkvoldict thread.
Fixes: bz#1704252
Change-Id: I1def847fbdd6a605e7687bfc4e42b706bf0eb70b
Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
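A minimal sketch of the index arithmetic being corrected (hypothetical numbers and names, not the glusterd code): each worker gets a [start, end) range, the last one takes the remainder, and no worker is spawned when the volume count is below the per-thread threshold.

    /* Sketch only: split N volumes across worker threads and compute each
     * thread's [start, end) range, making sure the last thread picks up the
     * remainder. Thresholds and names are hypothetical, not glusterd's. */
    #include <stdio.h>

    int main(void)
    {
        int total_volumes = 25;
        int vols_per_thread = 10;           /* cf. vol_count_per_thread */
        int threads = total_volumes / vols_per_thread;

        if (threads == 0) {
            printf("handle all %d volumes inline, no worker threads\n",
                   total_volumes);
            return 0;
        }

        for (int t = 0; t < threads; t++) {
            int start = t * vols_per_thread;
            int end = start + vols_per_thread;
            if (t == threads - 1)
                end = total_volumes;        /* last thread takes the remainder */
            printf("thread %d: volumes [%d, %d)\n", t, start, end);
        }
        return 0;
    }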
CID 1401087: Null pointer dereferences (REVERSE_INULL)
CID 1401088: Null pointer dereferences (FORWARD_NULL)
Change-Id: I71bf67af80e1b22bcd2eb997b01a1a5ef0b4d80b
Updates: bz#789278
Signed-off-by: Susant Palai <spalai@redhat.com>
The current implementation made it possible to consider a file not
fresh even if it was created less than a second ago. This patch fixes
the way the delay is computed to ensure that at least one
second has elapsed.
Change-Id: I05f7b99e7e8dd97e31f7ebaaec6c39eecf98b00f
Updates: bz#1193929
Signed-off-by: Xavier Hernandez <jahernan@redhat.com>
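A minimal sketch of the timing issue (generic code, not the actual xlator): with one-second timestamp granularity, only a strictly greater 'now' guarantees that at least one second has elapsed.

    /* Sketch only: with one-second granularity, "now != ctime" can be true
     * just microseconds after creation. Requiring now > ctime (strictly
     * greater) guarantees at least one full second has passed. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>

    static bool is_fresh(time_t ctime_sec, time_t now_sec)
    {
        /* fresh = created within the last second */
        return now_sec <= ctime_sec;
    }

    int main(void)
    {
        time_t created = time(NULL);
        printf("same second -> fresh: %d\n", is_fresh(created, created));
        printf("next second -> fresh: %d\n", is_fresh(created, created + 1));
        return 0;
    }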
Compound fops are kept on the wire only for backward compatibility with
older AFR modules. The AFR module used beyond the 4.x releases does not
use compound fops. Hence, remove the compound fops from the protocol code.
Note that 'compound-fops' was already an option in AFR, and it has been
completely removed since the 4.1.x releases.
So, the point to note is that with this change we have two ways to upgrade
when clients of the 3.x series are present:
i) set the 'use-compound-fops' option to 'false' on any volume which
is of replica type, and then upgrade the servers.
ii) Do a two-step upgrade: first from the current version (which will
already be EOL if it is using compound fops) to a version between 4.1
and 6.x, and then an upgrade to 7.x.
Considering that the overall code we are removing for this option is quite
large, I believe it is worth it.
updates: bz#1693692
Signed-off-by: Amar Tumballi <amarts@redhat.com>
Change-Id: I0a8876d0367a15e1410ec845f251d5d3097ee593
anymore
updates: bz#1693692
Change-Id: Id5932b11e115ca6da1c2bfff7ae1460787109e06
Signed-off-by: Amar Tumballi <amarts@redhat.com>
Problem: statedump is not capturing information related to glusterd.
Solution: statedump is not capturing glusterd info because
trav->dumpops is NULL in gf_proc_dump_single_xlator_info(),
where trav is the glusterd xlator object. trav->dumpops is NULL
because we missed defining dumpops in the xlator_api of glusterd.
Defining dumpops in the xlator_api of glusterd fixes the issue.
fixes: bz#1703629
Change-Id: If85429ecb1ef580aced8d5b88d09fc15258bfc4c
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
In some of the fops generated by generator.py, the xdata request
was not being wound to the child xlator correctly.
This was happening because, even though the logic in
cloudsync-fops-c.py was correct, generator.py was generating
resultant code that omitted this logic.
Made changes in cloudsync-fops-c.py so that the correct code is produced.
Change-Id: I6f25bdb36ede06fd03be32c04087a75639d79150
updates: bz#1642168
Signed-off-by: Anuradha Talur <atalur@commvault.com>
Change-Id: Icbe53e78e9c4f6699c7a26a806ef4b14b39f5019
updates: bz#1642168
Signed-off-by: Anuradha Talur <atalur@commvault.com>
1400775 - USE_AFTER_FREE
1400742 - Missing Unlock
1400736 - CHECKED_RETURN
1398470 - Missing Unlock
'Missing Unlock' is the tricky one: we had already added an annotation, but
coverity still continued to complain. Added pthread_mutex_unlock to release
the lock before destroying it, to see if that makes coverity happy.
Updates: bz#789278
Change-Id: I1d892612a17f805144d96c1b15004a85a1639414
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
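A minimal sketch of the unlock-before-destroy pattern (plain POSIX threads, unrelated to the specific glusterd code paths):

    /* Sketch only: release a mutex before destroying it. Destroying a
     * locked mutex is undefined behaviour, and an unlock right before
     * pthread_mutex_destroy() is also the pattern static analysers expect. */
    #include <pthread.h>

    int main(void)
    {
        pthread_mutex_t lock;

        pthread_mutex_init(&lock, NULL);

        pthread_mutex_lock(&lock);
        /* ... critical section: tear down shared state ... */
        pthread_mutex_unlock(&lock);   /* must happen before destroy */

        pthread_mutex_destroy(&lock);
        return 0;
    }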
...during volume create if the cluster op-version is >= GD_OP_VERSION_7_0.
This option itself was introduced in GD_OP_VERSION_4_0_0 via commit 6daa65356.
We missed enabling it by default for new volume creates in that commit.
If we are to do it now safely, we need to use op-version GD_OP_VERSION_7_0
and target it for release-7.
fixes: bz#1702303
Change-Id: I7c6d4a8abe0816367e7069cb5cad01744f04858f
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Problem:
Sometimes developers forget to assign an lk-owner to an inodelk/entrylk/lk
before winding these fops. The locks xlator currently allows this. As a
result, multiple threads in the same client can acquire locks on the inode,
because the lk-owner is the same and the transport is the same, so isolation
via locks can't be achieved.
Fix:
Disallow locks with a zero lk-owner.
fixes bz#1624701
Change-Id: I1aadcfbaaa4d49308f7c819505857e201809b3bc
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
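A minimal sketch of the check being introduced (hypothetical structures, not the locks xlator code): reject any lock request whose lk-owner is empty or all zeroes.

    /* Sketch only: reject lock requests whose lk-owner is all zeroes, so two
     * threads of one client can't silently share a lock. The structures are
     * hypothetical, not the locks xlator's. */
    #include <stdio.h>

    struct lk_owner { unsigned char data[8]; size_t len; };

    static int lk_owner_is_zero(const struct lk_owner *owner)
    {
        for (size_t i = 0; i < owner->len; i++)
            if (owner->data[i])
                return 0;
        return 1;
    }

    static int inodelk_validate(const struct lk_owner *owner)
    {
        if (owner->len == 0 || lk_owner_is_zero(owner)) {
            fprintf(stderr, "rejecting lock request with zero lk-owner\n");
            return -1; /* would be unwound with EINVAL */
        }
        return 0;
    }

    int main(void)
    {
        struct lk_owner zero = { { 0 }, 8 };
        struct lk_owner good = { { 0 }, 8 };
        good.data[0] = 0x2a;

        printf("zero owner -> %d\n", inodelk_validate(&zero));
        printf("real owner -> %d\n", inodelk_validate(&good));
        return 0;
    }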
This change got missed when the initial changes were sent.
It should have been a part of:
https://review.gluster.org/#/c/glusterfs/+/21757/
Gist of the change:
The function that fills in stat info for dirents is invoked in readdirp
in posix when cloudsync populates the xdata request with GF_CS_OBJECT_STATUS.
Change-Id: Ide0c4e80afb74cd2120f74ba934ed40123152d69
updates: bz#1642168
Signed-off-by: Anuradha Talur <atalur@commvault.com>