Since release-6 is not done yet, this option can be introduced with
GD_OP_VERSION_6_0.
Change-Id: I8a0867e5b8b23d0d485704a2fc7a3efc4a90f637
Signed-off-by: Raghavendra Gowdappa <rgowdapp@redhat.com>
updates: bz#1664934
During disconnect cleanup, we do not cancel the reconnect
timer, which causes a ref leak each time a disconnect
happens.
Change-Id: I9d05d1f368d080e04836bf6a0bb018bf8f7b5b8a
updates: bz#1659708
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
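As a rough illustration of the cleanup described above (a sketch only: it
assumes the rpc-clnt connection keeps its reconnect timer in conn->reconnect
and that the armed timer pins a reference on the client; the exact cleanup
path of this patch may differ):

    /* cancel a pending reconnect timer during disconnect cleanup so the
     * reference it pins is dropped instead of leaking */
    if (conn->reconnect) {
            gf_timer_call_cancel(clnt->ctx, conn->reconnect);
            conn->reconnect = NULL;
            rpc_clnt_unref(clnt);   /* ref taken when the timer was armed */
    }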
Explicit invalidation by calling inode_invalidate is necessary when
the same (meta)data is shared/accessed across multiple mounts. Without
an explicit inode_invalidate call, caches in a mount which didn't
witness the writes wouldn't be aware of changes, as the writes wouldn't
have passed through them. However, if (meta)data is not shared, all
relevant I/O goes through the cache of a single mount and hence is
always coherent with the (meta)data on the bricks. So explicit inode
invalidation can be disabled for this case, which gives a huge
performance boost for workloads that write data and then immediately
read the data they just wrote. Note that otherwise even local writes
(which pass through the cache) will change ctime and cause unnecessary
invalidations.
The name of the option that controls this behavior is
"performance.global-cache-invalidation". This option is global and it
purges caches both in glusterfs and kernel stack for native FUSE
mounts. For non-native FUSE mounts, it purges cache only from
glusterfs stack. This option is effective only when
performance.stat-prefetch is on.
Note that there is a similar option "performance.cache-invalidation",
but the scope of that option is limited to quick-read and md-cache.
Change-Id: I462bb4b65ff9aae1f6ba76f50b1f2f94fb10323b
Signed-off-by: Raghavendra Gowdappa <rgowdapp@redhat.com>
updates: bz#1664934
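A minimal sketch of how such an option could gate the purge, assuming a
hypothetical priv->global_cache_invalidation flag that mirrors the new
volume option (the actual wiring in the translators differs):

    if (priv->global_cache_invalidation) {
            /* (meta)data may be shared across mounts: purge glusterfs and
             * kernel caches so other mounts see the update */
            inode_invalidate(inode);
    }
    /* otherwise all writers are local and the cache is already coherent */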
glusterd_resolve_all_bricks failure log should highlight the brick
identifier.
Updates: bz#1193929
Change-Id: I035b4650ef6a14bb1e1221d3bad1c40f9d43dbdd
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Problem: The get-state command errors out if any of the underlying
bricks of the volumes in the cluster go bad.
The get-state command is expected not to error out, but to generate
its output successfully.
Solution: In glusterd_get_state(), a statfs call is made on the
brick path of every brick of the volumes to calculate the total
and free space available. If the statfs call fails on any brick,
we should not error out, but should report that brick's total and
free space as 0.
This patch also handles a statfs failure scenario in
glusterd_store_retrieve_bricks().
fixes: bz#1672205
Change-Id: Ia9e8a1d8843b65949d72fd6809bd21d39b31ad83
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
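A hedged sketch of the fallback described in the solution, using plain
statvfs(3); the variable names are illustrative:

    #include <stdint.h>
    #include <sys/statvfs.h>

    struct statvfs brickstat = {0};
    uint64_t total = 0, avail = 0;

    /* If statvfs fails for a bad brick, report 0 instead of failing the
     * whole get-state request. */
    if (statvfs(brick_path, &brickstat) == 0) {
            total = (uint64_t)brickstat.f_blocks * brickstat.f_frsize;
            avail = (uint64_t)brickstat.f_bfree * brickstat.f_frsize;
    }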
Scenarios tested:
* Upgrade a node when stripe, tier and regular types of volumes
  are present.
  - All volumes start fine (as the change was not in the brick volfile).
  - For tier, the functionality may not even work, as changetimerecorder
    is not present.
  - 'gluster volume info' properly shows 'NOT SUPPORTED' for stripe and
    tier type volumes.
* Upgrade in a rolling upgrade scenario, where an old version is
  able to connect to a higher-version master.
  - On a normal volume, if the volfile server was new, the newer client
    volfiles needed to include the utime xlator conditionally.
  - With this one change, all other changes seem to work fine.
Change-Id: Ib2d3b69dafa02b2c695a735b13c1aa70aba07cb8
updates: bz#1635688
Signed-off-by: Amar Tumballi <amarts@redhat.com>
With the feature enabled, some of the performance testing results,
especially those which create millions of small files, showed roughly
a 4x regression compared to the version before it was enabled.
On master without this patch: 765 creates/sec
On master with this patch   : 3380 creates/sec
There also seems to be a regression caused by the feature in the
'ls -l' workload.
On master without this patch: 3030 files/sec
On master with this patch   : 16610 files/sec
This feature was added to handle multiple clients operating in parallel
(especially those which race for file creates with the same name) on a
single namespace/directory. Considering that this is < 3% of Gluster's
use cases right now, it makes sense to disable the feature by default,
so we don't penalize the default users who don't care about this use
case. Also note that the client-side translators, especially
distribute, replicate and disperse, already handle the issue in up to
99.5% of the cases without SDFS, so it makes sense to keep the feature
disabled by default.
Credits: Shyamsunder <srangana@redhat.com> for running the tests and
getting the numbers.
Change-Id: Iec49ce1d82e621e9db25eb633fcb1d932e74f4fc
Updates: bz#1670031
Signed-off-by: Amar Tumballi <amarts@redhat.com>
This patch helps enable IPv6 connections in the cluster.
Without using this option explicitly, the default address-family is IPv4.
When address-family is set to "inet6" in the /etc/glusterfs/glusterd.vol
file, the mount command line also needs to have
-o xlator-option="transport.address-family=inet6" added to it.
This option also gets added to the brick command line.
Snapshot and gfapi use cases should also use this option to pass in the
inet6 address-family.
Change-Id: I97db91021af27bacb6d7578e33ea4817f66d7270
fixes: bz#1635863
Signed-off-by: Milind Changire <mchangir@redhat.com>
There is a low-level security issue with fencing, since one client
can preempt another client's lock.
This patch does not completely eliminate the issue of a misbehaving
client, but it certainly adds a security layer for the default use
cases that do not need fencing.
Change-Id: I55cd15f2ed1ae0f2556e3d27a2ef4bc10fdada1c
updates: #466
Signed-off-by: Susant Palai <spalai@redhat.com>
Change-Id: Iefe08b136044495f6fa2b092c9e8c833efee1400
fixes: bz#1667905
Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
Some memory leaks were observed in the 'gluster get-state volumeoptions'
command. This fix resolves the identified leaks.
Change-Id: Ibde5743d1136fa72c531d48bb1b0b5da0c0b82a1
fixes: bz#1667779
Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
Problem: In some places the gluster code calls get_new_dict to create a
dictionary without taking a reference, so by the time dict_unref is
called the reference accounting is unbalanced and the dictionary leaks.
Solution: Call dict_new instead of get_new_dict.
updates: bz#1650403
Change-Id: I3ccbbf5af07079a4fa09aad2cd0458c8625b2f06
Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
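For illustration, the difference between the two constructors under the
usual libglusterfs dict semantics (dict_new() hands the caller an initial
reference, get_new_dict() does not):

    /* dict_new() takes the initial reference on behalf of the caller ... */
    dict_t *dict = dict_new();
    if (!dict)
            return -1;

    if (dict_set_int32(dict, "count", 1) != 0) {
            dict_unref(dict);
            return -1;
    }

    /* ... so one dict_unref() balances it and the dict is freed.
     * get_new_dict() skips that initial reference, which is how the
     * unbalanced accounting described above crept in. */
    dict_unref(dict);
    return 0;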
Problem: running "gluster get-state glusterd odir /get-state"
resulted in glusterd crash.
Cause: In the above command output directory has been specified
without "/" at the end. If "/" is not given at the end, "/" will
be added to path using "strcat", so the added character "/" is
not having memory allocated. When tried to free, glusterd will
crash as"/" has no memory allocated.
Solution: Instead of concatenating "/" to output directory, add
it to output filename.
Change-Id: I5dc00a71e46fbef4d07fe99ae23b36fb60dec1c2
fixes: bz#1665038
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
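A small sketch of the safer construction, using a hypothetical helper
purely for illustration:

    #include <stdio.h>

    /* Build "<odir>/<fname>" into a caller-provided buffer instead of
     * strcat-ing a "/" onto memory we do not own. */
    static int
    build_output_path(char *buf, size_t len, const char *odir,
                      const char *fname)
    {
            int ret = snprintf(buf, len, "%s/%s", odir, fname);
            if (ret < 0 || (size_t)ret >= len)
                    return -1;      /* error or truncated */
            return 0;
    }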
Problem:
https://review.gluster.org/#/c/glusterfs/+/21762/ has migrated
rebalance commands from the op-sm framework to the mgmt v3 framework.
In a heterogeneous cluster, if rebalance commands follow the op-sm
framework, localhost information is not displayed in the
output of "gluster v rebalance <volname> status".
Cause:
Previously, without https://review.gluster.org/#/c/glusterfs/+/21762/,
rebalance commands followed the op-sm framework.
In glusterd_volume_rebalance_use_rsp_dict() the current_index variable
keeps track of the number of peers in the trusted storage pool.
In op-sm, glusterd_volume_rebalance_use_rsp_dict() is called
only for the peers, so the current index should start from 2,
counting localhost as node 1.
With the above patch, rebalance commands follow the mgmt v3
framework. In mgmt v3, glusterd_volume_rebalance_use_rsp_dict()
is called for all nodes: for localhost it is called from the
brick-op function, and for peers it is called from the brick-op
callback function. So the current index value should start
from 1.
https://review.gluster.org/#/c/glusterfs/+/21762/ changed the
value of current index to 1. Because of this, in a heterogeneous
cluster, localhost's information is overwritten by one of the peers'
information, and rebalance status does not display localhost's
information in the output.
Solution: Assign a value to current index based on an op-version
check.
Change-Id: I2dfba1f007e908cf160acc4a4a5d8ef672572e4d
fixes: bz#1663243
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
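A minimal sketch of the op-version gate from the solution; the threshold
constant used here is an assumption, not taken from the patch:

    /* In op-sm only peers reach this function (localhost is implicitly
     * node 1); in mgmt v3 every node, localhost included, reaches it. */
    if (conf->op_version >= GD_OP_VERSION_6_0)    /* assumed threshold */
            current_index = 1;    /* mgmt v3: start counting at localhost */
    else
            current_index = 2;    /* op-sm: slot 1 is reserved for localhost */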
When we run the profile info command, it should display statistics
for all the bricks of the volume. To display information about bricks
hosted on peers, we need to aggregate the responses from the peers.
For the profile info command, all the statistics are added into
the dictionary in the brick-op phase. To aggregate the information from
peers, we need to call glusterd_syncop_aggr_rsp_dict() in the brick-op
callback function.
fixes: bz#1663223
Change-Id: I5f5890c3d01974747f829128ab74be6071f4aa30
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
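A rough sketch of what the callback-side aggregation could look like; the
argument list of glusterd_syncop_aggr_rsp_dict() is assumed here:

    /* brick-op callback: fold this peer's response into the aggregate
     * dict so 'profile info' shows every brick, not just the local ones */
    if (rsp_dict) {
            ret = glusterd_syncop_aggr_rsp_dict(op, aggr_dict, rsp_dict);
            if (ret)
                    goto out;    /* propagate the aggregation failure */
    }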
Add a missing unref on req_dict to fix a memory leak in the handshake
handler.
Change-Id: I0d8573fc3668c1a0ccc9030e3a096bbe20ed5c36
fixes: bz#1663077
Signed-off-by: Zhang Huan <zhanghuan@open-fs.com>
Added a ternary operator to avoid this issue.
Updates: bz#1622665
Change-Id: I163d0628304a0d61249d1d97a4a3d3bee4ba4927
Signed-off-by: Sheetal Pamecha <sheetal.pamecha08@gmail.com>
Attempt to free rsp.dict.dict_val twice
Change-Id: I5dbc50430f59ca8d0c739b0fbe95d71981852889
Updates: bz#1622665
Signed-off-by: Sheetal Pamecha <sheetal.pamecha08@gmail.com>
This patch addresses the coverity issues CID 1398470 and CID 1398475.
1398470 - Missing unlock: false positive; added an annotation to
make coverity happy.
1398475 - Unused value.
Change-Id: I1bb3df0b716690fad8fc52c393c8b2b6c41f7860
updates: bz#789278
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
updates: bz#789278
Change-Id: I7de800b90a614e3666e965b0cafc70026a844b2d
Signed-off-by: Amar Tumballi <amarts@redhat.com>
Problem: When trying to convert a plain distribute volume to replica-3
or arbiter type, it fails with an ENOTCONN error because the lookup on
the root fails when there is no quorum.
Fix: Allow the lookup on root if it is coming from the ADD_REPLICA_MOUNT,
which is used while adding bricks to a volume. It will try to set the
pending xattrs for the newly added bricks so that the heal happens
in the right direction and data loss scenarios are avoided.
Note: This fix solves the type-conversion problem only when the
volume has been mounted at least once. Conversion of never-mounted
volumes will still fail, since the dht selfheal's attempt to set the
directory layout fails because it is done with the PID
GF_CLIENT_PID_NO_ROOT_SQUASH set in frame->root.
Change-Id: Ic511939981dad118cc946754341318b164954b3b
fixes: bz#1655854
Signed-off-by: karthik-us <ksubrahm@redhat.com>
With read-after-open set to yes by default, if open-behind sees
any reads it will do an open on the backend (and hence a flush/release
later). This means that with the current order of quick-read and
open-behind, open-behind sees all reads and hence also does opens,
bringing down performance for small-file reads.
Since reads on small files are absorbed by quick-read, if quick-read
is made a parent of open-behind, ob doesn't witness any reads. For
read-only workloads this means ob doesn't do any opens (even with
read-after-open yes and use-anonymous-fd no).
Change-Id: I138a42b006d104cff43ee6f07829e39c36f6f234
Signed-off-by: Raghavendra Gowdappa <rgowdapp@redhat.com>
Fixes: bz#1659327
Current rebalance commands use the op_state machine (op-sm) framework.
Port them to use the mgmt_v3 framework.
Change-Id: I6faf4a6335c2e2f3d54bbde79908a7749e4613e7
fixes: bz#1655827
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
Fixes: bz#1659868
Change-Id: I38675ba4d47c8ba7f94cfb4734692683ddb3dcfd
Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
* Remove the options to load the old symbols.
* Keep only the 'xlator_api' symbol exported, using xlator.sym.
* Add xlator_api to all the xlators where it is missing.
NOTE: This covers all the xlators which have at least a test case
to validate their loading. If there is a translator which doesn't
have any test, then we should probably remove it from the codebase.
fixes: #164
Change-Id: Ibcdc8c9844cda6b4463d907a15813745d14c1ebb
Signed-off-by: Amar Tumballi <amarts@redhat.com>
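For context, exporting the new symbol means each xlator defines a filled-in
xlator_api_t; the fields shown below are common ones, but treat the exact
set and values as illustrative rather than authoritative:

    xlator_api_t xlator_api = {
            .init       = init,        /* xlator entry points */
            .fini       = fini,
            .fops       = &fops,
            .cbks       = &cbks,
            .options    = options,
            .identifier = "example",   /* hypothetical xlator name */
            .op_version = {1},         /* placeholder op-version */
    };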
Problem: Functions allocate memory for the req structure but miss
cleaning it up after submitting the request.
Solution: Clean up the allocated memory after the request is submitted.
Change-Id: I8f995787ed8986b882f008ccd588670b5d4139f5
updates: bz#1633930
Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
get_mux_limit_per_process() reads the global option dictionary and, in
case it doesn't find the key, assumes that the
cluster.max-bricks-per-process option isn't configured; however, the
default value should be picked up in that case.
Change-Id: I35dd8da084adbf59793d58557e818d8e6c17f9f3
Fixes: bz#1656951
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
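A hedged sketch of the intended fallback; the default macro name below is
made up, standing in for the 250 default:

    int   max_bricks = GLUSTERD_BRICKMUX_LIMIT_DEFAULT;   /* assumed name, 250 */
    char *value = NULL;

    /* A missing key only means the option was never configured: keep the
     * default instead of treating it as 0 / unlimited. */
    if (dict_get_str(priv->opts, "cluster.max-bricks-per-process", &value) == 0)
            max_bricks = atoi(value);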
libglusterfs devel package headers are referenced in code using the
include semantics of the program itself ("" includes). While this
works, it can be done better, especially when dealing with out-of-tree
xlator builds or out-of-tree devel package usage in general.
Towards this, the following changes are done:
- moved all devel headers under a glusterfs directory
- included these headers using system header notation <> in all
  code outside of libglusterfs
- included these headers using own-program notation "" within
  libglusterfs
This change, although big, just moves the headers around and makes
their inclusion correct when they are included from other sources.
This helps us include the libglusterfs headers correctly, without
namespace conflicts.
Change-Id: Id2a98854e671a7ee5d73be44da5ba1a74252423b
Updates: bz#1193929
Signed-off-by: ShyamsundarR <srangana@redhat.com>
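A small illustration of the two notations described above (the header names
are examples):

    /* outside libglusterfs: xlators, programs, out-of-tree builds */
    #include <glusterfs/glusterfs.h>
    #include <glusterfs/dict.h>

    /* inside libglusterfs itself */
    #include "glusterfs.h"
    #include "dict.h"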
We still use gnfs on our side, so do a little work to support
server.all-squash. Just like server.root-squash, it is also a
volume-wide option. Also see bz#1285126.
$ gluster volume set <VOLNAME> server.all-squash on
Note: If you enable server.root-squash and server.all-squash
at the same time, only server.all-squash takes effect. Please refer
to the following table:
+----------------+-----------------+-----------------------------+
|                | all_squash      | no_all_squash               |
+----------------+-----------------+-----------------------------+
| root_squash    | anonuid/anongid | anonuid/anongid for root    |
|                |                 | useruid/usergid for no-root |
+----------------+-----------------+-----------------------------+
| no_root_squash | anonuid/anongid | useruid/usergid             |
+----------------+-----------------+-----------------------------+
Updates: bz#1285126
Signed-off-by: Xie Changlong <xiechanglong@cmss.chinamobile.com>
Signed-off-by: Xue Chuanyu <xuechuanyu@cmss.chinamobile.com>
Change-Id: Iea043318fe6e9a75fa92b396737985062a26b47e
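A compact sketch of the credential mapping the table describes, assuming
both options boil down to per-request booleans (function and variable names
are illustrative):

    #include <sys/types.h>

    static void
    squash_creds(int all_squash, int root_squash,
                 uid_t *uid, gid_t *gid, uid_t anonuid, gid_t anongid)
    {
            if (all_squash) {                    /* everyone is squashed */
                    *uid = anonuid;
                    *gid = anongid;
            } else if (root_squash && *uid == 0) {
                    *uid = anonuid;              /* only root is squashed */
                    *gid = anongid;
            }
            /* otherwise keep the caller's own useruid/usergid */
    }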
While glusterd has infrastructure that lets the post-install step of the
spec bring it up in an interim upgrade mode, so that all the volfiles are
regenerated with the latest executable, the same methodology is not
followed in the container world, as the container image always points to
a specific gluster rpm and that rpm doesn't go through an upgrade process.
This fix does the following:
1. If the glusterd.upgrade file doesn't exist, regenerate the volfiles.
2. If the maximum-operating-version read from glusterd.upgrade doesn't
match GD_OP_VERSION_MAX, glusterd detects this as a version where new
options were introduced and regenerates the volfiles.
Tests done:
1. Bring up glusterd, check that the glusterd.upgrade file has been
created with the GD_OP_VERSION_MAX value.
2. After 1, restart glusterd and check that glusterd hasn't regenerated
the volfiles, as there is no change between GD_OP_VERSION_MAX and the
op_version read from the file.
3. Bump up GD_OP_VERSION_MAX in the code by 1 and, post compilation,
restart glusterd; the volfiles should be regenerated again.
Note: The old way of having volfiles regenerated during an rpm upgrade
is kept as-is for now, but it can eventually be sunset.
Change-Id: I75b49a1601c71e99f6a6bc360dd12dd03a96414b
Fixes: bz#1651463
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
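A hedged sketch of the check from point 2, assuming glusterd.upgrade simply
stores the maximum op-version as an integer (the file handling and helper
name are illustrative):

    #include <stdio.h>

    /* Regenerate volfiles when glusterd.upgrade is missing or records a
     * different maximum op-version than the running binary. */
    static int
    need_volfile_regen(const char *upgrade_file, int gd_op_version_max)
    {
            FILE *fp = fopen(upgrade_file, "r");
            int   stored = 0;

            if (!fp)
                    return 1;                    /* first boot: regenerate */
            if (fscanf(fp, "%d", &stored) != 1)
                    stored = 0;                  /* unreadable: regenerate */
            fclose(fp);

            return stored != gd_op_version_max;
    }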
In a previous patch (https://review.gluster.org/20769) we
added the key length to be passed to the dict_* funcs, to remove the
need to strlen() it. This patch makes use of these functions throughout
this whole file.
Please review carefully, as there are many changes there.
Compile-tested only!
updates: bz#1193929
Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
Change-Id: I2e1ee340300ec330936c31becda6bfe1b6533281
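For illustration, the keyed-length calls look roughly like this; the
*n-suffixed name and the SLEN() macro follow the convention the earlier
patch introduced, though the exact signatures should be treated as
assumptions:

    char *volname = NULL;
    int   ret = -1;

    /* before: the dict code strlen()s the key on every call */
    ret = dict_get_str(dict, "volname", &volname);

    /* after: the compile-time key length is passed along */
    ret = dict_get_strn(dict, "volname", SLEN("volname"), &volname);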
Commit 6821cec changed this default from 0 to 250 in the option table;
however, the same wasn't done in the global option table.
Change-Id: I6075f2ebc51e839510d6492fb62e706deb2d845b
Fixes: bz#1652118
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Current profile commands use the op_state machine (op-sm) framework.
Port them to use the mgmt_v3 framework.
The following tests were performed on the patch:
case 1:
1. On a 3-node cluster, created and started 3 volumes
2. Mounted all three volumes and wrote some data
3. Started the profile operation for all the volumes
4. Ran "gluster v status" from N1,
   "gluster v profile <volname1> info" from N2,
   "gluster v profile <volname2> info" from N3 simultaneously in a
   loop for around 10000 times
5. Didn't find any cores generated.
case 2:
1. Repeat steps 1, 2 and 3 from case 1.
2. Ran "gluster v status" from N1,
   "gluster v profile <volname1> info" from N2 (terminal 1),
   "gluster v profile <volname2> info" from N2 (terminal 2)
   simultaneously in a loop.
3. No cores were generated.
fixes: bz#1654181
Change-Id: I83044cf5aee3970ef94066c89fcc41783ed468a6
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
Problem: glusterd should not try to acquire locks on any resources
once it has already received a SIGTERM and cleanup has started.
Otherwise we might hit a segfault, since the thread going through the
cleanup path will be freeing up the resources while some other thread
might be trying to acquire locks on the freed resources.
Solution: Perform rcu_read_lock/unlock() under the cleanup_lock mutex.
fixes: bz#1654270
Change-Id: I87a97cfe4f272f74f246d688660934638911ce54
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
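A minimal sketch of the idea, assuming cleanup_lock is a process-wide
pthread mutex taken by the SIGTERM cleanup path (the real glusterd hides
this behind its RCU lock macros):

    /* take/release the RCU read lock only while holding cleanup_lock, so a
     * reader cannot race with the cleanup thread freeing the resources */
    pthread_mutex_lock(&ctx->cleanup_lock);
    rcu_read_lock();
    pthread_mutex_unlock(&ctx->cleanup_lock);

    /* ... RCU-protected walk of peers/volumes ... */

    pthread_mutex_lock(&ctx->cleanup_lock);
    rcu_read_unlock();
    pthread_mutex_unlock(&ctx->cleanup_lock);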
Problem: 1) server_init does not clean up allocated resources
            when it fails before returning an error.
         2) dict leak at the time the graph is destroyed.
Solution: 1) Free the resources in case server_init fails.
          2) Take a dict_ref on the graph xlator before destroying
             the graph, to avoid the leak.
Change-Id: I9e31e156b9ed6bebe622745a8be0e470774e3d15
fixes: bz#1654917
Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
It seems there were quite a few unused enums (that in turn
cause unneeded memory allocation) in some xlators.
I've removed them, hopefully not causing any damage.
Compile-tested only!
updates: bz#1193929
Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
Change-Id: I8252bd763dc1506e2d922496d896cd2fc0886ea7
libglusterfs provides wrapper functions MALLOC/__gf_default_malloc,
CALLOC/__gf_default_calloc, and REALLOC/__gf_default_realloc for those
few places outside of mempool.c that need to call malloc/calloc/realloc
directly.
Notable exceptions are "contrib" code, e.g. rbtree and timer-wheel,
and perhaps parsers generated by yacc+lex. But even parsers can be
fixed to at least call the wrappers mentioned above, if not our own
allocators.
Change-Id: Ie6156307b6d2183be9c9aff153afb7598974f4e4
updates: bz#1193929
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
All glusterd store operations and the cleanup thread should work under a
critical section to avoid any partial store write.
Change-Id: I4f12e738f597a1f925c87ea2f42565dcf9ecdb9d
Fixes: bz#1652430
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
This patch fixes CID 1174824: RESOURCE_LEAK.
Change-Id: I59d2d6ebc1fa3d7ebe0b97c7dbe3c5539128522a
updates: bz#789278
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
Change-Id: Ia2c6a10e2b76a4aa8bd4ea97e5ce33bdc813942e
Fixes: bz#1652118
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
With commit 8ad159b2a7, bz#1511339 got reintroduced.
fixes: bz#1511339
Change-Id: I1e34c1fc60c6dda04af25d123f1ca40964cadb7a
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
A new constant named GF_NETWORK_TIMEOUT has been defined and all
references to the hard-coded timeout of 42 seconds have been
replaced with this constant.
Change-Id: Id30f5ce4f1230f9288d9e300538624bcf1a6da27
fixes: bz#1652852
Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
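The shape of the change, sketched (the field being assigned is
illustrative):

    /* libglusterfs: one named constant instead of a scattered magic 42 */
    #define GF_NETWORK_TIMEOUT 42

    /* at the call sites */
    conf->request_timeout = GF_NETWORK_TIMEOUT;    /* was: = 42 */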
Problem: Memory leak when graph init fails during the volfile
exchange between brick and glusterd.
Solution: Fix the error code path in glusterfs_graph_init.
Change-Id: If62bee61283fccb7fd60abc6ea217cfac12358fa
fixes: bz#1651431
Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
Added a default value "off" for (client|server).ssl
fixes: bz#1651059
Change-Id: I3d9c80093ac471d9d770fbd6c67f945491cf726e
Signed-off-by: Sheetal Pamecha <sheetal.pamecha08@gmail.com>
For added fun, coverity is not smart enough to detect that the
strncpy() is safe, and for extra laughs, using coverity annotations
doesn't do anything either; but we're adding them anyway, along
with marking the BUFFER_SIZE_WARNINGS as false positives on
scan.coverity.com.
Change-Id: If7fa157eca565842109f32fee0399ac183b19ec7
updates: bz#1193929
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
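For reference, the annotation style mentioned above is a structured comment
placed directly before the flagged line; the event tag used here is my
guess at what matches the BUFFER_SIZE_WARNING checker:

    char dest[256] = "";

    /* the copy below is bounded by the destination, so tell coverity the
     * finding is intentional / a false positive */
    /* coverity[buffer_size_warning] */
    strncpy(dest, src, sizeof(dest) - 1);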
Removed an unnecessary iteration in the brick disconnect
handler when brick multiplexing is enabled.
Change-Id: I62dd3337b7e7da085da5d76aaae206e0b0edff9f
fixes: bz#1650115
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Problem: Commit bcf1e8b07491b48c5372924dbbbad5b8391c6d81 missed freeing
the path returned by the function search_brick_path_from_proc.
This patch fixes CID 1396668: Resource leak.
Change-Id: I4888c071c1058023c7e138a8bcb94ec97305fadf
fixes: bz#1646892
Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
Added a default value "none" and additional description.
Change-Id: I3a5c06f8ec1e502fc399860e4b5cb835102cd71d
Updates: bz#1608512
Signed-off-by: Shwetha Acharya <sacharya@redhat.com>
Since gcc-8.2.x (fedora-28 or so) gcc has been emitting warnings
about buggy use of strncpy.
Most uses that gcc warns about in our sources are exactly backwards;
the 'limit' or len is the strlen/size of the _source param_, giving
exactly zero protection against overruns. (Which was, after all, one
of the points of using strncpy in the first place.)
IOW, many warnings are about uses that look approximately like this:

    ...
    char dest[8];
    char src[] = "this is a string longer than eight chars";
    ...
    strncpy (dest, src, sizeof(src)); /* boom */
    ...

The len/limit should be sizeof(dest).
Note: the above example has a definite over-run. In our source the
overrun is typically only theoretical (but possibly exploitable.)
Also strncpy doesn't null-terminate on truncation; snprintf does; prefer
snprintf over strncpy.
Mildly surprising that coverity doesn't warn/isn't warning about this.
Change-Id: I022d5c6346a751e181ad44d9a099531c1172626e
updates: bz#1193929
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
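Following that advice, the safe rewrite of the example above would look
something like this:

    char dest[8];
    char src[] = "this is a string longer than eight chars";

    /* bounded by the destination and always NUL-terminated; truncation
     * can be detected from snprintf's return value if needed */
    snprintf(dest, sizeof(dest), "%s", src);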
Since gcc-8.2.x (fedora-28 or so) gcc has been emitting warnings
about buggy use of strncpy, e.g.:

    warning: ‘strncpy’ output truncated before terminating nul
    copying as many bytes from a string as its length

    warning: ‘strncpy’ specified bound depends on the length of the
    source argument

Since we're copying string fragments and explicitly NUL-terminating,
use memcpy to silence the warning.
Change-Id: I413d84b5f4157f15c99e9af3e154ce594d5bcdc1
updates: bz#1193929
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
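A sketch of the described pattern, copying a known-length fragment and
terminating it explicitly (the buffer and pointer names are illustrative):

    #include <string.h>

    char   dest[256];
    size_t len = end - start;          /* length of the fragment to copy */

    if (len >= sizeof(dest))
            len = sizeof(dest) - 1;    /* clamp to the destination */

    memcpy(dest, start, len);
    dest[len] = '\0';                  /* explicit NUL termination */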