If a new node is added to the cluster after a snapshot was taken,
the geo-rep files are not synced to that node. This leads to the
failure of snapshot restore. Hence, ignore the missing geo-rep files
on the new node and proceed with the snapshot restore. Once the
restore is successful, the missing geo-rep files can be regenerated
with "gluster volume geo-rep <master-vol> <slave-vol> create
push-pem force"
Change-Id: I1c364f8aefdd6c99b0b861b6d0cb33709ec39da2
BUG: 1181418
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/9489
Reviewed-by: Sachin Pandit <spandit@redhat.com>
Reviewed-by: Aravinda VK <avishwan@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
If the restore commit succeeds on the originator and a few nodes
but fails on some other node, restore cleanup should reinstate the
volume and the snapshot in question as they were before the command
was run.
Change-Id: I7bb0becc7f052f55bc818018bc84770944e76c80
BUG: 1181418
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/9441
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Current Behaviour:
1. Geo-replication gsec_create creates a common_secret.pem.pub file
   containing the public keys of all the nodes of the master cluster,
   in the location /var/lib/glusterd/
2. Geo-replication create push-pem copies common_secret.pem.pub
   to the same location on all the slave nodes, with the same name.
Problem:
When multiple geo-replication sessions are created simultaneously,
the wrong public keys might get copied onto the slave nodes.
E.g.
A geo-rep session is established between Node1 (vol1: Master) and
Node2 (vol2: Slave), and one more geo-rep session where
Node2 (vol3) becomes master to Node3 (vol4), as below:
Session1: Node1 (vol1) ---> Node2 (vol2)
Session2: Node2 (vol3) ---> Node3 (vol4)
If both geo-replication sessions are created with the following
steps, the wrong public keys are copied onto Node3 from Node2:
1. gsec_create is done on Node1 (vol1) - Session1
2. gsec_create is done on Node2 (vol3) - Session2
3. create push-pem is done on Node1 - Session1.
   This overwrites the common_secret.pem.pub on Node2
   created by gsec_create in the second step.
4. create push-pem on Node2 (vol3) copies the overwritten
   common_secret.pem.pub keys to Node3. - Session2
Consequence:
Session2 fails to start with Permission denied because of the wrong
public keys.
Solution:
On geo-rep create push-pem, don't copy the common_secret.pem.pub
file with the same name onto all the slave nodes. Instead, prefix
the master and slave volume names to the filename.
NOTE: This changes the manual steps to be followed to set up
non-root geo-replication (mountbroker). To copy the ssh public
keys, two extra arguments need to be passed:
set_geo_rep_pem_keys.sh <mountbroker_user> <master vol name> \
<slave vol name>
Path to set_geo_rep_pem_keys.sh:
Source installation:
/usr/local/libexec/glusterfs/set_geo_rep_pem_keys.sh
RPM installation:
/usr/libexec/glusterfs/set_geo_rep_pem_keys.sh
Change-Id: If38cd4e6f58d674d5fe2d93da15803c73b660c33
BUG: 1183229
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/9460
Reviewed-by: Aravinda VK <avishwan@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Tested-by: Venky Shankar <vshankar@redhat.com>
If a volume uses quota, the volume delete operation should unmount
the auxiliary quota mount using glusterd_remove_auxiliary_mount().
This may fail with EBADF if the mount is already gone. In that
situation, ignore the error so that volume delete succeeds.
This fixes a spurious failure on NetBSD in tests/basic/quota.t 74-75
BUG: 1129939
Change-Id: I69325f71fc2c8af254db46f696c8669a4e6bd7e4
Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-on: http://review.gluster.org/9468
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
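
A standalone sketch of the tolerant-unmount idea described above; the helper name is illustrative, not glusterd's actual implementation:

```c
#include <errno.h>
#include <sys/mount.h>

/* Illustrative helper (not glusterd_remove_auxiliary_mount() itself):
 * unmount the auxiliary quota mount, but treat an already-gone mount
 * as success so that volume delete can proceed. umount(2) is the
 * Linux spelling; NetBSD uses unmount(path, flags). */
static int
remove_aux_mount_tolerant (const char *mountpoint)
{
        int ret = umount (mountpoint);

        if (ret != 0 && errno == EBADF)
                ret = 0;        /* mount already gone: not an error */

        return ret;
}
```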
Fixed a bug where replica 2 volume creation prompted a warning
saying the bricks are on the same host even when they are on
different hosts.
Change-Id: Ie55addae55c55e32ad2b5339530ab71f0e3711ab
BUG: 1091935
Signed-off-by: Raghavendra Talur <rtalur@redhat.com>
Reviewed-on: http://review.gluster.org/9373
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Previously glusterd did not perform quorum validation in the syncop
framework. So when quorum was lost, a few operations (e.g. add-brick,
remove-brick, volume set) that are based on the syncop framework
passed successfully without a quorum validation check.
With this change, quorum validation is done in the syncop framework,
and all operations (except volume set <quorum options> and "volume
reset all" commands) are blocked when quorum is lost.
Change-Id: I4c2ef16728d55c98a228bb86795023d9c1f4e9fb
BUG: 1177132
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/9349
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
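
A minimal sketch of the kind of quorum gate this adds; the names and the simple majority rule shown are illustrative, not glusterd's exact validation:

```c
/* Illustrative quorum gate (not glusterd's exact code): block the
 * operation unless more than half of the peers are up. */
static int
validate_quorum (int active_peers, int total_peers)
{
        if (active_peers * 2 <= total_peers)
                return -1;      /* quorum lost: block the operation */
        return 0;               /* quorum met: let the syncop proceed */
}
```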
Problem : glusterd was crashing with SIGABRT if the rpc connection
failed in debug mode.
Reason : iov was being passed to assert() before the rpc status was
checked in the rpc callback function (rpc calls the callback with
the rpc status set to -1 and iov set to NULL if the connection has
failed).
Fix : Error checking for iov was added after the rpc status check,
and proper error messages were added.
Change-Id: I35c05c438444d0454aadac4e45524565a7be68a8
BUG: 1181543
Signed-off-by: Anand <anekkunt@redhat.com>
Reviewed-on: http://review.gluster.org/9449
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
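
A hedged sketch of the guarded callback shape; struct rpc_req, gf_log() and GF_ASSERT() come from the gluster tree ("rpc-clnt.h", "logging.h", "common-utils.h"), and the handler name is made up:

```c
#include <sys/uio.h>

static int
example_op_cbk (struct rpc_req *req, struct iovec *iov, int count,
                void *myframe)
{
        if (req->rpc_status == -1) {
                /* Connection failed: rpc invokes the callback with
                 * rpc_status == -1 and iov == NULL, so bail out
                 * before any assert on iov. */
                gf_log ("glusterd", GF_LOG_ERROR,
                        "RPC failed; no response payload received");
                return -1;
        }

        GF_ASSERT (iov);        /* safe now: status says data arrived */
        /* ... decode the response from iov ... */
        return 0;
}
```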
For volumes with replicate or disperse xlators, the self-heal daemon
should do the healing. This patch provides enable/disable
functionality for the xlators to be part of the self-heal daemon.
Replicate already had this functionality with 'gluster volume set
cluster.self-heal-daemon on/off', but this patch makes it uniform
for both volume types. Internally it still does a 'volume set' based
on the volume type.
Change-Id: Ie0f3799b74c2afef9ac658ef3d50dce3e8072b29
BUG: 1177601
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/9358
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
"features.uss" with a non-boolean value gets set in the volume option
table because of which subsequent volume set operation fails since
features.uss does not contain a valid boolean value.
Fix is not to allow a non-boolean value to get set in the volume option
table. "features.uss" option should have validation function "validate_uss"
which validate the input value given by user.
Change-Id: I4a212f876627a4979715183b0d488fd69095f193
BUG: 1179175
Signed-off-by: ggarg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/9395
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
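
A hedged sketch of what a validate_uss-style hook can look like; gf_string2boolean() and gf_asprintf() are libglusterfs helpers ("common-utils.h", "mem-pool.h"), and the surrounding signature is an assumption:

```c
static int
validate_uss (char *key, char *value, char **op_errstr)
{
        gf_boolean_t enabled = _gf_false;

        if (gf_string2boolean (value, &enabled)) {
                /* Reject before the value ever lands in the volume
                 * option table. */
                gf_asprintf (op_errstr, "%s is not a valid boolean "
                             "value for option %s", value, key);
                return -1;
        }
        return 0;
}
```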
On changelog_register, clean up .processing, .history/.processing,
.current and .history/.current from the working directory.
Moved glusterd_recursive_rmdir and glusterd_for_each_entry to a
common place (libglusterfs) and renamed them recursive_rmdir and
GF_FOR_EACH_ENTRY_IN_DIR respectively.
BUG: 1162057
Change-Id: I1f98468a344cead039026762a805437b2f9e507b
Signed-off-by: Aravinda VK <avishwan@redhat.com>
Reviewed-on: http://review.gluster.org/9082
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Tested-by: Venky Shankar <vshankar@redhat.com>
Initialising the transport's list, meant to hold the clients
connected to it, on the first connection event is prone to a race,
especially with the introduction of the multi-threaded event layer.
BUG: 1181203
Change-Id: I6a20686a2012c1f49a279cc9cd55a03b8c7615fc
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/9413
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
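
A small sketch of the race-free ordering; the structure and names are illustrative, and the list macros come from libglusterfs's "list.h":

```c
/* Initialise the client list when the transport object is created
 * (still single-threaded), not lazily on the first connection
 * event, where several event threads may race. */
struct transport_sketch {
        struct list_head clients;       /* libglusterfs "list.h" */
};

static void
transport_sketch_new (struct transport_sketch *t)
{
        INIT_LIST_HEAD (&t->clients);
}
```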
Apart from snapshot, for all other transactions quorum should be
calculated on the global peer list.
Change-Id: I30bacdb6521b0c6fd762be84d3b7aa40d00aacc4
BUG: 1177132
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/9422
Reviewed-by: Kaushal M <kaushal@redhat.com>
Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
glusterfs socket files should not reside outside of the gluster
folder.
Change-Id: I5d7b43b11c8c78a32df8aaf38917b80e4e33c9d0
BUG: 1180972
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/9423
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Previously, enabling SSL authentication/encryption but not authorization
required explicitly setting ssl-allow=*. Now that same behavior is the
default (i.e. when ssl-allow is not set).
Also, there's no reason that a name used for *login* auth (typically a
UUID for internal purposes or a human name when using SSL) should
validate as an RFC-compliant host name or IP address. Therefore the
validation only occurs when the auth type is "addr" (not "login" or
anything else).
Change-Id: I01485ff4f0ab37de4b182858235a5fb0cf4c3c7d
BUG: 1179208
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: http://review.gluster.org/9397
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Use list_for_each_entry_safe() instead of list_for_each_entry() for
cleanup of the local xaction_peers list.
Change-Id: I6d70c04dfb90cbbcd8d9fc4155b8e5e7d7612460
BUG: 1173414
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/9416
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
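
A self-contained sketch of why the _safe variant matters here, using the kernel-style macros from libglusterfs's "list.h"; the node type is illustrative:

```c
#include <stdlib.h>

struct peer_node {
        struct list_head list;
};

static void
cleanup_xaction_peers (struct list_head *xaction_peers)
{
        struct peer_node *node = NULL;
        struct peer_node *tmp  = NULL;

        /* list_for_each_entry() would read node->list.next after the
         * node is unlinked and freed; the _safe variant caches the
         * successor in tmp first, so unlink-and-free during the walk
         * is legal. */
        list_for_each_entry_safe (node, tmp, xaction_peers, list) {
                list_del_init (&node->list);
                free (node);    /* assumes heap-allocated nodes */
        }
}
```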
Refactor glusterd-utils.c to create
glusterd-snapshot-utils.c consisting of all snapshot
utility functions.
Change-Id: Id9823a2aec9b115f9c040c9940f288d4fe753d9b
BUG: 1176770
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/9391
Reviewed-by: Kaushal M <kaushal@redhat.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Due to the recent change introduced by commit
da9deb54df91dedc51ebe165f3a0be646455cb5b, cluster quorum count
calculation now depends on whether the peer list is the full peer
list, the global transaction peer list, or the local transaction
peer list.
Change-Id: I9f63af9a0cb3cfd6369b050247d0ef3ac93d760f
BUG: 1173414
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/9350
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Change-Id: I1bf26c9d294e95f7b82cfc7a96f9d5575f5e0362
BUG: 1176770
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/9313
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
... instead of consulting the on-disk data directory. There is no
reason why the on-disk representation would be more accurate than
the in-memory one. In fact, it is the other way around when a node
is reconciling volume/cluster configuration with the rest of the
cluster.
Change-Id: I786823efdf1d0f6b9e6fcdb72d51e5227c399ce1
BUG: 1176770
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/9292
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
CID: 1260432
Change-Id: I6845bc4c231b53428419a5a2ad0c78ea9da31058
BUG: 1093692
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/9338
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Change-Id: I24ed0f866d53e91a8323c043a38f73207cbfd7d2
BUG: 1168189
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/9351
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
... and unlink the 'right' socket file
Change-Id: Id12ee8c622914555b7933104e13b43b3b31b5d19
BUG: 1176770
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/9315
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Kaushal M <kaushal@redhat.com>
In the current implementation, the xaction_peers list is maintained
in a global variable (glusterd_priv_t) for syncop/mgmt_v3. This
means consistency and atomicity of the peerinfo list across
transactions is not guaranteed when multiple syncop/mgmt_v3
transactions are going through.
We ran into a problem in mgmt_v3-locks.t, which was failing
spuriously: two volume set operations (on two different volumes)
were going through simultaneously, and both transactions were
manipulating the same xaction_peers structure, which led to a
corrupted list. Because of this, in some cases the unlock request to
a peer was never triggered, and we ended up with stale locks.
The solution is to maintain a per-transaction local xaction_peers
list for every syncop.
Please note I've identified this problem in the op-sm area as well,
and a separate patch will be attempted to fix it.
Finally, thanks to Krishnan Parthasarathi and Kaushal M for their
constant help in getting to the root cause.
Change-Id: Ib1eaac9e5c8fc319f4e7f8d2ad965bc1357a7c63
BUG: 1173414
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/9269
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
The mgmt_v3 handler functions already send the ret code as
part of the *send_resp calls, and further propagating the
ret code to the calling functions will lead to double deletion
of the req object. Hence returning success from the mgmt_v3
handler functions.
Change-Id: I1090e49c54a786daae5fd97b5c1fbcb5d819acba
BUG: 1138577
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/8620
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Instead of relying on brickinfo->status, check if the brick
process is running before copying the brick port number.
Change-Id: I246465fa4cf4911da63a1c26bbb51cc4ed4630ac
BUG: 1175700
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/9297
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Change-Id: Ie5eaa2beb4446640b22873f91e17da90d1cd8fad
BUG: 1174625
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/9280
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
* For samba export, the entry point is also added to the readdir response.
Change-Id: I825c017e0f16db1f1890bb56e086f36e6558a1c2
BUG: 1168875
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/9218
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
In the function glusterd_dump_peer(), "input_key" was copied into
the "key" buffer without checking the length, which might cause a
string_overflow overrun. Similar problems existed at the other
coverity sites.
With this fix, "input_key" is copied into the "key" buffer bounded
by the maximum length of the buffer.
Coverity CID: 1256171
Coverity CID: 1256172
Coverity CID: 1256174
Change-Id: I4e092309d9503bd79ff82cf83ed5e8d758743453
BUG: 1093692
Signed-off-by: Gaurav Kumar Garg ggarg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/9208
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
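
The bounded-copy pattern as a standalone sketch; snprintf() both limits the write and guarantees NUL-termination:

```c
#include <stdio.h>

/* snprintf() writes at most key_size bytes including the terminating
 * NUL, truncating input_key if necessary, so the destination buffer
 * can never overrun. */
static void
copy_key_bounded (char *key, size_t key_size, const char *input_key)
{
        snprintf (key, key_size, "%s", input_key);
}
```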
In glusterd_op_sm(), we lock and unlock the gd_op_sm_lock mutex.
Unfortunately, locking and unlocking can happen in different threads
(a task swap will occur in the handler call with the use of
synctasks). This case is explicitly covered by POSIX: the behavior
is undefined.
http://pubs.opengroup.org/onlinepubs/9699919799/functions/pthread_mutex_lock.html
When unlocking from a thread that is not the owner, Linux seems to
be fine (though you never know with unspecified operations), while
NetBSD returns EPERM, causing a spurious error in tests/basic/pump.
To fix this, we use synclock_t, which was meant precisely for this.
synclock is a pthread_mutex_t-like synchronization object which uses
the synctask handle as the owner and is immune to the task being run
on multiple threads during its lifetime.
Change-Id: Idca15190d42f32a843088cc8236138f676377586
BUG: 1129939
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/9212
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Kaushal M <kaushal@redhat.com>
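
A hedged sketch of the synclock usage; synclock_t and friends come from libglusterfs ("syncop.h"), with the single-argument init signature of this era assumed:

```c
static synclock_t gd_op_sm_lock;

void
op_sm_setup (void)
{
        synclock_init (&gd_op_sm_lock);
}

void
op_sm_run (void)
{
        synclock_lock (&gd_op_sm_lock);
        /* A synctask may yield here and resume on another thread.
         * That is safe: ownership is tracked per synctask, not per
         * pthread as with pthread_mutex_t. */
        synclock_unlock (&gd_op_sm_lock);
}
```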
Change-Id: Ifa0d4ac17f9da94660a7b7f567a0f07b5cec7aec
BUG: 1164775
Signed-off-by: Petr Medonos <petr.medonos@etnetera.cz>
Reviewed-on: http://review.gluster.org/9138
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Create a new rebalance volfile, which will not contain snap-view
client translators, irrespective of the status of USS.
This volfile will be created and regenerated every time the
fuse-volfile is generated, and will be consumed by the rebalance
process.
Change-Id: I514a8e88d06c0b8fb6949c3a3e6dc4dbe55e38af
BUG: 1164711
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/9190
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
glusterd_handle_snapd_option was returning failure if snapd is not
running, because of which gluster commands were failing.
Change-Id: I22286f4ecf28b57dfb6fb8ceb52ca8bdc66aec5d
BUG: 1168803
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/9206
Reviewed-by: Kaushal M <kaushal@redhat.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
There is a code path (__glusterd_handle_stage_op) where
glusterd_get_txn_opinfo may fail to get a valid transaction id if no
volume name is provided in the command. However, if this function
fails to get a txn id in the op state machine, that is a serious
issue and op-sm is impacted. From a debuggability standpoint,
gf_log() can never identify the consumer of this function, so
logging these failures with gf_log_calling_fn is a must here.
Change-Id: I4937a9fb20cc6a747fd30dcd9fd4936398d0602a
BUG: 1168809
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/9207
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
This fixes a few lingering size_t problems. Of particular note are
some uses of off_t for size params in function calls.
There is no correct, _portable_ way to print an off_t. The best you
can do is use a scratch int64_t/PRId64 or uint64_t/PRIu64.
Change-Id: I86f3cf4678c7dbe5cad156ae8d540a66545f000d
BUG: 1110916
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
Reviewed-on: http://review.gluster.org/8105
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
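
The portable-printing pattern the message describes, as a standalone sketch:

```c
#include <inttypes.h>
#include <stdio.h>
#include <sys/types.h>

/* off_t has no printf conversion of its own; widen it to a
 * fixed-width type and print with the matching PRId64 macro. */
static void
print_offset (off_t offset)
{
        printf ("offset = %" PRId64 "\n", (int64_t) offset);
}
```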
See also http://review.gluster.org/#/c/7693/, BZ 1091677
AFAICT these are false positives:
[geo-replication/src/gsyncd.c:100]: (error) Memory leak: str
[geo-replication/src/gsyncd.c:403]: (error) Memory leak: argv
[xlators/nfs/server/src/nlm4.c:1201]: (error) Possible null pointer dereference: fde
[xlators/cluster/afr/src/afr-self-heal-common.c:138]: (error) Possible null pointer dereference: __ptr
[xlators/cluster/afr/src/afr-self-heal-common.c:140]: (error) Possible null pointer dereference: __ptr
[xlators/cluster/afr/src/afr-self-heal-common.c:331]: (error) Possible null pointer dereference: __ptr
Test program:
[extras/test/test-ffop.c:27]: (error) Buffer overrun possible for long command line arguments.
[tests/basic/fops-sanity.c:55]: (error) Buffer overrun possible for long command line arguments.
the remainder are fixed with this change-set:
[cli/src/cli-rpc-ops.c:8883]: (error) Possible null pointer dereference: local
[cli/src/cli-rpc-ops.c:8886]: (error) Possible null pointer dereference: local
[contrib/uuid/gen_uuid.c:369]: (warning) %ld in format string (no. 2) requires 'long *' but the argument type is 'unsigned long *'.
[contrib/uuid/gen_uuid.c:369]: (warning) %ld in format string (no. 3) requires 'long *' but the argument type is 'unsigned long *'.
[xlators/cluster/dht/src/dht-rebalance.c:1734]: (error) Possible null pointer dereference: ctx
[xlators/cluster/stripe/src/stripe.c:4940]: (error) Possible null pointer dereference: local
[xlators/mgmt/glusterd/src/glusterd-geo-rep.c:1718]: (error) Possible null pointer dereference: command
[xlators/mgmt/glusterd/src/glusterd-replace-brick.c:942]: (error) Resource leak: file
[xlators/mgmt/glusterd/src/glusterd-replace-brick.c:1026]: (error) Resource leak: file
[xlators/mgmt/glusterd/src/glusterd-sm.c:249]: (error) Possible null pointer dereference: new_ev_ctx
[xlators/mgmt/glusterd/src/glusterd-snapshot.c:6917]: (error) Possible null pointer dereference: volinfo
[xlators/mgmt/glusterd/src/glusterd-utils.c:4517]: (error) Possible null pointer dereference: this
[xlators/mgmt/glusterd/src/glusterd-utils.c:6662]: (error) Possible null pointer dereference: this
[xlators/mgmt/glusterd/src/glusterd-utils.c:7708]: (error) Possible null pointer dereference: this
[xlators/mount/fuse/src/fuse-bridge.c:4687]: (error) Uninitialized variable: finh
[xlators/mount/fuse/src/fuse-bridge.c:3080]: (error) Possible null pointer dereference: state
[xlators/nfs/server/src/nfs-common.c:89]: (error) Dangerous usage of 'volname' (strncpy doesn't always null-terminate it).
[xlators/performance/quick-read/src/quick-read.c:586]: (error) Possible null pointer dereference: iobuf
Rerunning cppcheck after fixing the above:
As before, test program:
[extras/test/test-ffop.c:27]: (error) Buffer overrun possible for long command line arguments.
[tests/basic/fops-sanity.c:55]: (error) Buffer overrun possible for long command line arguments.
As before, false positive:
[geo-replication/src/gsyncd.c:100]: (error) Memory leak: str
[geo-replication/src/gsyncd.c:403]: (error) Memory leak: argv
[xlators/nfs/server/src/nlm4.c:1201]: (error) Possible null pointer dereference: fde
[xlators/cluster/afr/src/afr-self-heal-common.c:138]: (error) Possible null pointer dereference: __ptr
[xlators/cluster/afr/src/afr-self-heal-common.c:140]: (error) Possible null pointer dereference: __ptr
[xlators/cluster/afr/src/afr-self-heal-common.c:331]: (error) Possible null pointer dereference: __ptr
False positive after fix:
[xlators/performance/quick-read/src/quick-read.c:584]: (error) Possible null pointer dereference: iobuf
Change-Id: I20e0e3ac1d600b2f2120b8d8536cd6d9e17023e8
BUG: 1109180
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
Reviewed-on: http://review.gluster.org/8064
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: I4513a2c260530855e09be64083e9344108c7a6c0
BUG: 1165996
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/9150
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Previously, when a host disconnected from the cluster, glusterd logs
identified the host only by its UUID.
Now with this fix, the UUID along with the host's IP will be present
in the glusterd log message when one of the peers disconnects from
the cluster, improving the readability of the log file.
Change-Id: I3b7eaf1b1a8963ef2096e67a78cf69f67d5d5166
BUG: 1101382
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/9136
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
gf_time_fmt() has existed since 3.3; it provides consistent timestamps
(i.e. UTC times) throughout the implementation. (BTW, the other name for UTC
is GMT.)
N.B. many (all?) commercial storage solutions use UTC time for logging.
This makes for easier debugging across geographically distributed systems.
Also adding a "%s" fmt for portably printing time as simple numeric
value on systems regardless of whether 32-bit or 64-bit time_t. Plus a
minor tweak to return a ptr to the dest-string to allow gf_time_fmt()
to be passed as a param in a *printf().
Someday we should pick the "one true" timestamp format and revise all
calls to gf_time_fmt() to use it instead of the five or six different
formats.
Change-Id: I78202ae14b7246fa424efeea56bf2463e14abfb0
BUG: 1109917
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
Reviewed-on: http://review.gluster.org/8085
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
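
A hedged sketch of the inline use enabled by the return-pointer tweak; gf_time_fmt() and the gf_timefmt_FT format id are from libglusterfs ("common-utils.h") and are assumed here:

```c
#include <stdio.h>
#include <time.h>

void
log_event_time (time_t when)
{
        char timestr[64] = {0, };

        /* gf_time_fmt() renders 'when' as a consistent UTC timestamp
         * and returns its destination buffer, so it can be passed
         * straight into a *printf() call. */
        printf ("event at %s\n",
                gf_time_fmt (timestr, sizeof (timestr), when,
                             gf_timefmt_FT));
}
```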
For rdma-only volumes, daemons like snapd, glustershd etc. make use
of the tcp transport for their operations. This patch introduces
rdma support by default for those daemons in rdma-only volumes. In
order to accommodate this change, we rename the tcp client volfile
labels from
<volname>-fuse.vol
to
<volname>.tcp-fuse.vol
Change-Id: Id9727b97d00e62a4a1556b9c0c56653d45c8fe1d
BUG: 1164079
Signed-off-by: Anoop C S <achiraya@redhat.com>
Reviewed-on: http://review.gluster.org/9146
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
When we mount an rdma-only volume or a tcp,rdma volume through the
nfs protocol using a newly probed peer's IP (nfs-server on the new
nodes), the mount fails for the rdma-only volume, and the mount
happens with the help of the tcp protocol in the case of tcp,rdma
volumes. That is, newly added servers always get the transport type
"socket". This is because nfs_transport_type is exported correctly
but imported wrongly.
This can be verified as follows:
* Create an rdma-only volume or a tcp,rdma volume.
* Add a new server to the trusted pool.
* Check the client transport type specified in the nfs-server
  volgraph. It will always be tcp (socket type) instead of rdma.
* Also, for the rdma-only volume, the nfs log shows a
  'connection refused' message for every reconnect between the
  nfs server and glusterfsd.
BUG: 1157381
Change-Id: I6bd4979e31adfc72af92c1da06a332557b6289e2
Author: Jiffin Tony Thottan <jthottan@redhat.com>
Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
Reviewed-on: http://review.gluster.org/8975
Reviewed-by: Meghana M <mmadhusu@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Tested-by: Niels de Vos <ndevos@redhat.com>
As of now, for both tcp-only volumes and rdma-only volumes, volfile
names are in the format <volname>-fuse.vol. This patch changes the
client volfile naming as shown below.
* TCP mounts always use <volname>-fuse.vol
* RDMA mounts always use <volname>.rdma-fuse.vol
Following the above naming convention, for tcp,rdma volumes both
volfiles will be present under /var/lib/glusterd/vols/<volname>/
such that rdma only volume can be mounted as
mount -t glusterfs -o transport=rdma <server/ip>:/<volname> <mount-point>
OR
mount -t glusterfs <server/ip>:/<volname>.rdma <mount-point>
The above command format can also be used to fuse mount a tcp,rdma
volume via rdma transport.
When we try to fuse mount a tcp,rdma volume with transport-type
as rdma it silently mounts via tcp. This change will also make
sure that it fetches the correct volfile based on the
transport-type specified from client side.
BUG: 1131502
Change-Id: I34da4b01ac813b69494a43188f51145457412923
Signed-off-by: Anoop C S <achiraya@redhat.com>
Reviewed-on: http://review.gluster.org/8498
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
For rdma-only volumes, client connection establishment with the
server takes more than three seconds. A tcp,rdma volume has 2 ports,
one for tcp and one for rdma; the tcp port is stored with the brick
name, and the rdma port is stored as "brickname.rdma" during
pmap_signin. During the handshake, when trying to get the brick port
for rdma clients, we are not aware of the server transport type, so
we append '.rdma' to the brick name. For a tcp,rdma volume there
will be an entry with '.rdma', but the lookup fails for an rdma-only
volume. So we try again, this time without appending '.rdma', using
a flag variable need_different_port, and this succeeds, but the
reconnection happens only after 3 seconds.
With this patch, for an rdma-only volume we append '.rdma' during
pmap_signin itself, so during the handshake we get the correct port
on the first try. Since the retry is no longer needed, the
need_different_port flag variable can be removed.
Change-Id: Ie8e3a7f532d4104829dbe995e99b35e95571466c
BUG: 1153569
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/8934
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
When we try to mount a tcp,rdma volume over the rdma transport using
the FUSE protocol, the mount will hang if the brick is down. When we
kill a process, the signal is received by the glusterfsd process,
and it calls pmap_signout with only the port listening on tcp. In
the case of tcp,rdma there are two ports, and the port listening for
rdma is never signed out. So the mount process tries to connect to a
port which is not open, and keeps trying to connect.
This patch calls pmap_signout for the rdma port as well, so when the
mount tries to get the brick port, it will fail instead of hanging.
Change-Id: I23676f65f96eb90b69b76478f7a21412a6aba70f
BUG: 1143886
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/8762
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
Problem : When glusterd is down on one of the nodes, and USS is
disabled during that time, snapd will still be running on the node
where glusterd was down.
Solution : During glusterd restart, check if USS is disabled; if so,
issue a kill for snapd.
NOTE : The test case I wrote in my previous patchset is facing some
spurious failures, hence I have removed it. I'll add the test case
back once the issue is resolved.
Change-Id: I2870ebb4b257d863cdfc319e8485b19e932576e9
BUG: 1161015
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/9062
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Change-Id: Id13dc4cd3f5246446a9dfeabc9caa52f91477524
BUG: 1111554
Signed-off-by: Varun Shastry <vshastry@redhat.com>
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/8133
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
if the original brick already has this option
Change-Id: I2841d2ac371a3e9505f6061f35d1d447946c0bae
BUG: 1133456
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/8526
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
By default a snapshot should be deactivated on creation, and this
should be a configurable option.
This behaviour can be configured with the command below:
gluster snapshot config activate-on-create <enable|disable>
Change-Id: I1911595c32beed43bb2fca4bf99f0d264b422513
BUG: 1157991
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/8985
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Check if the LV is present before deleting it. In case the LV is
absent (already deleted?), there is no need to fail the snap delete
operation.
Also check if the LV is mounted before trying umount. In case it
isn't mounted, only remove the LV.
Change-Id: I0f5b2674797299d8748c6fac5b091f0caba65ca4
BUG: 1104714
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/8954
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
For USS we have 1 snapd log per volume, and as many snap logs as
there are snapshots of the volume. For example, if there are 4
volumes having 256 snaps each and USS is enabled, the total number
of logs under /var/log/glusterfs for USS would be 1028 logs:
Total logs = (4 (snapd per volume) + 4 (volumes) * 256 (snaps)) = 1028
Hence, it makes sense to move them into a sub-folder structure like
/var/log/glusterfs/snaps/<vol-name>/<snapd + snap logs>
Change-Id: I29262e6458c3906916923cd67d1145d6ae10bec3
BUG: 1160534
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/9050
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Instead of displaying all the snapshots in the uss world, it is
better to display only the activated snapshots.
Change-Id: I70d3ec212b62ec15956ae3e826bc4201d8dedd17
BUG: 1155042
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/8958
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>