The opRet field was being set to 0 in the XML output when a
gluster volume info --xml call was made on a non-existent volume.
This change assigns a value of -1 to opRet for volume info calls
on non-existent volumes. Other fields such as opErrno and opErrstr
are also assigned relevant values.
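For illustration, a hedged sketch of the XML a client might now see for a non-existent volume; the volume name is a placeholder and the opErrno/opErrstr values are assumptions rather than verbatim output:
gluster volume info does-not-exist --xml
<cliOutput>
  <opRet>-1</opRet>
  <opErrno>...</opErrno>
  <opErrstr>Volume does-not-exist does not exist</opErrstr>
</cliOutput>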
Change-Id: I3920c602328f74252c87bb521f5a43d4bdc7d44d
BUG: 1321836
Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
Reviewed-on: http://review.gluster.org/13843
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: darshan n <dnarayan@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
The test was earlier starting the volume, which will always make volume delete fail,
so it was not actually validating BZ 1344407.
Change-Id: I6761be16e414bb7b67694ff1a468073bfdd872ac
BUG: 1344407
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/14693
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Deleting a volume on a cluster where one of the nodes in the cluster is down is
buggy, since once that node comes back up the same volume will be resynced.
Until we bring in the soft delete feature tracked in
http://review.gluster.org/12963, this is a safeguard to block the volume
deletion.
Change-Id: I9c13869c4a7e7a947f88842c6dc6f231c0eeda6c
BUG: 1344407
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/14681
Smoke: Gluster Build System <jenkins@build.gluster.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Next step in the eventual deprecation of the glusterfs NFS server in favor
of ganesha.nfsd.
Also replaces several open-coded strings with constants.
Change-Id: If52f5e880191a14fd38e69b70a32b0300dd93a50
BUG: 1092414
Signed-off-by: Kaleb S KEITHLEY <kkeithle@redhat.com>
Reviewed-on: http://review.gluster.org/13738
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Commit 2d87a98 introduced a validation to reject lowering of the
cluster.op-version. Commit 2eb8758 then changed the variable value from the
cluster's op-version to the volume's op-version, which broke that logic.
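For context, a hedged example of the operation this validation guards; the version number is purely illustrative:
gluster volume set all cluster.op-version 30600
(expected to be rejected when 30600 is lower than the current cluster op-version)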
Change-Id: I70df32b75c3a3fe47dc840c4a655059e5b124bca
BUG: 1315186
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/14069
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
This reverts commit 34899d7.
Commit 34899d7 introduced a change where restarting a volume or rebooting
a node results in a fresh allocation of brick ports. In production
environments, administrators generally configure the firewall for a
range of ports for a volume. With commit 34899d7, rebooting a node
or restarting a volume might result in volume start failure because the
firewall might block the freshly allocated brick port, and the fresh port
allocation also makes testing more difficult.
Change-Id: I7a90f69e8c267a013dc906b5228ca76e819d84ad
BUG: 1322805
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/13989
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Commit 23ccabbeb7 introduced a new key "disperse.eager-lock" which
causes a conflict with the key "cluster.eager-lock" when the option is used
without the qualifying namespace. group-virt.example, which gets
installed as /var/lib/glusterd/groups/virt, contains options without
namespace qualifiers. This patch adds the appropriate namespace to all
options in group-virt.example.
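A hedged sketch of what namespace-qualified entries in the installed group file look like; the particular options and values shown are assumptions for illustration, not a verbatim copy of group-virt.example:
# /var/lib/glusterd/groups/virt (illustrative excerpt)
cluster.eager-lock=enable
performance.quick-read=off
performance.stat-prefetch=off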
Change-Id: I2c09dd10d44138410d889ddeb805f01c641c6780
BUG: 1314649
Signed-off-by: Vijay Bellur <vbellur@redhat.com>
Reviewed-on: http://review.gluster.org/13929
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Variable "real_path" in brick info was used to store absolute path
and using this we check the availability of the newly added bricks.
But we were not populating the variable when we import a volume
from peers. That caused to reset the real_path variable to zero,
which resulted in validation failure for all new brick creation.
Change-Id: I62be7bf452f0dcdf6aec3a4ec33c2e1fba2951ca
BUG: 1323287
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/13890
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
There is no point in using the same port through the entire volume life cycle
for a particular brick process, since there is no guarantee that the same port
will still be free and that no other application will have consumed it in between
glusterd/volume restarts.
We hit a race where, on glusterd restart, the daemon services start followed by
the brick processes, and by the time a brick process tries to bind to the port which
glusterd had allocated before the restart, it has already been consumed by some other
client like NFS/SHD/...
Note: This is a short-term solution, as it reduces the race window but doesn't
eliminate it completely. As a long-term solution the port allocation has to be
done by glusterfsd and the port should be communicated back to glusterd for
bookkeeping.
Change-Id: Ibbd1e7ca87e51a7cd9cf216b1fe58ef7783aef24
BUG: 1322805
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/13865
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
When an updated volinfo is imported in, the brick ports from the old
volinfo should always be copied.
Earlier, this was being done only if the old volinfo was stopped and the
new volinfo was started. This could lead to brick ports changing when the
following sequence of steps happened.
- A volume is stopped
- GlusterD is stopped on a peer
- The stopped volume is started
- The stopped GlusterD is started
This sequence would cause bricks on the peer with the re-started GlusterD
to get new ports, which could break firewall rules and could prevent
client access. This sequence could be hit when enabling management
encryption in a Gluster trusted storage pool.
Change-Id: I808ad478038d12ed2b19752511bdd7aa6f663bfc
BUG: 1313628
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/13578
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
Tested-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Requirements:
Should be able to skip tests from a run-tests.sh run.
Should be granular enough to disable tests on a subset of OSes.
Solution:
Tests can have special comment lines with comma-separated values
within them.
Key names used to determine the test status are
G_TESTDEF_TEST_STATUS_CENTOS6
G_TESTDEF_TEST_STATUS_NETBSD7
Some examples:
G_TESTDEF_TEST_STATUS_CENTOS6=BAD_TEST,BUG=123456
G_TESTDEF_TEST_STATUS_NETBSD7=KNOWN_ISSUE,BUG=4444444
G_TESTDEF_TEST_STATUS_CENTOS6=BAD_TEST,BUG=123456;555555
You can change the status of a test to enabled, or delete the line, only if all the
bugs are closed or modified, or if the patch fixes them.
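A hedged sketch of how such a marker might sit in a .t test file; the comment placement and the bug number are illustrative assumptions:
#G_TESTDEF_TEST_STATUS_NETBSD7=BAD_TEST,BUG=000000
#G_TESTDEF_TEST_STATUS_CENTOS6=KNOWN_ISSUE,BUG=000000
TEST glusterd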
Change-Id: Idee21fecaa5837fd4bd06e613f5c07a024f7b0c2
BUG: 1295704
Signed-off-by: Raghavendra Talur <rtalur@redhat.com>
Reviewed-on: http://review.gluster.org/13393
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
If a heal is needed after inode refresh (lookup, read_txn), launch it in
the background instead of blocking the fop (that triggered the refresh) until the
heal happens.
afr_replies_interpret() is modified such that the heal is
launched only if at least one sink brick is up.
The maximum number of heals that can happen in parallel is configurable via the
'background-self-heal-count' volume option. Any number greater than that
is put in a wait queue whose length is configurable via the
'heal-wait-queue-leng' volume option. If the wait queue is also full,
further heals will be ignored.
Default values: background-self-heal-count=8, heal-wait-queue-leng=128
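A hedged CLI example of raising the parallel-heal throttle; the cluster. prefix and the value are assumptions for illustration:
gluster volume set <volname> cluster.background-self-heal-count 16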
Change-Id: I1d4a52814cdfd43d90591b6d2ad7b6219937ce70
BUG: 1297172
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/13207
Smoke: Gluster Build System <jenkins@build.gluster.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
A recent change in the CLI changed the elapsed time format,
which broke a test.
This patch fixes the parsing issue.
Change-Id: I9a4a4b28f654cf2ac223e25abfc9df6570607d74
BUG: 1312036
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/13524
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
Remove-brick commit will fail when it is executed while rebalance is in
progress. Hence a rebalance timeout check is added before the remove-brick commit to
ensure that rebalance has completed.
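For reference, a hedged sketch of the sequence the test now effectively waits for; volume and brick names are placeholders:
gluster volume remove-brick <volname> <host>:<brick> status    (poll until the operation shows completed)
gluster volume remove-brick <volname> <host>:<brick> commit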
Change-Id: Ic12f97cbba417ce8cddb35ae973f2bc9bde0fc80
BUG: 1225716
Signed-off-by: Sakshi Bansal <sabansal@redhat.com>
Reviewed-on: http://review.gluster.org/13191
Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
During rebalance restart after glusterd has restarted, we are not
connecting to the rebalance process from glusterd, because the
defrag variable in volinfo will be null.
Initializing the variable will let the rpc connect.
Change-Id: Id820cad6a3634a9fc976427fbe1c45844d3d4b9b
BUG: 1303028
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/13319
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Currently, when server quorum is not met, executing the
gluster volume start [force] command still starts the volume.
With this patch, if server-side quorum is not met, starting the
volume is prevented.
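For context, server-side quorum is typically configured with options of this form; the ratio value is illustrative:
gluster volume set <volname> cluster.server-quorum-type server
gluster volume set all cluster.server-quorum-ratio 51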
Change-Id: I39734b2dcf8e90c3c68bf2762d8350aecc82cc38
BUG: 1308402
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/13442
Smoke: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Given a two-node cluster with nodes N1 and N2, if a dummy node N3 is peer probed, the
probed node N3 imports volumes from the probing node (N1), but
it does not yet have membership information about the other node (N2)
(since the peer update happens after the volume updates) and hence fails to update its
brick's uuid. After that, even though N2 updates N3 about its membership, the
brick's uuid is never generated. As a consequence, when N3 initiates a
detach of N2, it checks whether the node to be detached has any bricks
configured by its respective uuid, which is NULL in this case, and hence it goes
ahead and removes the peer, which ideally it shouldn't have (refer to
glusterd_friend_contains_vol_bricks () for the logic).
The fix is to export the brick's uuid and import it at the probed node instead of
resolving it.
Change-Id: I2d88c72175347550a45ab12aff0ae248e56baa87
BUG: 1293414
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/13047
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
The brick_up_status function wasn't correct after the introduction of
the RDMA port into the `volume status` output.
It has been fixed to use the XML brick status of a specific brick
instead of the normal CLI output.
Change-Id: I5327e1a32b1c6f326bc3def735d0daa9ea320074
BUG: 1289584
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/12913
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Previously, if a volume named "glusterd_shared_storage" was created
and the user then disabled the enable-shared-storage option, gluster would
delete the "glusterd_shared_storage" volume.
With this fix gluster does the appropriate validation of the
enable-shared-storage option and will not delete a volume named
"glusterd_shared_storage" if it is a user-created volume.
Change-Id: I2bd92f938fb3de6ef496a934933bdcea9f251491
BUG: 1266818
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/12232
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Change-Id: I5b4a28db101e9f7e07f4b388c7a2594051c9e8dd
BUG: 1265479
Signed-off-by: Sakshi <sabansal@redhat.com>
Reviewed-on: http://review.gluster.org/12215
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Change-Id: Ie9e24e037b7a39b239a7badb983504963d664324
BUG: 1225716
Signed-off-by: Sakshi <sabansal@redhat.com>
Reviewed-on: http://review.gluster.org/10954
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
The current detach-tier CLI command supports "commit force".
This is deprecated in favor of "force".
So the new syntax is:
volume detach-tier <VOLNAME> <start|stop|status|commit|force>
Change-Id: Ie86dfd72341078c0a1be94767f523730911312ef
BUG: 1261862
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/12151
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Tested-by: Dan Lambright <dlambrig@redhat.com>
Currently, when the user executes the gluster v detach-tier commit command without
having started detach-tier and without giving the force option, gluster lets
the operation succeed.
Detach-tier commit should not be allowed in that case without the "force" option.
Change-Id: Id161c288f6f3e0f6b298878a5c35a49fcbd9c6e3
BUG: 1260185
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/12107
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
host of the brick
The remove-brick stage blindly starts the remove-brick operation even if the
glusterd instance of the node hosting the brick is down. Operationally this is
incorrect, and it can result in an inconsistent rebalance status across all
the nodes, as the originator of this command will always have the rebalance
status as 'DEFRAG_NOT_STARTED'; however, when the glusterd instance on the other
nodes comes up, it will trigger rebalance and set the status to completed once the
rebalance is finished.
This patch fixes two things:
1. Adds a validation in remove-brick to check whether all the peers hosting the
bricks to be removed are up.
2. Doesn't copy volinfo->rebal.dict from the stale volinfo during restore, as this
might end up in an inconsistent node_state.info file, resulting in volume status
command failure.
Change-Id: Ia4a76865c05037d49eec5e3bbfaf68c1567f1f81
BUG: 1245045
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/11726
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: N Balachandran <nbalacha@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Currently glusterd does not stop all the daemon services on peer detach.
With this fix it performs the peer detach cleanup properly and stops all
the daemons that were running on the node before the peer detach.
Change-Id: Ifed403ed09187e84f2a60bf63135156ad1f15775
BUG: 1255386
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/11509
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Problem: Reset/set commands were not working properly. The reset command returns
success but does not send a notification to svcs if the corresponding graph is modified.
Fix: Whenever a reset/set command is issued, generate the temp graph, compare it
with the original graph, and take the following actions:
1.) If both graphs are identical, nothing needs to be done for svcs.
2.) If there are any changes in graph topology, restart/stop the service by calling
the svc manager.
3.) If there are changes in options, send a notify signal by calling glusterd_fetchspec_notify.
Change-Id: I852c4602eafed1ae6e6a02424814fe3a83e3d4c7
BUG: 1209329
Signed-off-by: anand <anekkunt@redhat.com>
Reviewed-on: http://review.gluster.org/10850
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
RCA: If rebalance start is triggered from one node and one of the other nodes in the cluster goes down simultaneously,
we might end up in a case where the callback will use the txn_id from priv->global_txn_id, which is always zeros, and
injecting an event with such an incorrect txn_id will result in the op-sm getting stuck.
Fix: Set txn_id in frame->cookie during submit_and_request, so that we can get the txn_id in the callback
functions.
Change-Id: I519176c259ea9d37897791a77a7c92eb96d10052
BUG: 1245142
Signed-off-by: anand <anekkunt@redhat.com>
Reviewed-on: http://review.gluster.org/11728
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
As of now, all the daemon services are initialized in the glusterd init path. Since
the socket file path of a per-node daemon demands the uuid of the node, the MY_UUID macro
is invoked as part of the initialization.
The above flow breaks the use cases where a gluster image is built from a
template, which could be a Dockerfile, a Vagrantfile or any kind of virtualization
environment. It means that instances brought up from this image would have the same UUID
for the node, resulting in peer probe failure.
The solution is to lazily initialize the services on demand.
Change-Id: If7caa533026c83e98c7c7678bded67085d0bbc1e
BUG: 1238135
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/11488
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Issue: Rebalance is failing in the cluster framework (any simulated cluster environment on the same node).
RCA:
1. We always pass "localhost" as the volfile server for the rebalance xlator.
2. Rebalance daemons overwrite each other's unix sockets and log files
(all rebalance processes create a socket with the same name).
Fix: Set vol_file_server, the unix socket and the log files properly.
Change-Id: I6654461e00c2a164b2f1f1db24a316c4180dd8d5
BUG: 1231437
Signed-off-by: anand <anekkunt@redhat.com>
Reviewed-on: http://review.gluster.org/11210
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
On restarting glusterd, the quota daemon is not started when more than one
volume is configured and quota is enabled only on the 2nd volume.
This is because, while restarting, glusterd restarts all the bricks.
During brick restart it starts the respective daemon by passing the volinfo of
the first volume. Passing a volinfo to glusterd_svc_manager implies that the daemon
managers will take action based on that same volume's configuration, which
is incorrect for per-node daemons.
The fix is to pass a NULL volinfo while restarting bricks.
Change-Id: I2602002a8ba7762fc1eb08123e79fbcf568ecab4
BUG: 1242875
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/11658
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Change-Id: I0fdb58e15da15c40c3fc9767f2fe4df0ea9d2350
BUG: 1242609
Signed-off-by: Anuradha Talur <atalur@redhat.com>
Reviewed-on: http://review.gluster.org/11651
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
get-task-status () used to always return 0 unless the CLI command
itself failed, which is unlikely. However, if the CLI command does fail for some
reason, EXPECT_WITHIN will abort.
Change-Id: Ibe54dcdccc26b3ee003677fc3516cfed98b5c06f
BUG: 1227590
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/11054
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
glusterd was crashing while trying to remove bricks from a replica set
after shrinking an nx3 replica to an nx2 replica and then to an nx1 replica.
This is because volinfo->subvol_count was being calculated from the old
replica count value.
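A hedged sketch of the shrinking sequence that led to the crash; host and brick paths are placeholders:
gluster volume remove-brick <volname> replica 2 <host>:<brick3> force
gluster volume remove-brick <volname> replica 1 <host>:<brick2> force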
Change-Id: I1084a71e29c9cfa1cd85bdb4e82b943b1dc44372
BUG: 1230121
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/11165
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Change-Id: I421f50aeb89213d036b4b40f20a8e0d6bd78d60b
BUG: 1229825
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/11143
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
The key concept here is to determine whether a directory is "clean" by
comparing its last-known-good topology to the current one for the
volume. These are stored as "commit hashes" on the directory and the
volume root respectively. The volume's commit hash changes whenever a
brick is added or removed, and a fix-layout is done. A directory's
commit hash changes only when a full rebalance (not just fix-layout)
is done on it. If all bricks are present and have a directory
commit hash that matches the volume commit hash, then we can assume
that every file is in its "proper" place. Therefore, if we look for
a file in that proper place and don't find it, we can assume it's not
on any other subvolume and *safely* skip the global (broadcast to all)
lookup.
Change-Id: Id6ce4593ba1f7daffa74cfab591cb45960629ae3
BUG: 1219637
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Signed-off-by: Shyam <srangana@redhat.com>
Reviewed-on: http://review.gluster.org/7702
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
The replace-brick operation with data migration support has been
deprecated from gluster.
With this fix the replace-brick command supports only one form:
gluster volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK> {commit force}
Change-Id: Ib81d49e5d8e7eaa4ccb5830cfec2bc081191b43b
BUG: 1094119
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/10101
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Change-Id: If57d08f3446755ea41f66ca258efcc8ea5a89063
BUG: 1217701
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/10480
Tested-by: NetBSD Build System
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
In the restore path the snapd svc was not initialized, because of which any glusterd
instance which went down and came back up may have an uninitialized snapd svc. The
reason I used 'may' is that it depends on the nodes in the cluster: in a
single-node cluster this wouldn't be a problem, since glusterd_spawn_daemon takes
care of initializing it.
Change-Id: I2da1e419a0506d3b2742c1cf39a3b9416eb3c305
BUG: 1213295
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/10304
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System
Reviewed-by: Kaushal M <kaushal@redhat.com>
Previously, after a user started a remove-brick operation on a volume,
giving a non-existing brick to the remove-brick status/stop command still
showed remove-brick status or stopped the remove-brick operation on the volume.
With this fix the bricks the user gives to the remove-brick status/stop
command are validated: if the bricks are part of the volume,
it shows the statistics of the remove-brick operation; otherwise it shows the
error "Incorrect brick <brick_name> for <volume_name>".
Change-Id: I151284ef78c25f52d1b39cdbd71ebfb9eb4b8471
BUG: 1121584
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/9681
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
These are suspected of causing core dumps during regression tests,
leading to spurious failures. Per email conversation, since this
isn't a supported feature anyway, the tests are being removed to
facilitate testing of features we do support.
Change-Id: I7fd5c76d26dd6c3ffa91f89fc10469ae3a63afdf
BUG: 1195415
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: http://review.gluster.org/10167
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Problem:
glusterd was failing to get some specific volume options, for example:
gluster volume get <vol-name> cluster.op-version
Fix:
glusterd should set the count value in the dictionary while retrieving a specific volume
option.
Change-Id: Iada768ea3d8a0006895525eca2c2dcc40432a4ea
BUG: 1199451
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/9821
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
It turns out that "pidof" is unreliable on some platforms (e.g. Fedora
21) because it will show spurious entries for processes using the same
inode under a different name. Use "pgrep" instead because it's
name-based and doesn't get confused by glusterd/glusterfs being links
to glusterfsd.
Also changed bug-913555.t because it had the same mistake in its own
version of the same function. Now it uses the common version.
Change-Id: I5d70edd5655faa5470e0f378b8c16a6adacbd4b4
BUG: 1163543
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: http://review.gluster.org/9948
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
For tcp,rdma type volumes there will be two ports, one for tcp
and one for rdma. But the volume status command only displays the tcp port.
This change adds an extra column for the rdma port and relabels
the existing port column as the tcp port.
Eg:
>gluster volume status patchy
>For tcp,rdma type volume
Status of volume: patchy
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick brickname 49152 49153 Y 14158
>For rdma type volume
Status of volume: patchy
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick brickname 0 49153 Y 14158
For tcp type volume
Status of volume: patchy
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick brickname 49152 0 Y 14158
>gluster volume status patchy detail
Status of volume: xcube2
------------------------------------------------------------------------------
Brick : Brick brickname
TCP Port : 49152
RDMA Port : 49153
Online : Y
Pid : 14158
File System : ext4
Device :
/dev/mapper/luks-2099dd4a-0050-4cae-ad7b-c6a0498c4e88
Mount Options : rw,seclabel,relatime,data=ordered
Inode Size : 256
Disk Space Free : 31.1GB
Total Disk Space : 47.9GB
Inode Count : 3203072
Free Inodes : 2926789
>gluster volume status xcube --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
<opRet>0</opRet>
<opErrno>0</opErrno>
<opErrstr>(null)</opErrstr>
<volStatus>
<volumes>
<volume>
<volName>xcube</volName>
<nodeCount>2</nodeCount>
<node>
<hostname>hostname</hostname>
<path>/home/brick1</path>
<peerid>2d7bcb95-3d26-4d4f-b3c6-e2ee01b71662</peerid>
<status>1</status>
<port>49152</port>
<ports>
<tcp>49152</tcp>
<rdma>N/A</rdma>
</ports>
<pid>5657</pid>
</node>
<node>
<hostname>NFS Server</hostname>
<path>localhost</path>
<peerid>2d7bcb95-3d26-4d4f-b3c6-e2ee01b71662</peerid>
<status>1</status>
<port>2049</port>
<ports>
<tcp>2049</tcp>
<rdma>N/A</rdma>
</ports>
<pid>5665</pid>
</node>
<tasks/>
</volume>
</volumes>
</volStatus>
</cliOutput>
Change-Id: I81aab226edbd400d29cd3f510af4f344dd99ba51
BUG: 1164079
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/9191
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Change-Id: I908934f1f22cf7d2d0ceccc0dedf28a69861997f
BUG: 1187885
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/9517
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-by: Anuradha Talur <atalur@redhat.com>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
PROBLEM:
Previously, for the option cluster.min-free-disk, gluster accepted a percentage
input value that is out of the range [0-100], and accepted a fractional
input value when given as a size (unit is bytes).
FIX:
With this change the correct validation function is used: an input value given
as a percentage must be in the range [0-100], and an input value given as a size
(unit in bytes) must be an unsigned integer, for the option cluster.min-free-disk.
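Hedged examples of values the option should accept after this change; the numbers are illustrative:
gluster volume set <volname> cluster.min-free-disk 10%
gluster volume set <volname> cluster.min-free-disk 10485760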
Change-Id: Iee1962a100542e146276cfc8a4068abddee2bf2d
BUG: 1163108
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/9104
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Currently the test case changed here checks that the peer count is 1
until the probe timeout and then checks whether the changed
configuration has been synced.
The peer count is no guarantee that the configuration is also
in sync, hence this test case is changed to check for the configuration
update until the probe timeout, by which time it should be in sync
(or at least that is our tolerance), for the test case to be deemed
passing.
Change-Id: I4b1560979cfde3bd3bd691852d7d3a63e253bcf2
BUG: 1181203
Signed-off-by: Shyam <srangana@redhat.com>
Reviewed-on: http://review.gluster.org/9498
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
quota is disabled.
Problem: If quota is disabled then all the options associated with
quota are removed, except quota-deem-statfs and quota-timeout.
When gluster volume info is issued the user can see that quota
is disabled, whereas the quota-deem-statfs and quota-timeout values still
exist.
Solution: Remove the quota-deem-statfs and quota-timeout options when quota is
disabled.
NOTE: If features.quota-deem-statfs is turned on, it takes quota limits
into consideration while estimating the fs size.
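A hedged way to verify the behaviour; the volume name is a placeholder:
gluster volume quota <volname> disable
gluster volume info <volname>    (quota-deem-statfs and quota-timeout should no longer be listed)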
Change-Id: I8cca6a8f47d2355799228643aedc8fc03896cfad
BUG: 1151933
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/8924
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Found a bug where a replica 2 volume creation prompts a warning
saying the bricks are on the same host even when they
are on different hosts.
Change-Id: Ie55addae55c55e32ad2b5339530ab71f0e3711ab
BUG: 1091935
Signed-off-by: Raghavendra Talur <rtalur@redhat.com>
Reviewed-on: http://review.gluster.org/9373
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Previously glusterd was not performing quorum validation in the syncop framework.
So when there was a loss of quorum, a few operations (e.g. add-brick,
remove-brick, volume set) which are based on the syncop framework passed
successfully without a quorum validation check.
With this change quorum validation is done in the syncop framework, and it
blocks all operations (except the volume set <quorum options> and "volume reset all"
commands) when there is a loss of quorum.
Change-Id: I4c2ef16728d55c98a228bb86795023d9c1f4e9fb
BUG: 1177132
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/9349
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
"features.uss" with a non-boolean value gets set in the volume option
table because of which subsequent volume set operation fails since
features.uss does not contain a valid boolean value.
Fix is not to allow a non-boolean value to get set in the volume option
table. "features.uss" option should have validation function "validate_uss"
which validate the input value given by user.
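A hedged illustration of the expected behaviour; the invalid value is arbitrary:
gluster volume set <volname> features.uss enable    (accepted: valid boolean)
gluster volume set <volname> features.uss blah      (rejected by validate_uss after this fix)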
Change-Id: I4a212f876627a4979715183b0d488fd69095f193
BUG: 1179175
Signed-off-by: ggarg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/9395
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>