Commit log
The cluster op-version must be at least 4 for add-brick/remove-brick to
proceed.
This change is required for the new AFR changelog xattr changes that
will be done for GlusterFS 3.6 (http://review.gluster.org/#/c/7155/).
In add-brick, the check is done only when the replica count is
increased, because only that will affect the AFR xattrs.
In remove-brick, the check is unconditional; without it, there would be
inconsistencies in the client xlator names among the volfiles of
different peers.
Change-Id: If981da2f33899aed585ab70bb11c09a093c9d8e6
BUG: 1066778
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/7122
Reviewed-by: Kaushal M <kaushal@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
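
The following is a minimal, standalone sketch of the op-version guard
described above; the struct and names are stand-ins for glusterd's real
types, not the actual implementation.

    /* Sketch: refuse add/remove-brick when the cluster op-version is
     * too old to understand the new AFR changelog xattrs. */
    #include <stdio.h>

    struct cluster_conf {
            int op_version;   /* current cluster-wide op-version */
    };

    /* Returns 0 if the operation may proceed, -1 otherwise. */
    static int
    check_op_version (struct cluster_conf *conf, int required)
    {
            if (conf->op_version < required) {
                    fprintf (stderr,
                             "cluster op-version %d < required %d; "
                             "all peers must be upgraded first\n",
                             conf->op_version, required);
                    return -1;
            }
            return 0;
    }

    int
    main (void)
    {
            struct cluster_conf conf = { .op_version = 3 };

            /* add/remove-brick needs op-version 4 here. */
            return check_op_version (&conf, 4) ? 1 : 0;
    }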
- Add a unique brick-id field to glusterd_brickinfo_t.
- Persist the id to the brickinfo file.
- Use the brick-id as the client xlator name during vol create, add-brick
and replace-brick operations.
- For older volumes, generate the id in-memory during glusterd restore but
defer writing it to the brickinfo file until the next volume set
operation.
- Send and receive the brick-ids during peer probe.
Feature page:
www.gluster.org/community/documentation/index.php/Features/persistent-AFR-changelog-xattributes
Related patch:
http://review.gluster.org/#/c/7122
Change-Id: Ib7f1570004e33f4144476410eec2b84df4e41448
BUG: 1066778
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/7155
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
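
The following is a minimal, standalone sketch of deterministic brick-id
generation in a "<volname>-client-<index>" style for client xlator
names; the struct is a stand-in for glusterd_brickinfo_t and the exact
format string is an assumption.

    #include <stdio.h>

    #define BRICK_ID_LEN 256

    struct brickinfo {
            char brick_id[BRICK_ID_LEN];
    };

    /* Derive a stable id from the volume name and the brick's
     * position, so every peer computes the same xlator name. */
    static void
    brick_id_generate (struct brickinfo *b, const char *volname, int index)
    {
            snprintf (b->brick_id, BRICK_ID_LEN, "%s-client-%d",
                      volname, index);
    }

    int
    main (void)
    {
            struct brickinfo b;

            brick_id_generate (&b, "testvol", 0);
            printf ("%s\n", b.brick_id);   /* prints testvol-client-0 */
            return 0;
    }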
With this patch we replace the existing cluster-wide lock taken on
glusterds across the cluster with volume locks, which are also taken
on glusterds across the cluster but are volume specific. With volume
locks we are able to perform more than one gluster operation at the
same time, as long as the operations are being performed on different
volumes.
We maintain a global list of volume locks (using a dict for a list)
where the key is the volume name and the value is the uuid of the
originator glusterd. These locks are held and released per volume
transaction.
In order to achieve multiple gluster operations occurring at the
same time, we also separate opinfos in the op-state-machine, as a
part of this patch. To do so, we generate a unique transaction-id
(uuid) per gluster transaction. An opinfo is then associated with
this transaction-id, which is used throughout the transaction. We
maintain a run-time global list (using a dict) of transaction-ids
and their respective opinfos to achieve this. A sketch of the
per-volume lock table follows this entry.
Upstream Feature Page: http://www.gluster.org/community/documentation/index.php/Features/glusterd-volume-locks
Change-Id: Iaad505a854bac8de8f83beec0357eb6cde3f7ea8
BUG: 1011470
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/5994
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
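
The following is a minimal, standalone sketch of per-volume locking
keyed by volume name, with the originator's uuid as the value; glusterd
uses a dict for this, for which a fixed array stands in here.

    #include <stdio.h>
    #include <string.h>

    #define MAX_LOCKS 16

    struct vol_lock {
            char volname[64];
            char owner_uuid[37];
            int  held;
    };

    static struct vol_lock locks[MAX_LOCKS];

    /* Returns 0 on success, -1 if another transaction holds the lock
     * on this volume (or the table is full). */
    static int
    volume_lock (const char *volname, const char *uuid)
    {
            int i, free_slot = -1;

            for (i = 0; i < MAX_LOCKS; i++) {
                    if (locks[i].held &&
                        !strcmp (locks[i].volname, volname))
                            return -1;        /* already locked */
                    if (!locks[i].held && free_slot < 0)
                            free_slot = i;
            }
            if (free_slot < 0)
                    return -1;
            snprintf (locks[free_slot].volname, 64, "%s", volname);
            snprintf (locks[free_slot].owner_uuid, 37, "%s", uuid);
            locks[free_slot].held = 1;
            return 0;
    }

    static void
    volume_unlock (const char *volname)
    {
            int i;

            for (i = 0; i < MAX_LOCKS; i++)
                    if (locks[i].held &&
                        !strcmp (locks[i].volname, volname))
                            locks[i].held = 0;
    }

    int
    main (void)
    {
            printf ("%d\n", volume_lock ("vol1", "uuid-a"));  /* 0  */
            printf ("%d\n", volume_lock ("vol1", "uuid-b"));  /* -1 */
            printf ("%d\n", volume_lock ("vol2", "uuid-b"));  /* 0  */
            volume_unlock ("vol1");
            return 0;
    }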
Currently volinfo is added at the end of the list while creating a
volume. On glusterd restart, readdir does not return the entries in
order, and the in-memory list is populated in the same order as
readdir returns them.
The solution is to insert each volinfo into the list in sorted order.
Change-Id: I1716ac6abbd7dd301a7125425fc413c6833f7a48
BUG: 1039912
Signed-off-by: Vijaykumar M <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/6472
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
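
The following is a minimal, standalone sketch of such an ordered
insert, assuming ordering by volume name; the struct is a stand-in for
glusterd's volinfo list node.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct volinfo {
            char            name[64];
            struct volinfo *next;
    };

    /* Insert keeping the list sorted by name, independent of the
     * order in which volumes are read back from disk. */
    static void
    volinfo_insert_sorted (struct volinfo **head, struct volinfo *vol)
    {
            struct volinfo **p = head;

            while (*p && strcmp ((*p)->name, vol->name) < 0)
                    p = &(*p)->next;
            vol->next = *p;
            *p = vol;
    }

    int
    main (void)
    {
            const char *names[] = { "vol3", "vol1", "vol2" };
            struct volinfo *head = NULL, *v;
            int i;

            for (i = 0; i < 3; i++) {
                    v = calloc (1, sizeof (*v)); /* checks elided */
                    snprintf (v->name, 64, "%s", names[i]);
                    volinfo_insert_sorted (&head, v);
            }
            for (v = head; v; v = v->next)
                    printf ("%s\n", v->name);   /* vol1 vol2 vol3 */
            return 0;
    }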
Add glusterd_volinfo_remove(..), which removes @volinfo from the list
of volumes in the cluster and performs an unref on @volinfo.
Change-Id: I5f546ca58f61bc334ab1bab4c51c4a21e1f66161
BUG: 1038051
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/6521
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
rpc:
- On a RPC_TRANSPORT_CLEANUP event, rpc_clnt_notify calls the registered
notifyfn with a RPC_CLNT_DESTROY event. The notifyfn should properly
clean up the saved mydata on this event.
- Break the reconnect chain when an rpc client is disabled. This
prevents new disconnect events which can lead to crashes.
glusterd:
- Added support for RPC_CLNT_DESTROY in glusterd_brick_rpc_notify.
- Use a common glusterd_rpc_clnt_unref() function throughout glusterd
in place of rpc_clnt_unref(). This function correctly gives up the
big-lock before performing the unref.
Change-Id: I93230441c5089039643fc9f5632477ef1b695348
BUG: 962619
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/5512
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
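
The following is a minimal, standalone sketch of the "give up the
big-lock around the unref" pattern; a pthread mutex stands in for
glusterd's big-lock and the rpc type is a stand-in.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;

    struct rpc_clnt {
            int refcount;
    };

    static void
    rpc_clnt_unref (struct rpc_clnt *rpc)
    {
            rpc->refcount--;  /* the real unref may free and notify */
    }

    /* Drop the big-lock before the unref so that any notifications
     * fired during destruction can re-take it without deadlocking. */
    static void
    glusterd_rpc_clnt_unref (struct rpc_clnt *rpc)
    {
            pthread_mutex_unlock (&big_lock);
            rpc_clnt_unref (rpc);
            pthread_mutex_lock (&big_lock);
    }

    int
    main (void)
    {
            struct rpc_clnt rpc = { .refcount = 1 };

            pthread_mutex_lock (&big_lock);  /* held by the handler */
            glusterd_rpc_clnt_unref (&rpc);
            pthread_mutex_unlock (&big_lock);
            printf ("refcount=%d\n", rpc.refcount);
            return 0;
    }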
-> Handle option validation cases in the reset case.
-> Create a valid conf path when glusterd restarts.
-> Read the gsyncd worker thread status and display it.
-> Display status-detail per worker.
-> Fetch checkpoint info in geo-rep status.
-> Add use-tarssh value validation.
misc: miscellaneous geo-rep fixes based on cluster, logrotate, etc.:
-> cluster/dht: fix 'stime' getxattr getting overwritten.
-> cluster/afr: return max of 'stime' values in subvol.
-> geo-rep-logrotate: send SIGHUP to the geo-rep auxiliary.
-> cluster/dht: fix convoluted logic while aggregating.
-> cluster/*: fix 'stime' min/max fetch logic.
Change-Id: I811acea0bbd6194797a3e55d89295d1ea021ac85
BUG: 1036552
Signed-off-by: Ajeet Jha <ajha@redhat.com>
Reviewed-on: http://review.gluster.org/6405
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@gmail.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Glusterd will now correctly copy existing rebalance information when a
volinfo is updated during volume sync. If the existing rebalance
information was stale, any existing rebalance process will be
terminated. A new rebalance process will be started only if there is no
existing rebalance process. The rebalance process will not be started if
the existing rebalance session had completed, failed or been stopped.
Change-Id: I68c5984267c188734da76770ba557662d4ea3ee0
BUG: 1036464
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/6334
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Previously, glusterd sent back only the local status of a task in a
'volume status [tasks]' command. As the rebalance operation is
distributed and asynchronous, this meant that different peers could
give different status values for a rebalance or remove-brick task.
With this patch, all the peers send back the task status as a part
of the 'volume status' commit op, and the origin peer aggregates
these to arrive at a final status for the task.
The aggregation is only done for rebalance or remove-brick tasks. The
replace-brick task will have the same status on all the peers (see the
comment in glusterd_volume_status_aggregate_tasks_status() for more
information) and need not be aggregated.
The rebalance process has 5 states:
NOT_STARTED - rebalance process has not been started on this node
STARTED - rebalance process has been started and is still running
STOPPED - rebalance process was stopped by a 'rebalance/remove-brick
stop' command
COMPLETED - rebalance process completed successfully
FAILED - rebalance process failed to complete successfully
The aggregation is done using the following precedence:
STARTED > FAILED > STOPPED > COMPLETED > NOT_STARTED
The new changes make the 'volume status tasks' command a distributed
command, as we need to get the task status from all peers.
The following tests were performed:
- Start a remove-brick task and run the status command on a peer which
doesn't have the brick being removed. The remove-brick status was
given correctly as 'in progress' and 'completed', instead of 'not
started'.
- Start a rebalance task and run the status command. The status moved
to 'completed' only after rebalance completed on all nodes.
Also, change the CLI xml output code for rebalance status to use the
same algorithm for status aggregation.
Change-Id: Ifd4aff705aa51609a612d5a9194acc73e10a82c0
BUG: 1027094
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/6230
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
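
The following is a minimal, standalone sketch of the precedence-based
aggregation described above; the enum names mirror the five states,
but the types are illustrative, not glusterd's.

    #include <stdio.h>

    enum task_status { NOT_STARTED, STARTED, STOPPED, COMPLETED, FAILED };

    /* Precedence: STARTED > FAILED > STOPPED > COMPLETED > NOT_STARTED */
    static int
    precedence (enum task_status s)
    {
            switch (s) {
            case STARTED:   return 4;
            case FAILED:    return 3;
            case STOPPED:   return 2;
            case COMPLETED: return 1;
            default:        return 0;
            }
    }

    /* The origin peer picks the highest-precedence status reported. */
    static enum task_status
    aggregate (const enum task_status *per_peer, int n)
    {
            enum task_status best = NOT_STARTED;
            int i;

            for (i = 0; i < n; i++)
                    if (precedence (per_peer[i]) > precedence (best))
                            best = per_peer[i];
            return best;
    }

    int
    main (void)
    {
            enum task_status peers[] = { COMPLETED, STARTED, NOT_STARTED };

            /* One peer is still running, so the task is STARTED. */
            printf ("%d\n", aggregate (peers, 3) == STARTED);
            return 0;
    }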
re-work.
Following are the cli commands that are new/re-worked:
======================================================
volume quota <VOLNAME> {enable|disable|list [<path> ...]|remove <path>| default-soft-limit <percent>} |
volume quota <VOLNAME> {limit-usage <path> <size> [<percent>]} |
volume quota <VOLNAME> {alert-time|soft-timeout|hard-timeout} {<time>}
volume status [all | <VOLNAME> [nfs|shd|<BRICK>|quotad]] [detail|clients|mem|inode|fd|callpool]
volume statedump <VOLNAME> [nfs|quotad] [all|mem|iobuf|callpool|priv|fd|inode|history]
glusterd changes:
=================
* Quota limits are now set as extended attributes by glusterd from
the aux mount created by the cli.
* The gfids of the directories on which quota limits are set
for a given volume are stored in the
/var/lib/glusterd/vols/<volname>/quota.conf file in binary format,
whose cksum and version are stored in
/var/lib/glusterd/vols/<volname>/quota.cksum.
Original-author: Krutika Dhananjay <kdhananj@redhat.com>
Original-author: Krishnan Parthasarathi <kparthas@redhat.com>
BUG: 969461
Change-Id: If32bba36c67f9c2a30417af9c6389045b2b7c13b
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-on: http://review.gluster.org/6003
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Gluster was starting rebalance processes on peers where they weren't
required, in two cases:
- For a normal rebalance command on a volume, rebalance processes were
started on all peers instead of just the peers which contain bricks of
the volume.
- For rebalance processes being restarted by a volume sync, caused by a
new peer being probed or a peer restarting, rebalance processes were
started on all peers, for both a normal rebalance and for remove-brick
needing rebalance.
This patch adds a new check before starting a rebalance process in the
above two cases:
- For a rebalance process required by a rebalance command, each peer
will check if it contains at least one brick of the volume.
- For a rebalance process required by a remove-brick command, each peer
will check if it contains at least one of the bricks being removed.
Change-Id: I512da16994f0d5482889c3a009c46dc20a8a15bb
BUG: 1031887
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/6301
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
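
The following is a minimal, standalone sketch of the "does this peer
host a relevant brick?" check that gates starting a rebalance process;
the struct and uuid strings are stand-ins.

    #include <stdio.h>
    #include <string.h>

    struct brick {
            char peer_uuid[37];  /* uuid of the peer hosting the brick */
    };

    /* Returns 1 if at least one of the n bricks lives on this peer. */
    static int
    peer_has_brick (const struct brick *bricks, int n, const char *my_uuid)
    {
            int i;

            for (i = 0; i < n; i++)
                    if (!strcmp (bricks[i].peer_uuid, my_uuid))
                            return 1;
            return 0;
    }

    int
    main (void)
    {
            struct brick bricks[] = { { "uuid-a" }, { "uuid-b" } };

            /* For remove-brick, pass only the bricks being removed. */
            printf ("%d\n", peer_has_brick (bricks, 2, "uuid-b")); /* 1 */
            printf ("%d\n", peer_has_brick (bricks, 2, "uuid-c")); /* 0 */
            return 0;
    }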
Implement reconfigure() for the NFS xlator so that volume set/reset
won't restart the NFS server process. A few options, e.g.
nfs.mem-factor and nfs.port, cannot be reconfigured dynamically and
still need NFS to be restarted.
Change-Id: Ic586fd55b7933c0a3175708d8c41ed0475d74a1c
BUG: 1027409
Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
Reviewed-on: http://review.gluster.org/6236
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
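
The following is a minimal, standalone sketch of a reconfigure-style
split between live-tunable and restart-only options; the option keys
and state fields are illustrative, not the NFS xlator's actual option
table.

    #include <stdio.h>
    #include <string.h>

    struct nfs_state {
            int mem_factor;    /* restart-only               */
            int enable_ino32;  /* reconfigurable on the fly  */
    };

    /* Returns 0 if applied live, -1 if a restart is required. */
    static int
    nfs_reconfigure (struct nfs_state *st, const char *key, int value)
    {
            if (!strcmp (key, "nfs.enable-ino32")) {
                    st->enable_ino32 = value;  /* applied immediately */
                    return 0;
            }
            if (!strcmp (key, "nfs.mem-factor")) {
                    fprintf (stderr, "%s needs an NFS restart\n", key);
                    return -1;
            }
            return -1;  /* unknown option */
    }

    int
    main (void)
    {
            struct nfs_state st = { 15, 0 };

            nfs_reconfigure (&st, "nfs.enable-ino32", 1);
            nfs_reconfigure (&st, "nfs.mem-factor", 20);
            printf ("enable_ino32=%d\n", st.enable_ino32);
            return 0;
    }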
brick force.
The expectation with force is that the user is aware of the
consequences of sanity checks not being triggered.
Change-Id: I79dfeed16a23829a7217cef33ab83f9f0ffae336
Signed-off-by: Vijay Bellur <vbellur@redhat.com>
BUG: 1007509
Reviewed-on: http://review.gluster.org/5746
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
oVirt's Gluster Integration needs an inexpensive command that can be
executed every 10 seconds to monitor async tasks and their parameters,
for all volumes.
The solution involves adding a 'tasks' sub-command to 'volume status'
to fetch only the async task IDs, type and other relevant parameters.
Only the originator glusterd participates in this command as all the
information needed is available on all the nodes. This is to make the
command suitable for being executed every 10 seconds.
Change-Id: I1edc607baf29b001a5585079dec681d7c641b3d1
BUG: 1012346
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/6006
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
non-root@hostname::slave-vol geo-rep sessions are not supported.
Only hostname and root@hostname sessions are supported, and are
treated as the same.
Change-Id: I87551e1bd4ff4e0e6520c34eb3d944587cc65476
BUG: 998933
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Reviewed-on: http://review.gluster.org/5659
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
During add-brick, when a new brick is added on one of the nodes that
was already a part of the existing volume, and gsyncd was already
running on that node, all gsyncd processes running on that node for
that particular master and any slave sessions will be restarted.
If a new brick is added on a new node, then after adding the brick
the user has to perform the following steps:
1. gluster system:: execute gsec_create
2. gluster volume geo-replication <master-vol> <slave-vol> create push-pem force
3. gluster volume geo-replication <master-vol> <slave-vol> start force
Change-Id: I4b9633e176c80e4a7cf33f42ebfa47ab8fc283f1
BUG: 989532
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/5416
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Commands:
gluster system:: execute gsec_create
gluster volume geo-rep <master> <slave-url> create [push-pem] [force]
gluster volume geo-rep <master> <slave-url> start [force]
gluster volume geo-rep <master> <slave-url> stop [force]
gluster volume geo-rep <master> <slave-url> delete
gluster volume geo-rep <master> <slave-url> config
gluster volume geo-rep <master> <slave-url> status
Geo-replication is now distributed: the session will be created, and
gsyncd will be spawned, on all relevant nodes instead of only one
node.
geo-rep: Collecting status-detail related data
Added a persistent store for saving information about
TotalFilesSynced, TotalSyncTime, TotalBytesSynced.
Changes in the status information in the socket:
Existing (Ex):
FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;
New (Ex):
FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;SyncTime=0.69978;
TotalSyncTime=2.890044;TotalFilesSynced=6;TotalBytesSynced=143640;
Persistent details stored in
/var/lib/glusterd/geo-replication/${mastervol}/${eSlave}-detail.status
Change-Id: I1db7fc13ffca2e415c05200b0109b1254067f111
BUG: 847839
Original Author: Avra Sengupta <asengupt@redhat.com>
Original Author: Venky Shankar <vshankar@redhat.com>
Original Author: Aravinda VK <avishwan@redhat.com>
Original Author: Amar Tumballi <amarts@redhat.com>
Original Author: Csaba Henk <csaba@redhat.com>
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/5132
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: If47e209cb61ea0eb74ee2d6ef9e9342b2d6ee13a
BUG: 980838
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/5261
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Check if a previous remove-brick operation has been committed before
starting a new rebalance/remove-brick task.
Change-Id: I553e5ba64a6a352ca91032ab1a17997051a4494e
BUG: 963541
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/5019
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Problem:
The rpc_transport object, which is part of rpc_clnt, is destroyed
prematurely. This is because the rpc_transport object is ref'd by both
the socket layer and the rpc layer. These refs, until the
synctask'izing of operations, were unref'd sequentially in the epoll
thread. With more threads at play, the sequential unref guarantee is
off.
Fix:
Shutting down the transport before proceeding with cleaning up of the
rpc_clnt object serializes the unrefs on the rpc_transport object and
thus eliminates the race.
Also, we no longer store the address of the brickinfo in the brick's
rpc notify function, to avoid the possibility of referring to a freed
brickinfo. Instead we use a string-based id to 'reach' the
corresponding brickinfo.
Change-Id: If2739e2eeaee1e8b071ab2b6754b7ea0f81cfceb
BUG: 962619
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/5000
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: I9e2743ab61c8baee92a1dfd376ec4bb145776176
BUG: 963524
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/5016
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Change-Id: I08065aaa3c140d4b02af4ca38f5f4d00d7f0c2bb
BUG: 958739
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/4937
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Each volume is now associated with two op-versions:
* op_version - the op-version of the highest op-versioned feature enabled
* client_op_version - the op-version of the highest op-versioned feature
enabled which affects only the clients
These two op-versions are generated dynamically and kept updated during
runtime. Glusterd now uses the respective volume's client-op-version
during getspec requests.
To achieve the above, a new field, client_option, is introduced in the
vme table; this boolean field tells if the option is a client-side
option.
Change-Id: I12c83b1dd29ab506026efd50d448cebbcee53c27
BUG: 907311
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/4584
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
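
The following is a minimal, standalone sketch of deriving the two
op-versions from a vme-style option table; the struct and option names
are stand-ins for glusterd's actual vme entries.

    #include <stdio.h>

    struct vme_entry {
            const char *key;
            int         op_version;    /* op-version the option needs */
            int         client_option; /* 1 if it affects clients     */
    };

    static void
    volume_op_versions (const struct vme_entry *opts, int n,
                        int *op_version, int *client_op_version)
    {
            int i;

            *op_version = 1;
            *client_op_version = 1;
            for (i = 0; i < n; i++) {
                    if (opts[i].op_version > *op_version)
                            *op_version = opts[i].op_version;
                    if (opts[i].client_option &&
                        opts[i].op_version > *client_op_version)
                            *client_op_version = opts[i].op_version;
            }
    }

    int
    main (void)
    {
            struct vme_entry opts[] = {
                    { "some.server.option", 3, 0 },
                    { "some.client.option", 2, 1 },
            };
            int ov, cov;

            volume_op_versions (opts, 2, &ov, &cov);
            /* prints op_version=3 client_op_version=2 */
            printf ("op_version=%d client_op_version=%d\n", ov, cov);
            return 0;
    }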
There are primarily three lists in the glusterd process that are
concurrently accessed, namely priv->volumes, priv->peers and
volinfo->bricks_list.
Big-lock approach
-----------------
WHAT IS IT?
The big lock is a coarse-grained lock which protects all three
lists, mentioned above, from racy access.
HOW DOES IT WORK?
At any given point in time, glusterd's thread(s) are in execution
_iff_ there is a preceding, inbound network event. Of course, the
sigwaiter thread and timer thread are exceptions.
A network event is an external trigger to glusterd, via the epoll
thread, in the form of POLLIN and POLLERR.
As long as we take the big-lock at all such entry points and yield
it when we are done, we are guaranteed that all the network events
accessing the global lists are serialised.
This amounts to holding the big lock at
- all the handlers of all the actors in glusterd (POLLIN),
- all the cbks in glusterd (POLLIN), and
- rpc_notify (DISCONNECT event), if we access/modify
one of the three lists (POLLERR).
In the case of synctask'ized volume operations, we must remember that
if we held the big lock for the entire duration of the handler,
we may block other non-synctask rpc actors from executing.
For example, volume-start would block in PMAP SIGNIN if done
incorrectly.
To prevent this, we need to yield the big lock when we yield the
synctask, and reacquire it when the synctask wakes up (a sketch
follows this entry).
Change-Id: Ib929f9905b55fb6c3fc27fefb497a26dba058e4f
BUG: 948686
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/4784
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
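
The following is a minimal, standalone sketch of that lock-yield
pattern; a pthread mutex stands in for the big-lock and a blocking
call stands in for the synctask yield.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Stand-in for an operation that yields the synctask (e.g.
     * waiting on PMAP SIGNIN during volume-start). */
    static void
    blocking_step (void)
    {
            usleep (1000);
    }

    static void
    synctask_step_with_lock_yield (void)
    {
            /* Entered with the big-lock held... */
            pthread_mutex_unlock (&big_lock); /* yield with the task  */
            blocking_step ();                 /* other actors may run */
            pthread_mutex_lock (&big_lock);   /* reacquire on wake-up */
    }

    int
    main (void)
    {
            pthread_mutex_lock (&big_lock);
            synctask_step_with_lock_yield ();
            pthread_mutex_unlock (&big_lock);
            printf ("done\n");
            return 0;
    }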
This patch incorporates all the changes suggested on the behaviour of
'volume create' command in http://review.gluster.org/#change,4214
(comment #14, to be precise).
Change-Id: Iaac524a59738b177415595b18aa8a136090d3d25
BUG: 948729
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/4740
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Change-Id: Ie2259023b9001311a2032792639c3093054f6750
BUG: 896431
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/4552
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
This is needed to support automated testing of cluster-communication
features such as probing and quorum. In order to use this, you need to
do the following preparatory steps.
* Copy /var/lib/glusterd to another directory for each virtual host
* Ensure that each virtual host has a different UUID in its glusterd.info
Now you can start each copy of glusterd with the following xlator-options.
* management.transport.socket.bind-address=$ip_address
* management.working-directory=$unique_working_directory
You can use 127.x.y.z addresses for binding without needing to assign
them to interfaces explicitly. Note that you must use addresses, not
names, because of some stuff in the socket code that's not worth fixing
just for this usage, but after that you can use names in /etc/hosts
instead.
At this point you can issue CLI commands to a specific glusterd using
the --remote-host option. So far probe, volume create/start/stop,
mount, and basic I/O all seem to work as expected with multiple
instances.
Change-Id: I1beabb44cff8763d2774bc208b2ffcda27c1a550
BUG: 913555
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: http://review.gluster.org/4556
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Change-Id: I4c275253144ed3ac11a701a56dd1116c002471ba
BUG: 852147
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/4495
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Change-Id: Ia1fe3d0500d999c1f95b43c9e53947834e39d680
BUG: 852147
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/4490
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Change-Id: Ib4c4794563a5a694fab16f17c642f788399462f6
BUG: 852147
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/4295
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Change-Id: Id3bd0bfc4802c166f7a32b0cc6a726aeb5617b5d
BUG: 890618
Signed-off-by: JulesWang <w.jq0722@gmail.com>
Reviewed-on: http://review.gluster.org/4427
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
This patch introduces task-ids for async tasks like rebalance, remove-brick and
replace-brick. An id is generated for each task when it is started and displayed
to the user in the cli output. The status of running tasks is also included in the
output of "volume status" along with its id, so that a user can easily track the
progress of an async task.
Also:
* Added tests for this feature to the regression test suite.
* Added a python script for creating files, 'create-files.py' (courtesy
of Vijaykumar Koppad <vkoppad@redhat.com>), to the test suite.
This patch reverts the revert commit 698deb33d731df6de84da8ae8ee4045e1543a168.
BUG: 857330
Change-Id: Id43d7cb629a38f47f733fbc18cb4c5f2f0327c7a
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/4294
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
This reverts commit ed15521d4e5af2b52b78fd33711e7562f5273bc6.
Strangely, the test scripts are "silently" passing for failures too.
Reverting the patch for now.
Change-Id: I802ec1634c7863dc373cc7dc4a47bd4baa72764e
Reviewed-on: http://review.gluster.org/4267
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
This patch introduces task-ids for async tasks like rebalance, remove-brick and
replace-brick. An id is generated for each task when it is started and displayed
to the user in the cli output. The status of running tasks is also included in the
output of "volume status" along with its id, so that a user can easily track the
progress of an async task.
Also:
* Added tests for this feature to the regression test suite.
* Added a python script for creating files, 'create-files.py' (courtesy
of Vijaykumar Koppad <vkoppad@redhat.com>), to the test suite.
Change-Id: Ib0c0d12e0d6c8f72ace48d303d7ff3102157e876
BUG: 857330
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/3942
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
The definition of 'online' in the message below is that the
RPC_CLNT_CONNECT event has arrived for the nfs/self-heal-daemon
process.
For automated tests, the script sometimes needs to wait until the
self-heal daemon comes online so that the relevant commands can be
executed. Before this change, 'gluster volume status' printed whether
the self-heal daemon was running or not based on the availability of
the lock on the pidfile. But there is a small window where the lock on
the pidfile is present while the process is still not online, so the
commands that depended on this kept failing in the test script.
Change-Id: I0e44e18b08d7b653d34fa170c1f187d91c888cd9
BUG: 858212
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/4236
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
... in gsyncd python code. Indeed, use the configuration
mechanism to set it suitably from glusterd.
Change-Id: I9fe2088b14d28588d1e64fe892740cc5755b8365
BUG: 868877
Signed-off-by: Csaba Henk <csaba@redhat.com>
Reviewed-on: http://review.gluster.org/4143
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Feature-page:
http://www.gluster.org/community/documentation/index.php/Features/Server-quorum
Change-Id: I747b222519e71022462343d2c1bcd3626e1f9c86
BUG: 839595
Signed-off-by: Pranith Kumar K <pranithk@gluster.com>
Reviewed-on: http://review.gluster.org/3811
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
An op-version check is performed for the given keys during stage. The commit
phase moves the cluster op-version to the required version if needed.
Change-Id: Id5c387094dbec723df736b2ecdc49ff93c179e0e
BUG: 814534
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/3780
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
- Added volume-id validation to the glusterd-syncop code.
- All daemons are restarted using synctasks in init().
- glusterd_brick_start has wait/nowait variants to support both volume
commands using the synctask framework and those that aren't.
Change-Id: Ieec26fe1ea7e5faac88cc7798d93e4cc2b399d34
BUG: 862834
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/3969
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
- Moved inner functions used in conjunction with synctask, 'out'.
Change-Id: I7fbfd9881ea58645c4295a9fa7163ddd15a45d2f
BUG: 862834
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/4066
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
This is important for the effort to make glusterd use synctask
framework.
Change-Id: I0affb10a342df99df8daccfd6eef8fa6dd63928c
BUG: 862834
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/4057
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
PROBLEM:
In the existing implementation, the success/failure of
execution of a command is decided (and logged) in glusterd
handler functions. Strictly speaking, the logging mechanism
must take into account what course the command takes within
the state machine before concluding whether it succeeded or
failed.
FIX:
This patch attempts to fix the above issue for vol commands.
The format of the log message is as follows:
for failure:
<command string> : FAILED : <cause of failure>
for success:
<command string> : SUCCESS
APPROACH (in a nutshell):
* The command string is packed into dict at cli and sent to
glusterd.
* glusterd logs the command status just before doing a
"submit_reply", which is called (either directly or
indirectly via a call to glusterd_op_cli_send_response)
at 2 places for every vol command:
i. in handler functions, and
ii. in glusterd_op_txn_complete
In short, the failure of a command in the handler implies the
command has indeed failed. However, its success in the handler
does NOT necessarily mean the command succeeded/will succeed.
Change-Id: I5a8a2ddc318ef2dc2a9699f704a6bcd2f0ab0277
BUG: 823081
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/3948
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
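
The following is a minimal, standalone sketch of the status-logging
format described above, emitted just before replying to the cli; the
function name and signature are stand-ins.

    #include <stdio.h>

    /* Log "<command string> : SUCCESS" or
     * "<command string> : FAILED : <cause of failure>". */
    static void
    log_cmd_status (const char *cmd_str, int op_ret, const char *err)
    {
            if (op_ret)
                    fprintf (stderr, "%s : FAILED : %s\n", cmd_str,
                             (err && *err) ? err : "unknown error");
            else
                    fprintf (stderr, "%s : SUCCESS\n", cmd_str);
    }

    int
    main (void)
    {
            log_cmd_status ("volume start testvol", 0, NULL);
            log_cmd_status ("volume stop testvol", -1,
                            "Volume testvol is not in the started state");
            return 0;
    }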
License message changed for server-side, dual license GPLv2 and LGPLv3+.
Change-Id: Ia9e53061b9d2df3b3ef3bc9778dceff77db46a09
BUG: 852318
Signed-off-by: Varun Shastry <vshastry@redhat.com>
Reviewed-on: http://review.gluster.org/3940
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
The license message is changed to
Copyright (c) 2008-2012 Red Hat, Inc. <http://www.redhat.com>
This file is part of GlusterFS.
This file is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3 or
later), or the GNU General Public License, version 2 (GPLv2), in all
cases as published by the Free Software Foundation.
Change-Id: I07d2b63ed5fbbbd1884f1e74f2dd56013d15b0f4
BUG: 852318
Signed-off-by: Varun Shastry <vshastry@redhat.com>
Reviewed-on: http://review.gluster.org/3858
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
- Retained apparently redundant checks in the stage and commit phases of
set volume for the help options, for backward compatibility.
Change-Id: Iaefe3805d6b5eeeced2e7e4870830edf3e61dc87
BUG: 844696
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.com/3761
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
This patch modifies the existing brickinfo function signatures
and/or names to do one thing right and call them by 'appropriate' names.
- Decoupled brickinfo_get and is_brickpath_available
- Removed dead comment about realpath(3) in canonicalize_path
- Renamed glusterd_brickinfo_from_brick to glusterd_brickinfo_new_from_brick
to make the name of the function reflect that an allocation is happening
Change-Id: I29daba6d431ca799d43c927b9dfbaeda327e83e8
BUG: 764890
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.com/3668
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Change-Id: If5f196c9154ea59e37b83d3e4cad445fee6e9d45
BUG: 826512
Signed-off-by: Csaba Henk <csaba@redhat.com>
Reviewed-on: http://review.gluster.com/3490
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pranithk@gluster.com>
Decommissioning is started only on nodes where the bricks which
are being decommissioned are present. The stats were reset only
when decommission was started. Hence stale stats were being
shown on nodes where the bricks were not present.
BUG: 822778
Change-Id: I2d839f877d4e040b463bebde5ba753b7265ab633
Signed-off-by: shishir gowda <shishirng@gluster.com>
Reviewed-on: http://review.gluster.com/3425
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
- Simplified the mkdir_p interface.
- Removed mkdir_if_missing from the codebase.
- Modified glusterd consumers of mkdir_if_missing to use mkdir_p
with allow_symlinks=_gf_true. This implicitly assumes that glusterd
is in 'control' of the brick path and glusterd's working dir, in the
sense that the symlinks (if any) in any of the above-mentioned paths
are under the storage administrator's control.
Change-Id: I7383ad5cff11b123e1e0d2fe6da0c81e03c52ed2
BUG: 823132
Signed-off-by: Krishnan Parthasarathi <kp@gluster.com>
Reviewed-on: http://review.gluster.com/3378
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
- Check if any prefix of the brick path has "trusted.gfid"
or "trusted.glusterfs.volume-id" set.
- Set trusted.glusterfs.volume-id on the bricks as soon as
their induction into the volume is settled. Earlier, the setting of
"volume-id" used to happen during the first run of the brick process,
leaving a window for bricks of one volume to be (ab)used by another
volume inadvertently.
- Removed creation of the brick directory (if missing) during start
volume force. This is to avoid directory creation as part of
'force'fully starting a volume, and leaves the responsibility with the
user, who understands the 'availability' of the export directory
(brick) better.
Change-Id: I4237ec4ea7a4e38a7501027e7de7112edd67de8c
BUG: 812214
Signed-off-by: Krishnan Parthasarathi <kp@gluster.com>
Reviewed-on: http://review.gluster.com/3280
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>
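
The following is a minimal, standalone sketch of detecting an in-use
brick via the volume-id xattr; it probes only the given path, whereas
the change above also walks every prefix of the path, and reading
trusted.* xattrs requires the appropriate privileges.

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/xattr.h>  /* Linux; the API differs elsewhere */

    /* Returns 1 if the path already carries a volume-id xattr,
     * i.e. it is already part of some volume. */
    static int
    brick_in_use (const char *path)
    {
            char buf[64];
            ssize_t n;

            n = getxattr (path, "trusted.glusterfs.volume-id",
                          buf, sizeof (buf));
            return n >= 0;
    }

    int
    main (int argc, char *argv[])
    {
            if (argc < 2)
                    return 1;
            printf ("%s %s\n", argv[1],
                    brick_in_use (argv[1]) ? "is in use" : "is free");
            return 0;
    }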