Snapshots taken first should be displayed first in the snapshot list output.
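For illustration, this ordering can be obtained by sorting the list on creation time before printing; the struct and field below are hypothetical, not the actual glusterd types:

    #include <stdlib.h>
    #include <time.h>

    /* Hypothetical list entry; the real glusterd structure differs. */
    typedef struct {
            char   name[256];
            time_t created_at;      /* when the snapshot was taken */
    } snap_entry_t;

    /* qsort comparator: oldest snapshot first */
    static int
    snap_cmp (const void *a, const void *b)
    {
            const snap_entry_t *sa = a, *sb = b;
            return (sa->created_at > sb->created_at) -
                   (sa->created_at < sb->created_at);
    }

    /* usage: qsort (snaps, count, sizeof (*snaps), snap_cmp); */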
Change-Id: Idd1b2374f842b3b70edfb3024094d4d81fbb1163
Signed-off-by: Sachin Pandit <spandit@redhat.com>

Change-Id: I0ba50ba2963edf8d890a2dc78d48d42db7f71ae2
Signed-off-by: shishir gowda <sgowda@redhat.com>

If a user tries to list the snap details of volumes that do not exist,
a corresponding error message is displayed.
Change-Id: I205738be3dc632ccb074b639a2088cdd44aa35a7
Signed-off-by: Sachin Pandit <spandit@redhat.com>

Also fixes snapshot config output
Change-Id: Ia50d94492009cf73dbb99ba20117b9fa4c41048a
Signed-off-by: Avra Sengupta <asengupt@redhat.com>

GL-205: Gluster snapshot create crashing.
The runner argument list should have NULL as the last argument.
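For reference, glusterd drives external commands through its variadic runner API, which (like execl) needs a NULL sentinel to know where the argument list ends. A minimal sketch of the corrected call pattern; the command and function name here are illustrative, not the actual crash site:

    #include "run.h"   /* glusterd's runner API */

    static int
    take_lvm_snapshot (char *snap_name, char *brick_device)
    {
            runner_t runner = {0,};

            runinit (&runner);
            /* The trailing NULL is the sentinel: without it the variadic
             * runner_add_args() walks off the end of the argument list,
             * which is the crash described above. */
            runner_add_args (&runner, "/sbin/lvcreate", "--snapshot",
                             "--name", snap_name, brick_device, NULL);
            return runner_run (&runner);
    }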
Change-Id: I1bd0090160b53a04a8073c31d91fb77f96f625dc
Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>

Change-Id: I58a743c92bbd021c3a42c5184ba8acf4db48878a
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>

Fixed it
Change-Id: Idafe3cdba149c2a66b89fb3fe0d4d3791d9d089c
Signed-off-by: Sachin Pandit <spandit@redhat.com>

* op_errstr is allocated and set before returning if there is any error
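The pattern this describes, roughly sketched; the handler shape and the helper are assumptions, gf_strdup is glusterfs's allocating strdup:

    /* Hypothetical handler shape illustrating the error path. */
    static int
    glusterd_do_snap_op (dict_t *dict, char **op_errstr)
    {
            int ret = -1;

            ret = glusterd_take_snapshot (dict);    /* assumed helper */
            if (ret)
                    /* caller owns (and frees) *op_errstr and relays
                     * it back to the CLI */
                    *op_errstr = gf_strdup ("Snapshot operation failed");

            return ret;
    }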
Change-Id: I6e0de80d611aeeee3d25e8c20ab49b8ef42b0bf5
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>

Change-Id: I3404106a7e4fa7d32b1d5824e079040d2ed8d76b
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>

Change-Id: Ia755e5c4af84827cc9b8876054cc48cfdc598876
Signed-off-by: shishir gowda <sgowda@redhat.com>

GL-31: Ability to restore snapshot
Implemented snapshot restore for thin logical volumes. As of now, snapshot
restore for a CG is not tested. Testing of snapshot restore of a volume was
done by changing the snapshot create process to create a thick snapshot,
because the --merge option to restore a thin volume is not working in the
latest kernel.
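For context, a thick LVM snapshot is restored by merging it back into its origin LV; a sketch of how that might be shelled out through the runner API (the function name and argument are assumptions):

    #include "run.h"

    /* Merging an LVM snapshot back into its origin restores the volume
     * to the state captured by the snapshot. */
    static int
    restore_lvm_snapshot (char *snap_device)
    {
            runner_t runner = {0,};

            runinit (&runner);
            runner_add_args (&runner, "/sbin/lvconvert", "--merge",
                             snap_device, NULL);
            return runner_run (&runner);
    }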
Change-Id: Ia3ded7e6c4da5957a74e269a25ba3200e6fb2d8b
Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>

Fixed it now.
Change-Id: I2a3216fe079f546855fd17fa6ff69b023341772c
Signed-off-by: Sachin Pandit <spandit@redhat.com>

* Also send the proper error back to the cli in case of any failure
* Before taking the snap, check whether a snap with the requested name
  already exists.
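A sketch of the duplicate-name check; the list head and field names are assumptions, not the actual glusterd structures:

    glusterd_snap_t *snap = NULL;

    /* Refuse to create a snap whose name is already taken. */
    list_for_each_entry (snap, &priv->snapshots, snap_list) {
            if (!strcmp (snap->snap_name, req_name)) {
                    *op_errstr = gf_strdup ("Snapshot with the same name "
                                            "already exists");
                    ret = -1;
                    goto out;
            }
    }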
Change-Id: I0830b31b1f095dd1d3d968c4f8b3cf46dc32d259
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>

Change-Id: I16b17ca60b5f9b34b7d238d8a3424a3b7a1dc435
Signed-off-by: shishir gowda <sgowda@redhat.com>

Also refactored code in glusterd for the create command.
Additionally, removed the brick-op func from jarvis_iniate_all_phases.
Change-Id: Iddcc332009c5716adee7f2b04c93b352fb983446
Signed-off-by: shishir gowda <sgowda@redhat.com>

Change in naming convention:
"snap_details", "snap_count", and so on are replaced by "snap-details",
"snap-count", and so on.
Total snap count introduced.
A separate check is made for repeated volume names.
Ex: "gluster snapshot list vol1 vol2 vol1 vol2" is considered as
"gluster snapshot list vol1 vol2".
*This is still a work in progress*
*Have to test CG list once the CG store is ready*
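One way to collapse the repeated names, sketched with glusterfs's dict used as a set; the function name and the volnames/volcount parameters stand in for the parsed CLI arguments:

    #include "dict.h"   /* glusterfs dict */

    /* First occurrence of each volume name wins; repeats are skipped. */
    static void
    list_unique_volumes (char **volnames, int volcount)
    {
            dict_t *seen = dict_new ();
            int     i    = 0;

            for (i = 0; i < volcount; i++) {
                    if (dict_get (seen, volnames[i]))
                            continue;       /* e.g. the second "vol1" */
                    dict_set_int32 (seen, volnames[i], 1);
                    /* ...append snap details for volnames[i]... */
            }
            dict_unref (seen);
    }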
Change-Id: I45e2904eb8bdbf78de8665f20ba9605c38320307
Signed-off-by: Sachin Pandit <spandit@redhat.com>

Added new XDR types for all the snapshot commands.
Change-Id: I46c02ea8e9c81c7967a773386c4b77b5eb6d5075
Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>

Change-Id: I9d600b4d971b7fdcd54da50e4a069eab19648fa6
Original-author: Rajesh Joseph <rajeshatredhat@redhat.com>
Signed-off-by: shishir gowda <sgowda@redhat.com>

Change-Id: I8b88fe94d0f9ee1089cafdda037abcf2f7a180ca
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>

Change-Id: I30cbbeb135c2d0a780e9e414ac0a96739e25647b
Signed-off-by: shishir gowda <sgowda@redhat.com>

Change-Id: Iafbd0ec95de0c41455fb79953fb4bb07721334a5
Signed-off-by: shishir gowda <sgowda@redhat.com>

Change-Id: I54db2fa67ebb6b57629f9536c296fbae07a1d159
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>

Change-Id: I6ec888a5553ad29ded032c02c80dd940b2aae007
Signed-off-by: shishir gowda <sgowda@redhat.com>

This is still a work in progress.
As of now, these things are done:
* Take the snapshot of the backend brick
* Create the new volume for the snapshot
* Create the brick and the client volfiles
* Store the snapshot related info in /var/lib/glusterd
* Create the snap object representing the snapshot
TODO:
* Start the brick processes for the snapshot
Change-Id: I26fbb0f8e5cf004d4c1dbca51819bab1cd1bac15
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>

Handles the snapshot list command issued by the cli. Details of all the
snapshots will be sent back to the caller in the required format.
Change-Id: I01e512290548007c06e90b40a59cdde048fab954
Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>

Change-Id: I58b1a6d19f7c74aef9075638602e6eed5367e5e1
Signed-off-by: shishir gowda <sgowda@redhat.com>

Introduced a new store for storing the snapshot list for a given volume.
$GLUSTERD_INSTALL_PATH/vols/<volname>/snap_list.info
$GLUSTERD_INSTALL_PATH/vols/<volname>/snaps/
$GLUSTERD_INSTALL_PATH/vols/<volname>/snaps/<snap-name>/info  <- snapshot volume info
$GLUSTERD_INSTALL_PATH/vols/<volname>/snaps/<snap-name>/bricks  <- snapshot volume brick dir
$GLUSTERD_INSTALL_PATH/vols/<volname>/snaps/<snap-name>/bricks/<infos>  <- snapshot volume brick info files
store delete options
TODO -
$GLUSTERD_INSTALL_PATH/CG/  <- placeholder for all CGs
.../CG/<cg-name>/info  <- per-CG information placeholder
Change-Id: I1f9fd8ff7cc0682d05b33965736a43dca6adb3e9
Signed-off-by: shishir gowda <sgowda@redhat.com>

Also linking snap create command to Jarvis
Change-Id: If2ed29be072e10d0b0bd271d53e48eeaa6501ed7
Signed-off-by: Avra Sengupta <asengupt@redhat.com>

Defining separate interfaces for every phase to make use of the rpcs,
and providing a set of integrated interfaces for the commands to consume.
Change-Id: I6d464326c5a8b5875a7c2539c9df072b23fe61a9
Signed-off-by: Avra Sengupta <asengupt@redhat.com>

Jarvis is nothing but a complete synctask approach for glusterd to
function. The commands making use of this won't be using the op-state
machine to inject events and will be using the synctask framework to
perform operations across all nodes in the cluster. This patch defines
the program and the handlers used.
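For reference, glusterd's synctask framework runs a function in a lightweight thread and fires a callback when it completes; a minimal sketch of the shape such handlers take (the jarvis_* names are placeholders):

    #include "syncop.h"

    static int
    jarvis_commit_fn (void *opaque)
    {
            /* Runs in its own synctask: do lock/stage/commit here,
             * calling peers synchronously instead of injecting events
             * into the op-state machine. */
            return 0;
    }

    static int
    jarvis_commit_cbk (int ret, call_frame_t *frame, void *opaque)
    {
            /* Reply to the CLI with ret. */
            return 0;
    }

    /* usage: synctask_new (env, jarvis_commit_fn, jarvis_commit_cbk,
     *                      frame, req); */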
Change-Id: Ibff2c62b0187c40cdea7254c85786297bba60372
Signed-off-by: Avra Sengupta <asengupt@redhat.com>

$ gluster snapshot help
snapshot help - display help for snapshot commands
snapshot create <volnames> [-n <snap-name/cg-name>] [-d <description>] - Snapshot Create.
$ gluster snapshot create vol1
snapshot create: ???: snap created successfully
$ gluster snapshot create vol1 vol2
snapshot create: ???: consistency group created successfully
(The ??? will be replaced by the glusterd snap create command with the
generated snap-name or cg-name)
$ gluster snapshot create vol1 vol2 -n CG1
snapshot create: CG1: consistency group created successfully
$ gluster snapshot create vol1 -n snap1 -d Description
snapshot create: snap1: snap created successfully
$ gluster snapshot create vol1 -n snap1 -d "Description can have -d within quotes"
snapshot create: snap1: snap created successfully
$ gluster snapshot create vol1 -n snap1 -d Description cant have -d without quotes
snapshot create: failed: Options(-n/-d) are not valid descriptions
Usage: snapshot create <volnames> [-n <snap-name/cg-name>] [-d <description>]
$ gluster snapshot create vol1 -n "Multi word snap name" -d Description
snapshot create: failed: Invalid snap name
Usage: snapshot create <volnames> [-n <snap-name/cg-name>] [-d <description>]
$ gluster snapshot create vol1 -d Description -n "-d"
snapshot create: failed: Options(-n/-d) are not valid snap names
Usage: snapshot create <volnames> [-n <snap-name/cg-name>] [-d <description>]
$ gluster snapshot create vol1 -d -n snap1
snapshot create: failed: No description provided
Usage: snapshot create <volnames> [-n <snap-name/cg-name>] [-d <description>]
Change-Id: I74b5a8406d72282fbb7ba7d07e0c7fe395148d38
Signed-off-by: Avra Sengupta <asengupt@redhat.com>

Change-Id: I03ff9e8094e7e36b28b521380949c7e9044c2e4e
Signed-off-by: shishir gowda <sgowda@redhat.com>

APIs for creating, adding, finding, and removing snapshots and
consistency groups are provided.
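The shape of such an API surface might look like the following; these prototypes are hypothetical illustrations, not the actual names in glusterd's snapshot code:

    /* Hypothetical prototypes illustrating the API surface. */
    glusterd_snap_t *glusterd_create_snap_object (dict_t *dict);
    int              glusterd_add_snap (glusterd_volinfo_t *volinfo,
                                        glusterd_snap_t *snap);
    glusterd_snap_t *glusterd_find_snap_by_name (char *snap_name);
    int              glusterd_remove_snap (glusterd_volinfo_t *volinfo,
                                           glusterd_snap_t *snap);
    glusterd_cg_t   *glusterd_create_cg_object (dict_t *dict);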
Change-Id: Ic28da69a075b062aefdf14754c68259ca58bd427
Signed-off-by: shishir gowda <sgowda@redhat.com>

With this patch we are replacing the existing cluster-wide lock, taken on
glusterds across the cluster, with volume locks which are also taken on
glusterds across the cluster but are volume specific. With the volume
locks we are able to perform more than one gluster operation at the same
time, as long as the operations are being performed on different volumes.
We maintain a global list of volume locks (using a dict for a list) where
the key is the volume name and the value is the uuid of the originator
glusterd. These locks are held and released per volume transaction.
In order to achieve multiple gluster operations occurring at the same
time, we also separate opinfos in the op-state-machine as part of this
patch. To do so, we generate a unique transaction-id (uuid) per gluster
transaction. An opinfo is then associated with this transaction id, which
is used throughout the transaction. We maintain a run-time global list
(using a dict) of transaction-ids and their respective opinfos to achieve
this.
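A sketch of the lock table itself, simplified; the real functions also handle staging and RPC, and the names and table layout here are assumptions:

    #include "dict.h"

    /* One global table: volume name -> uuid of the originator glusterd. */
    static dict_t *vol_locks;

    static int
    volume_lock (char *volname, uuid_t originator)
    {
            void   *held  = NULL;
            uuid_t *owner = NULL;

            if (!dict_get_bin (vol_locks, volname, &held))
                    return -1;  /* another transaction holds this volume */

            owner = GF_CALLOC (1, sizeof (*owner), gf_common_mt_char);
            uuid_copy (*owner, originator);
            /* Released again (dict_del) when the transaction completes. */
            return dict_set_bin (vol_locks, volname, owner,
                                 sizeof (*owner));
    }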
Change-Id: Iaad505a854bac8de8f83beec0357eb6cde3f7ea8
Upstream Review Url: http://review.gluster.org/5994/
BUG: 1011470
Signed-off-by: Avra Sengupta <asengupt@redhat.com>

The volume op-versions are calculated during a volume set/reset, when
reading a volume from disk, and when importing a volume during probe or
volume sync. The calculation of the volume op-version depends on the
cluster's op-version, as some features are enabled automatically
depending on the cluster's op-version. We also don't store the volume
op-versions persistently and don't export the volume op-versions during
sync. Due to this, cases can occur which lead to inconsistencies in
volumes on different peers. One such case is below.
Consider a cluster made up of 3 peers P1, P2 and P3, operating at
op-version N. The cluster has two volumes V1 and V2, which have volume
op-version N (since a volume op-version cannot be greater than the
cluster op-version). We have,
Cluster op-version = N
V1 op-version = N
V2 op-version = N
A set operation on V1 causes the cluster's op-version to be bumped up to
N+1. Assume that there exist some features that are automatically enabled
at op-version N+1. The op-version of V2 remains at N, as no operation has
been performed on it. So,
Cluster op-version = N+1
V1 op-version = N+1
V2 op-version = N
Now, we probe a new peer P4. On the new peer we will have the following
op-versions,
Cluster op-version = N+1
V1 op-version = N+1
V2 op-version = N+1
This happens because we don't send volume op-versions during the sync
after the probe. P4 freshly calculates the op-version of V2 (assuming
features have been auto-enabled due to the cluster op-version being N+1)
as N+1.
Another case is when glusterd on a peer restarts. Assume P3 was
restarted; glusterd will recalculate the volume op-versions during the
restore state. Again, the op-version of V2 will be calculated as N+1,
assuming auto-enabled features. This will lead to inconsistency between
the volume representation in memory and on disk, as glusterd will assume
the volume contains auto-enabled features, but the volfiles don't contain
them as they were not regenerated.
These kinds of issues can be solved by calculating the volume op-version
only when features are enabled and disabled (i.e. during volume
set/reset), persisting the volume op-versions, and exporting/importing
them.
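The persistence half might look roughly like this while writing the volinfo file, so a restarted glusterd restores the value instead of recalculating it; the helper and key name assume glusterd's store API:

    /* Persist the volume's op-version alongside the other volinfo keys. */
    char buf[64] = {0,};

    snprintf (buf, sizeof (buf), "%d", volinfo->op_version);
    ret = gf_store_save_value (fd, "op-version", buf);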
Change-Id: I52de0668c92628622e85f4588fb28829a7231132
BUG: 1005043
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/5568
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>

Glusterd would not store all the volumes when a global option was set.
When setting a global option, like the 'nfs.*' options, glusterd used to
modify the volinfo for all the volumes, but would store only the volinfo
for the named volume. This led to a mismatch between the persisted and
the in-memory representations of those volumes, which led to problems
like peers being rejected because of volume mismatches.
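The fix amounts to persisting every volinfo when the option is global, rather than just the named one; sketched below, with the list and field names following glusterd's conventions but treated as assumptions here:

    glusterd_volinfo_t *volinfo = NULL;

    /* Global options touch every volinfo, so persist them all. */
    list_for_each_entry (volinfo, &priv->volumes, vol_list) {
            ret = glusterd_store_volinfo (volinfo,
                          GLUSTERD_VOLINFO_VER_AC_INCREMENT);
            if (ret)
                    goto out;
    }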
Change-Id: I8bca10585e34b7135cb32af0055dbd462b3fb9b5
BUG: 1012400
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/6007
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>

Change-Id: I27f5f7cd54115d7b236b42f6beaaa05a8b379dd7
BUG: 1010153
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/5978
Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>

Problem:
Incorrect NFS ACL encoding causes "system.posix_acl_default" setxattr
failure on bricks on the XFS file system. XFS (potentially others?)
doesn't understand when the 0x10 prefix is added to the ACL type field
for default ACLs (which the Linux NFS client adds), which causes
setfacl()->setxattr() to fail silently. The NFS client adds
NFS_ACL_DEFAULT (0x1000) for a default ACL.
FIX:
Mask the prefix (added by the NFS client) off, so the setfacl is not
rejected when it hits the FS.
Original patch by: "Richard Wareing"
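In effect, a simplified sketch; the real masking sits in the NFS ACL decode path, and acl_type here stands in for the decoded ACL type field:

    /* The Linux NFS client tags a default ACL by OR-ing NFS_ACL_DEFAULT
     * (0x1000) into the ACL type; on-disk filesystems such as XFS only
     * accept the bare POSIX type, so strip the tag before setxattr. */
    #define NFS_ACL_DEFAULT 0x1000

    acl_type &= ~NFS_ACL_DEFAULT;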
Change-Id: I17ad27d84f030cdea8396eb667ee031f0d41b396
BUG: 1009210
Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
Reviewed-on: http://review.gluster.org/5980
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>

Change-Id: I8d670a228d3c1282aa7d70b151f166d04abc40e5
BUG: 764890
Signed-off-by: Vijay Bellur <vbellur@redhat.com>
Reviewed-on: http://review.gluster.org/5909
Reviewed-by: Anand Avati <avati@redhat.com>
Tested-by: Anand Avati <avati@redhat.com>

Change-Id: I1841864273fc4242de15fbfcf76fd5de40269f28
BUG: 1006249
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/5889
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>

While a gluster command holding the lock is in execution, any other
gluster command which tries to run will fail to acquire the lock. As a
result command #2 will follow the cleanup code flow, which also includes
unlocking the held locks. As both the commands are run from the same
node, command #2 will end up releasing the locks held by command #1 even
before command #1 reaches completion.
Now we call the unlock routine in that code path only if the cluster has
been locked during the same transaction.
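Conceptually, a sketch; the flag name is an assumption, while glusterd_lock/glusterd_unlock and MY_UUID follow glusterd's conventions:

    gf_boolean_t is_locked = _gf_false;

    ret = glusterd_lock (MY_UUID);
    if (ret)
            goto out;
    is_locked = _gf_true;       /* this transaction owns the lock */

    /* ... stage/commit ... */

    out:
            /* Unlock only if we actually took the lock; otherwise a
             * failed command #2 would release the lock still held by
             * command #1. */
            if (is_locked)
                    glusterd_unlock (MY_UUID);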
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Change-Id: I7b7aa4d4c7e565e982b75b8ed1e550fca528c834
BUG: 1008172
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/5937
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>

Problem:
The Gluster NFS server is hard-coding the max rsize/wsize to 64KB, which
is too small for NFS running over a 10GbE NIC. The existing options
nfs.read-size and nfs.write-size are not working as expected.
FIX:
Make the options nfs.read-size (for rsize) and nfs.write-size (for
wsize) work to tune the NFS I/O size. The value range would be
4KB (min) - 64KB (default) - 1MB (max).
NB: Credit to "Richard Wareing" for catching it.
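The tuning boils down to a clamp on the configured value; a standalone sketch of the bounds, with constant and function names chosen here for illustration:

    #define NFS_IOSIZE_MIN  (4 * 1024)      /* 4KB floor */
    #define NFS_IOSIZE_DEF  (64 * 1024)     /* 64KB default */
    #define NFS_IOSIZE_MAX  (1024 * 1024)   /* 1MB ceiling */

    static unsigned int
    nfs_clamp_iosize (unsigned int requested)
    {
            if (requested == 0)
                    return NFS_IOSIZE_DEF;
            if (requested < NFS_IOSIZE_MIN)
                    return NFS_IOSIZE_MIN;
            if (requested > NFS_IOSIZE_MAX)
                    return NFS_IOSIZE_MAX;
            return requested;
    }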
Change-Id: I2754ecb0975692304308be8bcf496c713355f1c8
BUG: 1009223
Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
Reviewed-on: http://review.gluster.org/5964
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>

Problem:
Currently the CLI rebalance status command output does not indicate the
'type' of rebalance, i.e. whether a full rebalance or only a fix-layout
was carried out.
Fix:
After the rebalance status of all peers is received by the originator
glusterd, alter it to reflect the type of rebalance before passing it on
to the CLI process.
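A sketch of the aggregation-side tweak; the dict key, field, and command enum follow glusterd's conventions but should be treated as assumptions:

    /* After aggregating the peers' responses, record which kind of
     * rebalance ran so the CLI can display it. */
    if (defrag_cmd == GF_DEFRAG_CMD_START_LAYOUT_FIX)
            ret = dict_set_str (aggr_dict, "task-type", "fix-layout");
    else
            ret = dict_set_str (aggr_dict, "task-type", "rebalance");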
Change-Id: I1940ffda0d36e25e5b33c84a0ea210394cc9e1d3
BUG: 1004744
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/5826
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>

The rebalance status was being reset to 'Not started' when add-brick was
performed. This would lead to odd cases where a 'rebalance status' on a
volume would show the status as 'not started' but would also include the
rebalance statistics. This also affected the showing of asynchronous
task status in the 'volume status' command.
Not resetting the status prevents the above issues from happening. Since
we use the running/not-running state of the rebalance process as the
check when performing other operations, we can safely leave the
rebalance stats collected on an add-brick.
Change-Id: I4c69d9c789d081c6de7e7a81dd0d4eba2e83ec17
BUG: 1006247
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/5895
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>

Change-Id: Ia9ee44763a9c2798b26d3225bf03a974d7ece21f
BUG: 998962
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/5666
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>

This patch introduces task parameters for the asynchronous tasks shown
in volume status. The parameters are only given for xml output. The
parameters shown currently are,
- source and destination bricks for replace-brick tasks
......
<tasks>
  <task>
    <type>Replace brick</type>
    <id>3d1a1005-9d2e-4ae0-bd62-577bc1d333a3</id>
    <status>1</status>
    <params>
      <srcBrick>archm:/export/test4</srcBrick>
      <dstBrick>archm:/export/test-replace1</dstBrick>
    </params>
  </task>
</tasks>
......
- list of bricks being removed for remove-brick tasks
......
<tasks>
  <task>
    <type>Remove brick</type>
    <id>901c20ca-0da2-41de-8669-5f0caca6b846</id>
    <status>1</status>
    <params>
      <brick>archm:/export/test2</brick>
      <brick>archm:/export/test3</brick>
    </params>
  </task>
</tasks>
......
The changes for non-xml output will be done in a subsequent patch.
Change-Id: I322afe2f83ed8adeddb99f7962c25911204dc204
BUG: 916577
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/5771
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Vijay Bellur <vbellur@redhat.com>