oVirt's Gluster Integration needs an inexpensive command that can be
executed every 10 seconds to monitor async tasks and their parameters
for all volumes.
The solution involves adding a 'tasks' sub-command to 'volume status'
to fetch only the async task IDs, types and other relevant parameters.
Only the originator glusterd participates in this command, as all the
information needed is available on all the nodes. This keeps the
command inexpensive enough to be executed every 10 seconds.
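A usage sketch (the exact invocation is assumed from the description above; 'all' could also be a single volume name):
sh# gluster volume status all tasks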
Change-Id: I1edc607baf29b001a5585079dec681d7c641b3d1
BUG: 1012346
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/6006
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
This patch adds the node uuid to the rebalance/remove-brick status xml output.
The output XML will look like:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
<opRet>0</opRet>
<opErrno>0</opErrno>
<opErrstr/>
<volRebalance>
<op>3</op>
<nodeCount>1</nodeCount>
<node>
<nodeName>localhost</nodeName>
==>> <id>883626f8-4d29-4d02-8c5d-c9f48c5b2445</id>
<files>0</files>
<size>0</size>
<lookups>0</lookups>
<failures>0</failures>
<status>3</status>
<statusStr>completed</statusStr>
</node>
<aggregate>
<files>0</files>
<size>0</size>
<lookups>0</lookups>
<failures>0</failures>
<status>3</status>
<statusStr>completed</statusStr>
</aggregate>
</volRebalance>
</cliOutput>
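For reference, a hedged example of the request that produces this output (volume name hypothetical):
sh# gluster volume rebalance myvol status --xml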
Change-Id: I5a1d4f9043b33b9e88150647a243ddb16154e843
BUG: 1012296
Signed-off-by: Bala.FA <barumuga@redhat.com>
Reviewed-on: http://review.gluster.org/6005
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Change-Id: I8ff6b48ef41fd6e9ea68c42dfb9878f8a08ed627
BUG: 1010874
Signed-off-by: Ajeet Jha <ajha@redhat.com>
Reviewed-on: http://review.gluster.org/5989
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
The volume op-versions are calculated during a volume set/reset, while reading a
volume from disk, and while importing a volume during probe or volume sync. The
calculation of the volume op-version depends on the cluster's op-version, as some
features are enabled automatically depending on the cluster's op-version. We
also don't store the volume op-versions persistently and don't export the
volume op-versions during sync. Due to this, cases can occur which lead to
inconsistencies in volumes on different peers. One such case is below.
Consider a cluster made up of 3 peers P1, P2 and P3, operating at op-version N.
The cluster has two volumes V1 and V2, which have volume op-version N (since
a volume op-version cannot be greater than the cluster op-version). We have,
Cluster op-version = N
V1 op-version = N
V2 op-version = N
A set operation on V1 causes the cluster's op-version to be bumped up to N+1.
Assume that there exist some features that are automatically enabled at
op-version N+1. The op-version of V2 remains at N as no operation has been
performed on it. So,
Cluster op-version = N+1
V1 op-version = N+1
V2 op-version = N
Now, we probe a new peer P4. On the new peer we will have the following
op-versions,
Cluster op-version = N+1
V1 op-version = N+1
V2 op-version = N+1
This happens because we don't send volume op-versions during the sync after
probe. P4 will freshly calculate the op-version of V2 (assuming features have
been auto-enabled due to the cluster op-version being N+1) as N+1.
Another case is when glusterd on a peer restarts. Assume P3 was restarted;
glusterd will recalculate the volume op-versions during the restore state.
Again, the op-version of V2 will be calculated as N+1, assuming auto-enabled
features. This will lead to inconsistency between the volume representation in
memory and on disk, as glusterd will assume the volume contains auto-enabled
features, but the volfiles don't contain them as they were not regenerated.
These kinds of issues can be solved by calculating the volume op-version only
when features are enabled or disabled (i.e. during volume set/reset),
persisting the volume op-versions and exporting/importing them.
Change-Id: I52de0668c92628622e85f4588fb28829a7231132
BUG: 1005043
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/5568
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Glusterd would not store all the volumes when a global option was set.
When setting a global option, like the 'nfs.*' options, glusterd used to
modify the volinfo for all the volumes, but would store only the volinfo
for the named volume. This led to a mismatch between the persisted and the
in-memory representation of those volumes, which led to problems like
peers being rejected because of volume mismatches.
Change-Id: I8bca10585e34b7135cb32af0055dbd462b3fb9b5
BUG: 1012400
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/6007
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Change-Id: I27f5f7cd54115d7b236b42f6beaaa05a8b379dd7
BUG: 1010153
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/5978
Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Problem:
Incorrect NFS ACL encoding causes "system.posix_acl_default"
setxattr failure on bricks on an XFS file system. XFS (potentially
others?) doesn't understand when the 0x10 prefix is added to the
ACL type field for default ACLs (which the Linux NFS client adds),
which causes setfacl()->setxattr() to fail silently. The NFS client
adds NFS_ACL_DEFAULT(0x1000) for default ACLs.
FIX:
Mask the prefix (added by the NFS client) off, so the setfacl is not
rejected when it hits the FS.
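A hedged reproduction sketch, assuming an NFS mount of a volume backed by XFS bricks (path and uid hypothetical); this default-ACL setfacl used to fail silently:
sh# setfacl -d -m u:501:rwx /mnt/nfs/exported-dir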
Original patch by: "Richard Wareing"
Change-Id: I17ad27d84f030cdea8396eb667ee031f0d41b396
BUG: 1009210
Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
Reviewed-on: http://review.gluster.org/5980
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Change-Id: I8d670a228d3c1282aa7d70b151f166d04abc40e5
BUG: 764890
Signed-off-by: Vijay Bellur <vbellur@redhat.com>
Reviewed-on: http://review.gluster.org/5909
Reviewed-by: Anand Avati <avati@redhat.com>
Tested-by: Anand Avati <avati@redhat.com>
Change-Id: I1841864273fc4242de15fbfcf76fd5de40269f28
BUG: 1006249
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/5889
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
While a gluster command holding the lock is in execution,
any other gluster command which tries to run will fail to
acquire the lock. As a result command #2 will follow the
cleanup code flow, which also includes unlocking the held
locks. As both the commands are run from the same node,
command #2 will end up releasing the locks held by command #1
even before command #1 reaches completion.
Now we call the unlock routine in the cleanup code path only if
the cluster has been locked during the same transaction.
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Change-Id: I7b7aa4d4c7e565e982b75b8ed1e550fca528c834
BUG: 1008172
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/5937
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Problem:
The Gluster NFS server hard-codes the max rsize/wsize to 64KB,
which is too low for NFS running over a 10GbE NIC. The existing
options nfs.read-size and nfs.write-size are not working as
expected.
FIX:
Make the options nfs.read-size (for rsize) and nfs.write-size
(for wsize) work to tune the NFS I/O size. The value range is
4KB (min) - 64KB (default) - 1MB (max).
NB: Credit to "Richard Wareing" for catching it.
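A hedged tuning sketch (volume name hypothetical; plain byte values assumed to be accepted):
sh# gluster volume set myvol nfs.read-size 1048576
sh# gluster volume set myvol nfs.write-size 1048576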
Change-Id: I2754ecb0975692304308be8bcf496c713355f1c8
BUG: 1009223
Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
Reviewed-on: http://review.gluster.org/5964
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Problem:
Currently the CLI rebalance status command output does not indicate the
'type' of rebalance, i.e. whether a full rebalance or only a fix-layout
was carried out.
Fix: After the rebalance status of all peers is received by the
originator glusterd, alter it to reflect the type of rebalance
before passing it on to the CLI process.
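For context, a hedged sketch of the two kinds of rebalance whose type the status output now reflects (volume name hypothetical):
sh# gluster volume rebalance myvol fix-layout start
sh# gluster volume rebalance myvol start
sh# gluster volume rebalance myvol status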
Change-Id: I1940ffda0d36e25e5b33c84a0ea210394cc9e1d3
BUG: 1004744
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/5826
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
The rebalance status was being reset to 'Not started' when add-brick was
performed. This would lead to odd cases where 'rebalance status' on a
volume would show the status as 'not started' but would also include the
rebalance statistics. This also affected the display of asynchronous task
status in the 'volume status' command.
Not resetting the status prevents the above issues from happening.
Since we use the running/not-running state of the rebalance process as the
check when performing other operations, we can safely leave the rebalance
stats collected on an add-brick.
Change-Id: I4c69d9c789d081c6de7e7a81dd0d4eba2e83ec17
BUG: 1006247
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/5895
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Change-Id: Ia9ee44763a9c2798b26d3225bf03a974d7ece21f
BUG: 998962
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/5666
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
This patch introduces task parameters for the asynchronous tasks shown in
volume status. The parameters are only given for XML output. The
parameters shown currently are:
- source and destination bricks for replace-brick tasks
......
<tasks>
<task>
<type>Replace brick</type>
<id>3d1a1005-9d2e-4ae0-bd62-577bc1d333a3</id>
<status>1</status>
<params>
<srcBrick>archm:/export/test4</srcBrick>
<dstBrick>archm:/export/test-replace1</dstBrick>
</params>
</task>
</tasks>
......
- list of bricks being removed for remove-brick tasks
......
<tasks>
<task>
<type>Remove brick</type>
<id>901c20ca-0da2-41de-8669-5f0caca6b846</id>
<status>1</status>
<params>
<brick>archm:/export/test2</brick>
<brick>archm:/export/test3</brick>
</params>
</task>
</tasks>
......
The changes for non-xml output will be done in a subsequent patch.
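A hedged example of the request that returns the above (volume name hypothetical):
sh# gluster volume status myvol --xml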
Change-Id: I322afe2f83ed8adeddb99f7962c25911204dc204
BUG: 916577
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/5771
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: I7c17de39da03c6b2764790581e097936da406695
BUG: 1002556
Signed-off-by: Vijay Bellur <vbellur@redhat.com>
Reviewed-on: http://review.gluster.org/5893
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Earlier, a peer running a higher op-version couldn't be probed into a
cluster running at a lower op-version. This created issues when trying
to expand an upgraded cluster. This patch changes this behaviour.
The cluster no longer rejects a peer being probed if its op-version is
higher than the cluster op-version. The peer will reduce its op-version
if it doesn't have any volumes. If the peer contains volumes and needs
to reduce its op-version, it fails the handshake and the probe fails.
Change-Id: I12c6c873922799e1557b7184e956baea643d0dea
BUG: 1005038
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/5715
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: Ia4bf607e044d50636fb0a599a2ff91059f5b17aa
BUG: 1006177
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/5887
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
The slave's xtime is now stored on the master itself (and that too only on
the root), which implies it cannot be propagated to the cascaded slave.
Thus the intermediate master now makes use of its own volume information
to propagate the volume-mark and xtime.
On starting Geo-Replication, the "geo-replication.ignore-pid-check" marker
option is enabled, which is an override for the client-pid check in
marker. This option triggers marker updates only for the geo-replication
auxiliary mount (client-pid == -1). Since gsyncd does not do setxattr()
directly on the bricks, this option won't trigger a chain of spurious
metadata updates that would need to be processed by gsyncd.
Change-Id: If50c5ef275dfb6b4ff4fd35be2565587e2fdf3e1
BUG: 996371
Original Author: Venky Shankar <vshankar@redhat.com>
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Reviewed-on: http://review.gluster.org/5592
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Tested-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
This is required by Geo-Replication, which does an auxiliary mount
with client-pid as -1 (which has special treatment at specific
places in GlusterFS), to trigger xtime updates on the intermediate
master in a cascading setup.
Marker too had a check to "not" mark updates for geo-replication's
auxiliary mounts. With the new geo-replication design, xtimes are
not set by the master on the slave for all entities. Due to this,
cascading setups were broken.
This patch introduces the "geo-replication.ignore-pid-check" option
as an "override" for the client-pid check for gsyncd's client-pid.
When this option is enabled, marker starts "marking" even if the
updates are from the special client.
Geo-Replication, on detecting that it is an intermediate master,
enables this option.
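Geo-Replication enables the option automatically; still, a hedged sketch of setting it manually (volume name hypothetical, 'on' assumed as the enabling value):
sh# gluster volume set mastervol geo-replication.ignore-pid-check on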
Change-Id: I9f7140edd12fef5480595ee0f93f35b94cdb8345
BUG: 996371
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Reviewed-on: http://review.gluster.org/5591
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Tested-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Added op-version checks to all geo-rep commands. Min
op-version should be 2.
Change-Id: I942d897404e11e4d53123409731ba5cd252668fe
BUG: 847839
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Reviewed-on: http://review.gluster.org/5732
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Change-Id: I8c2d398114ad4534bcc052f9a5be8bbb2e7e2582
BUG: 999531
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Reviewed-on: http://review.gluster.org/5677
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
non-root@hostname::slave-vol geo-rep sessions are not supported.
Only hostname and root@hostname sessions are supported, and they are
treated as the same.
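A hedged illustration of the slave URL forms (volume and host names hypothetical):
Supported, and treated as the same session:
sh# gluster volume geo-replication mastervol slavehost::slavevol start
sh# gluster volume geo-replication mastervol root@slavehost::slavevol start
Not supported:
sh# gluster volume geo-replication mastervol geoaccount@slavehost::slavevol start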
Change-Id: I87551e1bd4ff4e0e6520c34eb3d944587cc65476
BUG: 998933
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Reviewed-on: http://review.gluster.org/5659
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
'create force' will fail with a proper message if the IP is not
reachable, or if it is unable to fetch the slave details.
Change-Id: I44a3ba777b37702ffd0e48e9cb46c51e293327d4
BUG: 988314
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Reviewed-on: http://review.gluster.org/5516
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
The session details are now saved in the
/var/lib/glusterd/geo-replication/<mastervol>_<slaveip>_<slavevol>
repo, to distinguish between two master-slave sessions where the
slave name is the same across two different clusters.
Change-Id: I57c93f55cc9bd4fe2bffe579028aaf5e4335b223
BUG: 991501
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Reviewed-on: http://review.gluster.org/5488
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
This is a translator to improve the performance of typical,
sequential directory reads (i.e., ls). readdir-ahead begins
preloading the contents of a directory on open and serves readdir
requests from the preloaded content. readdir-ahead is currently
implemented to handle only the single-threaded directory read
case.
readdir-ahead is currently disabled by default. It can be enabled
with the following command:
gluster volume set <volname> readdir-ahead on
The following are results of a getdents test on a single brick
volume.
Test info:
- Single VM, gluster client/server.
- Volume mounted with native client using --gid-timeout=2.
- getdents on single directory with 100k 0-byte files.
Test results:
- !readdir-ahead
read 3120080 bytes from offset 0
3 MiB, 4348 ops, 0:00:07.00 (416.590 KiB/sec and 594.4737 ops/sec)
- readdir-ahead
read 3120080 bytes from offset 0
3 MiB, 4348 ops, 0:00:03.00 (820.116 KiB/sec and 1170.3043 ops/sec)
BUG: 980517
Change-Id: Ieceb9e1eb47d1d5b5af8da2bf03839537364653f
Signed-off-by: Brian Foster <bfoster@redhat.com>
Reviewed-on: http://review.gluster.org/4519
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Change-Id: I1442bc1d115a9c6ecf139a0ca9da74d07e0fe928
BUG: 1003855
Signed-off-by: shishir gowda <sgowda@redhat.com>
Reviewed-on: http://review.gluster.org/5764
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
This patch adds support for internals snapshots using QCOW2 and
general framework for external snapshots (next patch) with
QCOW2 and QED.
For internal snapshots, the file must be "initialized" or
"formatted" into the QCOW2 format, and a file size must be specified.
Snapshots can be created, deleted, and applied ("goto").
e.g:
// Format and Initialize
sh# setfattr -n trusted.glusterfs.block-format -v qcow2:10GB /mnt/imgfile
sh# ls -l /mnt/imgfile
-rw-r--r-- 1 root root 10G Jul 18 21:20 imgfile
// Create a snapshot
sh# setfattr -n trusted.glusterfs.block-snapshot-create -v name1 imgfile
// Apply a snapshot
sh# setfattr -n trusted.glusterfs.block-snapshot-goto -v name1 imgfile
Change-Id: If993e057a9455967ba3fa9dcabb7f74b8b2cf4c3
BUG: 986775
Signed-off-by: Anand Avati <avati@redhat.com>
Reviewed-on: http://review.gluster.org/5367
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Introduce /var/lib/glusterfs/nfs/rmtab to contain a list of NFS-clients
which have a volume mounted. The volume option 'nfs.mount-rmtab' can be
set to an alternative filename. When the file is located on shared
storage, multiple gNFS servers can use the same file to present a single
NFS-server.
This cache is read when a system administrator calls 'showmount -a' and
updated when an NFS-client calls MNT or UMNT from the MOUNT protocol.
Usage:
- create a volume for storing the shared rmtab file
- mount the volume on all storage servers, at the same location
- make sure that the volume is mounted at boot (add to /etc/fstab)
- place the rmtab file on the volume:
# gluster volume set <VOLUME> nfs.mount-rmtab <MOUNTPOINT>/<FILENAME>
- any subsequent mount requests will add an entry to this file
- 'showmount -a' requests will return the NFS-clients using the cluster
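A hedged end-to-end sketch of the steps above (volume names and paths hypothetical):
sh# mount -t glusterfs localhost:/shared-meta /var/run/gluster/shared
sh# echo "localhost:/shared-meta /var/run/gluster/shared glusterfs defaults,_netdev 0 0" >> /etc/fstab
sh# gluster volume set myvol nfs.mount-rmtab /var/run/gluster/shared/rmtab
sh# showmount -a localhost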
Note:
The NFS-server does not currently support reconfigure(). When a
configuration option is set/changed, the NFS-server glusterfs process
gets restarted. This causes the active NFS-clients to be forgotten (the
entries are saved in the old rmtab, but we do not have a reference to
that file any more, so we can't re-add them). Therefore a re-mount done
by the NFS-clients is needed before they get listed in the rmtab again.
Change-Id: I58f47135d60ad112849d647bea4e1129683dd2b3
BUG: 904065
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/4430
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
Tested-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
The 'start' variant of the remove-brick command only applies at the dht
level, wherein we can remove all the bricks of a sub-volume (and remove
multiple such sub-volumes) but not select bricks of it.
This patch disallows removing individual replica bricks of multiple
sub-volumes (i.e. reducing the replica count of the volume) using
remove-brick 'start'. The preferred method for such an operation is to use
commit force.
This patch also reverts the check to prevent removal of bricks from a
replicate volume (commit 0d415f7).
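A hedged illustration of the kind of request this patch now rejects (names hypothetical; attempting to reduce a 3-way replica to 2 across two sub-volumes with 'start'):
sh# gluster volume remove-brick myvol replica 2 server3:/bricks/b1 server3:/bricks/b2 start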
BUG: 961669
Change-Id: I447ad27f73a0963b5e09fb317bf7267a7a5a6147
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/5566
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Else things can deadlock in getspec v/s glusterd_do_mount()
Change-Id: Ie70b43916e495c1c8f93e4ed0836c2fb7b0e1f1d
BUG: 997576
Signed-off-by: Anand Avati <avati@redhat.com>
Reviewed-on: http://review.gluster.org/5636
Tested-by: Joe Julian <joe@julianfamily.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
A volume would fail to start if any one of the bricks failed staging or
failed to start, even with the 'force' option. With this patch, when the
'force' option is given for a volume start, glusterd will continue and
start the other bricks even if one fails staging or starting.
Also did a small fix in changelog, to prevent it from crashing when it fails
to init.
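A hedged usage sketch (volume name hypothetical):
sh# gluster volume start myvol force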
Change-Id: I7efbd9ab13d12d69b0335ae54143fa17586f8f98
BUG: 994375
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/5510
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Add the server uuid as an attribute to the existing brick details in the
volume info cli xml output.
Currently, when a node has more than one IP, the oVirt-engine fails
to map the corresponding server using the IP alone.
If we get the host uuid along with the brick details in the volume info
command, it will be easy for ovirt-engine to find out the
server, and thereby we can avoid confusion in identifying the server.
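A hedged example of the query a management engine would issue (volume name hypothetical):
sh# gluster volume info myvol --xml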
Change-Id: I3c9c9acea80e10e0b2977477759d9af045e48959
BUG: 955588
Signed-off-by: Timothy Asir <tjeyasin@redhat.com>
Reviewed-on: http://review.gluster.org/4875
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Confusing "Error" messages in logs can cause user panic
and false positives - avoid them as necessary in future.
Change-Id: I906c64eea879b19a8db099c89d1d7f874e5530db
BUG: 995784
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-on: http://review.gluster.org/5555
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Currently, remove-brick supports removal of only one distributed
stripe/replica pair at a time. Fix it to support removal of multiple
pairs. This is consistent with add-brick behaviour, which supports adding
multiple stripe/replica pairs simultaneously.
Removal is successful irrespective of the order of the bricks given at
the CLI, as long as the bricks are from the same subvolume(s).
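A hedged sketch of removing two distribute-replicate pairs in a single command (names hypothetical, 2-way replica assumed):
sh# gluster volume remove-brick myvol server1:/bricks/b3 server2:/bricks/b3 server1:/bricks/b4 server2:/bricks/b4 start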
Change-Id: I7c11c1235ce07b124155978b9d48d0ea65396103
BUG: 974007
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/5210
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Change-Id: I4352f513fc5616daa20e9a4ad51a63fb13a27dff
BUG: 847839
Signed-off-by: M S Vishwanath Bhat <vbhat@redhat.com>
Reviewed-on: http://review.gluster.org/5472
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: Ic3c43291e0e1ead0d89c0436e8d70aa5dee2f543
BUG: 924488
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-on: http://review.gluster.org/5391
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Asynchronous tasks are shown in 'volume status' only for a normal volume
status request for either all volumes or a single volume.
Change-Id: I9d47101511776a179d213598782ca0bbdf32b8c2
BUG: 888752
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/5308
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Instead of using the cluster op-version, volume op-version is used to
enable open-behind during volgen. For doing this, the volume op-versions
are updated before regenerating the volfiles.
Change-Id: I675bb549bf7c7c0279030dca698fb530781addc6
BUG: 990830
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/5385
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: I42d08b3a6a7a3839f5e9953e1f83959222c080f8
Signed-off-by: shishir gowda <sgowda@redhat.com>
BUG: 989846
Reviewed-on: http://review.gluster.org/5446
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Perform a statefile check in case of geo-rep stop, so as to provide a
proper error message in case the session is not created.
However in case of geo-rep stop force, we allow the command to succeed
even if the session is not created, because the stop command
is a failsafe command to stop running geo-rep sessions on any nodes.
Change-Id: I2b6a0253de977633606c422cbbc9e37cede9a268
BUG: 989541
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/5417
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
During add-brick, when a new brick is added on one of the
nodes that is already a part of the existing volume, and
gsyncd is already running on that node, all gsyncd
processes running on that node, for that particular master
and any of its slave sessions, will be restarted.
If a new brick is added on a new node, then after adding the
brick, the user has to perform the following steps:
1. gluster system:: execute gsec_create
2. gluster volume geo-replication <master-vol> <slave-vol> create push-pem force
3. gluster volume geo-replication <master-vol> <slave-vol> start force
Change-Id: I4b9633e176c80e4a7cf33f42ebfa47ab8fc283f1
BUG: 989532
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/5416
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Thanks to Patrick Matthäi <pmatthaei@debian.org> for the patch.
Signed-off-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: I59da74298894ccc2ab30967ffe44cc844aa73f82
BUG: 814534
Reviewed-on: http://review.gluster.org/5436
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Anand Avati <avati@redhat.com>
Currently rebalance/remove-brick ops display the failed-migration count even
for files which failed due to space issues (not enough space for the file, or
migration leading to cluster imbalance).
These will now be counted as skipped, and rebalance/remove-brick status
will display the additional counter.
Change-Id: I674904d380b5f8300e9ca9e6af557c3d30d6cff4
BUG: 989846
Signed-off-by: shishir gowda <sgowda@redhat.com>
Reviewed-on: http://review.gluster.org/5399
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Now geo-rep create force will return true if a node is down, and log an
appropriate message. It will also return true, with an appropriate log
message, if the slave verification fails.
However it will not return true if the config file is deleted or corrupted,
such that it cannot get the state_file's path. It will also fail if the slave url
is invalid. If the push-pem option is given and
/var/lib/glusterd/geo-replication/common_secret.pem.pub is not present,
then the create force command will also fail.
Change-Id: Ie7532a0884ddf9c3008bd30832d171d5b53b540e
BUG: 988314
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/5405
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
This patch prevents messages of the form "bd op: %s : SUCCESS"
from being logged in .cmd_log_history.
Change-Id: Iebeb7e26d409bf99b9c8df0a5c1c5a5d30d78a61
BUG: 823081
Signed-off-by: Vijay Bellur <vbellur@redhat.com>
Reviewed-on: http://review.gluster.org/4871
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-by: M. Mohan Kumar <mohan@in.ibm.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Till now all the brick processes were writing the valgrind information
to the same log file.
Change-Id: I0251c943935e2901b729c71f21d0677edb9f6867
BUG: 922877
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/5394
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Commands:
gluster system:: execute gsec_create
gluster volume geo-rep <master> <slave-url> create [push-pem] [force]
gluster volume geo-rep <master> <slave-url> start [force]
gluster volume geo-rep <master> <slave-url> stop [force]
gluster volume geo-rep <master> <slave-url> delete
gluster volume geo-rep <master> <slave-url> config
gluster volume geo-rep <master> <slave-url> status
The geo-replication is distributed. The session will be created, and
gsyncd will be spawned on all relevant nodes, instead of only one
node.
geo-rep: Collecting status detail related data
Added persistent store for saving information about
TotalFilesSynced, TotalSyncTime, TotalBytesSynced
Changes in the status information in socket:
Existing(Ex):
FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;
New(Ex):
FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;SyncTime=0.69978;
TotalSyncTime=2.890044;TotalFilesSynced=6;TotalBytesSynced=143640;
Persistent details stored in
/var/lib/glusterd/geo-replication/${mastervol}/${eSlave}-detail.status
Change-Id: I1db7fc13ffca2e415c05200b0109b1254067f111
BUG: 847839
Original Author: Avra Sengupta <asengupt@redhat.com>
Original Author: Venky Shankar <vshankar@redhat.com>
Original Author: Aravinda VK <avishwan@redhat.com>
Original Author: Amar Tumballi <amarts@redhat.com>
Original Author: Csaba Henk <csaba@redhat.com>
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/5132
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Vijay Bellur <vbellur@redhat.com>
* also consume changelog for change detection.
* Status fixes
* Use new libgfchangelog done API
* process (and sync) one changelog at a time
Change-Id: I24891615bb762e0741b1819ddfdef8802326cb16
BUG: 847839
Original Author: Csaba Henk <csaba@redhat.com>
Original Author: Aravinda VK <avishwan@redhat.com>
Original Author: Venky Shankar <vshankar@redhat.com>
Original Author: Amar Tumballi <amarts@redhat.com>
Original Author: Avra Sengupta <asengupt@redhat.com>
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/5131
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Vijay Bellur <vbellur@redhat.com>
Because of the extra fsync()s issued by AFR transaction, they
could potentially "clog" all the io-threads denying unrelated
operations from making progress.
This patch assigns a dedicated thread to issue fsyncs, as
an experimental feature, to understand the performance characteristics
of the approach.
As a basis, incoming individual fsync requests are grouped into
batches, falling in the same @batch-fsync-delay-usec window of
time. These windows can extend in practice, as processing of
the previous batch can take longer than @batch-fsync-delay-usec
while new requests are getting batched.
The feature supports four modes (similar to the -S modes of fs_mark):
- syncfs: In this mode one syncfs() is issued per batch, instead
of N fsync()s (one per file.)
- syncfs-single-fsync: In this mode one syncfs() is issued per
batch (which, on Linux, guarantees the completion of write-out
of dirty pages in the filesystem up to that point) and one single
fsync() to synchronize or flush the controller/drive cache. This
corresponds to -S 2 of fsmark.
- syncfs-reverse-fsync: In this mode, one syncfs() is issued per
batch, and all the open files in that batch are fsync()'ed in
the reverse order of the queue. This corresponds to -S 4 of
fsmark.
- reverse-fsync: In this mode, no syncfs() is issued and all the
files in the batch are fsync()'ed in the reverse order. This
corresponds to -S 3 of fsmark.
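A hedged sketch of selecting a mode; the volume-set option names below are assumed from the variable names in this description and are not verified against the actual code:
sh# gluster volume set myvol cluster.batch-fsync-mode syncfs
sh# gluster volume set myvol cluster.batch-fsync-delay-usec 0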
Change-Id: Ia1e170a810c780c8d80e02cf910accc4170c4cd4
BUG: 927146
Signed-off-by: Anand Avati <avati@redhat.com>
Reviewed-on: http://review.gluster.org/4746
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>