Backport of http://review.gluster.org/8498
As of now, volfile names for both tcp-only and rdma-only volumes
follow the format <volname>-fuse.vol. This patch changes the client
volfile naming as shown below:
* TCP mounts always use <volname>-fuse.vol
* RDMA mounts always use <volname>.rdma-fuse.vol
Following this naming convention, for tcp,rdma volumes both volfiles
will be present under /var/lib/glusterd/vols/<volname>/, so that an
rdma-only volume can be mounted as
mount -t glusterfs -o transport=rdma <server/ip>:/<volname> <mount-point>
OR
mount -t glusterfs <server/ip>:/<volname>.rdma <mount-point>
The same command format can also be used to fuse-mount a tcp,rdma
volume via the rdma transport.
Previously, trying to fuse-mount a tcp,rdma volume with transport-type
rdma silently mounted via tcp. This change also makes sure that the
correct volfile is fetched based on the transport-type specified on
the client side.
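For illustration, on a tcp,rdma volume named "testvol" (a
hypothetical name), both client volfiles should then be visible on
the server:
ls /var/lib/glusterd/vols/testvol/ | grep fuse.vol
testvol-fuse.vol
testvol.rdma-fuse.vol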
Change-Id: Id8b74c1c3e1e7fd323463061f8b13dd623fa6876
BUG: 1166515
Signed-off-by: Anoop C S <achiraya@redhat.com>
Reviewed-on: http://review.gluster.org/8498
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-on: http://review.gluster.org/9182
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
---
Backport of http://review.gluster.org/8934
For an rdma-only volume, client connection establishment with the
server takes more than three seconds. A tcp,rdma volume has two
ports, one for tcp and one for rdma: during pmap_signin the tcp port
is stored under the brick name and the rdma port under
"brickname.rdma". During the handshake, when an rdma client tries to
get the brick port, it does not know the server transport type, so it
appends '.rdma' to the brick name. For a tcp,rdma volume an entry
with '.rdma' exists, but for an rdma-only volume the lookup fails.
The client then retries without the '.rdma' suffix (using a flag
variable need_different_port) and succeeds, but the reconnection
happens only after 3 seconds.
With this patch, an rdma-only volume also appends '.rdma' during
pmap_signin, so the handshake gets the correct port on the first try.
Since the retry is no longer needed, the need_different_port flag
variable can be removed.
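As a rough sketch (port numbers are hypothetical), the portmap
entries for a tcp,rdma brick look like
<brick-path>       -> 49152  (tcp)
<brick-path>.rdma  -> 49153  (rdma)
and, after this patch, an rdma-only brick signs in as
<brick-path>.rdma  -> 49152  (rdma)
so the client's first lookup with the '.rdma' suffix succeeds. The
assigned ports can be cross-checked with 'gluster volume status
<volname>'.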
Change-Id: I82a8a27f0e65a2e287f321e5e8292d86c6baf5b4
BUG: 1166515
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/8934
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-on: http://review.gluster.org/9177
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
---
Backport of http://review.gluster.org/8762
When we try to fuse-mount a tcp,rdma volume over the rdma transport,
the mount hangs if the brick is down. When the brick process is
killed, the glusterfsd process receives the signal and calls
pmap_signout only for the port listening on tcp. For a tcp,rdma brick
there are two ports, and the port listening for rdma is never signed
out. The mount process therefore keeps trying to connect to a port
that is not open.
This patch calls pmap_signout for the rdma port as well, so that when
the mount tries to get the brick port, it fails immediately instead
of hanging.
Change-Id: I73f90d7340afa3b0b1278924206f1488e4094a62
BUG: 1166515
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/8762
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-on: http://review.gluster.org/9176
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
---
When we mount an rdma-only volume or a tcp,rdma volume through the
nfs protocol using the IP of a newly probed peer (an nfs-server on a
new node), the mount fails for the rdma-only volume, and for tcp,rdma
volumes the mount falls back to the tcp protocol. In other words,
newly added servers always get the transport type as "socket". This
is because nfs_transport_type is exported correctly but imported
wrongly.
This can be verified as follows (see the sketch after this list):
* Create an rdma-only volume or a tcp,rdma volume.
* Add a new server into the trusted pool.
* Check the client transport type specified in the nfs-server
volgraph on the new node. It will always be tcp (socket type)
instead of rdma.
* Also, for an rdma-only volume, the nfs log shows a 'connection
refused' message for every reconnect between the nfs server and
glusterfsd.
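A minimal way to inspect the generated nfs-server volgraph on the new
node (assuming the default working directory /var/lib/glusterd):
grep transport-type /var/lib/glusterd/nfs/nfs-server.vol
Before the fix this prints roughly 'option transport-type tcp' even
for an rdma-only volume; after the fix it should reflect the
transport type exported by the originating node.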
Backport of http://review.gluster.org/8975
cherry picked from commit f380e2029d608f97e3ba9a728605e1d798b09e8d
>BUG: 1157381
>Change-Id: I6bd4979e31adfc72af92c1da06a332557b6289e2
>Author: Jiffin Tony Thottan <jthottan@redhat.com>
>Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
>Reviewed-on: http://review.gluster.org/8975
>Reviewed-by: Meghana M <mmadhusu@redhat.com>
>Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
>Reviewed-by: Niels de Vos <ndevos@redhat.com>
>Tested-by: Niels de Vos <ndevos@redhat.com>
Change-Id: I328c17b07e877fe3b29ca832bf6f2291cea16bbe
BUG: 1166505
Reviewed-on: http://review.gluster.org/9172
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: soumya k <skoduri@redhat.com>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
---
Problem : If USS is disabled while glusterd is down on one of the
nodes, snapd will still be running on that node after glusterd comes
back up.
Solution : During glusterd restart, check whether USS is disabled;
if so, issue a kill for snapd.
NOTE : The test case from my previous patchset is facing some
spurious failures, so it has been removed for now. It will be added
back once the issue is resolved.
Change-Id: I2870ebb4b257d863cdfc319e8485b19e932576e9
BUG: 1175735
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/9062
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/9307
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
---
Change-Id: Id13dc4cd3f5246446a9dfeabc9caa52f91477524
BUG: 1175755
Signed-off-by: Varun Shastry <vshastry@redhat.com>
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/8133
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/9304
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
---
original brick already has this option
Change-Id: I2841d2ac371a3e9505f6061f35d1d447946c0bae
BUG: 1175732
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/8526
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/9303
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
---
Check if the LV is present before deleting the LV. In case the LV is
absent (already deleted?), the snap delete operation need not fail.
Also check if the LV is mounted before trying to umount it. In case
it isn't mounted, only remove the LV.
Change-Id: I0f5b2674797299d8748c6fac5b091f0caba65ca4
BUG: 1175754
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/8954
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/9299
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
---
For USS we have 1 snapd log per volume and as many snap logs as there
are snaps of that volume. For example, if there are 4 volumes having
256 snaps each and USS is enabled, then the total number of logs
under /var/log/glusterfs for USS would be 1028:
Total logs = 4 (one snapd per volume) + 4 (volumes) * 256 (snaps) = 1028
Hence, it makes sense to move them into a sub-folder structure like
/var/log/glusterfs/snaps/<vol-name>/<snapd + snaps logs>
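For instance, with a volume "vol1" carrying snaps "snap1" and "snap2"
(hypothetical names), the resulting layout would be roughly:
ls /var/log/glusterfs/snaps/vol1/
snapd.log  <snap1 logs>  <snap2 logs>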
Change-Id: I29262e6458c3906916923cd67d1145d6ae10bec3
BUG: 1175728
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/9050
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/9298
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
---
Instead of displaying all the snapshots in the USS world, it is
better to display only the activated snapshots.
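For example (the snapshot name is hypothetical):
gluster snapshot activate snap1
ls <mount-point>/.snaps
With this change, only activated snapshots such as snap1 show up in
the .snaps listing.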
Change-Id: I70d3ec212b62ec15956ae3e826bc4201d8dedd17
BUG: 1170548
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/8958
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/9242
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
---
By default snapshots should be deactivated on creation, and this
should be a configurable option.
This behaviour can be configured with the command below:
gluster snapshot config activate-on-create <enable|disable>
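For example, to have snapshots come up activated at creation time
(volume and snapshot names are hypothetical):
gluster snapshot config activate-on-create enable
gluster snapshot create snap1 vol1
gluster snapshot info snap1
The info output should now report the snap as activated.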
Change-Id: I1911595c32beed43bb2fca4bf99f0d264b422513
BUG: 1170921
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/8985
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/9241
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
---
set help"
gluster volume set help for uss shows "User Servicable Snapshots"
whereas it should be "User Serviceable Snapshots"
> Change-Id: I3cc8b3ea2cb6d209e1a12678eb7d0e68f4160d99
> BUG: 1160236
> Signed-off-by: vmallika <vmallika@redhat.com>
> Reviewed-on: http://review.gluster.org/9041
> Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
> Tested-by: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
> Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Change-Id: Id2de0e353d3307023da9239f6dee8b59e8eb0d8f
BUG: 1175645
Reviewed-on: http://review.gluster.org/9295
Reviewed-by: Sachin Pandit <spandit@redhat.com>
Reviewed-by: Humble Devassy Chirammal <humble.devassy@gmail.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
Tested-by: Raghavendra Bhat <raghavendra@redhat.com>
---
Change-Id: Ibc75713d35c9cbafd493c8cf6b5294eaf29f05d4
BUG: 1163920
Signed-off-by: Petr Medonos <petr.medonos@etnetera.cz>
Reviewed-on: http://review.gluster.org/9126
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
---
Problem: glusterd crashes on a non-originator slave node during
geo-rep create push-pem.
Cause: In glusterd_op_copy_file, the value of the key
"common_pem_contents" is freed explicitly even after a successful
dict_set, although it is already taken care of by dict_free.
Solution: Free the value only in failure cases, before dict_set.
BUG: 1159210
Change-Id: I726f923915fc24de6588469c27f2cc996c20c59d
Reviewed-On: http://review.gluster.org/9018/
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/9026
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Tested-by: Venky Shankar <vshankar@redhat.com>
---
PROBLEM:
Geo-rep misses a few files to sync when I/O happened during
geo-rep start.
ANALYSIS:
To use the available changelogs to handle deletes/renames, an
'xsync upper limit' was introduced which limits the xsync crawl to
the changelog register time. But there is a small time interval
between the changelog register time and the time changelog is
actually enabled. If there is I/O in this interval, it is synced
neither through xsync, as it is beyond the changelog register time,
nor through changelog, as changelog is not actually enabled yet.
SOLUTION:
Enable changelog and marker during geo-rep create instead of geo-rep
start, so that entries are captured in the changelog and the interval
described above is nullified.
BUG: 1159205
Change-Id: If5203eb1cfcbde3999f97a5f1a6a1af4875ac358
Reviewed-on: http://review.gluster.org/8650
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/9023
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Tested-by: Venky Shankar <vshankar@redhat.com>
---
When geo-rep is in the paused state and a node in the cluster is
rebooted, the geo-rep status goes to "faulty (Paused)" and no worker
processes are started on that node yet. If geo-rep is resumed in this
state, there is a race between glusterd and gsyncd in updating the
status file, because geo-rep is resumed first and the status is
updated afterwards. glusterd tries to update the status to the
previous state, while gsyncd on restart tries to update it to
"Initializing...(Paused)" since it was paused previously. If gsyncd
wins, the state stays paused even though the process is not actually
paused. The solution is for glusterd to update the status file first
and then resume.
BUG: 1159195
Change-Id: I4c06f42226db98f5a3c49b90f31ecf6cf2b6d0cb
Reviewed-on: http://review.gluster.org/8911
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/9021
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Tested-by: Venky Shankar <vshankar@redhat.com>
---
When GlusterD starts the brick processes, these will listen on all
interfaces. When the 'transport.socket.bind-address' option is set in
glusterd.vol, the brick processes should only listen on the specified
hostname or IP-address.
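A minimal glusterd.vol sketch with the option set (the address is
hypothetical; the file is typically /etc/glusterfs/glusterd.vol):
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket
    option transport.socket.bind-address 192.0.2.10
end-volume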
Cherry picked from commit 430b874c4f1a171c106a9e1e6507e14e79805a1d:
> Change-Id: I8e7d1f294904081137c23f3446261329d0d13bba
> BUG: 1149863
> Signed-off-by: Niels de Vos <ndevos@redhat.com>
> Reviewed-on: http://review.gluster.org/8910
> Tested-by: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Change-Id: I8e7d1f294904081137c23f3446261329d0d13bba
BUG: 1151745
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/8951
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
---
When the transport.socket.bind-address option is set to a hostname or
IP-address, the services started by GlusterD fail to connect to the
management daemon. GlusterD always forces the services to connect to
the "localhost" hostname, even if it is not listening on that address.
GlusterD should take the transport.socket.bind-address option into
consideration, and pass it to the glusterfs-clients with the -s
(--volfile-server) commandline parameter.
Note that this change does not remove all hard-coded dependencies on
"localhost". It merely makes it possible to start the required
services when the transport.socket.bind-address option is set.
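As an illustrative sketch (the address is hypothetical), a service
like the gluster NFS server would then be started along the lines of
glusterfs -s 192.0.2.10 --volfile-id gluster/nfs \
    -p /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log
instead of being forced to connect with '-s localhost'.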
Cherry picked from commit 283fa797f4bf98130b42c36972305b8cb6e5aaaf:
> Change-Id: I36a0ed6c69342e6327adc258fea023929055d7f2
> BUG: 1149863
> Signed-off-by: Niels de Vos <ndevos@redhat.com>
> Reviewed-on: http://review.gluster.org/8908
> Tested-by: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Change-Id: I36a0ed6c69342e6327adc258fea023929055d7f2
BUG: 1151745
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/8950
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
---
1) Use a system-dependent macro for the umount(8) location instead of
relying on $PATH to find it, for security and portability's sake.
2) Introduce gf_umount_lazy() to replace umount -l (-l for lazy)
invocations, which are only supported on Linux. On Linux the behavior
is unchanged; on other systems, we fork an external process (umountd)
that takes care of periodically attempting the unmount, and
optionally the rmdir (a rough shell sketch follows).
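A rough shell equivalent of the umountd fallback (illustrative only,
not the actual C implementation; the path is hypothetical):
until umount /mnt/point 2>/dev/null; do
    sleep 1    # retry until the mount point is no longer busy
done
rmdir /mnt/point    # the optional rmdir mentioned above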
Backport of Ia91167c0652f8ddab85136324b08f87c5ac1edd51d
BUG: 1138897
Change-Id: I9d82c87e85af0dee79f2de39bc697c486b7103c8
Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-on: http://review.gluster.org/8863
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Csaba Henk <csaba@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
---
This is a backport of http://review.gluster.org/#/c/8878/
The pgfid extended attributes are used to construct the ancestry path
(from the file to the volume root) for nameless lookups on files. As
NFS relies heavily on nameless lookups, quota enforcement through NFS
would be inconsistent if quota were enabled on a volume with existing
data.
The solution is to heal the pgfid extended attributes as part of the
lookup performed by the quota-crawl process: in a posix lookup, check
for the pgfid xattr and, if it is missing, set it.
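The healed attributes can be inspected on the brick backend (the file
path is hypothetical):
getfattr -d -m trusted.pgfid -e hex /bricks/b1/dir/file
After a lookup driven by the quota crawl, a
trusted.pgfid.<parent-gfid> xattr should be present on the file.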
BUG: 1147953
Change-Id: I707d91a056e07452bfd1e070af5eddaa752a84ac
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/8890
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
---
Today, when glusterd's internal locking mechanism fails with an
invalid type, or when another competing lock is being held, the log
message doesn't directly provide enough information as to which
command saw this (first). Including that information would greatly
assist in debugging. Following is a snippet of how such a failure
looks in the log file:
[2014-09-03 04:57:58.549418] E
[glusterd-locks.c:520:glusterd_mgmt_v3_lock]
(-->/usr/local/lib/glusterfs/3.7dev/xlator/mgmt/glusterd.so(__glusterd_handle_create_volume+0x801)
[0x7f30b071e651]
(-->/usr/local/lib/glusterfs/3.7dev/xlator/mgmt/glusterd.so(glusterd_op_begin_synctask+0x2c)
[0x7f30b072e19c]
(-->/usr/local/lib/glusterfs/3.7dev/xlator/mgmt/glusterd.so(gd_sync_task_begin+0x55d)
[0x7f30b072de6d]))) 0-management: Invalid entity. Cannot perform locking
operation on vol types
Change-Id: I0595f49d60e620e8b065f3506bdb147ccee383a7
BUG: 1145093
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/8842
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
---
Introduced in "1f6e992f1aaa676be5bd47d17e58f1171825cf43"
Change-Id: Id684e2f082def7d01ef3c258ea6598da6205591f
BUG: 1117822
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-on: http://review.gluster.org/8840
Reviewed-by: Justin Clift <justin@gluster.org>
Tested-by: Justin Clift <justin@gluster.org>
---
Turn the setfattr(1) absolute path into an OS-dependent macro. Let a
compiler option override it to fit a custom installation if needed.
Backport of I8f469c5741a85b6e8d8f6299a9540b3d64611d2f
BUG: 1138897
Change-Id: I279752f2ec5db1abc25830cb9a23290cc401d517
Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-on: http://review.gluster.org/8828
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
---
Also, moved the backtrace fetching logic to a separate function and
modified it to be able to work under memory-pressure conditions.
Change-Id: Ie38bea425a085770f41831314aeda95595177ece
BUG: 1145093
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/8794
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
---
Backport of 371bb42 (glusterd: Authenticate management handshake
requests) from master.
Management handshake requests, which are used to validate the
op-version supported by the peers, are now only allowed if
- the glusterd doesn't have any other peer, or
- the request was sent by another peer.
This prevents the op-version of a peer from being changed because of
a connection attempt by an invalid peer.
BUG: 1144978
Change-Id: I5a909dad37e9873efe8b75dad41b7af71ce91c3d
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/8819
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
---
The default 'open FD limit' is 1024. As the number of volumes/bricks
increases, the number of brick-to-glusterd socket FDs in glusterd
also increases and runs into the limit.
The solution is to raise the 'open FD' limit in glusterd.
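The limit effective for a running glusterd can be verified from
procfs on Linux:
grep 'open files' /proc/$(pidof glusterd)/limits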
Change-Id: Iaa60b2155df2fa5a0759e054bdebffbc09f63ec1
BUG: 1145095
Signed-off-by: Vijaikumar M <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/8578
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/8807
---
As one of the recommendations for taking a snapshot is not to have an
active geo-replication session, it's better to display an error
saying the session is active when the snapshot create command is
issued.
Change-Id: I94593dbd2659610e033ca316176dda1ac8dc5ce6
BUG: 1145091
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/8461
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/8804
---
creating snapshots.
When creating a snapshot, an LVM volume is created on the backend and
mounted under /var/run/gluster/snaps/... However, this mount does not
inherit the mount options of the original brick acting as the parent
for the snap.
If the snap is restored, this could lead to performance degradations,
functional limitations, or in extreme scenarios even potential data
loss.
Change-Id: I67d70fd83430d83dacc5380c6c928e27fb9c9e1b
BUG: 1145088
Signed-off-by: Vijaikumar M <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/8394
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/8802
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
---
aligned with error messages of info and list.
When a snapshot operation like status, info, or list is performed on
a non-existing snapshot, the error message differs:
For status, the error message displayed is 'Snap not found'.
For list and info, the error message displayed is 'Snapshot does not exist'.
Make the error message consistent in all the places.
Change-Id: I7b241217dba62fda844481731a6858e4ecb12897
BUG: 1145087
Signed-off-by: Vijaikumar M <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/8309
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/8801
Tested-by: Gluster Build System <jenkins@build.gluster.com>
---
problem: The snapshot command fails if one or more bricks are not
thinly provisioned, but the error message is a generic one that is
confusing to the user.
fix: Provide a correct error message in case of failure.
Change-Id: Iad247f966423a8f73ef6da57cab7ed6cddc05861
BUG: 1145086
Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-on: http://review.gluster.org/8377
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/8800
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
---
as-well-as to a particular volume
Problem :
With the current design we can only delete a single snapshot, and
deleting a volume which contains snapshots is not allowed. Because of
that, the user might be forced to delete all the snapshots manually
before being allowed to delete a volume.
Solution:
Following is the interface with which the user can delete all the
snapshots of the system, or those belonging to a particular volume.
Syntax : gluster snapshot delete all
*To delete all the snapshots present in the system
Syntax : gluster snapshot delete volume <volname>
*To delete all the snapshots present in the specified volume.
========================================================================
Sample Output:
Case 1 : Deleting a single snapshot.
[root@snapshot-24 glusterfs]# gluster snapshot delete snap1
Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
snapshot delete: snap1: snap removed successfully
-----------------------------------------------------------------
Case 2 : Deleting all the snapshots in a Volume.
[root@snapshot-24 glusterfs]# gluster snapshot delete volume vol1
Volume (vol1) contains 9 snapshot(s).
Do you still want to continue and delete them? (y/n) y
snapshot delete: snap2: snap removed successfully
snapshot delete: snap3: snap removed successfully
snapshot delete: snap4: snap removed successfully
snapshot delete: snap5: snap removed successfully
.
.
.
-----------------------------------------------------------------
Case 3 : Deleting all the snapshots in a system.
[root@snapshot-24 glusterfs]# gluster snapshot delete all
System contains 4 snapshot(s).
Do you still want to continue and delete them? (y/n) y
snapshot delete: snap7: snap removed successfully
snapshot delete: snap8: snap removed successfully
snapshot delete: snap9: snap removed successfully
snapshot delete: snap10: snap removed successfully
========================================================================
Change-Id: Ifec8e128ab2011cbbba208376b9c92cfbe7d8d71
BUG: 1145083
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/8162
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/8798
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
---
performed on a cluster with op-version less than 30600.
Currently the cli shows the error message 'Another transaction is in
progress. Please try again after sometime' when a snapshot operation
is performed on a cluster with op-version less than 30600.
We need to print the correct error message in this case.
Change-Id: I5f144428d928393c3796bde96ce6e3a40fca8141
BUG: 1145068
Signed-off-by: Vijaikumar M <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/8371
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Sachin Pandit <spandit@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/8796
---
set explicitly.
Problem : Even though the snap-max-hard-limit, snap-max-soft-limit
and auto-delete values were not set explicitly, they were shown in
the output of gluster volume info.
Solution : Check if the value is already present in the dictionary
(meaning it has been set); if the value is not present, consider the
default value.
NOTE : This patch doesn't solve the problem where values set globally
are displayed in gluster volume info.
Change-Id: I61445b3d2a12eb68c38a19bea53b9051ad028050
BUG: 1145020
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/8191
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/8793
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
---
notifications
* As of now, snapview-server polls glusterd (sending rpc requests) to
get the latest list of snapshots at regular, non-configurable time
intervals. Instead, register a callback with glusterd so that
glusterd sends notifications to snapd whenever a snapshot is
created/deleted, and snapview-server can configure itself.
Rebase of the patch http://review.gluster.org/#/c/8150/
Change-Id: Iee2582b1a823d50c79233a41cf2106f458b40691
BUG: 1143961
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/8767
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
---
Requirement:
Snapshot needs an API to fail the CLI if any geo-rep session is
active for that volume.
Solution:
A function "gd_vol_is_geo_rep_active" is provided to check if any
geo-rep session is active for that volume. An in-memory dict called
'gsync_running_slaves' is maintained in the 'volinfo' structure to
keep track of active geo-rep sessions for the volume. The key
'slavenode::slavevol' with value 'running' is added to the dict
whenever geo-rep is started/resumed, and removed when it is
stopped/paused. The 'count' of the dict is thus used to decide
whether geo-rep is active for that volume.
Also added "this->name" to the gf_log calls in the routines this
patch touches.
BUG: 1138952
Change-Id: Ib13aeb509a56edf510651b77e20bf3cc43a3e763
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/8459
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/8645
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
---
Problem:
Geo-replication does a full xsync crawl after snapshot restoration of
slave and master. It does not do a history crawl.
Analysis:
Marker creates the 'marker.tstamp' file when geo-rep is started for
the first time. The virtual extended attribute
'trusted.glusterfs.volume-mark' is maintained, and whenever it is
queried on the gluster mount point, marker fills it on the fly and
returns a combination of the uuid, the ctime of marker.tstamp, and
other fields. So the ctime of marker.tstamp, in other words the
'volume-mark', marks the geo-rep start time when the session is
freshly created.
From the above, after the first filesystem crawl (xsync) done during
the first geo-rep start, stime should never be less than the
'volume-mark'; whenever stime is less than volume-mark, geo-rep does
a full filesystem crawl (xsync).
Root Cause:
When a snapshot is restored, the marker.tstamp file is freshly
created, losing the timestamps it was originally created with.
Solution:
1. Depend on mtime instead of ctime.
2. Restore the mtime and atime of marker.tstamp when a snapshot is
created and restored.
BUG: 1138952
Change-Id: I0e19e1cb2593171b9a2b41d0d303330feb7fd2b3
Signed-off-by: Kotresh H R <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/8401
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Reviewed-on: http://review.gluster.org/8642
---
Added a script check_goto.pl that, when run from the source code
root, will scan all .c files to match the following pattern:
label:
        if (condition)
                goto label;
On finding such a pattern the script prints the file name and the
line number. There are certain cases where the above recursive
pattern is intended; those labels are added to ignore-labels. Thanks
to Vijaikumar Mallikarjuna for the perl script.
Also fixed all such existing errors.
BUG: 1138952
Change-Id: Ie6b75621711736e7e30f2f9d25e50435d58fc1e2
Signed-off-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/8307
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/8637
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
---
Linux defines ENODATA and ENOATTR with the same value, which means
that code can mix up the two without breaking.
FreeBSD does not have ENODATA, and GlusterFS defines it as ENOATTR
just like Linux does.
On NetBSD, ENODATA != ENOATTR, hence we need to check for both values
to get portable behavior.
This is a backport of I003a3af055fdad285d235f2a0c192c9cce56fab8
BUG: 1138897
Change-Id: I272cd53e637993c7fd2ac74bd607001d3581ced7
Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-on: http://review.gluster.org/8634
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
---
NetBSD's FUSE being a pure userland implementation, there is no
/dev/fuse to open. Test /dev/puffs (the kernel fs-in-userland
subsystem supporting FUSE) instead.
This is a backport of Ia65e95c246dc31ea2839cf64d7c851430828542e
BUG: 1138897
Change-Id: I9beb673cff08d429c8ae66a819266f6037086b3e
Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-on: http://review.gluster.org/8633
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
---
On NetBSD and FreeBSD, doing a 'gluster volume start $volume force'
causes the NFS server, quotad, snapd and glustershd to go undetected
by glusterd once the volume has restarted. 'gluster volume status'
shows these processes as 'N' in the online column, even though they
have been launched successfully.
This happens because glusterd attempts to connect to its child
processes in the window between the child's unlink() on the socket in
__socket_server_bind() and the time it calls bind() and listen().
A different scheduling policy may explain why the problem does not
happen on Linux, but it may pop up some day since we make no
guaranteed assumptions here.
This patch works around the issue by introducing a boolean
transport.socket.ignore-enoent option, set by nfs and glustershd,
which prevents ENOENT from being fatal, so the connection is retried
and succeeds later. Behavior of other clients is unaffected.
This is a backport of Ifdc4d45b2513743ed42ee235a5c61a086321644c
BUG: 1138897
Change-Id: I04472f045249c99a9492218ceebfab847474db2d
Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-on: http://review.gluster.org/8630
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
---
Backport from the master branch - http://review.gluster.org/#/c/8246/
- Break away from the previously hard-coded '/var/lib/glusterd';
instead rely on the 'configure' value of 'localstatedir'
- Provide 's/lib/db' as the default working directory for the gluster
management daemon on BSD and Darwin based installations
- loff_t is really off_t on Darwin
- Fix off the warnings generated by clang on FreeBSD/Darwin
- Now 'tests/*' use GLUSTERD_WORKDIR, a common variable for all
platforms.
- Define a proper environment for running tests: set the correct PATH
and LD_LIBRARY_PATH when running tests, so that the desired version
of glusterfs is used, regardless of where it is installed.
(Thanks to manu@netbsd.org for this additional work)
Change-Id: I06e684ac4c26d1e74c9daf76753403ad15f79276
BUG: 1130308
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-on: http://review.gluster.org/8486
Tested-by: Gluster Build System <jenkins@build.gluster.com>
---
This patch adds xml output for the geo-replication status and status
detail commands.
sample:
--------------------------------------------------------------
<geoRep>
<volume>
<name>master</name>
<sessions>
<session>
<session_slave>:2a301d66-b9d2-44b4-b827-d680d67123eb:ssh://XXXXXXXXXX::slave</session_slave>
<pair>
<master_node>localhost.localdomain</master_node>
<master_node_uuid>2a301d66-b9d2-44b4-b827-d680d67123eb</master_node_uuid>
<master_brick>/root/master_b1</master_brick>
<slave>ssh://XXXXXXXXXXX::slave</slave>
<status>faulty</status>
<checkpoint_status>N/A</checkpoint_status>
<crawl_status>N/A</crawl_status>
</pair>
</session>
</sessions>
</volume>
</geoRep>
-------------------------------------------------------------
Change-Id: Ia19dbe751c3ab1ec7cb8923cdd6c8b99c374072f
BUG: 1133464
Signed-off-by: ndarshan <dnarayan@redhat.com>
Reviewed-on: http://review.gluster.org/8089
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Signed-off-by: ndarshan <dnarayan@redhat.com>
Reviewed-on: http://review.gluster.org/8532
Reviewed-by: Sachin Pandit <spandit@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
---
Change-Id: I012899be08a06d39ea5c9fb98a66acf833d7213f
BUG: 1120589
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/8323
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
---
Change-Id: I01afe64685a5794cce9265580c6c5de57a045201
BUG: 1119582
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/8310
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
---
This patch improves the peer identification mechanism in glusterd and
lays down the framework for further improvements, including better
multi-network support in glusterd.
This patch mainly does two things:
1. Extend the peerinfo object to store a list of addresses instead of
a single hostname as it does now. This also includes changes to the
peer-update behaviour of 'peer probe' so that it adds to the list.
2. Improve glusterd_friend_find_by_hostname() to perform better
matching of hostnames. glusterd_friend_find_by_hostname() now does an
initial quick string compare against all the peer addresses known to
glusterd, after which it tries a more thorough search using address
resolution and matching of the struct sockaddrs.
Together, these two changes considerably improve the peer
identification situation in glusterd.
More information regarding the problem this patch attempts to resolve
and the approach chosen can be found at
http://www.gluster.org/community/documentation/index.php/Features/Better_peer_identification
This commit is a squashed commit of the following changes, the
development branch of which can be viewed at,
https://github.com/kshlm/glusterfs/tree/better-peer-identification or,
https://forge.gluster.org/~kshlm/glusterfs-core/kshlms-glusterfs/commits/better-peer-identification
commit 198f86e60fab74faf082eaa02657a4d8f60b92f0
Author: Kaushal M <kaushal@redhat.com>
Date: Tue Jul 15 14:34:06 2014 +0530
Update gluster.8
commit 35d597f3a6b3248373e727f7b7e889c92554d56c
Author: Kaushal M <kaushal@redhat.com>
Date: Tue Jul 15 09:01:01 2014 +0530
Address review comments
https://review.gluster.org/#/c/8238/3
commit 47b5331e17304477322bd2daed5bbed503c34ca1
Merge: c71b12c 78128af
Author: Kaushal M <kaushal@redhat.com>
Date: Tue Jul 15 08:41:39 2014 +0530
Merge branch 'master' into better-peer-identification
commit c71b12c164330e8d19d1df4734ab34ef9a8caad2
Merge: 57bc9de 0f5719a
Author: Kaushal M <kaushal@redhat.com>
Date: Thu Jul 10 19:50:19 2014 +0530
Merge branch 'master' into better-peer-identification
commit 57bc9de9e4f49ff2b1620df9906cda50a3527a25
Author: Kaushal M <kaushal@redhat.com>
Date: Thu Jul 10 19:49:08 2014 +0530
More fixes to review comments
commit 5482cc363a687a9e246a0780ec88acd53e218501
Author: Kaushal M <kaushal@redhat.com>
Date: Thu Jul 10 18:36:40 2014 +0530
Code refactoring in peer-utils based on review comments
https://review.gluster.org/#/c/8238/2/xlators/mgmt/glusterd/src/glusterd-peer-utils.c
commit 89b22c34757178f64d5fbaffa31e6302f841c060
Author: Kaushal M <kaushal@redhat.com>
Date: Thu Jul 10 12:30:00 2014 +0530
Hostnames in peer status
commit 63ebf9485cf50d736cf640238a1ab241671fcaf1
Merge: c8c8fdd f5f9721
Author: Kaushal M <kaushal@redhat.com>
Date: Thu Jul 10 12:06:33 2014 +0530
Merge remote-tracking branch 'origin/master' into better-peer-identification
commit c8c8fdd2104b5b6b8a1af739b1dd952b74e6dd66
Author: Kaushal M <kaushal@redhat.com>
Date: Wed Jul 9 18:35:27 2014 +0530
Hostnames in xml output
commit 732a92a0167ad7b1d70edbc35ebd8307c2766ae1
Author: Kaushal M <kaushal@redhat.com>
Date: Wed Jul 9 15:12:10 2014 +0530
Add hostnames to cli rsp dict during list-friends
commit fcf43e3e317508f0c225024738a988a4af8e9205
Merge: c0e2624 72d96e2
Author: Kaushal M <kaushal@redhat.com>
Date: Wed Jul 9 12:53:03 2014 +0530
Merge branch 'master' into better-peer-identification
commit c0e262416728a3c536a8347a216e471eb2251535
Author: Kaushal M <kaushal@redhat.com>
Date: Mon Jul 7 16:11:19 2014 +0530
Use list_for_each_entry_safe when cleaning peer hostnames
commit 6132e60224eb592f3657e535a12a3e72c772da42
Author: Kaushal M <kaushal@redhat.com>
Date: Mon Jul 7 15:52:19 2014 +0530
Fix crash in gd_add_friend_to_dict
commit 88ffa9a508fd5aac0b2a76e6e76487ce0cab786a
Author: Kaushal M <kaushal@redhat.com>
Date: Mon Jul 7 13:19:44 2014 +0530
gd_peerinfo_destroy -> glusterd_peerinfo_destroy
commit 4b36930a715b1e13cd1a77d136ef1cf78a06d574
Author: Kaushal M <kaushal@redhat.com>
Date: Mon Jul 7 12:50:12 2014 +0530
More refactoring
commit ee559b081d608c6501c10ae22166f26eeb65690e
Author: Kaushal M <kaushal@redhat.com>
Date: Mon Jul 7 12:14:40 2014 +0530
Major refactoring of code based on review comments at
https://review.gluster.org/#/c/8238/1/xlators/mgmt/glusterd/src/glusterd-peer-utils.h
commit e96dbc7bbb05fad2a9c424de41a394b8023fe48d
Merge: 2613d1d 83c09b7
Author: Kaushal M <kaushal@redhat.com>
Date: Mon Jul 7 09:47:05 2014 +0530
Merge remote-tracking branch 'origin/master' into better-peer-identification
commit 2613d1daebff0c56812de821c06ed4c16bb9d447
Merge: b242cf6 9a50211
Author: Kaushal M <kaushal@redhat.com>
Date: Fri Jul 4 15:28:57 2014 +0530
Merge remote-tracking branch 'origin/master' into better-peer-identification
commit b242cf66d95dd3dd5e3975aa430baa6bd74b8a29
Author: Kaushal M <kaushal@redhat.com>
Date: Fri Jul 4 15:08:18 2014 +0530
Fix a silly mistake, if (ctx->req) => if (ctx->req == NULL)
commit c835ed26433830ceed57289143f596cf60421558
Author: Kaushal M <kaushal@redhat.com>
Date: Fri Jul 4 14:58:23 2014 +0530
Fix reverse probe.
commit 9ede17f9329b854b02e8ad159f173244789fd08c
Author: Kaushal M <kaushal@redhat.com>
Date: Fri Jul 4 13:31:32 2014 +0530
Fix friend import for existing peers
commit 891bf74c7350064dfb008d1b7294bcec28d680fd
Author: Kaushal M <kaushal@redhat.com>
Date: Fri Jul 4 13:08:36 2014 +0530
Set first hostname in peerinfo->hostnames to peerinfo->hostname
commit 9421d6a217381a7427a7d84f369280883ca4297a
Author: Kaushal M <kaushal@redhat.com>
Date: Fri Jul 4 12:21:40 2014 +0530
Fix gf_asprintf return val check in glusterd_store_peer_write
commit defac978c1d94011ce8195e311839b9ffce057e7
Author: Kaushal M <kaushal@redhat.com>
Date: Fri Jul 4 11:16:13 2014 +0530
Fix store_retrieve_peers to correctly cleanup.
commit 00a799f5de1121b0cb7421da8285f9407063e1bd
Author: Kaushal M <kaushal@redhat.com>
Date: Fri Jul 4 10:52:11 2014 +0530
Update address list in glusterd_probe_cbk only when needed.
commit 7a628e8a9c562d85709c69cfa13fb1774c521b75
Merge: d191985 dc46d5e
Author: Kaushal M <kaushal@redhat.com>
Date: Fri Jul 4 09:24:12 2014 +0530
Merge remote-tracking branch 'origin/master' into better-peer-identification
commit d1919858e6639d2b54d716a61f662d9752ec5ff1
Author: Kaushal M <kaushal@redhat.com>
Date: Tue Jul 1 18:59:49 2014 +0530
gf_compare_addrinfo -> gf_compare_sockaddr
commit 31d8ef730d408f8d9ba8f504fa648f7dcd59da87
Merge: 93bbede 86ee233
Author: Kaushal M <kaushal@redhat.com>
Date: Tue Jul 1 18:16:13 2014 +0530
Merge remote-tracking branch 'origin/master' into better-peer-identification
commit 93bbedeac5181e29f59b2acd08f638146812ec41
Author: Kaushal M <kaushal@redhat.com>
Date: Tue Jul 1 18:15:16 2014 +0530
Improve glusterd_friend_find_by_hostname
glusterd_friend_find_by_hostname will now do an initial quick search for
the peerinfo performing string comparisions on the given host string. It
follows it with a more thorough match, by resolving the addresses and
comparing addrinfos instead of strings.
commit 2542cdbc45aa9cfcaf1f174686158d5565cdd07b
Author: Kaushal M <kaushal@redhat.com>
Date: Tue Jul 1 17:21:10 2014 +0530
New utility gf_compare_addrinfo
commit 338676e8389a44bd91136eebd110197429c2566c
Author: Kaushal M <kaushal@redhat.com>
Date: Tue Jul 1 14:55:56 2014 +0530
Use gd_peer_has_address instead of strcmp
commit 28d45be51f594328741c44455bd80ac9d64ca501
Merge: 728266e 991dd5e
Author: Kaushal M <kaushal@redhat.com>
Date: Tue Jul 1 14:54:40 2014 +0530
Merge branch 'master' into better-peer-identification
commit 728266eb16d5f5a4bf36266044425ae164337f99
Merge: 7d9b87b 2417de9
Author: Kaushal M <kaushal@redhat.com>
Date: Tue Jul 1 09:55:13 2014 +0530
Merge remote-tracking branch 'origin/master' into better-peer-identification
commit 7d9b87b84955ec17daeaf88a3e7462914039430f
Merge: b890625 e02275c
Author: Kaushal M <kshlmster@gmail.com>
Date: Tue Jul 1 08:41:40 2014 +0530
Merge pull request #4 from vpshastry/better-peer-identification
Better peer identification
commit e02275c52fb83c72ad082c098fd3e432c2b9c526
Merge: 75ee90d b890625
Author: Varun Shastry <vshastry@redhat.com>
Date: Mon Jun 30 16:44:29 2014 +0530
Merge branch 'better-peer-identification' of https://github.com/kshlm/glusterfs into better-peer-identification-kaushal-github
commit 75ee90d2f272e49b94d24c9ca4571e89a83055ff
Author: Varun Shastry <vshastry@redhat.com>
Date: Mon Jun 30 15:36:10 2014 +0530
glusterd: add to the list if the probed uuid pre-exists
Signed-off-by: Varun Shastry <vshastry@redhat.com>
commit b890625d8164c660695daef3285c67979eef723e
Merge: 04c5d60 187a7a9
Author: Kaushal M <kaushal@redhat.com>
Date: Mon Jun 30 11:44:13 2014 +0530
Merge remote-tracking branch 'origin/master' into better-peer-identification
commit 04c5d60cb938c8d94b214689580b40abb1b0ffcd
Merge: 3a5bfa1 e01edb6
Author: Kaushal M <kshlmster@gmail.com>
Date: Sat Jun 28 19:23:33 2014 +0530
Merge pull request #3 from vpshastry/better-peer-identification
glusterd: search through the list of hostnames in the peerinfo
commit 0c64f3346a977f9165ac55a84a1e03c40a7573a7
Merge: e01edb6 3a5bfa1
Author: Varun Shastry <vshastry@redhat.com>
Date: Sat Jun 28 10:43:29 2014 +0530
Merge branch 'better-peer-identification' of https://github.com/kshlm/glusterfs into better-peer-identification-kaushal-github
commit e01edb63153a1008db70b8fa76ae5b535e099326
Author: Varun Shastry <vshastry@redhat.com>
Date: Fri Jun 27 12:29:36 2014 +0530
glusterd: search through the list of hostnames in the peerinfo
Signed-off-by: Varun Shastry <vshastry@redhat.com>
commit 3a5bfa15855e660db2bfde644727371dd2d618cc
Merge: cda6d31 371ea35
Author: Kaushal M <kshlmster@gmail.com>
Date: Fri Jun 27 11:31:17 2014 +0530
Merge pull request #1 from vpshastry/better-peer-identification
glusterd: Add hostname to list instead of replaceing upon update
commit 371ea354f198b4182382d5403c5960c0b2add6b6
Author: Varun Shastry <vshastry@redhat.com>
Date: Fri Jun 27 11:24:54 2014 +0530
glusterd: Add hostname to list instead of replaceing upon update
Signed-off-by: Varun Shastry <vshastry@redhat.com>
commit cda6d3152886623ecbf46baf0048ebe0119b30b6
Author: Kaushal M <kaushal@redhat.com>
Date: Thu Jun 26 19:52:52 2014 +0530
Import address lists
commit 6649b54aa0440130c08e827e0a1d1bbfb840eca9
Author: Kaushal M <kaushal@redhat.com>
Date: Thu Jun 26 19:15:37 2014 +0530
Implement export address list
commit 55990034eead92bc9b936240029e460a4bf152d5
Author: Kaushal M <kaushal@redhat.com>
Date: Thu Jun 26 18:11:59 2014 +0530
Use first address in list to when setting up the peer RPC.
commit a35fde8d19b9988eb04c652fb3a5e4f84d90ad00
Author: Kaushal M <kaushal@redhat.com>
Date: Thu Jun 26 18:03:04 2014 +0530
Properly free addresses on glusterd_peer_destroy
commit 1988081db09ac9205f3dc7268cef8be267f3ce8b
Author: Kaushal M <kaushal@redhat.com>
Date: Thu Jun 26 17:52:35 2014 +0530
Restore peerinfo with address list implemented.
commit 66f524d5749a12f4910dd6b06c9d91f37e1d831e
Author: Kaushal M <kaushal@redhat.com>
Date: Mon Jun 23 13:02:23 2014 +0530
Move out all peer related utilities from glusterd-utils to glusterd-peer-utils
commit 14a2a326a4dff11b55490dca2a14f39320931340
Author: Kaushal M <kaushal@redhat.com>
Date: Tue May 27 12:16:41 2014 +0530
Compilation fix
commit c59cd351d0a102d0d5f3ea9001fd33c4edcb262f
Author: Kaushal M <kaushal@redhat.com>
Date: Mon May 5 12:51:11 2014 +0530
Add store support for hostname list
commit b70325f0beb884ad12645ef40185f0bf6cedd741
Author: Kaushal M <kaushal@redhat.com>
Date: Fri May 2 15:58:07 2014 +0530
Add a hostnames list to glusterd_peerinfo_t
glusterd_peerinfo_new will now init this list and add the given hostname
as the lists first member.
Signed-off-by: Kaushal M <kaushal@redhat.com>
Signed-off-by: Varun Shastry <vshastry@redhat.com>
Change-Id: Ief3c5d6d6f16571ee2fab0a45e638b9d6506a06e
BUG: 1119547
Reviewed-on: http://review.gluster.org/8238
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
---
Change-Id: I96261e7f5cd7b5550d3100750c80190dd932a8ab
BUG: 789278
Signed-off-by: Raghavendra Talur <rtalur@redhat.com>
Reviewed-on: http://review.gluster.org/8252
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
---
While creating a snapshot, update fstype for local bricks only and
not for bricks hosted on other nodes.
Also return ret as 0 when no cleanup is required in post-validation,
so that a post-validation failure is not logged every time a
pre-validation failure happens.
Change-Id: I6364e33cfd9528e0a988ee48f3443239ee884336
BUG: 1111060
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/8272
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
---
Now the --xml option can be used with all snapshot commands. It
produces the cli output in xml form.
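For example (the snapshot name is hypothetical):
gluster snapshot list --xml
gluster snapshot info snap1 --xml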
Change-Id: Ifc0ac31d2a9f91e136e87f3b51a629df7dba94e8
BUG: 1096610
Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-on: http://review.gluster.org/7663
Reviewed-by: Sachin Pandit <spandit@redhat.com>
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
---
Calculation of layouts now considers the size of each brick, so that
smaller bricks don't get an "unfair" share of allocations and start
returning ENOSPC while the larger bricks still have plenty of space.
The observation has been made that some clients might get ENOTCONN when
trying to fetch disk-size information, and end up calculating layouts
differently. The following meta-observations can be made.
(1) This scenario is extremely unlikely in configurations with AFR.
(2) The most likely consequence of this scenario is that some files will
be placed sub-optimally by the client with the obsolete (non-weighted)
layout. They'll still be found anyway, so this isn't a show stopper.
(3) Without this patch it's *guaranteed* that some files will be placed
sub-optimally, because any layout that fails to account for brick sizes
is sub-optimal.
(4) We shouldn't be doing fix-layout from two nodes simultaneously
anyway. That's inefficient at best. Any instances of such behavior are
separate bugs, which should be fixed separately.
(5) In the most extreme edge case, two nodes doing weighted and
non-weighted layout fixes could race and end up creating an internally
inconsistent layout. This condition is still transient; it will be
detected and repaired automatically the next time anyone fetches the
layout. (If it's not that's also a preexisting bug that can show up in
other contexts.)
In conclusion, it's not the purpose of this patch to fix bugs elsewhere
in DHT. Its purpose is to make life incrementally better for users who
add new hardware with larger disks etc. than the older equipment. It's
only one part of an ongoing process to improve layout management and
repair, all the way up to support for multiple hash rings or tiering.
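As a worked example (sizes are hypothetical, assuming allocation
proportional to brick size): with two bricks of 1TB and 3TB,
weight_i = size_i / sum(sizes), so the layout assigns
1/(1+3) = 25% of the hash range to the first brick and
3/(1+3) = 75% to the second, instead of the former equal 50/50 split.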
Change-Id: I05eb6f9eface9cdaf8622e0260c8c7f29020447f
BUG: 1114680
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: http://review.gluster.org/8093
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
---
Two new options have been added to the 'create' command of the cli
interface:
disperse [<count>] redundancy <count>
Both are optional. A dispersed volume is created by specifying, at
least, one of them. If 'disperse' is missing or it's present but
'<count>' does not, the number of bricks enumerated in the command
line is taken as the disperse count.
If 'redundancy' is missing, the lowest optimal value is assumed. A
configuration is considered optimal (for most workloads) when the
disperse count - redundancy count is a power of 2. If the resulting
redundancy is 1, the volume is created normally, but if it's greater
than 1, a warning is shown to the user and he/she must answer yes/no
to continue volume creation. If there isn't any optimal value for
the given number of bricks, a warning is also shown and, if the user
accepts, a redundancy of 1 is used.
If 'redundancy' is specified and the resulting volume is not optimal,
another warning is shown to the user.
A distributed-disperse volume can be created using a number of bricks
multiple of the disperse count.
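A hedged worked example (server and brick paths are hypothetical):
gluster volume create dispvol disperse 6 redundancy 2 \
    server{1..6}:/bricks/dispvol
Here disperse 6 with redundancy 2 gives 6 - 2 = 4 = 2^2, an optimal
configuration, so no warning is expected; with 12 bricks and the same
counts the result would be a distributed-disperse volume of two 4+2
subvolumes.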
Change-Id: Iab93efbe78e905cdb91f54f3741599f7ea6645e4
BUG: 1118629
Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
Reviewed-on: http://review.gluster.org/7782
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|