Commit messages

Change-Id: I7f95682faada16308314bfbf84298b02d1198efa
BUG: 1188184
Signed-off-by: Poornima G <pgurusid@redhat.com>
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
Reviewed-on: http://review.gluster.org/9534
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
This patch ports nfs, shd, quotad & snapd with the approach suggested in
http://www.gluster.org/pipermail/gluster-devel/2014-December/043180.html
Change-Id: I4ea5b38793f87fc85cc9d2cf873727351dedffd2
BUG: 1191486
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/9428
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
The glusterd process, when trying to initialize the default volfile, will
always throw an error if there is no rdma device. Change the log
levels and log messages to something more appropriate.
Change-Id: I75b919581c6738446dd2d5bddb7b7658a91efcf4
BUG: 1188232
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/9559
Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
Change-Id: I4ddc371d01ec763706a168a215410015ee2a3787
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/9578
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
A new column, "SLAVE USER", is introduced in the Status output;
the slave user is not "root" in a non-root Geo-replication setup.
An additional tag <slave_user> is added to the XML output.
BUG: 1180459
Change-Id: Ia48a5a8eb892ce883b9ec114be7bb2d46eff8535
Signed-off-by: Aravinda VK <avishwan@redhat.com>
Reviewed-on: http://review.gluster.org/9409
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kotresh HR <khiremat@redhat.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Tested-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Vijay Bellur <vbellur@redhat.com>
For volumes with replicate, disperse xlators, self-heal daemon should do
healing. This patch provides enable/disable functionality for the xlators to be
part of self-heal-daemon. Replicate already had this functionality with
'gluster volume set cluster.self-heal-daemon on/off'. But this patch makes it
uniform for both types of volumes. Internally it still does 'volume set' based
on the volume type.
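For reference, the pre-existing replicate form looks like this (volume name
is hypothetical):

    gluster volume set myvol cluster.self-heal-daemon off
    gluster volume set myvol cluster.self-heal-daemon on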
Change-Id: Ie0f3799b74c2afef9ac658ef3d50dce3e8072b29
BUG: 1177601
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/9358
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Extend the AFR heal command to include automated split-brain resolution.
This patch [3/3] is the final patch for afr automated split-brain resolution
implementation.
"gluster volume heal <VOLNAME> [full | statistics [heal-count [replica
<HOSTNAME:BRICKNAME>]] |info [healed | heal-failed | split-brain]| split-brain
{bigger-file <FILE> |source-brick <HOSTNAME:BRICKNAME> [<FILE>]}]"
The new additions being:
1.gluster volume heal <VOLNAME> split-brain bigger-file <FILE>
Locates the replica containing the FILE, selects bigger-file as source and
completes heal.
2.gluster volume heal <VOLNAME> split-brain source-brick <HOSTNAME:BRICKNAME>
<FILE>
Selects <FILE> present in <HOSTNAME:BRICKNAME> as source and completes heal.
3.gluster volume heal <VOLNAME> split-brain source-brick <HOSTNAME:BRICKNAME>
Selects all split-brained files in <HOSTNAME:BRICKNAME> as source and completes
heal.
Note: <FILE> can be either the full file name as seen from the root of the
volume (or) the gfid-string representation of the file, which sometimes gets
displayed in the heal info command's output.
Entry/gfid split-brain resolution is not supported.
Example can be found in the test case.
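For illustration, the new invocations might look like the following (volume,
brick and file names are hypothetical):

    gluster volume heal myvol split-brain bigger-file /dir/a.txt
    gluster volume heal myvol split-brain source-brick host1:/bricks/b1 /dir/a.txt
    gluster volume heal myvol split-brain source-brick host1:/bricks/b1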
Change-Id: I4649733922d406f14f28ee9033a5cb627b9538b3
BUG: 1136769
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/9377
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Initialising the transport's list, meant to hold clients connected to
it, on the first connection event is prone to race, especially with the
introduction of multi-threaded event layer.
BUG: 1181203
Change-Id: I6a20686a2012c1f49a279cc9cd55a03b8c7615fc
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/9413
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
This would allow users/developers to associate rpc layer log messages
to the corresponding connection.
Change-Id: I040f79248dced7174a4364d9f995612ed3540dd4
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/8535
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
An rdma write with a payload count greater than one
will fail due to insufficient memory to hold the
buffers in the rpc transport layer. It was expecting
only one vector in the payload, so it could only
decode the first iovec from the payload, and the
rest would be discarded.
Thanks to Raghavendra Gowdappa for fixing the
same.
Change-Id: I82a649a34abe6320d6216c8ce73e69d9b5e99326
BUG: 1171142
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/9247
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
After enabling nfs.mount-udp, mounting a subdir of a volume over
NFS fails, because mountudpproc3_mnt_3_svc() invokes nfs3_rootfh(),
which internally calls mnt3_mntpath_to_export() to resolve the
mount path. mnt3_mntpath_to_export() works only if the mount path
requested is the volume itself; it is not able to resolve the path
if it is a subdir inside the volume.
MOUNT over TCP uses mnt3_find_export() to resolve a subdir path, but
UDP can't use this routine because mnt3_find_export() needs the
req data (of type rpcsvc_request_t), which is available only for
the TCP version of RPC.
FIX:
(1) Use syncop_lookup() framework to resolve the MOUNT PATH by
breaking it into components and resolve component-by-component.
i.e. glfs_resolve_at () API from libgfapi shared object.
(2) If MOUNT PATH is subdir, then make sure subdir export is not
disabled.
(3) Add auth mechanism to respect nfs.rpc-auth-allow/reject and
subdir auth i.e. nfs.export-dir
(4) Enhanced error handling for MOUNT over UDP
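For reference, a subdir mount exercising this path might look like the
following sketch (server, volume and path are hypothetical; mountproto=udp
is the standard Linux mount option for MOUNT over UDP):

    gluster volume set myvol nfs.mount-udp on
    mount -t nfs -o vers=3,mountproto=udp server:/myvol/subdir /mnt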
Change-Id: I42ee69415d064b98af4f49773026562824f684d1
BUG: 1118311
Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
Reviewed-on: http://review.gluster.org/8346
Reviewed-by: soumya k <skoduri@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
* As of now snapview-server is polling (sending rpc requests to glusterd) to
get the latest list of snapshots at some regular time intervals
(non configurable). Instead of that, register a callback with glusterd so that
glusterd sends notifications to snapd whenever a snapshot is created/deleted
and snapview-server can configure itself.
Change-Id: I17a274fd2ab487d030678f0077feb2b0f35e5896
BUG: 1119628
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/8150
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
This patch introduces a cli command to display a specific volume option/all
volume options of a specific volume with the following usage:
Usage: volume get <VOLNAME> <key|all>
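For example (volume and option names are illustrative):

    gluster volume get myvol all
    gluster volume get myvol cluster.self-heal-daemon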
Change-Id: Ic88edb33c5509d7a37cd5ade6341e45e3cdbf59d
BUG: 983317
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/8305
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
- Break away from the previously hard-coded '/var/lib/glusterd';
  instead rely on the 'configure' value from 'localstatedir'
- Provide 's/lib/db' as default working directory for gluster
management daemon for BSD and Darwin based installations
- loff_t is really off_t on Darwin
- fix-off the warnings generated by clang on FreeBSD/Darwin
- Now 'tests/*' use GLUSTERD_WORKDIR a common variable for all
platforms.
- Define proper environment for running tests, define correct PATH
and LD_LIBRARY_PATH when running tests, so that the desired version
of glusterfs is used, regardless where it is installed.
(Thanks to manu@netbsd.org for this additional work)
Change-Id: I2339a0d9275de5939ccad3e52b535598064a35e7
BUG: 1111774
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-on: http://review.gluster.org/8246
Tested-by: Gluster Build System <jenkins@build.gluster.com>
rpc_clnt_disable() sets rpc->conn->trans to NULL, hence we should not
call rpc_transport_unref() afterwards. I moved it before the
rpc_clnt_disable() call, but I am not sure it should be called at all,
perhaps it should just go away.
BUG: 764655
Change-Id: I488d0207494e3a3fad52e64e67b2e740b236b864
Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-on: http://review.gluster.org/8393
Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
This can be seen as below,
># cat $META/graphs/active/vol-client-0/private |grep ping_msgs_sent
ping_msgs_sent = 2
># cat $META/graphs/active/vol-client-0/private |grep "^msgs_sent"
msgs_sent = 13
where $META is /<fuse-mountpt>/.meta
Change-Id: I2107ec2b045bac701377760635e18758adb943a3
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/8285
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
* A wrong memory allocator was used to (de)allocate nodes (not data in
them) of rb tree. This patch uses default allocator, since that suits
our purpose.
* Fix reference counting of client, though hitting the codepath
containing this bug is highly unlikely.
Change-Id: I7692097351d6e54288fee01da5af18e761fd0e8c
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
BUG: 1067256
Reviewed-on: http://review.gluster.org/7816
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
This patch adds xml output for geo-replication status
and status detail command.
sample:
--------------------------------------------------------------
<geoRep>
<volume>
<name>master</name>
<sessions>
<session>
<session_slave>:2a301d66-b9d2-44b4-b827-d680d67123eb:ssh://XXXXXXXXXX::slave</session_slave>
<pair>
<master_node>localhost.localdomain</master_node>
<master_node_uuid>2a301d66-b9d2-44b4-b827-d680d67123eb</master_node_uuid>
<master_brick>/root/master_b1</master_brick>
<slave>ssh://XXXXXXXXXXX::slave</slave>
<status>faulty</status>
<checkpoint_status>N/A</checkpoint_status>
<crawl_status>N/A</crawl_status>
</pair>
</session>
</sessions>
</volume>
</geoRep>
-------------------------------------------------------------
Change-Id: Ia19dbe751c3ab1ec7cb8923cdd6c8b99c374072f
BUG: 1121518
Signed-off-by: ndarshan <dnarayan@redhat.com>
Reviewed-on: http://review.gluster.org/8089
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
This is to avoid indefinite recursion of the following kind, that could
lead to a stack overflow:
rpc_clnt_start_ping() -> rpc_clnt_ping() -> rpc_clnt_submit() ->
rpc_clnt_start_ping() -> rpc_clnt_ping() -> rpc_clnt_submit() ...
and so on,
since it is possible that before rpc_clnt_start_ping() is called a
second time by the thread executing this codepath, the response to the
previous ping request could ALWAYS arrive and cause the epoll thread to
reset conn->ping_started to 0.
This patch also fixes the issue of excessive ping traffic, which was
due to the client sending one ping rpc for every fop in the worst case.
Also removed dead code in glusterd.
Change-Id: I7c5e6ae3b1c9d23407c0a12a319bdcb43ba7a359
BUG: 1116243
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/8257
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
- Provides a working Gluster Management Daemon, CLI
- Provides a working GlusterFS server, GlusterNFS server
- Provides a working GlusterFS client
- execinfo port from FreeBSD is moved into ./contrib/libexecinfo
for ease of portability on NetBSD. (FreeBSD 10 and OSX provide
execinfo natively)
- More portability cleanups for Darwin, FreeBSD and NetBSD
- Provides a new rc script for FreeBSD
Change-Id: I8dff336f97479ca5a7f9b8c6b730051c0f8ac46f
BUG: 1111774
Original-Author: Mike Ma <mikemandarine@gmail.com>
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-on: http://review.gluster.org/8141
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Access to a volume is now controlled by the following options, based on
whether SSL is enabled or not.
* server.ssl-allow: get identity from certificate, no password needed
* auth.allow: get identity and matching password from command line
It is not possible to allow both simultaneously, since the connection
itself is either using SSL or it isn't.
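A rough sketch of the SSL-identity mode (volume name and identities are
hypothetical):

    # identities are taken from certificates; no password needed
    gluster volume set myvol server.ssl-allow alice,bob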
Change-Id: I5a5be66520f56778563d62f4b3ab35c66cc41ac0
BUG: 1114604
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: http://review.gluster.org/3695
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
NFS subdir authentication doesn't correctly handle multi-homed
(host with multiple NIC having multiple IP addr) OR multi-protocol
(IPv4 and IPv6) network addresses.
When user/admin sets HOSTNAME in gluster CLI for NFS subdir auth,
mnt3_verify_auth() routine does not iterate over all the resolved
n/w addrs returned by getaddrinfo() n/w API. Instead, it just tests
with the one returned first.
1. Iterate over all the n/w addrs (linked list) returned by getaddrinfo().
2. Move the n/w mask calculation part to mnt3_export_fill_hostspec()
instead of doing it in mnt3_verify_auth() i.e. calculating for each
mount request. It does not change for MOUNT req.
3. Integrate "subnet support code rpc-auth.addr.<volname>.allow"
and "NFS subdir auth code" to remove code duplication.
Change-Id: I26b0def52c22cda35ca11766afca3df5fd4360bf
BUG: 1102293
Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
Reviewed-on: http://review.gluster.org/8048
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Change-Id: Ifd8099bc235eb395e8fd9ead3197bef71c78042b
BUG: 1109812
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/8079
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
refresh snaplist
BUG: 1105439
Change-Id: I4bb312a53d88f6f4955e69a3ef2b4955ec17f26d
Signed-off-by: Anand Subramanian <anands@redhat.com>
Reviewed-on: http://review.gluster.org/8001
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: Iae85cdfc223875688ea17155fffcf2a3a435d245
BUG: 764890
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/8044
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Change-Id: I554b9bcacf6c8acd6dffea0a485fc50e82c3dc04
BUG: 764890
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/8043
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Problem:
If a volume is set for rpc-auth.addr.<volname>.reject with the value
"host1", ideally the NFS mount from "host1" should FAIL. It
works as expected. But when the volume is RESET, the previous
value set for auth-reject should be cleared, and a further NFS mount
from "host1" should PASS. But it FAILs because of a stale value
in the dict for the key "rpc-auth.addr.<volname>.reject".
It does not impact the rpc-auth.addr.<volname>.allow key because,
each time the NFS volfile gets generated, the allow key will have "*"
as the default value. But the reject key does not have a default value.
FIX:
Delete the OLD value for the key irrespective of anything. Add the
NEW value for the key, if and only if it is SET in the
reconfigured new volfile.
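A sequence that reproduces the problem (volume and hostname are
hypothetical):

    gluster volume set myvol rpc-auth.addr.myvol.reject host1
    # NFS mount from host1 FAILs, as expected
    gluster volume reset myvol
    # before this fix, the stale reject value made a mount from
    # host1 still FAIL; with the fix it PASSes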
Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
Change-Id: Ie80bd16cd1f9e32c51f324f2236122f6d118d860
BUG: 1103050
Reviewed-on: http://review.gluster.org/7931
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
DRC in NFS causes memory bloat and there are known memory corruptions.
It would be good to disable drc by default till the feature is stable.
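Anyone who still wants DRC can presumably re-enable it per volume
(volume name is hypothetical):

    gluster volume set myvol nfs.drc on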
Change-Id: I93db6ef5298672c56fb117370bb582a5e5550b17
BUG: 1105524
Original-patch-by: Santosh Kumar Pradhan <spradhan@redhat.com>
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/8004
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Currently rpc_connect calls the notification function on failure in the
same thread; the glusterd notification holds the big_lock, and hence
the big_lock is released before rpc_connect.
In snapshot creation, releasing the big-lock before completing the
operation can cause problems like deadlock or memory corruption.
Bricks are started as part of the snapshot create operation.
brick_start releases the big_lock when doing brick_connect, and this
might cause a glusterd crash.
There is a similar issue in bug# 1088355.
The solution is to let the event handler handle the failure rather than
doing it in rpc_connect.
Change-Id: I088d44092ce845a07516c1d67abd02b220e08b38
BUG: 1101507
Signed-off-by: Vijaikumar M <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/7843
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
When there is too much IO happening, the brick process epoll thread
will be busy and fails to respond to the glusterd ping packet within
30sec.
Also, the epoll thread can be blocked by a big-lock.
The solution is to disable the ping-timer by default and only enable it
wherever required.
Later, when the epoll thread model is changed and made lighter,
we need to revert this change. http://review.gluster.com/3842 is
one such approach.
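Where a ping-timer is still wanted, it can presumably be enabled per volume
via the usual volume option (volume name and value are illustrative; the
value is in seconds):

    gluster volume set myvol network.ping-timeout 42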
Change-Id: I7f80ad3eb00f7d9c4d4527305932f7cf4920e73f
BUG: 1097224
Signed-off-by: Vijaikumar M <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/7753
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
While accessing the procedures of a given RPC program in
rpcsvc_get_program_vector_sizer(), boundary conditions were not checked,
which could cause a buffer overflow and subsequently a SEGV.
Make sure rpcsvc_actor_t arrays have numactors number of actors.
FIX:
Validate the RPC procedure number before fetching the actor.
Special Thanks to: Murray Ketchion, Grant Byers
Change-Id: I8b5abd406d47fab8fca65b3beb73cdfe8cd85b72
BUG: 1096020
Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
Reviewed-on: http://review.gluster.org/7726
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
glfs object.
Defined new APIs in the libgfapi module, given a glfs object,
* to send handshake RPC call to glusterd process to fetch UUID of the volume
* store it in the glusterfs_context linked to the glfs object.
* to parse the UUID from its canonical string format into a 16-byte array
before sending it to the libgfapi users.
Defined a RPC call in glusterd which can be used to query volume related
info by other processes using 'clnt_handshake_procs'.
Note - Currently this RPC call to glusterd process is used only to fetch UUID.
But it can be extended to get other volume related structures as well.
In addition to the above, defined a new variable to keep track of such handshake
RPCs still in progress to make sure all the corresponding RPC callbacks have been
processed before libgfapi returns the glfs object initialized.
Also bumping up the GFAPI current version number since there is a new API
"glfs_get_volume_id" defined and exposed by libgfapi as part of these changes.
Change-Id: I303f76d7177d32d25bdb301b1dbcf5cd73f42807
BUG: 1090363
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
Reviewed-on: http://review.gluster.org/7218
Reviewed-by: Anand Avati <avati@redhat.com>
Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
- Removed an unnecessary ref on rpc_clnt object.
- Removed saved_frames_delete function, which was unused.
Change-Id: Ie8a9c4bb20c1fd59744b64b56eb043eca095e5e3
BUG: 1094655
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/7678
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
While creating a volume or adding a brick, validation against _POSIX_PATH_MAX is
done on the absolute pathname instead of the relative pathname, due to which a
brickpath shorter than _POSIX_PATH_MAX may also fail the validation if the
directory length is greater than (_POSIX_PATH_MAX - strlen(brickpath/volume name)).
This fix also addresses one cli response message correction, which says the
volume file is too long instead of the brick path being too long (when the
brickpath length validation doesn't fail and the volfile length validation fails).
It is also important to note that with the current design of volfile naming, it
cannot be guaranteed that the volname and brickpath can have a maximum of
_POSIX_PATH_MAX characters.
Change-Id: I1283d1f9dea96ae797620002c8723719f26a866d
BUG: 1085330
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/7420
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
This patch refactors the existing client ping timer implementation, and makes
use of the common code for implementing both client ping timer and the
glusterd ping timer.
A new gluster rpc program for ping is introduced. The ping timer is only
started for peers that have this new program. The default glusterd ping
timeout is 30 seconds. It is configurable by setting the option
'ping-timeout' in glusterd.vol.
Also, this patch introduces changes in the glusterd-handshake path. The client
programs for a peer are now set in the callback of dump_versions, for both
the older handshake and the newer op-version handshake. This is the only place
in the handshake process where we know what programs a peer supports.
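A glusterd.vol carrying the new option might look like this minimal sketch
(only the ping-timeout line is introduced by this patch; the rest is the
stock management volume definition):

    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option ping-timeout 30
    end-volume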
Change-Id: I035815ac13449ca47080ecc3253c0a9afbe9016a
BUG: 1038261
Signed-off-by: Vijaikumar M <vmallika@redhat.com>
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/5202
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
This patch adds a new
'gluster volume barrier <VOLNAME> {enable|disable}'
cli command. This helps in testing the brick op code path when testing
the barrier xlator.
This patch can be reverted later if not required for end users.
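For example (volume name is hypothetical):

    gluster volume barrier myvol enable
    gluster volume barrier myvol disable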
Change-Id: Icd86a2d13e7f276dda1ecbb2593d60638ece7dcd
BUG: 1060002
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/6958
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
This patch introduces a new 'barrier' brick-op which will be used to
activate/deactivate the barriering on the bricks. This includes
barriering in the barrier xlator and in the changelog xlator. All the
required code has been added, including a bricks select function, a payload
builder and a brick-op handler.
Change-Id: I91d9d77f691c2e89823f7dc4e84900ec40dc4dd2
BUG: 1060002
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/6943
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
During a peer-handshake, after the volumes have synced and the list of
missed snapshots has synced, the node will perform the pending deletes
and restores on this list. At this point, the current snapshot list in
the node will be updated, and hence in case of conflicts arising during
snapshot handshake, the peer hosting the bricks will be given precedence.
Likewise, if there is a conflict and both peers are in the same state,
i.e. either both are hosting bricks or both are not hosting bricks, then
a decision can't be taken and a peer-reject will happen.
perform the above task:
Step 1: Start.
Step 2: Check if the peer is missing a delete on the said snap.
If yes, goto step 6.
Step 3: Check if there is a conflict between the peer's data and the
local snap. If no, goto step 5.
Step 4: As there is a conflict, check if both the peer and the local nodes
are hosting bricks. Based on the results perform the following:
Peer Hosts Bricks    Local Node Hosts Bricks    Action
Yes                  Yes                        Goto Step 7
No                   No                         Goto Step 7
Yes                  No                         Goto Step 8
No                   Yes                        Goto Step 6
Step 5: Check if the local node is missing the peer's data.
If yes, goto step 9.
Step 6: It's a no-op. Goto step 10
Step 7: Peer Reject. Goto step 10
Step 8: Delete local node's data.
Step 9: Accept Peer Data.
Step 10: Stop
Change-Id: I79be0f0f5f2a4f5c72277a4e77c2be732af432e1
BUG: 1061685
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/7525
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
After the drc cache gets full, the message "DRC failed to detect duplicates"
keeps getting printed in the log.
The root cause is that drc_compare_reqs used the wrong compare type. This
function should use type drc_cache_op_t as its input type, since all
rbtree-related code (except the function rpcsvc_drc_lookup) in the drc cache
passes drc_cache_op_t as the compare type. Only rpcsvc_drc_lookup used type
rpcsvc_request_t; it has been modified too.
Change-Id: I925c097debe6b82f267986961fd4e7755f3de9af
BUG: 1089676
Signed-off-by: Yuan Ding <beback198611@gmail.com>
Reviewed-on: http://review.gluster.org/7519
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
In a handshake, create a union of the missed_snap_lists of the two peers.
If an entry is present, it is a no-op.
If an entry is pending and the peer entry is done, mark own entry as done.
If an entry is done and the peer entry is pending, it is a no-op.
If it is a new entry, add it.
Change-Id: Idbfa49cc34871631ba8c7c56d915666311024887
BUG: 1061685
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/7453
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Avoid modifying autogenerated files and keeping them in
repository - autogenerate them on demand from ".x" files
Change-Id: I2cdb1fe9b99768ceb80a8cb100fa00bd1d8fe2c6
BUG: 1090807
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-on: http://review.gluster.org/7526
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Problem:
When iozone is in progress, number of blocking inodelks sometimes becomes greater
than the threshold number of rpc requests allowed for that client
(RPCSVC_DEFAULT_OUTSTANDING_RPC_LIMIT). Subsequent requests from that client
will not be read until all the outstanding requests are processed and replied
to. But because no more requests are read from that client, unlocks on the
already granted locks will never come; thus the number of outstanding requests
would never come down. This leads to a ping-timeout on the client.
Fix:
Do not account INODELK/ENTRYLK/LK for throttling
Change-Id: I59c6b54e7ec24ed7375ff977e817a9cb73469806
BUG: 1089470
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/7531
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
git@forge.gluster.org:~schafdog/glusterfs-core/osx-glusterfs
Working functionality on MacOSX
- GlusterD (management daemon)
- GlusterCLI (management cli)
- GlusterFS FUSE (using OSXFUSE)
- GlusterNFS (without NLM - issues with rpc.statd)
Change-Id: I20193d3f8904388e47344e523b3787dbeab044ac
BUG: 1089172
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Signed-off-by: Dennis Schafroth <dennis@schafroth.com>
Tested-by: Harshavardhana <harsha@harshavardhana.net>
Tested-by: Dennis Schafroth <dennis@schafroth.com>
Reviewed-on: http://review.gluster.org/7503
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
RFE: Support wildcard in "nfs.rpc-auth-allow" and
"nfs.rpc-auth-reject". e.g.
*.redhat.com
192.168.1[1-5].*
192.168.1[1-5].*, *.redhat.com, 192.168.21.9
Along with wildcard, support for subnetwork or IP range e.g.
192.168.10.23/24
The option will be validated for following categories:
1) Anonymous i.e. "*"
2) Wildcard pattern i.e. string containing any ('*', '?', '[')
3) IPv4 address
4) IPv6 address
5) FQDN
6) subnetwork or IPv4 range
Currently this does not support IPv6 subnetwork.
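Putting the patterns above together (volume name is hypothetical):

    gluster volume set myvol nfs.rpc-auth-allow "192.168.1[1-5].*, *.redhat.com"
    gluster volume set myvol nfs.rpc-auth-reject "192.168.10.23/24"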
Change-Id: Iac8caf5e490c8174d61111dad47fd547d4f67bf4
BUG: 1086097
Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
Reviewed-on: http://review.gluster.org/7485
Reviewed-by: Poornima G <pgurusid@redhat.com>
Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
This is the initial patch for the Snapshot feature. Current patch
includes following features:
* Snapshot create
* Snapshot delete
* Snapshot restore
* Snapshot list
* Snapshot info
* Snapshot status
* Snapshot config
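A minimal walk-through of the new commands might look like this (snapshot
and volume names are hypothetical):

    gluster snapshot create snap1 myvol
    gluster snapshot list myvol
    gluster snapshot info snap1
    gluster snapshot restore snap1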
Change-Id: I2f46920c0d61c515f6a60e0f8b46fff886d9f6a9
BUG: 1061685
Signed-off-by: shishir gowda <sgowda@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Signed-off-by: Vijaikumar M <vmallika@redhat.com>
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
Signed-off-by: Joseph Fernandes <josferna@redhat.com>
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/7128
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
The GlusterFS protocol currently uses AUTH_GLUSTERFS_V2 in the RPC/AUTH
header. This header contains the uid, gid and auxiliary groups of the
user/process that accesses the Gluster Volume.
The AUTH_GLUSTERFS_V2 structure allows up to 65535 auxiliary groups to
be passed on. Unfortunately, the RPC/AUTH header is limited to 400 bytes
by the RPC specification: http://tools.ietf.org/html/rfc5531#section-8.2
In order to not cause complete failures on the client-side when trying
to encode an AUTH_GLUSTERFS_V2 that would result in more than 400 bytes,
we can calculate the expected size of the other elements:
1 | pid
1 | uid
1 | gid
1 | groups_len
XX | groups_val (GF_MAX_AUX_GROUPS=65535)
1 | lk_owner_len
YY | lk_owner_val (GF_MAX_LOCK_OWNER_LEN=1024)
----+-------------------------------------------
5 | total xdr-units
one XDR-unit is defined as BYTES_PER_XDR_UNIT = 4 bytes
MAX_AUTH_BYTES = 400 is the maximum, this is 100 xdr-units.
XX + YY can be 95 to fill the 100 xdr-units.
Note that the on-wire protocol has tighter requirements than the
internal structures. It is possible for xlators to use more groups and
a bigger lk_owner than can be sent by a GlusterFS-client.
This change prevents overflows when allocating the RPC/AUTH header. Two
new macros are introduced to calculate the number of groups that fit in
the RPC/AUTH header, when taking the size of the lk_owner in account. In
case the list of groups exceeds the maximum possible, only the first
groups are passed over the RPC/GlusterFS protocol to the bricks.
A warning is added to the logs, so that most system administrators will
get informed.
Reducing the number of groups is not a new invention. The
RPC/AUTH header (AUTH_SYS or AUTH_UNIX) that NFS uses has a limit of 16
groups. Most, if not all, NFS-clients will reduce any bigger number of
groups to 16. (nfs.server-aux-gids can be used to work around the limit
of 16 groups, but the Gluster NFS-server will be limited to a maximum of
93 groups, or fewer in case the lk_owner structure contains more items.)
Change-Id: I8410e59d0fd246d601b54b961d3ae9cb5a858c10
BUG: 1053579
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/7202
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
rpc_clnt object is destroyed after the corresponding transport object is
destroyed. But rpc_clnt_reconnect, a timer driven function, refers to
the transport object beyond its 'life'. Instead, using the embedded
connection object prevents use after free problem wrt transport object.
Also, access transport object under conn->lock.
Change-Id: Iae28e8a657d02689963c510114ad7cb7e6764e62
BUG: 962619
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/6751
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Change-Id: Ib2bf6dd564fb7e754d5441c96715b65ad2e21441
BUG: 1065611
Signed-off-by: Vijay Bellur <vbellur@redhat.com>
Reviewed-on: http://review.gluster.org/7007
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
* As of now clients mounting within the storage pool using that machine's
ip/hostname are trusted clients (i.e clients local to the glusterd).
* Be careful when the request itself comes in as nfsnobody (ex: posix tests).
So move the squashing part to protocol/server when it creates a new frame
for the request, instead of auth part of rpc layer.
* For nfs servers do root-squashing without checking if it is trusted client,
as all the nfs servers would be running within the storage pool, hence will
be trusted clients for the bricks.
* Provide one more option for mounting which actually says whether
  root-squash should/should not happen. This value is given priority only
  for trusted clients; for non-trusted clients, the volume option takes
  priority. But for trusted clients, if root-squash should not happen, then
  they have to be mounted with the root-squash=no option (see the example
  after this list). (This is done because blocking root-squashing by default
  for the trusted clients will cause problems for smb and UFO clients, for
  which the requests have to be squashed if the option is enabled).
* For geo-replication and defrag clients do not do root-squashing.
* Introduce a new option in open-behind for doing read after successful open.
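For a trusted client that must not be squashed, the mount might look like
the following (server and volume names are hypothetical):

    mount -t glusterfs -o root-squash=no server:/myvol /mnt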
Change-Id: I8a8359840313dffc34824f3ea80a9c48375067f0
BUG: 954057
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/4863
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
With this patch we are replacing the existing cluster-wide
lock taken on glusterds across the cluster, with volume locks
which are also taken on glusterds across the cluster, but are
volume specific. So with the volume locks we are able to perform
more than one gluster operation at the same time, as long as the
operations are being performed on different volumes.
We maintain a global list of volume-locks (using a dict for a list)
where the key is the volume name, and which saves the uuid of the
originator glusterd. These locks are held and released per volume
transaction.
In order to achieve multiple gluster operations occurring at the
same time, we also separate opinfos in the op-state-machine, as a
part of this patch. To do so, we generate a unique transaction-id
(uuid) per gluster transaction. An opinfo is then associated with
this transaction id, which is used throughout the transaction. We
maintain a run-time global list(using a dict) of transaction-ids,
and their respective opinfos to achieve this.
Upstream Feature Page: http://www.gluster.org/community/documentation/index.php/Features/glusterd-volume-locks
Change-Id: Iaad505a854bac8de8f83beec0357eb6cde3f7ea8
BUG: 1011470
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/5994
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>