Newer versions of gcc complain loudly about the incorrect usage of
'inline'.
These changes have been part of commit a3cb38e3 in other branches. There
is no need to backport everything from that commit, but silencing the
warnings would be good.
Change-Id: I2319d9682b47e8533e9c179056a5152f75aeddc3
BUG: 1244147
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/11711
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
This is related to CVE-2014-3566 a.k.a. POODLE.
http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3566
POODLE is specific to CBC cipher modes in SSLv3. Because there is no
way to prevent SSLv3 fallback on a system with an unpatched version of
OpenSSL, users of such systems can only be protected by disallowing CBC
modes. The default cipher-mode specification in our code has been
changed accordingly. Users can still set their own cipher modes if they
wish. To support them, the ssl-authz.t test script provides an example
of how to combine the CBC exclusion with other criteria.
Cherry picked from commit 378a0a19d95e552220d71b13be685f4772c576cd:
> Change-Id: Ib1fa547082fbb7de9df94ffd182b1800d6e354e5
> BUG: 1155328
> Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
> Reviewed-on: http://review.gluster.org/8962
> Tested-by: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
ssl-auth.t has been modified to not set the auth.ssl-allow option. This
option is not available in the 3.5 branch.
Change-Id: Ib1fa547082fbb7de9df94ffd182b1800d6e354e5
BUG: 1157661
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/8979
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
In __socket_proto_state_machine(), when parsing RPC records that
contain multiple fragments, only the state of the parsing process was
changed; the memory needed to coalesce the multiple fragments was never
processed.
Cherry picked from commit fb6702b7f8ba19333b7ba4af543d908e3f5e1923:
> Change-Id: I5583e578603bd7290814a5d26885b31759c73115
> BUG: 1139598
> Signed-off-by: Gu Feng <flygoast@126.com>
> Reviewed-on: http://review.gluster.org/8662
> Tested-by: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: Niels de Vos <ndevos@redhat.com>
> Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
> Tested-by: Raghavendra G <rgowdapp@redhat.com>
Change-Id: I5583e578603bd7290814a5d26885b31759c73115
BUG: 1136221
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/8848
Tested-by: Gluster Build System <jenkins@build.gluster.com>
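
The record-marking rules involved are small but easy to get wrong: each
RPC fragment is prefixed with a 4-byte header (RFC 5531) whose most
significant bit flags the last fragment and whose lower 31 bits carry
the fragment length. A simplified sketch of parsing that header, not
the actual __socket_proto_state_machine() code:

#include <arpa/inet.h>
#include <stdint.h>
#include <string.h>

#define RPC_LASTFRAG 0x80000000U

/* parse a 4-byte record-marking header already read from the wire */
static void
parse_fragment_header (const unsigned char hdr[4], uint32_t *frag_size,
                       int *is_last)
{
        uint32_t be;

        memcpy (&be, hdr, sizeof (be));
        be = ntohl (be);                     /* big-endian on the wire */

        *is_last   = !!(be & RPC_LASTFRAG);  /* MSB: last fragment? */
        *frag_size = be & ~RPC_LASTFRAG;     /* lower 31 bits: length */
}

Only when is_last is seen should the coalesced payload be handed to the
RPC layer; advancing the parser state alone, as before this fix, leaves
the fragments un-merged.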
While accessing the procedures of a given RPC program in
rpcsvc_get_program_vector_sizer(), boundary conditions were not being
checked, which could cause a buffer overflow and subsequently a SEGV.
Make sure rpcsvc_actor_t arrays have numactors number of actors.
FIX:
Validate the RPC procedure number before fetching the actor.
Upstream main review: http://review.gluster.org/7726
BUG: 1096020
Change-Id: Iaf207ee976cb56fa9a554ec82c9eab36d3b289ed
Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
Reviewed-on: http://review.gluster.org/8228
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
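
A minimal sketch of the bounds check the FIX describes; the types are
simplified stand-ins for the real rpcsvc structures:

#include <stddef.h>
#include <stdint.h>

typedef struct {
        void (*actor) (void *req);       /* handler for one procedure */
} rpcsvc_actor_t;

typedef struct {
        rpcsvc_actor_t *actors;          /* array of numactors entries */
        uint32_t        numactors;
} rpcsvc_program_t;

/* validate the procedure number before indexing the actor table */
static rpcsvc_actor_t *
program_actor_sketch (rpcsvc_program_t *prog, uint32_t procnum)
{
        if (prog == NULL || procnum >= prog->numactors)
                return NULL;             /* out-of-range proc: reject */

        return &prog->actors[procnum];
}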
RFE: Support wildcard in "nfs.rpc-auth-allow" and
"nfs.rpc-auth-reject". e.g.
*.redhat.com
192.168.1[1-5].*
192.168.1[1-5].*, *.redhat.com, 192.168.21.9
Along with wildcard, support for subnetwork or IP range e.g.
192.168.10.23/24
The option will be validated for the following categories:
1) Anonymous i.e. "*"
2) Wildcard pattern i.e. string containing any ('*', '?', '[')
3) IPv4 address
4) IPv6 address
5) FQDN
6) subnetwork or IPv4 range
Currently this does not support IPv6 subnetwork.
Cherry-picked from 00e247ee44067f2b3e7ca5f7e6dc2f7934c97181:
> Change-Id: Iac8caf5e490c8174d61111dad47fd547d4f67bf4
> BUG: 1086097
> Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
> Reviewed-on: http://review.gluster.org/7485
> Reviewed-by: Poornima G <pgurusid@redhat.com>
> Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
> Tested-by: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: I18ef0a914cd403c1f9e66d1b03ecd29465cbce95
BUG: 1115369
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/8223
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
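
For illustration, the wildcard category can be matched with the POSIX
fnmatch(3) shell-pattern matcher; a standalone sketch, not the actual
validation code (pattern and address are examples):

#include <fnmatch.h>
#include <stdio.h>

int
main (void)
{
        const char *pattern = "192.168.1[1-5].*";
        const char *client  = "192.168.13.7";

        if (fnmatch (pattern, client, 0) == 0)
                printf ("%s matches %s\n", client, pattern);
        return 0;
}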
NFS subdir authentication doesn't correctly handle multi-homed
(a host with multiple NICs and multiple IP addresses) or multi-protocol
(IPv4 and IPv6) network addresses.
When a user/admin sets a HOSTNAME in the gluster CLI for NFS subdir
auth, the mnt3_verify_auth() routine does not iterate over all the
resolved network addresses returned by getaddrinfo(). Instead, it just
tests the one returned first.
1. Iterate over all the network addresses (linked list) returned by
getaddrinfo(); see the sketch after this entry.
2. Move the network mask calculation to mnt3_export_fill_hostspec()
instead of doing it in mnt3_verify_auth(), i.e. for each mount
request; the mask does not change per MOUNT request.
3. Integrate "subnet support code rpc-auth.addr.<volname>.allow"
and "NFS subdir auth code" to remove code duplication.
Cherry-picked from commit d3f0de90d0c5166e63f5764d2f21703fd29ce976:
> Change-Id: I26b0def52c22cda35ca11766afca3df5fd4360bf
> BUG: 1102293
> Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
> Reviewed-on: http://review.gluster.org/8048
> Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
> Tested-by: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: Niels de Vos <ndevos@redhat.com>
Change-Id: Ie92a8ac602bec2cd77268acb7b23ad8ba3c52f5f
BUG: 1112980
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/8198
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
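
A sketch of the iteration fix described in item 1, with a hypothetical
auth_matches() callback standing in for the real comparison logic in
mnt3_verify_auth():

#include <netdb.h>
#include <sys/socket.h>

/* check every resolved address instead of only the first one */
static int
verify_host_sketch (const char *host,
                    int (*auth_matches) (struct sockaddr *))
{
        struct addrinfo *res = NULL;
        struct addrinfo *ai  = NULL;
        int              ok  = 0;

        if (getaddrinfo (host, NULL, NULL, &res) != 0)
                return 0;                    /* resolution failed: deny */

        for (ai = res; ai != NULL; ai = ai->ai_next) {
                if (auth_matches (ai->ai_addr)) {
                        ok = 1;              /* any matching address wins */
                        break;
                }
        }

        freeaddrinfo (res);
        return ok;
}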
Problem:
If a volume is set for rpc-auth.addr.<volname>.reject with the value
"host1", ideally an NFS mount from "host1" should FAIL. That works as
expected. But when the volume is RESET, the value previously set for
auth-reject should go away, and a further NFS mount from "host1"
should PASS. It FAILs instead, because of a stale value in the dict
for the key "rpc-auth.addr.<volname>.reject".
This does not impact the rpc-auth.addr.<volname>.allow key because,
each time the NFS volfile gets generated, the allow key will have "*"
as its default value. The reject key has no default value.
FIX:
Delete the OLD value for the key unconditionally. Add the NEW value for
the key if, and only if, it is SET in the reconfigured new volfile (see
the sketch after this entry).
Upstream review: http://review.gluster.org/7931
Change-Id: I9d1cb37002aad978a3a59e4b45b42d881d0d20e3
BUG: 1103050
Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
Reviewed-on: http://review.gluster.org/8022
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
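
The delete-then-conditionally-set pattern in outline, written against
the libglusterfs dict API from memory (so the calls should be treated
as assumptions); it builds only inside the source tree:

#include "dict.h"   /* libglusterfs */

/* drop any stale value first; set a new one only if it is configured */
static int
refresh_option (dict_t *dict, char *key, char *newval)
{
        dict_del (dict, key);        /* unconditional: kills stale state */

        if (!newval)
                return 0;            /* not SET in the new volfile */

        return dict_set_str (dict, key, newval);
}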
DRC in NFS causes memory bloat and there are known memory corruptions.
It would be good to disable DRC by default till the feature is stable.
Cherry picked from 4215d071cec4fc8a62ca4fd6212d83f931838829:
> Change-Id: I93db6ef5298672c56fb117370bb582a5e5550b17
> BUG: 1105524
> Original-patch-by: Santosh Kumar Pradhan <spradhan@redhat.com>
> Signed-off-by: Niels de Vos <ndevos@redhat.com>
> Reviewed-on: http://review.gluster.org/8004
> Tested-by: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
> Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: I93db6ef5298672c56fb117370bb582a5e5550b17
BUG: 1105524
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/8013
Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
1) Make sure __THROW is defined.
This is a backport of I6e7cb1eb59b84988e155e9a8b696e842b7ff8f7f.
2) Include <rpc/xdr.h> before <rpc/auth.h> so that XDR is defined.
This was fixed in master within
I20193d3f8904388e47344e523b3787dbeab044ac, but we only pull up this
part.
3) NetBSD's gettext is in libintl, hence search for it at configure time.
This is a backport of I651a74fe49c3f087fe135dab3453fd5b18b4268a.
4) Include <sys/wait.h> to have WEXITSTATUS defined.
This problem does not exist in master as WEXITSTATUS is not used.
5) Do not define popcountl() on NetBSD as it is in <strings.h>.
This is a backport of I4428a88b1e0d7c5f6740022861ffe230dbbd84bd.
BUG: 764655
Change-Id: Ieea5a2a627e2b7930525d6c525f1602073574a97
Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-on: http://review.gluster.org/7925
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
The GlusterFS protocol currently uses AUTH_GLUSTERFS_V2 in the RPC/AUTH
header. This header contains the uid, gid and auxiliary groups of the
user/process that accesses the Gluster Volume.
The AUTH_GLUSTERFS_V2 structure allows up to 65535 auxiliary groups to
be passed on. Unfortunately, the RPC/AUTH header is limited to 400 bytes
by the RPC specification: http://tools.ietf.org/html/rfc5531#section-8.2
In order to not cause complete failures on the client-side when trying
to encode a AUTH_GLUSTERFS_V2 that would result in more than 400 bytes,
we can calculate the expected size of the other elements:
1 | pid
1 | uid
1 | gid
1 | groups_len
XX | groups_val (GF_MAX_AUX_GROUPS=65535)
1 | lk_owner_len
YY | lk_owner_val (GF_MAX_LOCK_OWNER_LEN=1024)
----+-------------------------------------------
5 | total xdr-units
one XDR-unit is defined as BYTES_PER_XDR_UNIT = 4 bytes
MAX_AUTH_BYTES = 400 is the maximum, this is 100 xdr-units.
XX + YY can be 95 to fill the 100 xdr-units.
Note that the on-wire protocol has tighter requirements than the
internal structures. It is possible for xlators to use more groups and
a bigger lk_owner than that can be sent by a GlusterFS-client.
This change prevents overflows when allocating the RPC/AUTH header. Two
new macros are introduced to calculate the number of groups that fit in
the RPC/AUTH header, when taking the size of the lk_owner into account. In
case the list of groups exceeds the maximum possible, only the first
groups are passed over the RPC/GlusterFS protocol to the bricks.
A warning is added to the logs, so that most system administrators will
get informed.
Reducing the number of groups is not a new invention. The
RPC/AUTH header (AUTH_SYS or AUTH_UNIX) that NFS uses has a limit of 16
groups. Most, if not all, NFS-clients will reduce any bigger number of
groups to 16. (nfs.server-aux-gids can be used to workaround the limit
of 16 groups, but the Gluster NFS-server will be limited to a maximum of
93 groups, or fewer in case the lk_owner structure contains more items.)
Cherry picked from commit 8235de189845986a535d676b1fd2c894b9c02e52:
> BUG: 1053579
> Signed-off-by: Niels de Vos <ndevos@redhat.com>
> Reviewed-on: http://review.gluster.org/7202
> Tested-by: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
> Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: I8410e59d0fd246d601b54b961d3ae9cb5a858c10
BUG: 1096425
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/7829
Reviewed-by: Lalatendu Mohanty <lmohanty@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
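
The arithmetic above can be captured in a couple of macros; the names
here are illustrative, not the ones the patch introduces:

#include <stdio.h>

#define BYTES_PER_XDR_UNIT   4
#define MAX_AUTH_BYTES       400   /* RFC 5531, section 8.2 */
#define MAX_AUTH_XDR_UNITS   (MAX_AUTH_BYTES / BYTES_PER_XDR_UNIT) /* 100 */
#define FIXED_XDR_UNITS      5  /* pid, uid, gid, groups_len, lk_owner_len */

/* xdr-units consumed by an opaque lk_owner of 'len' bytes */
#define LKOWNER_XDR_UNITS(len) (((len) + BYTES_PER_XDR_UNIT - 1) / \
                                BYTES_PER_XDR_UNIT)

/* auxiliary groups that still fit next to an lk_owner of 'len' bytes */
#define MAX_AUX_GROUPS(len) \
        (MAX_AUTH_XDR_UNITS - FIXED_XDR_UNITS - LKOWNER_XDR_UNITS (len))

int
main (void)
{
        printf ("groups with 8-byte lk_owner: %d\n", MAX_AUX_GROUPS (8));
        return 0;
}

An 8-byte lk_owner costs two xdr-units, leaving 100 - 5 - 2 = 93
groups, which matches the NFS-server limit quoted above.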
glfs object.
Defined new APIs in the libgfapi module, given a glfs object,
* to send handshake RPC call to glusterd process to fetch UUID of the volume
* store it in the glusterfs_context linked to the glfs object.
* to parse the UUID from its canonical string format into a 16-byte array
before sending it to the libgfapi users.
Defined an RPC call in glusterd which can be used to query volume-related
info by other processes using 'clnt_handshake_procs'.
Note - Currently this RPC call to the glusterd process is used only to
fetch the UUID.
But it can be extended to get other volume related structures as well.
In addition to the above, defined a new variable to keep track of such handshake
RPCs still in progress to make sure all the corresponding RPC callbacks have been
processed before libgfapi returns the glfs object initialized.
Also bumping up the GFAPI current version number since there is a new API
"glfs_get_volume_id" defined and exposed by libgfapi as part of these changes.
Cherry picked from commit 5adb10b9ac1c634334f29732e062b12d747ae8c5:
> Change-Id: I303f76d7177d32d25bdb301b1dbcf5cd73f42807
> BUG: 1095775
> Signed-off-by: Soumya Koduri <skoduri@redhat.com>
> Reviewed-on: http://review.gluster.org/7218
> Reviewed-by: Anand Avati <avati@redhat.com>
> Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
> Tested-by: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
This change differs a little from the patch in the master branch:
- libgfapi in 3.5 does not have glfs_get_volfile(), so there were some
merge conflicts resolved,
- libgfapi in 3.5 is not versioned, the configure.ac changes related to
the versioning have been skipped,
- in the master branch only the XDR .x files are available, release-3.5
requires a manual re-generation of the related .h and .c files.
Change-Id: I52c32d0e69a52a7f4285f74164bca6fd83c4f3b3
BUG: 1095775
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
Reviewed-on: http://review.gluster.org/7741
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Tested-by: Niels de Vos <ndevos@redhat.com>
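
A minimal libgfapi caller for the new API; the volume and server names
are placeholders and the return-value handling is an assumption from
the header. Link with -lgfapi:

#include <glusterfs/api/glfs.h>
#include <stdio.h>

int
main (void)
{
        char         volid[16] = {0};   /* UUID as a 16-byte array */
        int          i;
        struct glfs *fs = glfs_new ("myvolume");

        if (!fs)
                return 1;
        glfs_set_volfile_server (fs, "tcp", "server1", 24007);
        if (glfs_init (fs) != 0)
                return 1;

        if (glfs_get_volume_id (fs, volid, sizeof (volid)) < 0) {
                fprintf (stderr, "glfs_get_volume_id failed\n");
        } else {
                for (i = 0; i < 16; i++)
                        printf ("%02x", (unsigned char) volid[i]);
                printf ("\n");
        }

        glfs_fini (fs);
        return 0;
}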
Problem:
When iozone is in progress, number of blocking inodelks sometimes becomes greater
than the threshold number of rpc requests allowed for that client
(RPCSVC_DEFAULT_OUTSTANDING_RPC_LIMIT). Subsequent requests from that client
will not be read until all the outstanding requests are processed and replied
to. But because no more requests are read from that client, unlocks on the
already granted locks will never come thus the number of outstanding requests
would never come down. This leads to a ping-timeout on the client.
Fix:
Do not account INODELK/ENTRYLK/LK for throttling
BUG: 1089470
Change-Id: I9cc2c259d2462159cea913d95f98a565acb8e0c0
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/7570
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Change-Id: Ie99ad6db42d3a52bcc7748448c6d5e97e5bad69e
BUG: 1088848
Reported-by: Patrick Matthäi <pmatthaei@debian.org>
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/7499
Reviewed-by: Vikhyat Umrao <vumrao@redhat.com>
Reviewed-by: Humble Devassy Chirammal <humble.devassy@gmail.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Backport of http://review.gluster.org/#/c/6696/
With 64, the NFS server hangs under a large I/O load (~64 threads
writing to the NFS server). Test results from Ben England (performance
expert) suggest setting it to 16 instead of 64.
Change-Id: Iaa9dda512904d2e359d8122a05e5bf65f99a7e78
BUG: 1073441
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
Reviewed-on: http://review.gluster.org/7200
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
quotad before marking quota as enabled.
Without this patch there is a window of time when quota is marked as
enabled in the quota-enforcer, but the connection to quotad would not
yet have been established. Any checklimit done during this period can
result in a failed fop because of the unavailability of quotad.
Change-Id: I0d509fabc434dd55ce9ec59157123524197fcc80
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
BUG: 969461
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-on: http://review.gluster.org/6572
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Reviewed-on: http://review.gluster.org/6820
-> handle option validation cases in reset case.
-> Creating valid conf path when glusterd restarts.
-> Reading the gsyncd worker thread status and displaying it.
-> Displaying status-detail per worker.
-> Fetch checkpoint info in geo-rep status.
-> use-tarssh value validation added.
misc: miscellaneous geo-rep fixes (cluster, logrotate, etc.)
-> cluster/dht: fix 'stime' getxattr getting overwritten.
-> cluster/afr: return max of 'stime' values in subvol.
-> geo-rep-logrotate: Sending SIGHUP to geo-rep auxiliary.
-> cluster/dht: fix convoluted logic while aggregating.
-> cluster/*: fix 'stime' min/max fetch logic.
Change-Id: I811acea0bbd6194797a3e55d89295d1ea021ac85
BUG: 1036552
Signed-off-by: Ajeet Jha <ajha@redhat.com>
Reviewed-on: http://review.gluster.org/6405
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@gmail.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Reviewed-on: http://review.gluster.org/6810
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Backport of http://review.gluster.org/5512
rpc:
- On a RPC_TRANSPORT_CLEANUP event, rpc_clnt_notify calls the registered
notifyfn with a RPC_CLNT_DESTROY event. The notifyfn should properly
cleanup the saved mydata on this event.
- Break the reconnect chain when an rpc client is disabled. This will
prevent new disconnect events which can lead to crashes.
glusterd:
- Added support for RPC_CLNT_DESTROY in glusterd_brick_rpc_notify
- Use a common glusterd_rpc_clnt_unref() function throughout glusterd in
place of rpc_clnt_unref(). This function correctly gives up the
big-lock before performing the unref.
Change-Id: I93230441c5089039643fc9f5632477ef1b695348
BUG: 962619
Signed-off-by: Kaushal M <kaushal@redhat.com>
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/6566
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: Ibd2edc5608ae6d3370607bff1c626c8347c4deda
BUG: 1031887
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/6561
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
IOV_MAX is the maximum supported vector count on a given platform.
Limit the count to IOV_MAX if it is higher. As we are performing
non-blocking I/O, getting a smaller return value is handled naturally
(see the sketch after this entry).
Change-Id: I94ef67a03ed0e10da67a776af2b55506bf721611
BUG: 1034398
Signed-off-by: Anand Avati <avati@redhat.com>
Reviewed-on: http://review.gluster.org/6354
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@gmail.com>
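
A sketch of the clamp; the helper name is illustrative:

#include <limits.h>
#include <sys/uio.h>

static ssize_t
clamped_writev (int fd, const struct iovec *iov, int iovcnt)
{
        if (iovcnt > IOV_MAX)
                iovcnt = IOV_MAX;   /* short write: the non-blocking
                                       caller retries the remainder */

        return writev (fd, iov, iovcnt);
}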
re-work.
Following are the cli commands that are new/re-worked:
======================================================
volume quota <VOLNAME> {enable|disable|list [<path> ...]|remove <path>| default-soft-limit <percent>} |
volume quota <VOLNAME> {limit-usage <path> <size> [<percent>]} |
volume quota <VOLNAME> {alert-time|soft-timeout|hard-timeout} {<time>}
volume status [all | <VOLNAME> [nfs|shd|<BRICK>|quotad]] [detail|clients|mem|inode|fd|callpool]
volume statedump <VOLNAME> [nfs|quotad] [all|mem|iobuf|callpool|priv|fd|inode|history]
glusterd changes:
=================
* Quota limits are now set as extended attributes by glusterd from
the aux mount created by the cli.
* The gfids of the directories on which quota limits are set
for a given volume are stored in
/var/lib/glusterd/vols/<volname>/quota.conf file in binary format,
and whose cksum and version are stored in
/var/lib/glusterd/vols/<volname>/quota.cksum.
Original-author: Krutika Dhananjay <kdhananj@redhat.com>
Original-author: Krishnan Parthasarathi <kparthas@redhat.com>
BUG: 969461
Change-Id: If32bba36c67f9c2a30417af9c6389045b2b7c13b
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-on: http://review.gluster.org/6003
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
* Two stages of quota enforcement are done:
Soft and hard quota. Upon reaching the soft quota limit on a directory,
it logs/alerts in the quota daemon log (i.e. DEFAULT_LOG_DIR/quotad.log),
and no more writes are allowed after the hard quota limit is reached.
After reaching the soft-limit the daemon alerts the user/admin
repeatedly every 'alert-time', which is configurable.
* Quota enforcer is moved to server-side.
It takes care of enforcing quota. Since enforcer doesn't have the
cluster view, it relies on another service called
quota-aggregator. Aggregator, on query can return the size of a
directory based on the cluster view.
The enforcer is always loaded in the server graph and is bypassed if
the feature is not enabled.
Options specific to the enforcer:
server-quota - Specifies whether the feature is on/off. It is used
to bypass the quota if turned off.
deem-statfs - If set to on, it takes quota limits into consideration
while estimating fs size. (df command). The algorithm followed is,
i. Adjust statvfs based on limit configured on root.
ii. If limit is set on the inode passed, use size/limits on that inode to
populate statvfs. Otherwise, use size/limits configured on root.
iii. Upon statvfs, update the ctx->size on the inode.
iv. Don't let DHT aggregate, instead take the maximum of the usages from the
subvols of the DHT, since each of it contains the complete information.
The enforcer also makes use of the gfid-to-path conversion
functionality to work correctly when a client like NFS predominantly
relies on nameless lookups.
* Quota Aggregator acts as a thin client to provide the cluster view.
It's a lightweight *gluster client* process with no mount point,
started upon enabling quota or restarting the volume. This is a
single process run on each brick, which can answer queries on all
volumes in the cluster. Its volfile is stored in
GLUSTERD_DEFAULT_WORKING_DIR/quotad/quotad.vol.
Credits:
Raghavendra Bhat <rabhat@redhat.com>
Varun Shastry <vshastry@redhat.com>
Shishir Gowda <sgowda@redhat.com>
Kruthika Dhananjay <kdhananj@redhat.com>
Brian Foster <bfoster@redhat.com>
Krishnan Parthasarathi <kparthas@redhat.com>
Change-Id: Id1cb25b414951da34c665a55f77385d482e0f9de
BUG: 969461
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-on: http://review.gluster.org/5952
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
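
A condensed sketch of steps i and ii of the deem-statfs algorithm: cap
the reported capacity at the quota limit and derive the free counts
from the tracked usage (illustrative only, not the xlator code):

#include <stdint.h>
#include <sys/statvfs.h>

static void
deem_statfs_sketch (struct statvfs *buf, uint64_t limit_bytes,
                    uint64_t used_bytes)
{
        uint64_t limit_blocks = limit_bytes / buf->f_bsize;

        if (limit_blocks < buf->f_blocks) {  /* quota is the smaller cap */
                uint64_t avail = (limit_bytes > used_bytes)
                               ? (limit_bytes - used_bytes) / buf->f_bsize
                               : 0;

                buf->f_blocks = limit_blocks;
                buf->f_bfree  = buf->f_bavail = avail;
        }
}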
1) Fix the typo in NFS default ACL
The typo was introduced as part of the Fix to BZ 1009210 i.e.
http://review.gluster.org/5980. The user ACL xattr structure
was passed to default ACL xattr.
2) Clean up NFS code to avoid unnecessary SEGV in
rpcsvc_drc_reconfigure() which was not validating the
svc->drc. Add a routine rpcsvc_drc_deinit() to handle
the clean up of DRC specific data structures. For init(),
use rpcsvc_drc_init().
3) nfs_init_state() was returning a wrong value even if the
registration with portmapper failed, causing the NFS
server process to hang around. As a result it used to
get SEGV during rpcsvc_drc_reconfigure().
4) Clean up memfactor usage across nfs.c nfs3.c.
Change-Id: I5cea26cb68dd8a822ec0ae104952f67fe63fa703
BUG: 1009210
Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
Reviewed-on: http://review.gluster.org/6329
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Implement reconfigure() for the NFS xlator so that volume set/reset
won't restart the NFS server process. But a few options cannot be
reconfigured dynamically, e.g. nfs.mem-factor and nfs.port, which need
NFS to be restarted.
Change-Id: Ic586fd55b7933c0a3175708d8c41ed0475d74a1c
BUG: 1027409
Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
Reviewed-on: http://review.gluster.org/6236
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Remove bd_map xlator and CLI related changes.
Change-Id: If7086205df1907127c1a1fa4ba603f1c48421d09
BUG: 1028672
Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
Reviewed-on: http://review.gluster.org/5747
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Add support for a new ZEROFILL fop. Zerofill writes zeroes to a file in
the specified range. This fop will be useful when a whole file needs to
be initialized with zero (could be useful for zero filled VM disk image
provisioning or during scrubbing of VM disk images).
A client/application can issue this FOP for zeroing out. The Gluster
server will zero out the required range of bytes, i.e. server-offloaded
zeroing. In the absence of this fop, the client/application has to
repeatedly issue write (zero) fops to the server, which is a very
inefficient method because of the overheads involved in RPC calls and
acknowledgements.
WRITESAME is a SCSI T10 command that takes a block of data as input and
writes the same data to other blocks; this write is handled completely
within the storage and hence is known as offload. Linux now has support
for the SCSI WRITESAME command, which is exposed to the user in the
form of the BLKZEROOUT ioctl. The BD xlator can exploit the BLKZEROOUT
ioctl to implement this fop. Thus zeroing-out operations can be
completely offloaded to the storage device, making them highly
efficient.
The fop takes two arguments offset and size. It zeroes out 'size' number
of bytes in an opened file starting from 'offset' position.
This patch adds zerofill support to the following areas:
- libglusterfs
- io-stats
- performance/md-cache,open-behind
- quota
- cluster/afr,dht,stripe
- rpc/xdr
- protocol/client,server
- io-threads
- marker
- storage/posix
- libgfapi
Client applications can exploit this fop by using glfs_zerofill,
introduced in libgfapi (see the example after this entry). FUSE support
for this fop has not been added as there is no system call for it.
Changes from previous version 3:
* Removed redundant memory failure log messages
Changes from previous version 2:
* Rebased and fixed build error
Changes from previous version 1:
* Rebased for latest master
TODO :
* Add zerofill support to trace xlator
* Expose zerofill capability as part of gluster volume info
Here is a performance comparison of server-offloaded zerofill vs.
zeroing out using repeated writes.
[root@llmvm02 remote]# time ./offloaded aakash-test log 20
real 3m34.155s
user 0m0.018s
sys 0m0.040s
[root@llmvm02 remote]# time ./manually aakash-test log 20
real 4m23.043s
user 0m2.197s
sys 0m14.457s
[root@llmvm02 remote]# time ./offloaded aakash-test log 25;
real 4m28.363s
user 0m0.021s
sys 0m0.025s
[root@llmvm02 remote]# time ./manually aakash-test log 25
real 5m34.278s
user 0m2.957s
sys 0m18.808s
The argument 'log' is a file to be used for logging purposes, and the
third argument is the size in GB.
As we can see there is a performance improvement of around 20% with this
fop.
Change-Id: I081159f5f7edde0ddb78169fb4c21c776ec91a18
BUG: 1028673
Signed-off-by: Aakash Lal Das <aakash@linux.vnet.ibm.com>
Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
Reviewed-on: http://review.gluster.org/5327
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
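
A minimal glfs_zerofill() caller; host, volume and path are
placeholders, and the exact signature is recalled from glfs.h, so treat
the details as assumptions. Link with -lgfapi:

#include <glusterfs/api/glfs.h>
#include <fcntl.h>
#include <stdio.h>

int
main (void)
{
        struct glfs    *fs = glfs_new ("myvolume");
        struct glfs_fd *fd = NULL;

        if (!fs)
                return 1;
        glfs_set_volfile_server (fs, "tcp", "server1", 24007);
        if (glfs_init (fs) != 0)
                return 1;

        fd = glfs_open (fs, "/disk.img", O_RDWR);
        if (fd) {
                /* zero out 1 MiB from offset 0, offloaded to the server */
                if (glfs_zerofill (fd, 0, 1048576) != 0)
                        fprintf (stderr, "zerofill failed\n");
                glfs_close (fd);
        }
        glfs_fini (fs);
        return 0;
}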
Implement a limit on the total number of outstanding RPC requests
from a given client. Once the limit is reached, the client socket
is removed from POLL-IN event polling.
Change-Id: I8071b8c89b78d02e830e6af5a540308199d6bdcd
BUG: 1008301
Signed-off-by: Anand Avati <avati@redhat.com>
Reviewed-on: http://review.gluster.org/6114
Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
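
A condensed sketch of the mechanism; the names are illustrative and the
actual epoll manipulation is left to the caller:

typedef struct {
        int outstanding;    /* requests received but not yet replied to */
        int limit;          /* e.g. RPCSVC_DEFAULT_OUTSTANDING_RPC_LIMIT */
        int pollin_armed;   /* is the socket registered for POLL-IN? */
} conn_throttle_t;

static void
on_request_received (conn_throttle_t *c)
{
        if (++c->outstanding >= c->limit)
                c->pollin_armed = 0;    /* caller drops POLL-IN: no new
                                           requests are read for now */
}

static void
on_reply_submitted (conn_throttle_t *c)
{
        c->outstanding--;
        if (!c->pollin_armed && c->outstanding < c->limit)
                c->pollin_armed = 1;    /* caller re-adds POLL-IN */
}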
Change-Id: I982cf7619463983c04b401d70a76635991d072d2
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/6091
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
gettimeofday() returns the current wall-clock time and timezone. Using
it to measure the passage of time (how long an operation took)
therefore seems like a no-brainer. This approach suffers from some
limitations:
a. They have a low resolution: “High-performance” timing by
definition, requires clock resolutions into the microseconds
or better.
b. They can jump forwards and backwards in time: Computer
clocks all tick at slightly different rates, which causes
the time to drift. Most systems have NTP enabled which
periodically adjusts the system clock to keep them in sync
with “actual” time. The adjustment can cause the clock to
suddenly jump forward (artificially inflating your timing
numbers) or jump backwards (causing your timing calculations
to go negative or hugely positive). In such cases the timer
thread could go into an infinite loop.
From 'man gettimeofday':
----------
..
..
The time returned by gettimeofday() is affected by discontinuous
jumps in the system time (e.g., if the system administrator manually
changes the system time). If you need a monotonically increasing
clock, see clock_gettime(2).
..
..
----------
Rationale:
For calculating interval timing for the timer thread, all that's
needed is a clock that acts as a simple counter incrementing at a
stable rate.
To avoid the jumps caused by using "wall time", this counter must be
monotonic: it can never "tick" backwards, ever (see the sketch after
this entry).
Change-Id: I701d31e71a85a73d21a6c5cd15583e7a5a645eeb
BUG: 1017993
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-on: http://review.gluster.org/6070
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
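
For comparison, interval timing against CLOCK_MONOTONIC, which is
immune to the wall-clock jumps described above (link with -lrt on older
glibc):

#include <stdio.h>
#include <time.h>

int
main (void)
{
        struct timespec start, end;
        double          elapsed;

        clock_gettime (CLOCK_MONOTONIC, &start);
        /* ... work to be timed ... */
        clock_gettime (CLOCK_MONOTONIC, &end);

        elapsed = (end.tv_sec - start.tv_sec)
                + (end.tv_nsec - start.tv_nsec) / 1e9;
        printf ("elapsed: %.9f s\n", elapsed);
        return 0;
}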
Currently, to know the number of files to be healed, the user either
has to go to the backend and check the number of entries present in
the indices/xattrop directory, which is a time-consuming task if a
volume consists of a large number of bricks, or run the
gluster volume heal vol-name info command, which also consumes time
when the number of entries in the indices/xattrop directory is huge.
So, as a feature, a new command is implemented.
Command 1: gluster volume heal vn statistics heal-count
This command will get the number of entries present in
every brick of a volume. The output displays only entries
count.
Command 2: gluster volume heal vn statistics heal-count
replica 192.168.122.1:/home/user/brickname
This form is for when we are concerned with just one replica:
providing any one brick of a replica will get the number of entries
to be healed for that replica only.
Example:
Replicate volume with replica count 2.
Backend status:
--------------
[root@dhcp-0-17 xattrop]# ls -lia | wc -l
1918
NOTE: Out of 1918, 2 entries are <xattrop-gfid> dummy
entries, so the actual number of entries to be healed is
1916.
[root@dhcp-0-17 xattrop]# pwd
/home/user/2ty/.glusterfs/indices/xattrop
Command output:
--------------
Gathering count of entries to be healed on volume volume3 has been successful
Brick 192.168.122.1:/home/user/22iu
Status: Brick is Not connected
Entries count is not available
Brick 192.168.122.1:/home/user/2ty
Number of entries: 1916
Change-Id: I72452f3de50502dc898076ec74d434d9e77fd290
BUG: 1015990
Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
Reviewed-on: http://review.gluster.org/6044
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
"gluster volume heal volumename statistics" command gives the summary
of the afr crawl done based on the entries present in the xattrop
directory. Whenever afr crawls are attempted, the beginning time of
crawl, end time of crawl, no of files healed, heal-failed count and
number of files in split brain are shown along with the type of the
crawl. If a crawl is already in progress then it will give the number
of files healed, the heal-failed count and the number of files in
split-brain from the beginning of the crawl, and a "CRAWL IN PROGRESS"
message will be shown instead of the end time of the crawl.
Output format:
command: "gluster volume heal volume-name statistics"
Output:
Gathering afr crawl statistics on volume volume-name
has been successful
------------------------------------------------
Crawl statistics for brick no 0
Hostname of brick 192.168.122.248
Starting time of crawl: Wed Jul 10 15:52:38 2013
Ending time of crawl: Wed Jul 10 15:52:38 2013
Type of crawl: INDEX
No. of entries healed: 0
No. of entries in split-brain: 0
No. of heal failed entries: 0
Starting time of crawl: Wed Jul 10 15:52:38 2013
Ending time of crawl: Wed Jul 10 15:52:38 2013
Type of crawl: INDEX
No. of entries healed: 0
No. of entries in split-brain: 0
No. of heal failed entries: 0
------------------------------------------------
Crawl statistics for brick no 1
Hostname of brick 192.168.122.1
Starting time of crawl: Wed Jul 10 15:52:42 2013
Ending time of crawl: Wed Jul 10 15:52:42 2013
Type of crawl: INDEX
No. of entries healed: 0
No. of entries in split-brain: 0
No. of heal failed entries: 0
Starting time of crawl: Wed Jul 10 15:52:42 2013
Ending time of crawl: Wed Jul 10 15:52:42 2013
Type of crawl: INDEX
No. of entries healed: 0
No. of entries in split-brain: 0
No. of heal failed entries: 0
--------------------------------------------------
Change-Id: I10bf9d10b005741db9973fb1352e0dd59ed99aa9
BUG: 949400
Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
Reviewed-on: http://review.gluster.org/4790
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Change-Id: I290cd983bd0dff2e32e5ee90a12e888a3b31c6fd
BUG: 969461
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-on: http://review.gluster.org/5954
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
oVirt's Gluster Integration needs an inexpensive command that can be
executed every 10 seconds to monitor async tasks and their parameters,
for all volumes.
The solution involves adding a 'tasks' sub-command to 'volume status'
to fetch only the async task IDs, type and other relevant parameters.
Only the originator glusterd participates in this command as all the
information needed is available on all the nodes. This is to make the
command suitable for being executed every 10 seconds.
Change-Id: I1edc607baf29b001a5585079dec681d7c641b3d1
BUG: 1012346
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/6006
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Problem:
Incorrect NFS ACL encoding causes a "system.posix_acl_default"
setxattr failure on bricks on the XFS file system. XFS (potentially
others?) doesn't understand when the NFS_ACL_DEFAULT (0x1000) prefix,
which the Linux NFS client adds for default ACLs, is present in the
ACL type field; this causes setfacl()->setxattr() to fail silently.
FIX:
Mask the prefix (added by NFS client) OFF, so the setfacl is not
rejected when it hits the FS.
Original patch by: "Richard Wareing"
Change-Id: I17ad27d84f030cdea8396eb667ee031f0d41b396
BUG: 1009210
Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
Reviewed-on: http://review.gluster.org/5980
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
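
The masking fix in miniature; the helper name is illustrative, the
constant is the one quoted above:

#include <stdint.h>

#define NFS_ACL_DEFAULT 0x1000   /* marker added by the Linux NFS client */

/* strip the default-ACL marker before the xattr reaches the brick FS */
static uint32_t
strip_default_acl_marker (uint32_t acl_type)
{
        return acl_type & ~(uint32_t) NFS_ACL_DEFAULT;
}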
Block all signal except those which are set for explicit handling
in glusterfs_signals_setup(). Since thread spawning code in
libglusterfs and xlators can get called from application threads
when used through libgfapi, it is necessary to do this blocking.
Change-Id: Ia320f80521a83d2edcda50b9ad414583a0175281
BUG: 1011662
Signed-off-by: Anand Avati <avati@redhat.com>
Reviewed-on: http://review.gluster.org/5995
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
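
The blocking pattern in outline, with an illustrative set of signals
left out of the mask; the authoritative list lives in
glusterfs_signals_setup():

#include <pthread.h>
#include <signal.h>

static int
block_signals_for_thread (void)
{
        sigset_t set;

        sigfillset (&set);
        /* keep explicitly handled/fatal signals deliverable */
        sigdelset (&set, SIGSEGV);
        sigdelset (&set, SIGBUS);
        sigdelset (&set, SIGTERM);

        /* affects the calling thread; threads it spawns inherit it */
        return pthread_sigmask (SIG_BLOCK, &set, NULL);
}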
Problem:
Currently the CLI rebalance status command output does not indicate the
'type' of rebalance, i.e. whether a full rebalance or only a fix-layout
was carried out.
Fix: After the rebalance status of all peers is received by the
originator glusterd, alter it to reflect the type of rebalance
before passing it on to the CLI process.
Change-Id: I1940ffda0d36e25e5b33c84a0ea210394cc9e1d3
BUG: 1004744
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/5826
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
For rpc requests having large aux group list, allocate large list
on demand. Else use small static array by default.
Without this patch, glusterfsd allocates 140+MB of resident memory
just to get started and initialized.
Change-Id: I3a07212b0076079cff67cdde18926e8f3b196258
Signed-off-by: Anand Avati <avati@redhat.com>
BUG: 953694
Reviewed-on: http://review.gluster.org/5927
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
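
The small-by-default, allocate-on-demand pattern in outline; the names
and the threshold are illustrative, not the ones used in rpcsvc:

#include <stdlib.h>
#include <sys/types.h>

#define SMALL_GROUP_COUNT 128    /* assumed common-case upper bound */

typedef struct {
        int    ngrps;
        gid_t  small[SMALL_GROUP_COUNT]; /* static, costs nothing extra */
        gid_t *big;                      /* heap, only for large lists */
        gid_t *gids;                     /* points at whichever is used */
} auxgids_t;

static gid_t *
auxgids_alloc (auxgids_t *a, int ngrps)
{
        a->ngrps = ngrps;
        if (ngrps <= SMALL_GROUP_COUNT) {
                a->gids = a->small;
        } else {
                a->big  = calloc (ngrps, sizeof (gid_t));
                a->gids = a->big;        /* NULL if allocation failed */
        }
        return a->gids;
}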
config service availability for clients.
The backup-volfile server as it stands is slow and prone to errors
with the mount script, and in its combination with RRDNS. In theory
it should instead use all the available nodes in the 'trusted pool'
by default (right now we don't have a mechanism in place for this).
Nevertheless, this patch provides a scenario where a list of
volfile servers can be provided on the command line as shown below:
-----------------------------------------------------------------
$ glusterfs -s server1 .. -s serverN --volfile-id=<volname> \
<mount_point>
-----------------------------------------------------------------
OR
-----------------------------------------------------------------
$ mount -t glusterfs -obackup-volfile-servers=<server2>: \
<server3>:...:<serverN> <server1>:/<volname> <mount_point>
-----------------------------------------------------------------
Here ':' is used as a separator for mount script parsing
Now these will be remembered and recursively attempted for
fetching vol-file until exhausted. This would ensure that the
clients get 'volume' configs in a consistent manner avoiding the
need to poll through RRDNS.
Change-Id: If808bb8a52e6034c61574cdae3ac4e7e83513a40
BUG: 986429
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-on: http://review.gluster.org/5400
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Provides detailed status info in the following format
MASTER <master-vol> SLAVE <slave-vol>
NODE HEALTH UPTIME FILES SYNCD FILES PENDING BYTES PENDING DELETES PENDING
-----------------------------------------------------------------------------------
This patch introduces the "status detail" command to show crawl-related
information in CLI. These values are "pulled" from gsyncd when
"status detail" is executed.
Change-Id: I1fdaf7180eacce054a864d34971dc160bd7301e1
BUG: 990420
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Reviewed-on: http://review.gluster.org/5590
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Tested-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
A typo which read MAX_AUTH_BYTES instead of GF_MAX_AUTH_BYTES was
picking the value 400 instead of the larger 2048. This causes
failures when the number of aux group IDs is large.
Change-Id: Idb8d59aee2690fd53e24c2e09f58a16fe387ef27
BUG: 1000131
Signed-off-by: Anand Avati <avati@redhat.com>
Reviewed-on: http://review.gluster.org/5695
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Implementation of client_t
The feature page for client_t is at
http://www.gluster.org/community/documentation/index.php/Planning34/client_t
In addition to adding libglusterfs/client_t.[ch] it also extracts/moves
the locktable functionality from xlators/protocol/server to libglusterfs,
where it is used; thus it may now be shared by other xlators too.
This patch is large as it is. Hooking up the state dump is left to do
in phase 2 of this patch set.
(N.B. this change/patch-set supersedes previous change 3689, which was
corrupted during a rebase. That change will be abandoned.)
BUG: 849630
Change-Id: I1433743190630a6d8119a72b81439c0c4c990340
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
Reviewed-on: http://review.gluster.org/3957
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Commands:
gluster system:: execute gsec_create
gluster volume geo-rep <master> <slave-url> create [push-pem] [force]
gluster volume geo-rep <master> <slave-url> start [force]
gluster volume geo-rep <master> <slave-url> stop [force]
gluster volume geo-rep <master> <slave-url> delete
gluster volume geo-rep <master> <slave-url> config
gluster volume geo-rep <master> <slave-url> status
The geo-replication is distributed. The session will be created, and
gsyncd will be spawned on all relevant nodes, instead of only one
node.
geo-rep: Collecting status detail related data
Added persistent store for saving information about
TotalFilesSynced, TotalSyncTime, TotalBytesSynced
Changes in the status information in socket:
Existing(Ex):
FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;
New(Ex):
FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;SyncTime=0.69978;
TotalSyncTime=2.890044;TotalFilesSynced=6;TotalBytesSynced=143640;
Persistent details stored in
/var/lib/glusterd/geo-replication/${mastervol}/${eSlave}-detail.status
Change-Id: I1db7fc13ffca2e415c05200b0109b1254067f111
BUG: 847839
Original Author: Avra Sengupta <asengupt@redhat.com>
Original Author: Venky Shankar <vshankar@redhat.com>
Original Author: Aravinda VK <avishwan@redhat.com>
Original Author: Amar Tumballi <amarts@redhat.com>
Original Author: Csaba Henk <csaba@redhat.com>
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/5132
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Vijay Bellur <vbellur@redhat.com>
When nfs.addr-namelookup is turned on, if the getaddrinfo call fails
while authenticating the client's IP/hostname, the mount request is
denied.
Change-Id: I744f1c6b9c7aae91b9363bba6c6987b42f7f0cc9
BUG: 947055
Signed-off-by: Rajesh Amaravathi <rajesh@redhat.com>
Reviewed-on: http://review.gluster.org/5143
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Duplicate request cache provides a mechanism for detecting
duplicate rpc requests from clients. DRC caches replies
and on duplicate requests, sends the cached reply instead of
re-processing the request.
Change-Id: I3d62a6c4aa86c92bf61f1038ca62a1a46bf1c303
BUG: 847624
Signed-off-by: Rajesh Amaravathi <rajesh@redhat.com>
Reviewed-on: http://review.gluster.org/4049
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Change-Id: Ia8e1af082078f2f791708ba4faa4992bf291dd6e
BUG: 961339
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/5023
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
While looking at the newly introduced procedures FALLOCATE and DISCARD,
it seems that these were added with already existing procedure numbers.
This makes the protocol incompatible with existing roll-outs.
It is very confusing when new procedures are added somewhere in the
middle of the array. This will cause the number of existing procedures
to change. It is much preferred to add new procedures at the end of the
array. This changes not only corrects the enum that generates the
procedure numbers, but also the ordering in the client and server
fops-array for clarity.
Correcting this greatly simplifies adding support for these new
procedures in Wireshark and will prevent confusion to the people reading
network traces (with or without Wireshark).
Change-Id: Ib9e7978531d016c7230d756b855cb94cb0793b0f
BUG: 974976
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/5215
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
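
The rule in miniature, as a toy enum rather than the real GlusterFS
procedure table:

enum toy_fops {
        TOY_FOP_NULL = 0,
        TOY_FOP_STAT,       /* 1 */
        TOY_FOP_READ,       /* 2 */
        TOY_FOP_WRITE,      /* 3 */
        /* Inserting a new procedure above this point would renumber
         * every procedure after it and break old clients and servers;
         * new procedures therefore go at the end, before MAXVALUE. */
        TOY_FOP_FALLOCATE,  /* 4 */
        TOY_FOP_DISCARD,    /* 5 */
        TOY_FOP_MAXVALUE
};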
rpc_transport object should be alive as long as the rpc_clnt object is
alive. To ensure this, on rpc_clnt's last unref, we cleanup the
corresponding rpc_transport object and complete the rpc_clnt cleanup
later, in a bottom-up fashion.
Introduced rpc_clnt_is_disabled, to allow higher layers to differentiate
between the 'final'[1] disconnect triggered from upper layers, and a
normal disconnect. This differentiation helps in cleaning up resources,
at higher layers, in a race-free manner.
[1] - 'final' here means that the rpc and the associated connection, is
not going to be used anymore. eg - glusterd_brick_disconnect on
volume-stop.
Change-Id: I2ecf891a36e3b02cd9eacca964e659525d1bbc6e
BUG: 962619
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/5107
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Add support for the DISCARD file operation. Discard punches a hole
in a file in the provided range. Block de-allocation is implemented
via fallocate() (as requested via fuse and passed on to the brick
fs) but a separate fop is created within gluster to emphasize the
fact that discard changes file data (the discarded region is
replaced with zeroes) and must invalidate caches where appropriate.
BUG: 963678
Change-Id: I34633a0bfff2187afeab4292a15f3cc9adf261af
Signed-off-by: Brian Foster <bfoster@redhat.com>
Reviewed-on: http://review.gluster.org/5090
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
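
On the brick, such a de-allocation maps to the Linux fallocate(2)
punch-hole flags; a sketch with an illustrative helper name (needs
_GNU_SOURCE on glibc):

#define _GNU_SOURCE
#include <fcntl.h>    /* fallocate(), FALLOC_FL_* */

static int
punch_hole (int fd, off_t offset, off_t len)
{
        /* the punched range reads back as zeroes afterwards */
        return fallocate (fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                          offset, len);
}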
Implement support for the fallocate file operation. fallocate
allocates blocks for a particular inode such that future writes
to the associated region of the file are guaranteed not to fail
with ENOSPC.
This patch adds fallocate support to the following areas:
- libglusterfs
- mount/fuse
- io-stats
- performance/md-cache,open-behind
- quota
- cluster/afr,dht,stripe
- rpc/xdr
- protocol/client,server
- io-threads
- marker
- storage/posix
- libgfapi
BUG: 949242
Change-Id: Ice8e61351f9d6115c5df68768bc844abbf0ce8bd
Signed-off-by: Brian Foster <bfoster@redhat.com>
Reviewed-on: http://review.gluster.org/4969
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
For unix-path-based sockets, the socket path is cryptic (an md5sum of
the path) and may not be useful to the user when debugging, so log it
at DEBUG level.
Changed logging in brick_rpc_notify to log brickinfo for disconnects.
Change-Id: I69174bbbbde8352d38837723e950ad8fc15232aa
BUG: 963153
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/5009
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Usage: gluster system:: uuid get
This is needed since we generate the uuid of a node in a lazy manner,
i.e. we generate a uuid for the node only on the first volume or peer
operation, when the node needs an external identity. With this command,
we can force[1] the uuid generation, without a volume or peer operation
being performed.
[1]: Querying for uuid (or uuid get), forces uuid to come into
existence.
Change-Id: I62c8b6754117756aa4d773dd48af4ddeb1a1d878
BUG: 971661
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/5175
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>