| Commit message | Author | Age | Files | Lines |
| |
Currently many logs in Gluster are purely textual and do not
lend themselves to trivial analysis by the various tools that
collect and monitor logs. This feature improves the situation by
giving log messages IDs, so that the tools do not have to do
complex log parsing to break them down into problem areas and
suggest troubleshooting options.
With this patch, a new set of logging APIs is introduced that
additionally takes a message ID and an error number, so as to
print the message ID and the descriptive string for the error.
New APIs:
- gf_msg, gf_msg_debug/trace, gf_msg_nomem, gf_msg_callingfn
These APIs follow the functionality of the previous gf_log*
counterparts, and hence are 1:1 replacements, with the difference
that gf_msg and gf_msg_callingfn take the additional parameters
described above.
Defining the log messages:
Each invocation of gf_msg/gf_msg_callingfn should provide an ID
and an errnum (if available). Towards this, a common message-ID
file is provided, which contains defines for the various messages
and their respective strings. As other messages are moved to the
new infrastructure APIs, it is intended that this file is edited
to add those messages as well.
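A minimal sketch of how a caller might use the new API, assuming
the usual xlator context (this, volfile_path) and an argument
order of (domain, level, errnum, message ID, format string); the
message-ID define shown is hypothetical and would live in the
common message-ID file:
------------------------------------------
/* hypothetical ID from the common message-id file */
#define EXAMPLE_MSG_VOLFILE_OPEN_FAILED  10001

int fd = open (volfile_path, O_RDONLY);
if (fd < 0) {
        /* prints the message ID and strerror (errno) with the text */
        gf_msg (this->name, GF_LOG_ERROR, errno,
                EXAMPLE_MSG_VOLFILE_OPEN_FAILED,
                "failed to open volfile %s", volfile_path);
}
------------------------------------------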
Framework enhanced:
The logging framework is also enhanced to support different
logging backends in the future. Hence new configuration options
for the logging framework and logging formats are introduced.
Backward compatibility:
Currently the framework supports logging in the traditional
format, with the inclusion of an error string based on the errnum
passed in. Hence the shift to these new APIs would retain the log
file names, locations, and format with the exception of an
additional error string where applicable.
Testing done:
Tested the new APIs with different messages in normal code paths
Tested with configurations set to gluster logs (syslog pending)
Tested nomem variants, inducing the message in normal code paths
Tested ident generation for normal code paths (other paths
pending)
Tested with sample gfapi program for gfapi messages
Test code is stripped from the commit
Pending work (not addressed in this patch; future):
- Logging framework should be configurable
- Logging format should be configurable
- Once all messages move to the new APIs, deprecate/delete the
older APIs to prevent their misuse/abuse
- Repeated log messages should be suppressed (as a configurable
option)
- Logging framework assumes that only one init is possible, but
there is no protection around this (in existing code)
- gf_log_fini is not invoked anywhere and does very little
cleanup (in existing code)
- Doxygen comments in the message-ID headers for each message
Change-Id: Ia043fda99a1c6cf7817517ef9e279bfcf35dcc24
BUG: 1075611
Signed-off-by: ShyamsundarR <srangana@redhat.com>
Reviewed-on: http://review.gluster.org/6547
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
| |
The rpc_clnt object is destroyed after the corresponding transport object
is destroyed, but rpc_clnt_reconnect, a timer-driven function, refers to
the transport object beyond its lifetime. Using the embedded connection
object instead prevents the use-after-free of the transport object.
Also, access the transport object under conn->lock.
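A minimal sketch of the access pattern described above; variable names
are illustrative rather than the exact code in this patch:
------------------------------------------
rpc_transport_t *trans = NULL;

pthread_mutex_lock (&conn->lock);
{
        trans = conn->trans;
        if (trans)
                rpc_transport_ref (trans);  /* keep it alive while in use */
}
pthread_mutex_unlock (&conn->lock);

if (trans) {
        /* ... use the transport ... */
        rpc_transport_unref (trans);
}
------------------------------------------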
Change-Id: Iae28e8a657d02689963c510114ad7cb7e6764e62
BUG: 962619
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/6751
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
| |
Change-Id: If4e4b95fe025a412f25313d83c780046dfec5116
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/6716
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
| |
Change-Id: I7f4dd2ae4225c8d3783417d0c3d415178f04c0da
BUG: 1067011
Signed-off-by: Vijay Bellur <vbellur@redhat.com>
Reviewed-on: http://review.gluster.org/7031
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
| |
Change-Id: Ib2bf6dd564fb7e754d5441c96715b65ad2e21441
BUG: 1065611
Signed-off-by: Vijay Bellur <vbellur@redhat.com>
Reviewed-on: http://review.gluster.org/7007
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
| |
Change-Id: I6bf6101aa0b5d6624891a8ebed2ac1fec2e11e1c
Reviewed-on: http://review.gluster.org/6948
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
| |
Fix memory leaks, reported by Coverity, in failure code
paths in socket and glusterd.
CIDs: 1124777, 1124781, 124782
Change-Id: I63472c6b5900f308f19e64fc93bf7ed2f7b06ade
BUG: 789278
Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
Reviewed-on: http://review.gluster.org/6954
Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
| |
* As of now, clients mounting from within the storage pool, using that
machine's IP/hostname, are trusted clients (i.e. clients local to the
glusterd).
* Be careful when the request itself comes in as nfsnobody (e.g. posix
tests). Hence move the squashing part to protocol/server, when it creates
a new frame for the request, instead of doing it in the auth part of the
RPC layer.
* For NFS servers, do root-squashing without checking whether the client
is trusted, as all the NFS servers run within the storage pool and hence
are trusted clients for the bricks.
* Provide one more mount option that says whether root-squash should or
should not happen. This value is given priority only for trusted clients;
for non-trusted clients, the volume option takes priority. But for trusted
clients, if root-squash should not happen, they have to be mounted with
the root-squash=no option (see the example below). This is done because
blocking root-squashing for trusted clients by default would cause
problems for SMB and UFO clients, whose requests have to be squashed if
the option is enabled.
* For geo-replication and defrag clients, do not do root-squashing.
* Introduce a new option in open-behind for doing a read after a
successful open.
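An illustrative mount invocation for a trusted client that must not be
squashed; the host and volume names are hypothetical, and this assumes
the mount script passes the root-squash=no option through as described
above:
------------------------------------------
mount -t glusterfs -o root-squash=no server1:/testvol /mnt/testvol
------------------------------------------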
Change-Id: I8a8359840313dffc34824f3ea80a9c48375067f0
BUG: 954057
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/4863
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
| |
With this patch we replace the existing cluster-wide lock taken
on glusterds across the cluster with volume locks, which are also
taken on glusterds across the cluster but are volume specific.
With volume locks we are able to perform more than one gluster
operation at the same time, as long as the operations are being
performed on different volumes.
We maintain a global list of volume locks (using a dict as the list)
keyed by volume name, which saves the uuid of the originator
glusterd. These locks are held and released per volume transaction.
In order to achieve multiple gluster operations occurring at the
same time, we also separate opinfos in the op-state-machine as part
of this patch. To do so, we generate a unique transaction id (uuid)
per gluster transaction. An opinfo is then associated with this
transaction id, which is used throughout the transaction. We
maintain a run-time global list (using a dict) of transaction ids
and their respective opinfos to achieve this.
Upstream Feature Page: http://www.gluster.org/community/documentation/index.php/Features/glusterd-volume-locks
Change-Id: Iaad505a854bac8de8f83beec0357eb6cde3f7ea8
BUG: 1011470
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/5994
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
| |
Establish the connection to quotad before marking quota as enabled.
Without this patch there is a window of time during which quota is marked
as enabled in the quota-enforcer but the connection to quotad has not yet
been established. Any check-limit done during this period can result in a
failed fop because quotad is unavailable.
Change-Id: I0d509fabc434dd55ce9ec59157123524197fcc80
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
BUG: 969461
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-on: http://review.gluster.org/6572
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
| |
'volume profile info' automatically clears incremental stats. There
isn't a command to:
- fetch stats without clearing incremental stats, and
- clear cumulative and incremental stats
This change introduces two arguments (i.e. peek and clear). 'clear'
will wipe both incremental and cumulative stats. 'peek' fetches stats
without wiping incremental stats.
'volume profile info peek' - fetches incremental and cumulative stats
without wiping incremental stats
'volume profile info incremental peek' - fetches incremental stats
without wiping incremental stats
'volume profile info clear' - clears both incremental and cumulative
stats
Change-Id: I91834515ad672eca5f882809941147d7d997c4c9
BUG: 1047416
Signed-off-by: Dawit Alemu <dalemu@redhat.com>
Reviewed-on: http://review.gluster.org/6620
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
| |
Change-Id: I522c30a600e712be9cc09393104e228e4d8e13f5
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/6752
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
| |
According to libtool three individual numbers stand for
CURRENT:REVISION:AGE, or C:R:A for short. The libtool
script typically tacks these three numbers onto the end
of the name of the .so file it creates. The formula for
calculating the file numbers on Linux and Solaris is
/path/to/library/<library_name>.(C - A).(A).(R)
As you release new versions of your library, you will
update the library's C:R:A. Although the rules for changing
these version numbers can quickly become confusing, a few
simple tips should help keep you on track. The libtool
documentation goes into greater depth.
In essence, every time you make a change to the library and
release it, the C:R:A should change. A new library should start
with 0:0:0. Each time you change the public interface
(i.e., your installed header files), you should increment the
CURRENT number. This is called your interface number. The main
use of this interface number is to tag successive revisions
of your API.
The AGE number is how many consecutive versions of the API the
current implementation supports. Thus if the CURRENT library
API is the sixth published version of the interface and it is
also binary compatible with the fourth and fifth versions
(i.e., the last two), the C:R:A might be 6:0:2. When you break
binary compatibility, you need to set AGE to 0 and of course
increment CURRENT.
The REVISION marks a change in the source code of the library
that doesn't affect the interface-for example, a minor bug fix.
Anytime you increment CURRENT, you should set REVISION back to 0.
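A worked example with hypothetical numbers: a library released with
C:R:A = 6:0:2 would, per the formula above, be installed on Linux as
libfoo.so.4.2.0, since (C - A) = 4, A = 2 and R = 0. A later bug-fix-only
release would become 6:1:2 (libfoo.so.4.2.1), while an incompatible
interface change would move it to 7:0:0 (libfoo.so.7.0.0).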
Change-Id: Id72e74c1642c804fea6f93ec109135c7c16f1810
BUG: 862082
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-on: http://review.gluster.org/5645
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
| |
With 64, the NFS server hangs under a large I/O load (~64 threads writing
to the NFS server). Test results from Ben England (performance expert)
suggest setting it to 16 instead of 64.
Change-Id: I418ff5ba0a3e9fdb14f395b8736438ee1bbd95f4
BUG: 1008301
Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
Reviewed-on: http://review.gluster.org/6696
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: ben england <bengland@redhat.com>
| |
This patch prevents spurious handling of pollin/pollout events on an
'un-connected' socket, when outgoing packets to its remote endpoint are
'dropped' using iptables(8) rules.
For example:
iptables -I OUTPUT -p tcp --dport 24007 -j DROP
Change-Id: I1d3f3259dc536adca32330bfb7566e0b9a521e3c
BUG: 1048188
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/6627
Reviewed-by: Anand Avati <avati@redhat.com>
Tested-by: Anand Avati <avati@redhat.com>
| |
Code section is bogus!
------------------------------------------
370: if (!auth->authops->request_init)
371: ret = auth->authops->request_init (req, auth->authprivate);
------------------------------------------
This seems to have never been used historically: the condition above is
true only when request_init is NULL, so
"authops->request_init() --> auth_glusterfs_{v2,}_request_init()"
never actually executes.
On top of that under "rpcsvc_request_init()"
verf.flavour and verf.datalen are initialized from what is
provided through 'callmsg'.
------------------------------------------
req->verf.flavour = rpc_call_verf_flavour (callmsg);
req->verf.datalen = rpc_call_verf_len (callmsg);
/* AUTH */
rpcsvc_auth_request_init (req);
return req;
------------------------------------------
So the code in 'auth_glusterfs_{v2,}_request_init()'
performing this operation will over-write the original
flavour and datalen.
------------------------------------------
if (!req)
return -1;
memset (req->verf.authdata, 0, GF_MAX_AUTH_BYTES);
req->verf.datalen = 0;
req->verf.flavour = AUTH_NULL;
------------------------------------------
Refactor the whole code into a more understandable version
and also avoid the potential NULL dereference.
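A minimal sketch of the obvious correction to the condition quoted above
(the actual refactor in this patch may be structured differently):
------------------------------------------
if (auth->authops->request_init)
        ret = auth->authops->request_init (req, auth->authprivate);
------------------------------------------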
Change-Id: I1a430fcb4d26de8de219bd0cb3c46c141649d47d
BUG: 1049735
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-on: http://review.gluster.org/6591
Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
| |
1. The routine rpcsvc_drc_init() is only used during initialization of
the NFS xlator. It should just check the nfs.drc option and init the DRC
feature accordingly. If the option is set to OFF, rpcsvc_drc_init() still
allocates memory for svc.drc, sets ret to 0 and goes to the out: block,
where drc is leaked.
2. rpcsvc_drc_init() should just allocate svc.drc and init it. Here
svc.drc can never already be valid.
3. If svc.drc gets init'd here, there is no point in checking the drc
type here.
Change-Id: I4085771cdb8c9c15d1b9c548b77929a37f27c124
BUG: 1047902
Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
Reviewed-on: http://review.gluster.org/6628
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
| |
In rpcsvc_submit_generic() (FILE: rpc/rpc-lib/src/rpcsvc.c), while caching
the reply (DRC), the code does not check whether DRC is ON; it goes ahead
assuming DRC is on and tries to take a LOCK on drc.
FIX: Put a check on svc->drc using rpcsvc_need_drc().
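A minimal sketch of the guard described above (illustrative only, not the
exact hunk; the lock member shown is assumed):
------------------------------------------
if (rpcsvc_need_drc (svc)) {
        LOCK (&svc->drc->lock);
        {
                /* ... cache the reply ... */
        }
        UNLOCK (&svc->drc->lock);
}
------------------------------------------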
Change-Id: I52c57280487e6061c68fd0b784e1cafceb2f3690
BUG: 1048072
Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
Reviewed-on: http://review.gluster.org/6632
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
| |
Introduce new options to modify the behaviour of server.root-squash.
With server.anonuid and server.anongid the uid/gid can be specified and
the root user (uid=0 and gid=0) will be mapped to the given uid/gid
instead of nfsnobody (uid=65534 and gid=65534).
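Illustrative usage of the new options (the volume name and IDs are
hypothetical):
------------------------------------------
gluster volume set <VOLNAME> server.root-squash on
gluster volume set <VOLNAME> server.anonuid 1000
gluster volume set <VOLNAME> server.anongid 1000
------------------------------------------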
Many thanks to Vikhyat Umrao for writing the majority of the test-case!
Change-Id: I6379a3d2ef52b9b9707f2f6f0529657580c8d779
BUG: 1043886
CC: Vikhyat Umrao <vumrao@redhat.com>
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/6546
Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
Reviewed-by: Vikhyat Umrao <vumrao@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
| |
Change-Id: I74788b63dd1c14507aa6d65182ea4b87a2e1f389
BUG: 1046308
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/6589
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
| |
rpc:
- On a RPC_TRANSPORT_CLEANUP event, rpc_clnt_notify calls the registered
notifyfn with a RPC_CLNT_DESTROY event. The notifyfn should properly
clean up the saved mydata on this event.
- Break the reconnect chain when an rpc client is disabled. This prevents
new disconnect events which can lead to crashes.
glusterd:
- Added support for RPC_CLNT_DESTROY in glusterd_brick_rpc_notify
- Use a common glusterd_rpc_clnt_unref() function throughout glusterd in
place of rpc_clnt_unref(). This function correctly gives up the
big-lock before performing the unref.
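A minimal sketch of the "give up the big-lock around the unref" idea,
assuming glusterd's big lock is the synclock in its private structure;
the names are illustrative:
------------------------------------------
glusterd_conf_t *priv = THIS->private;

synclock_unlock (&priv->big_lock);
{
        rpc_clnt_unref (rpc);
}
synclock_lock (&priv->big_lock);
------------------------------------------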
Change-Id: I93230441c5089039643fc9f5632477ef1b695348
BUG: 962619
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/5512
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
| |
-> handle option validation cases in reset case.
-> Creating valid conf path when glusterd restarts.
-> Reading the gsyncd worker thread status and displaying it.
-> Displaying status-detail per worker.
-> Fetch checkpoint info in geo-rep status.
-> use-tarssh value validation added.
misc: miscellaneous geo-rep fixes related to cluster, logrotate, etc.:
-> cluster/dht: fix 'stime' getxattr getting overwritten.
-> cluster/afr: return max of 'stime' values in subvol.
-> geo-rep-logrotate: Sending SIGHUP to geo-rep auxiliary.
-> cluster/dht: fix convoluted logic while aggregating.
-> cluster/*: fix 'stime' min/max fetch logic.
Change-Id: I811acea0bbd6194797a3e55d89295d1ea021ac85
BUG: 1036552
Signed-off-by: Ajeet Jha <ajha@redhat.com>
Reviewed-on: http://review.gluster.org/6405
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@gmail.com>
Reviewed-by: Anand Avati <avati@redhat.com>
| |
'volume profile info' fetches both cumulative and incremental
I/O statistics. There isn't a way to fetch just cumulative or
incremental statistics.
This change introduces two optional arguments, namely "incremental"
and "cumulative", that can be tacked on to 'volume profile info'.
In other words, the new command format is
volume profile <VOLNAME> {start | info [incremental | cumulative]
| stop} [nfs]
'volume profile info incremental' - fetches incremental stats
'volume profile info cumulative' - fetches cumulative stats
'volume profile info' - fetches incremental and cumulative stats
Change-Id: I5ddb45d990542ea611d23d251efebfec46f472d0
BUG: 1030580
Signed-off-by: Dawit Alemu <dalemu@redhat.com>
Reviewed-on: http://review.gluster.org/6264
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
| |
Change-Id: Ibd2edc5608ae6d3370607bff1c626c8347c4deda
BUG: 1031887
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/6337
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
| |
IOV_MAX is the maximum supported vector count on a given platform.
Limit the count to IOV_MAX if it is higher. As we are performing
non-blocking IO, getting a smaller return value is handled naturally.
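A minimal sketch of the clamping described above (illustrative, not the
exact hunk in the socket transport):
------------------------------------------
#include <limits.h>     /* IOV_MAX */

int count = opcount;
if (count > IOV_MAX)
        count = IOV_MAX;   /* the rest is written on the next event */

ret = writev (sockfd, opvector, count);
------------------------------------------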
Change-Id: I94ef67a03ed0e10da67a776af2b55506bf721611
BUG: 1034398
Signed-off-by: Anand Avati <avati@redhat.com>
Reviewed-on: http://review.gluster.org/6354
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@gmail.com>
| |
Following are the cli commands that are new/re-worked:
======================================================
volume quota <VOLNAME> {enable|disable|list [<path> ...]|remove <path>| default-soft-limit <percent>} |
volume quota <VOLNAME> {limit-usage <path> <size> [<percent>]} |
volume quota <VOLNAME> {alert-time|soft-timeout|hard-timeout} {<time>}
volume status [all | <VOLNAME> [nfs|shd|<BRICK>|quotad]] [detail|clients|mem|inode|fd|callpool]
volume statedump <VOLNAME> [nfs|quotad] [all|mem|iobuf|callpool|priv|fd|inode|history]
glusterd changes:
=================
* Quota limits are now set as extended attributes by glusterd from
the aux mount created by the cli.
* The gfids of the directories on which quota limits are set
for a given volume are stored in binary format in
/var/lib/glusterd/vols/<volname>/quota.conf, whose cksum and
version are stored in
/var/lib/glusterd/vols/<volname>/quota.cksum.
Original-author: Krutika Dhananjay <kdhananj@redhat.com>
Original-author: Krishnan Parthasarathi <kparthas@redhat.com>
BUG: 969461
Change-Id: If32bba36c67f9c2a30417af9c6389045b2b7c13b
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-on: http://review.gluster.org/6003
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
| |
* Two stages of quota enforcement are done:
Soft and hard quota. Upon reaching the soft quota limit on a directory,
it logs/alerts in the quota daemon log (i.e. DEFAULT_LOG_DIR/quotad.log),
and no more writes are allowed after the hard quota limit. After reaching
the soft limit, the daemon alerts the user/admin repeatedly, every
'alert-time', which is configurable.
* The quota enforcer is moved to the server side.
It takes care of enforcing quota. Since the enforcer doesn't have the
cluster view, it relies on another service called the quota-aggregator.
The aggregator, on query, can return the size of a directory based on the
cluster view.
The enforcer is always loaded in the server graph and is bypassed if
the feature is not enabled.
Options specific to the enforcer:
server-quota - Specifies whether the feature is on/off. It is used
to bypass quota if turned off.
deem-statfs - If set to on, quota limits are taken into consideration
while estimating the fs size (df command); see the example after this
list. The algorithm followed is:
i. Adjust statvfs based on the limit configured on root.
ii. If a limit is set on the inode passed, use the size/limits on that
inode to populate statvfs. Otherwise, use the size/limits configured on
root.
iii. Upon statvfs, update the ctx->size on the inode.
iv. Don't let DHT aggregate; instead take the maximum of the usages from
the subvols of DHT, since each of them contains the complete information.
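A worked example with hypothetical numbers: with deem-statfs on and a
10 GB limit configured on the volume root, of which 4 GB is consumed, a
df on the mount would report a total size of 10 GB and roughly 6 GB free,
irrespective of the (typically much larger) size of the backing
filesystem.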
The enforcer also makes use of the gfid-to-path conversion functionality
to work correctly when a client like NFS predominantly relies on
nameless lookups.
* The quota aggregator acts as a thin client to provide the cluster view.
It's a lightweight *gluster client* process with no mount point,
started upon enabling quota or restarting the volume. This is a
single process run on each brick, which can answer queries on all
volumes in the cluster. Its volfile is stored in
GLUSTERD_DEFAULT_WORKING_DIR/quotad/quotad.vol.
Credits:
Raghavendra Bhat <rabhat@redhat.com>
Varun Shastry <vshastry@redhat.com>
Shishir Gowda <sgowda@redhat.com>
Kruthika Dhananjay <kdhananj@redhat.com>
Brian Foster <bfoster@redhat.com>
Krishnan Parthasarathi <kparthas@redhat.com>
Change-Id: Id1cb25b414951da34c665a55f77385d482e0f9de
BUG: 969461
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-on: http://review.gluster.org/5952
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
| |
1) Fix the typo in NFS default ACL
The typo was introduced as part of the Fix to BZ 1009210 i.e.
http://review.gluster.org/5980. The user ACL xattr structure
was passed to default ACL xattr.
2) Clean up NFS code to avoid unnecessary SEGV in
rpcsvc_drc_reconfigure() which was not validating the
svc->drc. Add a routine rpcsvc_drc_deinit() to handle
the clean up of DRC specific data structures. For init(),
use rpcsvc_drc_init().
3) nfs_init_state() was returning the wrong value even if the
registration with the portmapper failed, causing the NFS
server process to hang around. As a result it used to
get a SEGV during rpcsvc_drc_reconfigure().
4) Clean up memfactor usage across nfs.c and nfs3.c.
Change-Id: I5cea26cb68dd8a822ec0ae104952f67fe63fa703
BUG: 1009210
Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
Reviewed-on: http://review.gluster.org/6329
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
| |
Implement reconfigure() for the NFS xlator so that volume set/reset won't
restart the NFS server process. A few options, however, cannot be
reconfigured dynamically, e.g. nfs.mem-factor, nfs.port, etc., which need
NFS to be restarted.
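A minimal sketch of the xlator reconfigure() hook shape; the single
option read and the nfs_state field shown are illustrative, not the full
set handled by this patch:
------------------------------------------
int
reconfigure (xlator_t *this, dict_t *options)
{
        struct nfs_state *nfs = this->private;
        int               ret = 0;

        /* hypothetical single option; the real patch re-reads many */
        ret = dict_get_str_boolean (options, "nfs.enable-ino32", 0);
        if (ret != -1)
                nfs->enable_ino32 = ret;

        return 0;
}
------------------------------------------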
Change-Id: Ic586fd55b7933c0a3175708d8c41ed0475d74a1c
BUG: 1027409
Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
Reviewed-on: http://review.gluster.org/6236
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
| |
Remove bd_map xlator and CLI related changes.
Change-Id: If7086205df1907127c1a1fa4ba603f1c48421d09
BUG: 1028672
Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
Reviewed-on: http://review.gluster.org/5747
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
| |
Add support for a new ZEROFILL fop. Zerofill writes zeroes to a file in
the specified range. This fop will be useful when a whole file needs to
be initialized with zero (could be useful for zero filled VM disk image
provisioning or during scrubbing of VM disk images).
Client/application can issue this FOP for zeroing out. Gluster server
will zero out required range of bytes ie server offloaded zeroing. In
the absence of this fop, client/application has to repetitively issue
write (zero) fop to the server, which is very inefficient method because
of the overheads involved in RPC calls and acknowledgements.
WRITESAME is a SCSI T10 command that takes a block of data as input and
writes the same data to other blocks; this write is handled completely
within the storage and hence is known as offload. Linux now has support
for the SCSI WRITESAME command, which is exposed to the user in the form
of the BLKZEROOUT ioctl. The BD xlator can exploit the BLKZEROOUT ioctl
to implement this fop. Thus zeroing-out operations can be completely
offloaded to the storage device, making it highly efficient.
The fop takes two arguments offset and size. It zeroes out 'size' number
of bytes in an opened file starting from 'offset' position.
This patch adds zerofill support to the following areas:
- libglusterfs
- io-stats
- performance/md-cache,open-behind
- quota
- cluster/afr,dht,stripe
- rpc/xdr
- protocol/client,server
- io-threads
- marker
- storage/posix
- libgfapi
Client applications can exploit this fop by using glfs_zerofill,
introduced in libgfapi. FUSE support for this fop has not been added as
there is no system call for this fop.
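A minimal libgfapi sketch of zero-filling a region; the include path is
illustrative, and a glfs_zerofill (fd, offset, length) signature is
assumed:
------------------------------------------
#include <api/glfs.h>

int
zero_one_mib (glfs_fd_t *fd)
{
        /* zero out 1 MiB starting at offset 0 */
        return glfs_zerofill (fd, 0, 1048576);
}
------------------------------------------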
Changes from previous version 3:
* Removed redundant memory failure log messages
Changes from previous version 2:
* Rebased and fixed build error
Changes from previous version 1:
* Rebased for latest master
TODO :
* Add zerofill support to trace xlator
* Expose zerofill capability as part of gluster volume info
Here is a performance comparison of server-offloaded zerofill vs zeroing
out using repeated writes.
[root@llmvm02 remote]# time ./offloaded aakash-test log 20
real 3m34.155s
user 0m0.018s
sys 0m0.040s
[root@llmvm02 remote]# time ./manually aakash-test log 20
real 4m23.043s
user 0m2.197s
sys 0m14.457s
[root@llmvm02 remote]# time ./offloaded aakash-test log 25;
real 4m28.363s
user 0m0.021s
sys 0m0.025s
[root@llmvm02 remote]# time ./manually aakash-test log 25
real 5m34.278s
user 0m2.957s
sys 0m18.808s
The argument 'log' is a file which we want to use for logging purposes,
and the third argument is the size in GB.
As we can see there is a performance improvement of around 20% with this
fop.
Change-Id: I081159f5f7edde0ddb78169fb4c21c776ec91a18
BUG: 1028673
Signed-off-by: Aakash Lal Das <aakash@linux.vnet.ibm.com>
Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
Reviewed-on: http://review.gluster.org/5327
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
| |
Implement a limit on the total number of outstanding RPC requests
from a given client. Once the limit is reached, the client socket
is removed from POLL-IN event polling.
Change-Id: I8071b8c89b78d02e830e6af5a540308199d6bdcd
BUG: 1008301
Signed-off-by: Anand Avati <avati@redhat.com>
Reviewed-on: http://review.gluster.org/6114
Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
| |
Change-Id: I982cf7619463983c04b401d70a76635991d072d2
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/6091
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
| |
gettimeofday() returns the current wall clock time and timezone.
Using these functions in order to measure the passage of time
(how long an operation took) therefore seems like a no-brainer.
This approach suffers from some limitations:
a. They have a low resolution: “high-performance” timing, by
definition, requires clock resolutions into the microseconds
or better.
b. They can jump forwards and backwards in time: computer
clocks all tick at slightly different rates, which causes
the time to drift. Most systems have NTP enabled, which
periodically adjusts the system clock to keep it in sync
with “actual” time. The adjustment can cause the clock to
suddenly jump forward (artificially inflating your timing
numbers) or jump backwards (causing your timing calculations
to go negative or hugely positive). In such cases the timer
thread could go into an infinite loop.
From 'man gettimeofday':
----------
..
..
The time returned by gettimeofday() is affected by discontinuous
jumps in the system time (e.g., if the system administrator manually
changes the system time). If you need a monotonically increasing
clock, see clock_gettime(2).
..
..
----------
Rationale:
For calculating interval timing for the timer thread, all that's
needed is a clock that acts as a simple counter incrementing at a
stable rate.
To avoid the jumps caused by using "wall time", this counter must
be monotonic, i.e. it can never “tick” backwards, ever.
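A minimal sketch of interval measurement with a monotonic clock, as
suggested by the clock_gettime(2) reference above:
------------------------------------------
#include <time.h>

struct timespec start, end;
double elapsed;

clock_gettime (CLOCK_MONOTONIC, &start);
/* ... the interval being timed ... */
clock_gettime (CLOCK_MONOTONIC, &end);

elapsed = (end.tv_sec - start.tv_sec) +
          (end.tv_nsec - start.tv_nsec) / 1e9;   /* seconds */
------------------------------------------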
Change-Id: I701d31e71a85a73d21a6c5cd15583e7a5a645eeb
BUG: 1017993
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-on: http://review.gluster.org/6070
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
| |
Currently, to know the number of files to be healed, the user either has
to go to the backend and check the number of entries present in the
indices/xattrop directory, or run the 'gluster volume heal vol-name info'
command. If a volume consists of a large number of bricks, going to each
backend and counting the entries is time-consuming; and with the info
command, if the number of entries in the indices/xattrop directory is
very large, it also consumes time.
So as a feature, a new command is implemented.
Command 1: gluster volume heal vn statistics heal-count
This command will get the number of entries present in
every brick of a volume. The output displays only entries
count.
Command 2: gluster volume heal vn statistics heal-count
replica 192.168.122.1:/home/user/brickname
If we are concerned with just one replica, providing any one brick
of that replica will get the number of entries to be healed for
that replica only.
Example:
Replicate volume with replica count 2.
Backend status:
--------------
[root@dhcp-0-17 xattrop]# ls -lia | wc -l
1918
NOTE: Out of 1918, 2 entries are <xattrop-gfid> dummy
entries so actual no. of entries to be healed are
1916.
[root@dhcp-0-17 xattrop]# pwd
/home/user/2ty/.glusterfs/indices/xattrop
Command output:
--------------
Gathering count of entries to be healed on volume volume3 has been successful
Brick 192.168.122.1:/home/user/22iu
Status: Brick is Not connected
Entries count is not available
Brick 192.168.122.1:/home/user/2ty
Number of entries: 1916
Change-Id: I72452f3de50502dc898076ec74d434d9e77fd290
BUG: 1015990
Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
Reviewed-on: http://review.gluster.org/6044
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
| |
"gluster volume heal volumename statistics" command gives the summary
of the afr crawl done based on the entries present in the xattrop
directory. Whenever afr crawls are attempted, the beginning time of
crawl, end time of crawl, no of files healed, heal-failed count and
number of files in split brain are shown along with the type of the
crawl. If a crawl is already in progress, it will give the number
of files healed, the heal-failed count and the number of files in
split-brain since the beginning of the crawl; instead of the end time of
the crawl, a "CRAWL IN PROGRESS" message will be shown.
Output format:
command: "gluster volume heal volume-name statistics"
Output:
Gathering afr crawl statistics crawl statistics on volume volume-name
has been successful
------------------------------------------------
Crawl statistics for brick no 0
Hostname of brick 192.168.122.248
Starting time of crawl: Wed Jul 10 15:52:38 2013
Ending time of crawl: Wed Jul 10 15:52:38 2013
Type of crawl: INDEX
No. of entries healed: 0
No. of entries in split-brain: 0
No. of heal failed entries: 0
Starting time of crawl: Wed Jul 10 15:52:38 2013
Ending time of crawl: Wed Jul 10 15:52:38 2013
Type of crawl: INDEX
No. of entries healed: 0
No. of entries in split-brain: 0
No. of heal failed entries: 0
------------------------------------------------
Crawl statistics for brick no 1
Hostname of brick 192.168.122.1
Starting time of crawl: Wed Jul 10 15:52:42 2013
Ending time of crawl: Wed Jul 10 15:52:42 2013
Type of crawl: INDEX
No. of entries healed: 0
No. of entries in split-brain: 0
No. of heal failed entries: 0
Starting time of crawl: Wed Jul 10 15:52:42 2013
Ending time of crawl: Wed Jul 10 15:52:42 2013
Type of crawl: INDEX
No. of entries healed: 0
No. of entries in split-brain: 0
No. of heal failed entries: 0
--------------------------------------------------
Change-Id: I10bf9d10b005741db9973fb1352e0dd59ed99aa9
BUG: 949400
Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
Reviewed-on: http://review.gluster.org/4790
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
| |
Change-Id: I290cd983bd0dff2e32e5ee90a12e888a3b31c6fd
BUG: 969461
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-on: http://review.gluster.org/5954
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
| |
oVirt's Gluster Integration needs an inexpensive command that can be
executed every 10 seconds to monitor async tasks and their parameters,
for all volumes.
The solution involves adding a 'tasks' sub-command to 'volume status'
to fetch only the async task IDs, type and other relevant parameters.
Only the originator glusterd participates in this command as all the
information needed is available on all the nodes. This is to make the
command suitable for being executed every 10 seconds.
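Illustrative invocations of the new sub-command (the volume name is a
placeholder):
------------------------------------------
gluster volume status <VOLNAME> tasks
gluster volume status all tasks
------------------------------------------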
Change-Id: I1edc607baf29b001a5585079dec681d7c641b3d1
BUG: 1012346
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/6006
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
| |
Problem:
Incorrect NFS ACL encoding causes "system.posix_acl_default"
setxattr failure on bricks on XFS file system. XFS (potentially
others?) doesn't understand when the 0x10 prefix is added to the
ACL type field for default ACLs (which the Linux NFS client adds)
which causes setfacl()->setxattr() to fail silently. NFS client
adds NFS_ACL_DEFAULT(0x1000) for default ACL.
FIX:
Mask the prefix (added by NFS client) OFF, so the setfacl is not
rejected when it hits the FS.
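A minimal sketch of the masking idea; the define mirrors the
NFS_ACL_DEFAULT value mentioned above, while the structure and field
names are illustrative:
------------------------------------------
#define NFS_ACL_DEFAULT 0x1000

/* strip the default-ACL marker added by the NFS client before the
   type field reaches setxattr() on the brick filesystem */
aclentry->type &= ~NFS_ACL_DEFAULT;
------------------------------------------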
Original patch by: "Richard Wareing"
Change-Id: I17ad27d84f030cdea8396eb667ee031f0d41b396
BUG: 1009210
Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
Reviewed-on: http://review.gluster.org/5980
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
| |
Block all signals except those which are set up for explicit handling
in glusterfs_signals_setup(). Since thread-spawning code in
libglusterfs and xlators can get called from application threads
when used through libgfapi, it is necessary to do this blocking.
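A minimal sketch of blocking all signals in a spawning thread before
creating workers (illustrative; the real patch wires this into the common
thread-creation paths):
------------------------------------------
#include <signal.h>
#include <pthread.h>

sigset_t set;

sigfillset (&set);                        /* every signal */
pthread_sigmask (SIG_BLOCK, &set, NULL);  /* new threads inherit this mask */
------------------------------------------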
Change-Id: Ia320f80521a83d2edcda50b9ad414583a0175281
BUG: 1011662
Signed-off-by: Anand Avati <avati@redhat.com>
Reviewed-on: http://review.gluster.org/5995
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
| |
Problem:
Currently the CLI rebalance status command output does not indicate the
'type' of rebalance, i.e. whether a full rebalance or only a fix-layout
was carried out.
Fix: After the rebalance status of all peers is received by the
originator glusterd, alter it to reflect the type of rebalance
before passing it on to the CLI process.
Change-Id: I1940ffda0d36e25e5b33c84a0ea210394cc9e1d3
BUG: 1004744
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/5826
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
| |
For RPC requests having a large aux group list, allocate the large list
on demand; otherwise use a small static array by default.
Without this patch, glusterfsd allocates 140+MB of resident memory
just to get started and initialized.
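A minimal sketch of the small-buffer/on-demand pattern described above;
the threshold and names are illustrative:
------------------------------------------
#define SMALL_GROUP_COUNT 128                 /* hypothetical threshold */

gid_t  groups_small[SMALL_GROUP_COUNT];       /* embedded, no allocation */
gid_t *groups = groups_small;

if (ngroups > SMALL_GROUP_COUNT)
        groups = calloc (ngroups, sizeof (gid_t));   /* only when needed */
------------------------------------------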
Change-Id: I3a07212b0076079cff67cdde18926e8f3b196258
Signed-off-by: Anand Avati <avati@redhat.com>
BUG: 953694
Reviewed-on: http://review.gluster.org/5927
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
| |
Allow a list of volfile servers to be specified, to configure service
availability for clients.
The existing backupvolfile-server approach is slow and prone to errors
with the mount script and its combination with RRDNS. In theory the
client should instead use all the available nodes in the 'trusted pool'
by default (right now we don't have a mechanism in place for this).
Nevertheless this patch provides a way for a list of volfile servers to
be provided on the command line, as shown below:
-----------------------------------------------------------------
$ glusterfs -s server1 .. -s serverN --volfile-id=<volname> \
<mount_point>
-----------------------------------------------------------------
OR
-----------------------------------------------------------------
$ mount -t glusterfs -obackup-volfile-servers=<server2>: \
<server3>:...:<serverN> <server1>:/<volname> <mount_point>
-----------------------------------------------------------------
Here ':' is used as the separator for mount script parsing.
These servers will be remembered and tried one after another for
fetching the volfile until the list is exhausted. This ensures that
clients get the 'volume' configs in a consistent manner, avoiding the
need to poll through RRDNS.
Change-Id: If808bb8a52e6034c61574cdae3ac4e7e83513a40
BUG: 986429
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-on: http://review.gluster.org/5400
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
| |
Provides detailed status info in the following format
MASTER <master-vol> SLAVE <slave-vol>
NODE HEALTH UPTIME FILES SYNCD FILES PENDING BYTES PENDING DELETES PENDING
-----------------------------------------------------------------------------------
This patch introduces the "status detail" command to show crawl-related
information in the CLI. These values are "pulled" from gsyncd when
"status detail" is executed.
Change-Id: I1fdaf7180eacce054a864d34971dc160bd7301e1
BUG: 990420
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Reviewed-on: http://review.gluster.org/5590
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Tested-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
| |
A typo which read MAX_AUTH_BYTES instead of GF_MAX_AUTH_BYTES was
picking the value 400 instead of the larger 2048. This causes
failures when the number of aux group ids is large.
Change-Id: Idb8d59aee2690fd53e24c2e09f58a16fe387ef27
BUG: 1000131
Signed-off-by: Anand Avati <avati@redhat.com>
Reviewed-on: http://review.gluster.org/5695
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
| |
Implementation of client_t
The feature page for client_t is at
http://www.gluster.org/community/documentation/index.php/Planning34/client_t
In addition to adding libglusterfs/client_t.[ch] it also extracts/moves
the locktable functionality from xlators/protocol/server to libglusterfs,
where it is used; thus it may now be shared by other xlators too.
This patch is large as it is. Hooking up the state dump is left to do
in phase 2 of this patch set.
(N.B. this change/patch-set supersedes previous change 3689, which was
corrupted during a rebase. That change will be abandoned.)
BUG: 849630
Change-Id: I1433743190630a6d8119a72b81439c0c4c990340
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
Reviewed-on: http://review.gluster.org/3957
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
| |
Commands:
gluster system:: execute gsec_create
gluster volume geo-rep <master> <slave-url> create [push-pem] [force]
gluster volume geo-rep <master> <slave-url> start [force]
gluster volume geo-rep <master> <slave-url> stop [force]
gluster volume geo-rep <master> <slave-url> delete
gluster volume geo-rep <master> <slave-url> config
gluster volume geo-rep <master> <slave-url> status
The geo-replication is distributed. The session will be created, and
gsyncd will be spawned on all relevant nodes, instead of only one
node.
geo-rep: Collecting status detail related data
Added persistent store for saving information about
TotalFilesSynced, TotalSyncTime, TotalBytesSynced
Changes in the status information in socket:
Existing(Ex):
FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;
New(Ex):
FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;SyncTime=0.69978;
TotalSyncTime=2.890044;TotalFilesSynced=6;TotalBytesSynced=143640;
Persistent details stored in
/var/lib/glusterd/geo-replication/${mastervol}/${eSlave}-detail.status
Change-Id: I1db7fc13ffca2e415c05200b0109b1254067f111
BUG: 847839
Original Author: Avra Sengupta <asengupt@redhat.com>
Original Author: Venky Shankar <vshankar@redhat.com>
Original Author: Aravinda VK <avishwan@redhat.com>
Original Author: Amar Tumballi <amarts@redhat.com>
Original Author: Csaba Henk <csaba@redhat.com>
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/5132
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Vijay Bellur <vbellur@redhat.com>
| |
When nfs.addr-namelookup is turned on, if the getaddrinfo call fails
while authenticating the client's ip/hostname, the mount request is
denied.
Change-Id: I744f1c6b9c7aae91b9363bba6c6987b42f7f0cc9
BUG: 947055
Signed-off-by: Rajesh Amaravathi <rajesh@redhat.com>
Reviewed-on: http://review.gluster.org/5143
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
| |
Duplicate request cache provides a mechanism for detecting
duplicate rpc requests from clients. DRC caches replies
and on duplicate requests, sends the cached reply instead of
re-processing the request.
Change-Id: I3d62a6c4aa86c92bf61f1038ca62a1a46bf1c303
BUG: 847624
Signed-off-by: Rajesh Amaravathi <rajesh@redhat.com>
Reviewed-on: http://review.gluster.org/4049
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
| |
Change-Id: Ia8e1af082078f2f791708ba4faa4992bf291dd6e
BUG: 961339
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/5023
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>