Currently, with rmdir, if a directory contains only linkfiles, we remove
all of the linkfiles. This causes a problem when the cached subvolume is
down: we end up with duplicate files showing on the mount point.
Solution: before removing a linkfile, check whether the file exists in
the cached subvolume.
Change-Id: Iedffd0d9298ec8bb95d5ce27c341c9ade81f0d3c
BUG: 1042725
Signed-off-by: Vijaykumar M <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/6500
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Change-Id: Ica246f99b8cdb6c0cf0e9143f50be056e37d3b7f
BUG: 1045309
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/6550
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
1. Moves rpc-coverage.sh from extras/ to tests/basic/
2. Fixes a symlink test
Change-Id: I2fb8f8441434acfd7bd7fff72deedfbd2410d08c
BUG: 764966
Signed-off-by: Vijay Bellur <vbellur@redhat.com>
Reviewed-on: http://review.gluster.org/6609
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
Establish the connection to quotad before marking quota as enabled.
Without this patch there is a window of time when quota is marked as
enabled in the quota-enforcer but the connection to quotad has not yet
been established. Any checklimit done during this period can result in
a failed fop because of the unavailability of quotad.
Change-Id: I0d509fabc434dd55ce9ec59157123524197fcc80
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
BUG: 969461
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-on: http://review.gluster.org/6572
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Accounting was done in the enforcer (though marker is the ultimate
source of truth) to offset the cached directory size becoming stale.
However, with the enforcer being moved to the brick, we can no longer
maintain a correct cluster-wide size for a directory. Hence the
accounting code is removed from the enforcer.
Change-Id: I5ea94234da4da85ed5f5ced1354d8de3454b3fcb
BUG: 969461
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-on: http://review.gluster.org/6434
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Instead of just strings, provide the ability to specify a regex
for the pattern to expect.
Change-Id: I6ada978197dceecc28490a2a40de73a04ab9abcd
Signed-off-by: Anand Avati <avati@redhat.com>
Reviewed-on: http://review.gluster.org/6788
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
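A hedged sketch of what a regex-based expectation might look like in a
.t test; the volinfo_field helper and the $V0 variable are assumptions
drawn from the usual test-harness conventions, and the pattern is
hypothetical:
EXPECT "(Started|Stopped)" volinfo_field $V0 'Status'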
'volume profile info' automatically clears incremental stats. There
isn't a command to:
- fetch stats without clearing incremental stats, and
- clear cumulative and incremental stats.
This change introduces two arguments, 'peek' and 'clear'. 'clear'
wipes both incremental and cumulative stats; 'peek' fetches stats
without wiping incremental stats.
'volume profile info peek' - fetches incremental and cumulative stats
without wiping incremental stats
'volume profile info incremental peek' - fetches incremental stats
without wiping incremental stats
'volume profile info clear' - clears both incremental and cumulative
stats
Change-Id: I91834515ad672eca5f882809941147d7d997c4c9
BUG: 1047416
Signed-off-by: Dawit Alemu <dalemu@redhat.com>
Reviewed-on: http://review.gluster.org/6620
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
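A hedged usage sketch of the new arguments described above; the volume
name 'testvol' is hypothetical:
gluster volume profile testvol info peek
gluster volume profile testvol info incremental peek
gluster volume profile testvol info clear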
Prevent mistaking the "compress" options for storage (at-rest)
compression. The cdc-xlator is implemented to support compression of
network traffic (READ and WRITE FOPs).
URL: http://www.gluster.org/community/documentation/index.php/Features/On-Wire_Compression_+_Decompression
Change-Id: I9fedf4106dcb226d135ab92e4b533aff284881d7
BUG: 1053670
CC: Venky Shankar <vshankar@redhat.com>
CC: Prashanth Pai <ppai@redhat.com>
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/6765
Reviewed-by: Prashanth Pai <ppai@redhat.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Also fixed the check in dht_is_subvol_in_layout to verify whether the
layouts are zeroed out.
Change-Id: I4bf8ebf66d3ef1946309b6c9aac9e79bf8a6d495
BUG: 969461
Signed-off-by: shishir gowda <sgowda@redhat.com>
Signed-off-by: Varun Shastry <vshastry@redhat.com>
Reviewed-on: http://review.gluster.org/6392
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: I86ebe02735ee88598640240aa888e02b48ecc06c
BUG: 1040423
Signed-off-by: Susant Palai <spalai@redhat.com>
Reviewed-on: http://review.gluster.org/6490
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Change-Id: If18cab5992ddc91457782786942971deb1b51ead
BUG: 1023974
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/6155
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Previously, 'df -h' displayed "Transport endpoint is not connected"
for the quota auxiliary mount after the volume was stopped. This
patch unmounts the auxiliary mount on all peer nodes for that volume
when the volume is stopped.
Change-Id: I78abb44386cd8242a532f92c13df8bdb57c78e31
BUG: 1049323
Signed-off-by: Susant Palai <spalai@redhat.com>
Reviewed-on: http://review.gluster.org/6656
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
We have not meddled with the mount point, so there is no need to check
for it again.
Change-Id: I88eed777b6573a320065b9e14c2031db964e36d0
BUG: 1053362
Signed-off-by: Raghavendra Talur <rtalur@redhat.com>
Reviewed-on: http://review.gluster.org/6675
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: I801c6e6ecd6c5a91e487e8e54ec5f684d450a080
BUG: 1047378
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/6687
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
1. errno was being set after gf_log() in posix_{f}handle_pair; this
would cause errno to be overwritten.
2. dht expects -1 as the indication of failure in the setxattr
callback (dht_err_cbk()). posix_{f}setxattr has been changed to set
op_ret to -1 instead of -op_errno.
3. dict_foreach() has been changed to return an error if the invoked
fn() returns < 0.
Bug report and test case credits to Zorro Lang <zlang@redhat.com>
Change-Id: I96c15f12a5d7717b7584ba392f390a0b4f704a98
BUG: 1051896
Signed-off-by: Vijay Bellur <vbellur@redhat.com>
Reviewed-on: http://review.gluster.org/6684
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
- implement ref/unref of entry locks (and fix bad pointer deref crashes)
- code cleanup and deleted various data types
- fix improper read/write lock conflict detection in entrylk
- fix indefinite hang of blocked locks on disconnect
- register locks in client_t synchronously, fix crashes in disconnect path
Change-Id: Id273690c9111b8052139d1847060d1fb5a711924
BUG: 849630
Signed-off-by: Anand Avati <avati@redhat.com>
Reviewed-on: http://review.gluster.org/6638
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Problem:
During entry self-heal, readlink is done at the source and the sink.
When readlink is done at the sink, because the link is not present
there, AFR expects ENOENT. The AFR translator takes decisions for new
link creation based on ENOENT, but the server translator was modified
to return ESTALE, because of which the AFR xlator is not able to heal.
Fix:
The check for inode absence at the server now includes ESTALE as
well.
Change-Id: I319e4cb4156a243afee79365b7b7a5a7823e9a24
BUG: 1046624
Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
Reviewed-on: http://review.gluster.org/6599
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
NFS subdir mount does not respect the nfs.rpc-auth-reject option on
the volume. If the volume itself is being mounted, the AUTH is
validated by mnt3_check_client_net(); but if the client is mounting a
subdir, control takes a different code path, i.e. mnt3_find_export(),
which does not bother about the AUTH.
FIX:
Enforce the AUTH check in mnt3_parse_dir_exports(), which is invoked
by mnt3_find_export() for subdir mounts.
Change-Id: I6fdd3e6bd6cbd32b0d9ca620cc4c30fdaff9ca30
BUG: 1049225
Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
Reviewed-on: http://review.gluster.org/6655
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Update the subvol_count when a peer imports information about the friend
volumes.
Change-Id: Id3884bd5727ff22be7ed87f43a1ec1b5fe34813c
BUG: 1047955
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/6629
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Introduce new options to modify the behaviour of server.root-squash.
With server.anonuid and server.anongid the uid/gid can be specified and
the root user (uid=0 and gid=0) will be mapped to the given uid/gid
instead of nfsnobody (uid=65534 and gid=65534).
Many thanks to Vikhyat Umrao for writing the majority of the test-case!
Change-Id: I6379a3d2ef52b9b9707f2f6f0529657580c8d779
BUG: 1043886
CC: Vikhyat Umrao <vumrao@redhat.com>
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/6546
Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
Reviewed-by: Vikhyat Umrao <vumrao@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
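A hedged configuration sketch for the options introduced above; the
volume name 'testvol' and the uid/gid value 1000 are hypothetical:
gluster volume set testvol server.root-squash on
gluster volume set testvol server.anonuid 1000
gluster volume set testvol server.anongid 1000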
Change-Id: I3114009681d49249fe292f94a464efc419c944cb
BUG: 1037501
Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
Reviewed-on: http://review.gluster.org/6596
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Change-Id: I74788b63dd1c14507aa6d65182ea4b87a2e1f389
BUG: 1046308
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/6589
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Problem:
Whenever a new brick is added into a replicate volume, not all source
bricks are marked as source; only one of them is. Here, 'marked as
source' refers to adding an extended attribute at the backend of a
file corresponding to the newly added brick. The source bricks should
also point to the newly added brick so that heal can be triggered.
Fix:
All source bricks now point to the newly added bricks, and heal can
be triggered based on the extended attributes.
Change-Id: I318e1f779a380c16c448a2d05c0140d8e4647fd4
BUG: 1037501
Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
Reviewed-on: http://review.gluster.org/6540
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Problem:
Reducing the replica count of a volume using the remove-brick command
fails if bricks are specified in a random order.
Fix: Modify subvol_matcher_verify() to permit order-agnostic
replica count reduction.
Change-Id: I1f3d33e82a70d9b69c297f69c4c1b847937d1031
BUG: 1040408
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/6489
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Current code would set owner uid/gid explicitly to 0/0 on start
even if none was specified. Fix it.
Change-Id: I72dec9e79c51bd1eb3af5334c42b7c23b01d0258
BUG: 1040275
Signed-off-by: Anand Avati <avati@redhat.com>
Reviewed-on: http://review.gluster.org/6476
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: Lukáš Bezdička <lukas.bezdicka@gooddata.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
After a graph switch, a PARENT_DOWN event from fuse instructs
protocol/client to shut down all its sockets. However, this event is
sent only when the first fop is received from the fuse mount after
the graph switch, and it is sent only on the previously active graph.
So, if there are multiple graph switches while there is no activity
on the mountpoint, we are left with a list of graphs whose sockets
are still open.
This patch fixes the issue by sending PARENT_DOWN to the old,
now-non-active graph as soon as a new graph is available,
irrespective of whether there is activity on the mount point.
Change-Id: I9e676658d797c0b2dd3ed5ca5a56271bd94b4bb6
BUG: 924726
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-on: http://review.gluster.org/4713
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Also added an option 'wignore' to save ourselves the trouble
of modifying test scripts in our regression test suite as well
as those that are still under review.
Change-Id: Id320c03595506e9da187e766991c19640bd000c5
BUG: 1028281
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/6409
Reviewed-by: Kaushal M <kaushal@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Change-Id: Ibf47ea848bbb7ae37ccf91c40e5fe0e2338438b7
BUG: 1027171
Signed-off-by: Vijaikumar M <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/6233
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
'volume profile info' fetches both cumulative and incremental
I/O statistics. There isn't a way to fetch just cumulative or
incremental statistics.
This change introduces two optional arguments, namely "incremental"
and "cumulative", that can be tacked on to 'volume profile info'.
In other words, the new command format is
volume profile <VOLNAME> {start | info [incremental | cumulative]
| stop} [nfs]
'volume profile info incremental' - fetches incremental stats
'volume profile info cumulative' - fetches cumulative stats
'volume profile info' - fetches incremental and cumulative stats
Change-Id: I5ddb45d990542ea611d23d251efebfec46f472d0
BUG: 1030580
Signed-off-by: Dawit Alemu <dalemu@redhat.com>
Reviewed-on: http://review.gluster.org/6264
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
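A hedged usage sketch of the command format described above; the
volume name 'testvol' is hypothetical:
gluster volume profile testvol start
gluster volume profile testvol info incremental
gluster volume profile testvol info cumulative
gluster volume profile testvol stop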
Problem:
Quota contributions of a file/directory are tracked by the quota
xlator using xattrs on the file. Quota allows these xattrs to be
healed as part of metadata self-heal. This leads to wrong quota
calculations on the brick after self-heal, because the quota xattrs
no longer represent the actual contributions on that brick.
Fix:
Don't let these xattrs be healed as part of metadata self-heal, by
filtering quota xattrs out of listxattr on the file.
Change-Id: Iea68a116595ba271e58c6fdcc3dd21c7bb55ebb3
BUG: 1035576
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/6374
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
A missing indices directory in the bricks leads to unwanted log
messages. Therefore, the indices directory needs to be created as
soon as the brick comes up.
This patch results in the creation of the indices/xattrop directory
as required. It also includes a testcase to test the same.
Change-Id: Ic2aedce00c6c1fb24929ca41f6085aed28709d2c
BUG: 1034085
Signed-off-by: Anuradha <atalur@redhat.com>
Reviewed-on: http://review.gluster.org/6343
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: I2e9a2264b1fd5ebc1ed0aff30225e89acbd0bcb4
BUG: 1034716
Signed-off-by: Vijaykumar M <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/6361
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
The following cli commands are new or re-worked:
======================================================
volume quota <VOLNAME> {enable|disable|list [<path> ...]|remove <path>| default-soft-limit <percent>} |
volume quota <VOLNAME> {limit-usage <path> <size> [<percent>]} |
volume quota <VOLNAME> {alert-time|soft-timeout|hard-timeout} {<time>}
volume status [all | <VOLNAME> [nfs|shd|<BRICK>|quotad]] [detail|clients|mem|inode|fd|callpool]
volume statedump <VOLNAME> [nfs|quotad] [all|mem|iobuf|callpool|priv|fd|inode|history]
glusterd changes:
=================
* Quota limits are now set as extended attributes by glusterd from
the aux mount created by the cli.
* The gfids of the directories on which quota limits are set
for a given volume are stored in the
/var/lib/glusterd/vols/<volname>/quota.conf file in binary format;
their cksum and version are stored in
/var/lib/glusterd/vols/<volname>/quota.cksum.
Original-author: Krutika Dhananjay <kdhananj@redhat.com>
Original-author: Krishnan Parthasarathi <kparthas@redhat.com>
BUG: 969461
Change-Id: If32bba36c67f9c2a30417af9c6389045b2b7c13b
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-on: http://review.gluster.org/6003
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
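A hedged usage sketch of the re-worked quota commands listed above;
the volume name 'testvol', the path '/docs', the 10GB size and the
60-second timeout are hypothetical:
gluster volume quota testvol enable
gluster volume quota testvol limit-usage /docs 10GB
gluster volume quota testvol list /docs
gluster volume quota testvol soft-timeout 60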
* Quota enforcement is done in two stages: soft and hard. Upon
reaching the soft quota limit on a directory, an alert is logged in
the quota daemon log (i.e. DEFAULT_LOG_DIR/quotad.log); no more
writes are allowed after the hard quota limit. After reaching the
soft limit, the daemon alerts the user/admin repeatedly every
'alert-time', which is configurable.
* The quota enforcer is moved to the server side.
It takes care of enforcing quota. Since the enforcer doesn't have
the cluster view, it relies on another service called the
quota-aggregator. The aggregator, on query, can return the size of a
directory based on the cluster view.
The enforcer is always loaded in the server graph and is bypassed if
the feature is not enabled.
Options specific to the enforcer:
server-quota - specifies whether the feature is on/off. It is used
to bypass quota if turned off.
deem-statfs - if set to on, quota limits are taken into
consideration while estimating the fs size (df command). The
algorithm followed is:
i. Adjust statvfs based on the limit configured on root.
ii. If a limit is set on the inode passed, use the size/limits on
that inode to populate statvfs; otherwise, use the size/limits
configured on root.
iii. Upon statvfs, update the ctx->size on the inode.
iv. Don't let DHT aggregate; instead, take the maximum of the usages
from the subvols of DHT, since each of them contains the complete
information.
The enforcer also makes use of the gfid-to-path conversion
functionality to work correctly when a client like nfs predominantly
relies on nameless lookups.
* The quota aggregator acts as a thin client to provide the cluster
view. It is a lightweight *gluster client* process with no mount
point, started upon enabling quota or restarting the volume. This is
a single process running on each brick, which can answer queries on
all volumes in the cluster. Its volfile is stored in
GLUSTERD_DEFAULT_WORKING_DIR/quotad/quotad.vol.
Credits:
Raghavendra Bhat <rabhat@redhat.com>
Varun Shastry <vshastry@redhat.com>
Shishir Gowda <sgowda@redhat.com>
Kruthika Dhananjay <kdhananj@redhat.com>
Brian Foster <bfoster@redhat.com>
Krishnan Parthasarathi <kparthas@redhat.com>
Change-Id: Id1cb25b414951da34c665a55f77385d482e0f9de
BUG: 969461
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-on: http://review.gluster.org/5952
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
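A hedged sketch of toggling the deem-statfs behaviour described above;
whether the option is exposed through 'volume set' under exactly this
key is an assumption, and the volume name 'testvol' is hypothetical:
gluster volume set testvol features.quota-deem-statfs on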
what?
=====
The following is an attempt to generate the paths of a file when
only its gfid is known.
To find the path of a directory, the symlink handle to the
directory maintained in the ".glusterfs" backend directory is
read. The symlink handle is generated using the gfid of the
directory. It (handle) contains the directory's name and parent
gfid, which are used to recursively construct the absolute path as
seen by the user from the mount point.
A similar approach cannot be used for a regular file or a symbolic
link since its hardlink handle, generated using its gfid, doesn't
contain its parent gfid and basename. So xattrs are set to store
the parent gfids and the number of hardlinks to a file or a
symlink having the same parent gfid. When a user/application
requests the paths of a regular file or a symlink with multiple
hardlinks, the paths of the parent directories are generated as
mentioned earlier, using the parent gfids stored in the xattrs. The
base names of the hardlinks (with the same parent gfid) are
determined by matching the actual backend inode numbers of each
entry in the parent directory with that of the hardlink handle.
The xattr is set on a regular file, hard link, and symbolic link as
follows:
Xattr name : trusted.pgfid.<pargfidstr>
Xattr value : <number of hardlinks to a regular file/symlink with
the same parentgfid>
If a regular file, hard link, or symbolic link is created, an xattr
in the above format is set in the backend.
how to use?
===========
This functionality can be used through the getxattr interface. Two
keys - glusterfs.ancestry.dentry and glusterfs.ancestry.path - enable
usage of this functionality. A successful getxattr will have the
result stored under the same keys. The values will be:
glusterfs.ancestry.dentry:
--------------------------
A linked list of gf-dirent structures for all possible paths from
root to this gfid. If there are multiple paths, the linked-list
will be a series of paths one after another. Each path will be a
series of dentries representing all components of the path. This
key is primarily for internal usage within glusterfs.
glusterfs.ancestry.path:
------------------------
A string containing all possible paths from root to this gfid.
Multiple hardlinks of a file or a symlink are displayed as a
colon-separated list (this could interfere with path components
containing ':').
e.g. If there is a file "file1" in root directory with two hardlinks,
"/dir2/link2tofile1" and "/dir1/link1tofile1", then
[root@alpha gfsmntpt]# getfattr -n glusterfs.ancestry.path -e text
file1
glusterfs.ancestry.path="/file1:/dir2/link2tofile1:/dir1/link1tofile1"
Thanks Amar, Avati and Venky for the inputs.
Original Author: Ramana Raja <rraja@redhat.com>
BUG: 990028
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
Change-Id: I0eaa9101e333e0c1f66ccefd9e95944dd4a27497
Reviewed-on: http://review.gluster.org/5951
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Problem:
afr_[f]getxattr_pathinfo_cbks fail the fop even when it succeeded on
one of the bricks. This can happen if the last response to pathinfo
[f]getxattr is a failure.
Fix:
Remember if any of the [f]getxattr_pathinfos are successful and send
that as the op_ret/op_errno value to the xlators above.
Note:
Winding a fop to a client xlator that is not connected to the server
produces an error log. Prevent that by not winding the fop at all
when the client xlator is DOWN.
Change-Id: I846e8c47423ffcfa2eabffe8924534781a36841a
BUG: 1032927
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/6332
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
We needed this macro while writing test cases for quota. With quota,
a directory size is only guaranteed to be within some margin of the
quota limit, not an accurate number. Since we don't know what size to
expect, and the EXPECT macro is not complete enough to accept ranges
of sizes, we can at least write test cases with the EXPECT_NOT macro.
After copying data to an empty file, the size is guaranteed not to be
zero. This is good enough for quota test cases.
Change-Id: I722ebd68044716a5eeaf0bd7e9aae61df8469017
BUG: 1022995
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-on: http://review.gluster.org/6253
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
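A hedged sketch of how the macro might be used in a .t test; the $M0
mount variable is an assumption drawn from the usual test-harness
conventions, and the file name is hypothetical:
dd if=/dev/zero of=$M0/datafile bs=1024 count=100
EXPECT_NOT "0" stat -c %s $M0/datafile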
Change-Id: I85fc6dd393449d365bb908b38c2827b58cb08171
BUG: 1030208
Signed-off-by: Vijaykumar M <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/6262
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Change-Id: I73a0bfa7085d2e71b2489687fa53f5fe7d1e8ea1
BUG: 1028672
Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
Reviewed-on: http://review.gluster.org/6050
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Remove bd_map xlator and CLI related changes.
Change-Id: If7086205df1907127c1a1fa4ba603f1c48421d09
BUG: 1028672
Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
Reviewed-on: http://review.gluster.org/5747
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
* When a writev call occurs, the client compresses the data before
sending it to the server. On the server, the compressed data is
decompressed. Similarly, when a readv call occurs, the server
compresses the data before sending it to the client. On the client,
the compressed data is decompressed. Thus the amount of data sent
over the wire is minimized.
* Compression/decompression is done using the zlib library.
* During normal operation, this is the format of data sent over the
wire:
<compressed-data> + trailer(8)
The trailer contains the CRC32 checksum and the length of the
original uncompressed data. This is used for validation.
HOW TO USE
----------
Turning on compression xlator:
gluster volume set <vol_name> compress on
Configurable options:
gluster volume set <vol_name> compress.compression-level 8
gluster volume set <vol_name> compress.min-size 50
Change-Id: Ib7a66b6f1f70fe002b7c513588cdf75c69370805
BUG: 923540
Original-author : Venky Shankar <vshankar@redhat.com>
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Signed-off-by: Prashanth Pai <nullpai@gmail.com>
Signed-off-by: Prashanth Pai <ppai@redhat.com>
Reviewed-on: http://review.gluster.org/3251
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
... so that subsequent volume commands don't block forever, waiting
for the lock to be released.
Change-Id: I24b5ec47f6982900ab74ff1b492d523f31ecfb7f
BUG: 1022055
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/6122
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
The quota volume reset command without the "force" option is fixed
and no longer fails. It resets unprotected fields and not the
protected ones.
Also, an appropriate message is provided to the user
for the following cases:
1. Only unprotected fields are reset; the "force" option
should be used to reset protected fields.
2. Both protected and unprotected fields are reset.
3. No field was reset; the "force" option is required.
A test case for the same is also added.
Change-Id: I24e8f1be87b79ccd81bf6f933e00608b861c7a16
BUG: 1022905
Signed-off-by: Anuradha <atalur@redhat.com>
Reviewed-on: http://review.gluster.org/6135
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Glusterd changes:
With this patch, glusterd creates a socket file in
DATADIR/run/glusterd.socket, and listens on it for cli requests. It
listens for 2 rpc programs on the socket file:
- The glusterd cli rpc program, for all cli commands
- A reduced glusterd handshake program, just for the 'system:: getspec'
command
The location of the socket file can be changed with the glusterd option
'glusterd-sockfile'.
To retain compatibility with the '--remote-host' cli option, glusterd
also listens for cli requests on port 24007. But, for the sake of
security, it listens using a reduced cli rpc program on that port. The
reduced rpc program contains only read-only procs used for the 'volume
(info|list|status)', 'peer status' and 'system:: getwd' cli commands.
CLI changes:
The gluster cli now uses the glusterd socket file for communicating with
glusterd by default. A new option '--gluster-sock' has been added to
allow specifying the sockfile used to connect. Using the '--remote-host'
option will make cli connect to the given host & port.
Tests changes:
cluster.rc has been modified to make use of socket files and use
different log files for each glusterd.
Some of the tests using cluster.rc have been fixed.
Change-Id: Iaf24bc22f42f8014a5fa300ce37c7fc9b1b92b53
BUG: 980754
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/5280
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
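A hedged usage sketch of the connection options described above; the
hostname 'server1' is hypothetical:
gluster volume info                        # over the glusterd socket file
gluster --remote-host=server1 volume info  # over TCP port 24007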
Currently, to know the number of files to be healed, the user either
has to go to the backend and check the number of entries present in
the indices/xattrop directory, or run the 'gluster volume heal
vol-name info' command. If a volume consists of a large number of
bricks, going to each backend and counting the number of entries is
a time-consuming task; and with the info command, if the number of
entries in the indices/xattrop directory is very large, it will also
consume time.
So, as a feature, a new command is implemented.
Command 1: gluster volume heal vn statistics heal-count
This command gets the number of entries present in every brick of a
volume. The output displays only the entry count.
Command 2: gluster volume heal vn statistics heal-count
replica 192.168.122.1:/home/user/brickname
Use this if we are concerned with just one replica: providing any one
brick of a replica gets the number of entries to be healed for that
replica only.
Example:
Replicate volume with replica count 2.
Backend status:
--------------
[root@dhcp-0-17 xattrop]# ls -lia | wc -l
1918
NOTE: Out of 1918, 2 entries are <xattrop-gfid> dummy
entries, so the actual number of entries to be healed is
1916.
[root@dhcp-0-17 xattrop]# pwd
/home/user/2ty/.glusterfs/indices/xattrop
Command output:
--------------
Gathering count of entries to be healed on volume volume3 has been successful
Brick 192.168.122.1:/home/user/22iu
Status: Brick is Not connected
Entries count is not available
Brick 192.168.122.1:/home/user/2ty
Number of entries: 1916
Change-Id: I72452f3de50502dc898076ec74d434d9e77fd290
BUG: 1015990
Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
Reviewed-on: http://review.gluster.org/6044
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
The if-clause will always return false, causing each regression test to
build the rpms. The following error is listed on the Jenkins Console
Output of the regression test:
./tests/basic/rpm.t: line 42: [: argument expected
BUG: 904005
Change-Id: I343c80d8c57fa0ec8b8a531bb5f5c86209d88d7e
CC: Harshavardhana <harsha@harshavardhana.net>
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/6057
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
Tested-by: Harshavardhana <harsha@harshavardhana.net>
oVirt's Gluster Integration needs an inexpensive command that can be
executed every 10 seconds to monitor async tasks and their parameters,
for all volumes.
The solution involves adding a 'tasks' sub-command to 'volume status'
to fetch only the async task IDs, types and other relevant parameters.
Only the originator glusterd participates in this command, as all the
information needed is available on all the nodes. This makes the
command suitable for being executed every 10 seconds.
Change-Id: I1edc607baf29b001a5585079dec681d7c641b3d1
BUG: 1012346
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/6006
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
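A hedged usage sketch of the sub-command described above; the volume
name 'testvol' is hypothetical:
gluster volume status testvol tasks
gluster volume status all tasks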
Change-Id: Ibfca8ddb7c663d44ed447be13b2eabb7bd393bb3
BUG: 993981
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/6028
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
The previous code provided a filter but never looked for newly
"Added" files, leading to failures in subsequent builds.
This patch fixes the issue by pedantically verifying only the files
which need to be tested.
Change-Id: Ia716acf82ee60f8ffe5e36257f1cc866c6062718
BUG: 904005
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-on: http://review.gluster.org/6016
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
As of today the regression tests are an in-house breed; making them
a new package and distributing it ensures that a larger set of
people use and contribute to them. The package can also be used by
any consumer/user to build their own environment for glusterfs
regression testing, which is today limited to 'upstream' glusterfs
releases and build.gluster.org.
Change-Id: I4f7e9fd1c49982dcf0d788ef6a83ffe895a956ac
BUG: 764966
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-on: http://review.gluster.org/5674
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>