The gfid2path infra stores the "pargfid/bname" as an xattr
value for each non-directory entry. Hardlinks each have a
separate xattr. This xattr key is internal and is not
exposed to applications. A virtual xattr is exposed for
applications to fetch the path from the gfid.
Internal xattr:
trusted.gfid2path.<xxhash>
Virtual xattr:
glusterfs.gfidtopath
getfattr -h -n glusterfs.gfidtopath /<aux-mnt>/.gfid/<gfid>
If there are hardlinks, it returns all the paths separated
by ':'.
A volume set option is introduced to change the delimiter
to a required string of at most 7 characters.
gluster vol set gfid2path-separator ":::"
> Updates: #139
> Change-Id: Ie3b0c3fd8bd5333c4a27410011e608333918c02a
> Signed-off-by: Kotresh HR <khiremat@redhat.com>
> Reviewed-on: https://review.gluster.org/17785
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
> Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
Updates: #139
Change-Id: Ie3b0c3fd8bd5333c4a27410011e608333918c02a
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: https://review.gluster.org/17921
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
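
A minimal sketch of how an application might consume the virtual xattr described above, assuming an aux-gfid mount is available; the mount path and gfid are command-line placeholders and error handling is kept short.

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/xattr.h>

int
main (int argc, char *argv[])
{
        char    paths[4096] = {0,};
        ssize_t len         = -1;

        if (argc < 2) {
                fprintf (stderr, "usage: %s /<aux-mnt>/.gfid/<gfid>\n", argv[0]);
                return EXIT_FAILURE;
        }

        /* same as: getfattr -h -n glusterfs.gfidtopath <arg> */
        len = lgetxattr (argv[1], "glusterfs.gfidtopath",
                         paths, sizeof (paths) - 1);
        if (len < 0) {
                perror ("lgetxattr");
                return EXIT_FAILURE;
        }

        /* hardlinks come back separated by the configured delimiter */
        printf ("%.*s\n", (int)len, paths);
        return EXIT_SUCCESS;
}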
The gfid2path xattr is an internal xattr and applications should not
be allowed to modify it from a gluster mount. This patch blocks
such modifications.
> Updates: #139
> Change-Id: Id2cb29797ee1bd77e0e0d2203a47469fd7203355
> Signed-off-by: Kotresh HR <khiremat@redhat.com>
> Reviewed-on: https://review.gluster.org/17744
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> Reviewed-by: Prashanth Pai <ppai@redhat.com>
> Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
> Reviewed-by: Aravinda VK <avishwan@redhat.com>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
(cherry picked from commit 96eece8abbb9c06f0b91f37e718ac9e337a3f714)
Updates: #139
Change-Id: Id2cb29797ee1bd77e0e0d2203a47469fd7203355
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: https://review.gluster.org/17869
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Prashanth Pai <ppai@redhat.com>
Reviewed-by: Aravinda VK <avishwan@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
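
For illustration only, a hedged sketch of the kind of guard this patch implies: the setxattr/removexattr path can refuse any key carrying the internal prefix. The helper name and the exact error code are assumptions, not taken from the patch.

#include <errno.h>
#include <string.h>

#define GFID2PATH_XATTR_PREFIX "trusted.gfid2path."

static int
reject_internal_gfid2path_key (const char *key)
{
        if (key && strncmp (key, GFID2PATH_XATTR_PREFIX,
                            strlen (GFID2PATH_XATTR_PREFIX)) == 0) {
                errno = EPERM;   /* internal key, clients may not touch it */
                return -1;
        }
        return 0;                /* not an internal key, allow */
}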
When using a minimal gfapi application that only initializes a small
graph (sink, shard and meta xlators) the following memory leaks are
reported by Valgrind:
HEAP SUMMARY:
in use at exit: 322,976 bytes in 75 blocks
total heap usage: 684 allocs, 609 frees, 2,092,116 bytes allocated
With this change, the mem-pools are cleaned up when mem_pools_fini()
is called and the objects in the pools are free'd.
HEAP SUMMARY:
in use at exit: 315,265 bytes in 58 blocks
total heap usage: 684 allocs, 626 frees, 2,092,079 bytes allocated
This information was gathered with `./run-xlator.sh features/shard` that
comes with `gfapi-load-volfile` from gluster-debug-tools.
While working on the free'ing of the per_thread_pool_list_t structures,
it became apparent that GF_CALLOC() in mem_get_pool_list() gets
redirected to a standard calloc() without prepending the Gluster
specific memory header. This is because mem_pools_init() gets called
before THIS->ctx is valid, so it is not possible to check if memory
accounting is enabled or not. Because of this, the GF_CALLOC() call in
mem_get_pool_list() has been replaced by CALLOC() to prevent potential
mismatches between the allocation/free'ing of per_thread_pool_list_t
structures.
Change-Id: Id6f558816f399b0c613d74df36deac2300b6dd98
BUG: 1470170
URL: https://github.com/gluster/gluster-debug-tools
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: https://review.gluster.org/17768
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
Reviewed-by: soumya k <skoduri@redhat.com>
It is not possible to call pthread_key_delete for the pool_key that is
initialized in the constructor for the memory pools. This makes it
difficult to do a full cleanup of all the resources in mem_pools_fini().
For this, the initialization of pool_key should be moved to
mem_pools_init().
However, the glusterfsd binary has a rather complex initialization
procedure. The memory pools need to get initialized partially to get
mem_get() functionality working. But the pool_sweeper thread can get
killed in case it is started before glusterfsd daemonizes.
In order to solve this, mem_pools_init() is split into two pieces:
1. mem_pools_init_early() for initializing the basic structures
2. mem_pools_init_late() to start the pool_sweeper thread
With the split of mem_pools_init(), and placing the pthread_key_create()
in mem_pools_init_early(), it is now possible to correctly cleanup the
pool_key with pthread_key_delete() in mem_pools_fini().
It seems that there was no memory pool initialization in the CLI. This
has been added as well now. Without it, the CLI will not be able to call
mem_get() successfully which results in a hang of the process.
Change-Id: I1de0153dfe600fd79eac7468cc070e4bd35e71dd
BUG: 1470170
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: https://review.gluster.org/17779
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
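
A minimal sketch of the split described above, with simplified structures; the real implementation in libglusterfs/src/mem-pool.c does considerably more (pool tables, locking, sweeper bookkeeping), so the teardown shown here is only indicative.

#include <pthread.h>

static pthread_key_t pool_key;
static pthread_t     sweeper_tid;

extern void *pool_sweeper (void *arg);   /* assumed sweeper entry point */

void
mem_pools_init_early (void)
{
        /* basic structures only, safe to call before daemonizing */
        pthread_key_create (&pool_key, NULL);
}

void
mem_pools_init_late (void)
{
        /* the sweeper thread is started only after glusterfsd has
         * daemonized, so it does not get killed by the fork */
        pthread_create (&sweeper_tid, NULL, pool_sweeper, NULL);
}

void
mem_pools_fini (void)
{
        /* stop the sweeper first (real code may sequence this differently) */
        pthread_cancel (sweeper_tid);
        pthread_join (sweeper_tid, NULL);
        /* possible now that the key is created in init_early() */
        pthread_key_delete (pool_key);
}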
Set names for threads on creation for easier
debugging.
Output of top -H -p <PID-OF-GLUSTERFSD>
Before:
19773 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19774 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19775 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19776 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19777 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19778 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19779 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19780 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19781 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19782 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19783 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19784 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19785 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.01 glusterfsd
19786 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.01 glusterfsd
19787 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.01 glusterfsd
19789 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19790 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
25178 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
5398 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
7881 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
After:
19773 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19774 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glustertimer
19775 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19776 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glustermemsweep
19777 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glustersproc0
19778 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glustersproc1
19779 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterepoll0
19780 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusteridxwrker
19781 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusteriotwr0
19782 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterbrssign
19783 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterbrswrker
19784 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterclogecon
19785 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.01 glusterclogd0
19786 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.01 glusterclogd1
19787 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.01 glusterclogd2
19789 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterposixjan
19790 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterposixfsy
25178 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterepoll1
5398 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterepoll2
7881 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterposixhc
Change-Id: Id5f333755c1ba168a2ffaa4fce6e71c375e10703
BUG: 1254002
Updates: #271
Signed-off-by: Raghavendra Talur <rtalur@redhat.com>
Reviewed-on: https://review.gluster.org/11926
Reviewed-by: Prashanth Pai <ppai@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
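
A hedged sketch of how a thread gets such a name at creation time; the helper below is illustrative, not the patch itself, which presumably routes this through the common thread-creation paths. On Linux, pthread_setname_np() limits names to 15 characters plus the terminating NUL.

#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>

static int
create_named_thread (pthread_t *tid, void *(*fn)(void *), void *arg,
                     const char *suffix)
{
        char name[16];
        int  ret = pthread_create (tid, NULL, fn, arg);

        if (ret != 0)
                return ret;

        /* e.g. suffix "timer" -> "glustertimer", as seen in top -H above */
        snprintf (name, sizeof (name), "gluster%s", suffix);
        pthread_setname_np (*tid, name);
        return 0;
}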
Implementation of these two functions becomes easier by using the
gf_fop_list[] array. So that was implemented and the usage of these
functions removed.
BUG: 1472250
Change-Id: I8a592913f9eeb02d965708bcf28a637588ed4988
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: https://review.gluster.org/17812
Reviewed-by: Niels de Vos <ndevos@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Problem:
Bulk xattr removal doesn't check if the xattrs that are coming in xdata
have gfid/volume-id xattrs, so there is potential for bulkremovexattr
removing gfid/volume-id.
I also observed that bulkremovexattr is not available for fremovexattr.
Fix:
Do proper checks in bulk removexattr so that gfid/volume-id are not removed.
Refactor [f]removexattr to reduce the differences.
BUG: 1470489
Change-Id: Ia845b31846a149500111c0996646e648f72cdce6
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: https://review.gluster.org/17765
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Anuradha Talur <atalur@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
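
A hedged sketch of the kind of check the fix describes: before honouring a bulk removexattr, verify that none of the requested keys is one of the reserved gfid/volume-id xattrs. The helper name is illustrative; the real check operates on the xdata dict in the posix xlator.

#include <string.h>

static int
removexattr_keys_are_safe (const char **keys, int nkeys)
{
        for (int i = 0; i < nkeys; i++) {
                if (strcmp (keys[i], "trusted.gfid") == 0 ||
                    strcmp (keys[i], "trusted.glusterfs.volume-id") == 0)
                        return 0;   /* reserved key present, reject the fop */
        }
        return 1;                   /* safe to proceed */
}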
Only the old (removed) implementation of mem-pools provided access to
the ctx->mempool_list to facilitate state-dumps. The usage of the
mempool_list has been disabled with "#if defined(OLD_MEM_POOLS)", but a
few occurrences were missed.
Change-Id: I912fb63830efc06247eb0c6551d198271ee57a86
BUG: 1470170
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: https://review.gluster.org/17778
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Fix fuse ctx memory leak in case an error occurs and the cleanup path
is different than usual. Also fix a memory leak in logging if
eh_save_history() fails.
Change-Id: I7ec967c807b0ed91184e5b958be70702215c46c9
BUG: 1470220
Signed-off-by: Danny Couture <couture.danny@gmail.com>
Reviewed-on: https://review.gluster.org/17759
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: N Balachandran <nbalacha@redhat.com>
Reviewed-by: Prashanth Pai <ppai@redhat.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Amar Tumballi <amarts@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Clean up things that I tripped over while doing other changes.
1) Fix the mishmash of random spacing in struct decls in glusterfs.h.
Not technically a problem, just ugly to look at.
2) Replace open-coded string constants with existing #define
constants. A disaster waiting to happen.
3) Use sys_access() instead of sys_stat() or sys_lstat() to test
simple existence of a file. Why copy dozens of bytes from kernel to
user space that aren't going to be used by anything? There are
probably more instances like these.
Change-Id: I28089bef4cc93d5e4e4213045fb1a2649d110f82
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
Reviewed-on: https://review.gluster.org/17769
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Prashanth Pai <ppai@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
Problem:
For allowing parallel writes we shouldn't depend on ia_size being the same
for all the bricks in each write_cbk(). But we need to make sure the backend
size is correct on all the bricks and no crashes/manual modifications happened.
Fix:
At the time of get_size_version() we do one check to make sure the size of the
file is the same across the bricks. From then on the FOPs will give the status
of the fop, so we rely on this information to track which bricks are good/bad.
Updates #251
Change-Id: I1df645347e2e9f2e09cfa4411b6cc305d7f4e4e5
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: https://review.gluster.org/17741
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
With this infra, a new xattr is stored on each entry
creation as below.
trusted.gfid2path.<xxhash> = <pargfid>/<basename>
If there are hardlinks, multiple xattrs would be present.
Fops which are impacted:
create, mknod, link, symlink, rename, unlink
Option to enable:
gluster vol set <VOLNAME> storage.gfid2path on
Updates: #139
Change-Id: I369974cd16703c45ee87f82e6c2ff5a987a6cc6a
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: https://review.gluster.org/17488
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Aravinda VK <avishwan@redhat.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
We are storing the entire volfile and using this to check for
volfile changes. With brick multiplexing there will be a lot
of graphs per process, which will increase the memory footprint
of the process. So instead of storing the entire
graph we can store a sha256 hash and compare hashes to
see whether a volfile change happened or not.
Also, with brick multiplexing the direct comparison of the vol
file is not correct. There are two problems.
Problem 1:
We are currently storing one single graph (the last
updated volfile), whereas what we need is the entire
graph with all attached bricks.
If we fix this issue, we have a second problem.
Problem 2:
With multiplexing we have a graph that contains multiple
bricks. But what we are checking as part of the reconfigure
is comparing that entire graph with one single graph,
which will always fail.
Solution:
We create a list in glusterfs_ctx_t that stores the sha256 hash
of each individual brick graph. When a graph change happens
we compare the stored hash and the current hash. If the
hashes match, there is no need for a reconfigure. Otherwise we
first do the reconfigure and then update the hash.
For now, gfapi has not been changed this way. Meaning when a gfapi
volfile fetch or reconfigure happens, we still store the
entire graph and compare it in memory.
This is fine, because libgfapi will not load brick graphs.
But changing libgfapi as well would make the code similar in
both glusterfsd-mgmt and api, and would also help reduce some
memory.
Change-Id: I9df917a771a52b95622ab8f63af34ec390163a77
BUG: 1467986
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: https://review.gluster.org/17709
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
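
A hedged sketch of the checksum comparison, using OpenSSL's SHA256() for brevity; the structure name is illustrative, while the real code keeps one checksum per attached brick graph in a list on glusterfs_ctx_t.

#include <openssl/sha.h>
#include <string.h>

struct brick_graph_entry {
        unsigned char volfile_checksum[SHA256_DIGEST_LENGTH];
};

/* returns 1 when a reconfigure is needed, 0 when the volfile is unchanged */
static int
volfile_changed (struct brick_graph_entry *entry,
                 const char *volfile, size_t len)
{
        unsigned char sum[SHA256_DIGEST_LENGTH];

        SHA256 ((const unsigned char *)volfile, len, sum);

        if (memcmp (sum, entry->volfile_checksum, sizeof (sum)) == 0)
                return 0;

        /* remember the new graph only after deciding to reconfigure */
        memcpy (entry->volfile_checksum, sum, sizeof (sum));
        return 1;
}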
The change consists of two parts: make sure it doesn't happen (in
glfs.c), and make it harmless if it does (in mem-pool.c).
Change-Id: Icb7dda7a45dd3d1ade2ee3991bb6a22c8ec88424
BUG: 1468863
Signed-off-by: Jeff Darcy <jdarcy@fb.com>
Reviewed-on: https://review.gluster.org/17728
Tested-by: Jeff Darcy <jeff@pl.atyp.us>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
When glusterfs wants to retrieve the list of auxiliary gids
of a user, it typically allocates a sufficiently big gid_t
array on stack and calls getgrouplist(3) with it. However,
"sufficiently big" means to be of maximum supported gid list
size, which in GlusterFS is GF_MAX_AUX_GROUPS = 64k.
That means a 64k * sizeof(gid_t) = 256k allocation, which is
big enough to overflow the stack in certain cases.
A further observation is that stack allocation of the gid list
brings no gain, as in all cases the content of the gid list
eventually gets copied over to a heap allocated buffer.
So we add a convenience wrapper around getgrouplist to libglusterfs,
called gf_getgrouplist, which calls getgrouplist with a sufficiently
big heap allocated buffer (it takes care of the allocation too).
We are porting all the getgrouplist invocations to gf_getgrouplist
and thus eliminating the huge stack allocation.
BUG: 1464327
Change-Id: Icea76d0d74dcf2f87d26cb299acc771ca3b32d2b
Signed-off-by: Csaba Henk <csaba@redhat.com>
Reviewed-on: https://review.gluster.org/17706
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
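
A hedged sketch of a heap-based wrapper in the spirit of gf_getgrouplist; the real function's name, signature and allocation strategy may differ (the text above suggests it simply uses one sufficiently large heap buffer). The caller frees *groups.

#include <grp.h>
#include <stdlib.h>
#include <sys/types.h>

static int
heap_getgrouplist (const char *user, gid_t gid, gid_t **groups, int *ngroups)
{
        gid_t  probe[1];
        int    count = 1;
        gid_t *buf   = NULL;

        /* probe with a 1-entry buffer; on overflow getgrouplist() sets
         * 'count' to the number of groups actually needed */
        if (getgrouplist (user, gid, probe, &count) != -1) {
                buf = calloc (1, sizeof (gid_t));
                if (!buf)
                        return -1;
                buf[0]   = probe[0];
                *groups  = buf;
                *ngroups = count;
                return count;
        }

        buf = calloc (count, sizeof (gid_t));
        if (!buf)
                return -1;

        if (getgrouplist (user, gid, buf, &count) == -1) {
                free (buf);        /* group list changed under us */
                return -1;
        }

        *groups  = buf;
        *ngroups = count;
        return count;
}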
Plus minor readability improvements.
Reported-by: pmatthaei@debian.org
Change-Id: I5393819a2fc9f240a19811143bb57b127df717cf
BUG: 1466785
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
Reviewed-on: https://review.gluster.org/17660
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
This also makes mem_pools_init and mem_pools_fini re-callable, so GFAPI
consumers can go through infinite init/fini cycles if they want to. Not
saying that's a good idea, but at least it's safe.
Change-Id: I617913410bcff54568b802cb653f48bdd533bd65
Signed-off-by: Jeff Darcy <jdarcy@fb.com>
Reviewed-on: https://review.gluster.org/17662
Smoke: Gluster Build System <jenkins@build.gluster.org>
Tested-by: Jeff Darcy <jeff@pl.atyp.us>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
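
A hedged sketch of one way to make a paired init/fini safe to call repeatedly: a reference count so only the first init and the final fini do real work. The real mem-pool code may sequence this differently.

#include <pthread.h>

static pthread_mutex_t init_lock = PTHREAD_MUTEX_INITIALIZER;
static int             init_count;

extern void do_real_init (void);   /* placeholders for the actual setup */
extern void do_real_fini (void);

void
pools_init (void)
{
        pthread_mutex_lock (&init_lock);
        if (init_count++ == 0)
                do_real_init ();    /* only the first caller initializes */
        pthread_mutex_unlock (&init_lock);
}

void
pools_fini (void)
{
        pthread_mutex_lock (&init_lock);
        if (init_count > 0 && --init_count == 0)
                do_real_fini ();    /* only the last caller tears down */
        pthread_mutex_unlock (&init_lock);
}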
The most common pattern, both in our code and elsewhere, is this:
struct _xyz {
...
};
typedef struct _xyz xyz_t;
These exceptions - especially call_frame/call_stack - have been slowing
down code navigation for years. By converging on a single pattern,
navigating from xyz_t in code to the actual definition of struct _xyz
(i.e. without having to visit the typedef first) might even be
automatable.
Change-Id: I0e5dd1f51f98e000173c62ef4ddc5b21d9ec44ed
Signed-off-by: Jeff Darcy <jdarcy@fb.com>
Reviewed-on: https://review.gluster.org/17650
Smoke: Gluster Build System <jenkins@build.gluster.org>
Tested-by: Jeff Darcy <jeff@pl.atyp.us>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
xxhash is a faster non-cryptographic hash.
https://github.com/Cyan4973/xxHash
Release Taken: "xxHash v0.6.2"
--------------
Files added:
contrib/xxhash/xxhash.c
contrib/xxhash/xxhash.h
contrib/xxhash/xxhsum.c
Modifications to source:
------------------------
Following functions and data types got 'GF_' prefix
as below to avoid any form of name collisions in future.
---- Functions ----
GF_XXH_versionNumber
GF_XXH32
GF_XXH32_createState
GF_XXH32_freeState
GF_XXH32_copyState
GF_XXH32_reset
GF_XXH32_update
GF_XXH32_digest
GF_XXH32_canonicalFromHash
GF_XXH32_hashFromCanonical
GF_XXH64
GF_XXH64_createState
GF_XXH64_freeState
GF_XXH64_copyState
GF_XXH64_reset
GF_XXH64_update
GF_XXH64_digest
GF_XXH64_canonicalFromHash
GF_XXH64_hashFromCanonical
---- Data Types ----
GF_XXH_errorcode
GF_XXH32_state_t*
GF_XXH32_canonical_t*
GF_XXH32_hash_t
GF_XXH64_state_t*
GF_XXH64_canonical_t*
GF_XXH64_hash_t
It is linked into libglusterfs.so. A wrapper
function is also added in common-utils.c for
easy usage.
xxhash can be used for all the use cases where
a faster non-cryptographic hash is required.
The gfid-to-path infra will be using this for now.
NOTE:
----
The gluster coding guidelines check is ignored
as maintaining it further would be difficult.
Updates: #253
Change-Id: Ib143f90d91d4ee99864a10246d5983e92900173b
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: https://review.gluster.org/17641
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
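
A hedged sketch of the kind of convenience wrapper mentioned above; the actual wrapper added to common-utils.c may have a different name and signature. GF_XXH64 is the one-shot 64-bit hash from contrib/xxhash after the GF_ prefixing listed in this commit.

#include <stdint.h>
#include <stddef.h>
#include "xxhash.h"   /* contrib/xxhash/xxhash.h */

static uint64_t
hash_buffer_64 (const void *data, size_t len, uint64_t seed)
{
        /* non-cryptographic, so only suitable for bucketing/keys,
         * not for integrity or security checks */
        return GF_XXH64 (data, len, seed);
}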
Wrapper for `gf_log` and `gf_msg` to add support for
structured logging format.
Two new wrappers are available: `gf_slog` and `gf_smsg`
Example 1: All static details
gf_slog ("cli", GF_LOG_INFO, "Volume Set",
"name=gv1",
"option=changelog.changelog",
"value=on",
NULL);
gf_smsg ("cli", GF_LOG_INFO, 0, MSGID_VOLUME_SET,
"Volume Set",
"name=gv1",
"option=changelog.changelog",
"value=on",
NULL);
Example 2: Using Format chars in key values
gf_slog ("cli", GF_LOG_INFO, "Volume Set",
"name=%s", volume_name,
"option=%s", option_name,
"value=%s", option_value,
NULL);
gf_smsg ("cli", GF_LOG_INFO, 0, MSGID_VOLUME_SET,
"Volume Set",
"name=%s", volume_name,
"option=%s", option_name,
"value=%s", option_value,
NULL);
The output is formatted as:
<EVENT><TAB><KEY1=VALUE1><TAB><KEY2=VALUE2>...
Example:
Volume Set name=gv1 option=changelog.changelog value=on
Updates: #240
Change-Id: I871727be16a39f681d41f363daa0029b8066fb52
Signed-off-by: Aravinda VK <avishwan@redhat.com>
Reviewed-on: https://review.gluster.org/17543
Reviewed-by: MOHIT AGRAWAL <moagrawa@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Amar Tumballi <amarts@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Change-Id: I55f707ae1e7c3ad7fc0545f7aa657584cead58f9
BUG: 1465214
Signed-off-by: Jeff Darcy <jdarcy@fb.com>
Reviewed-on: https://review.gluster.org/17636
Smoke: Gluster Build System <jenkins@build.gluster.org>
Tested-by: Jeff Darcy <jeff@pl.atyp.us>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Ji-Hyeon Gim
Reviewed-by: Amar Tumballi <amarts@redhat.com>
When we build GlusterFS with GF_DISABLE_MEMPOOL, the build fails due to a
macro condition in mem-pool.c:mem_get().
Change-Id: I03fe804f93d761ea3bfdc3b20f0253a03350a68f
BUG: 1465214
Signed-off-by: Ji-Hyeon Gim <potatogim@potatogim.net>
Reviewed-on: https://review.gluster.org/17633
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>
Tested-by: Ji-Hyeon Gim
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Afr has introduced a new key GF_XATTR_LIST_NODE_UUIDS_KEY,
through which rebalance will figure out its local subvolumes
(reference bugid=1463250).
The key GF_XATTR_NODE_UUID_KEY will continue to serve its old
purpose of returning the first afr child.
test: prove tests/basic/distribute/rebal-all-nodes-migrate.t
Change-Id: I4d602feda2a05b29d2210c712a07a4ac6b8bc112
BUG: 1463648
Signed-off-by: Susant Palai <spalai@redhat.com>
Signed-off-by: N Balachandran <nbalacha@redhat.com>
Reviewed-on: https://review.gluster.org/17595
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
Problem:
The change in EC to return list of node uuids for
GF_XATTR_NODE_UUID_KEY was causing problems with
geo-rep.
Fix:
This patch will allow getting the single node uuid
as before with the key
"GF_XATTR_NODE_UUID_KEY", and will also allow getting
the list of node uuids by using a new key
"GF_XATTR_LIST_NODE_UUIDS_KEY". This will solve
the problem with geo-rep and any other features which
were depending on this.
BUG: 1462790
Change-Id: I2d9214a9658d4a41a3d6de08600884d2bda5f3eb
Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
Reviewed-on: https://review.gluster.org/17594
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
The patch https://review.gluster.org/#/c/17177 resolves "." and ".."
to the corresponding inodes and names before sending the request to the
backend server. But this will only work if the inode and its parent are
linked properly. In case of a nameless lookup (applications like ganesha)
the parent inode can be NULL (only the gfid is sent). So this patch will
resolve "." and ".." only if a proper parent is available.
Change-Id: I4c50258b0d896dabf000a547ab180b57df308a0b
BUG: 1460514
Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
Reviewed-on: https://review.gluster.org/17502
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Poornima G <pgurusid@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: soumya k <skoduri@redhat.com>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
Problem:
The change in afr to return list of node uuids was causing problems
with geo-rep.
Fix:
This patch will allow getting the single node uuid as it did
before with the key "GF_XATTR_NODE_UUID_KEY", and will also allow
getting the list of node uuids by using a new key
"GF_XATTR_LIST_NODE_UUIDS_KEY". This will solve the problem with
geo-rep and any other feature which was depending on this.
Change-Id: I09885dac6dfca127be94b708470c8c2941356f9a
BUG: 1462790
Signed-off-by: karthik-us <ksubrahm@redhat.com>
Reviewed-on: https://review.gluster.org/17576
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-by: Kotresh HR <khiremat@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
... as opposed to hardcoding it to "json" always.
Change-Id: I5e79473a514373145ad764f24bb6219a6983a4c6
BUG: 1458197
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: https://review.gluster.org/17451
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
Problem:
When we call listen from protocol/server, we use a
hard coded value of 10 if one is not given manually.
With multiplexing, especially when glusterd restarts, all
clients may try to connect to the server at the same time,
which will result in overflowing the queue, and the kernel
will complain about the errors.
Solution:
This patch introduces a volume set command to make the backlog
value configurable. It also changes the default
value for backlog from 10 to 128. This change is only applicable
to sockets listening from protocol.
Example:
gluster volume set <volname> transport.listen-backlog 1024
Note: 1. A brick has to be restarted for this value to take effect.
2. This change won't be reflected in glusterd, or other
xlators which call listen. If you need it there, you have to
add this option to the volfile.
Change-Id: I0c5a2bbf28b5db612f9979e7560e05dd82b41477
BUG: 1456405
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: https://review.gluster.org/17411
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
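
A hedged sketch of where the configured value ends up; the option plumbing through the volfile is omitted and the names are illustrative, not the transport/socket code itself.

#include <sys/socket.h>

#define DEFAULT_LISTEN_BACKLOG 128   /* new default mentioned above */

static int
start_listener (int sock, int configured_backlog)
{
        int backlog = configured_backlog > 0 ? configured_backlog
                                             : DEFAULT_LISTEN_BACKLOG;

        /* the kernel caps this at net.core.somaxconn, so very large
         * values may be silently truncated */
        return listen (sock, backlog);
}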
When you take a 'statedump', it shows output like below:
-----
[dict]
max-number-of-dict-pairs=13
total-pairs-used=41613
total-dict-used=12629
average-pairs-per-dict=3
------
Updates #220
Change-Id: I71a7eda3a3cd23edf4483234f22f983923bbb081
Signed-off-by: Amar Tumballi <amarts@redhat.com>
Reviewed-on: https://review.gluster.org/4035
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
Updates #220
Change-Id: I03b1d2fac2dfcdd21bdf4e4fff19d49425699931
Signed-off-by: Amar Tumballi <amarts@redhat.com>
Reviewed-on: https://review.gluster.org/6450
Smoke: Gluster Build System <jenkins@build.gluster.org>
Tested-by: Jeff Darcy <jeff@pl.atyp.us>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
stopped any volume
Problem: After enabling brick mux, if any volume is down and we then try to
run mount with a running volume, the mount command hangs.
Solution: After enabling brick mux, the server shares one data structure,
server_conf, for all associated subvolumes. After any subvolume goes down in
some ungraceful manner (removing the brick directory), the posix xlator sends
a GF_EVENT_CHILD_DOWN event to the parent xlators and the server notify
updates child_up to false in server_conf. When a client tries to communicate
with the server through the mount, it checks conf->child_up, which is FALSE,
so it throws the message "translator are not yet ready".
This patch updates the structure server_conf to save the child_up status
per xlator. Another important correction in this patch is the cleanup of
threads from server-side xlators after the volume is stopped.
BUG: 1453977
Change-Id: Ic54da3f01881b7c9429ce92cc569236eb1d43e0d
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Reviewed-on: https://review.gluster.org/17356
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
The comment (libglusterfs/src/iobuf.h:88-90) for the 'arena_size' field
in 'struct iobuf_arena' is not valid anymore. As of line 190 in
__iobuf_arena_alloc(), the code no longer follows that equation.
Change-Id: I68558164b309123cf19093e2da89bc156df294fd
BUG: 1455831
Signed-off-by: Ji-Hyeon Gim <potatogim@gluesys.com>
Reviewed-on: https://review.gluster.org/17393
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Shyamsundar Ranganathan <srangana@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
glusterd crashes when the port is being set explicitly to a
range which is greater than the short data type range,
e.g. sysctl net.ipv4.ip_local_reserved_ports="49152-49156".
In the above case glusterd crashes while parsing the port.
With this fix glusterd will be able to handle port ranges
between INT_MIN and INT_MAX.
Change-Id: I7c75ee67937b0e3384502973d96b1c36c89e0fe1
BUG: 1454418
Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
Reviewed-on: https://review.gluster.org/17359
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Samikshan Bairagya <samikshan@gmail.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
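
A hedged sketch of parsing such a reserved-ports range into ints with explicit bounds checks rather than a signed short; this is illustrative, not the actual glusterd parser.

#include <errno.h>
#include <limits.h>
#include <stdlib.h>

static int
parse_port_range (const char *spec, int *lo, int *hi)
{
        char *end = NULL;
        long  a, b;

        errno = 0;
        a = strtol (spec, &end, 10);
        if (errno || *end != '-')
                return -1;

        b = strtol (end + 1, &end, 10);
        if (errno || *end != '\0')
                return -1;

        /* ports themselves still have to fit in 0..65535 */
        if (a < 0 || b > 65535 || a > b)
                return -1;

        *lo = (int)a;
        *hi = (int)b;
        return 0;
}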
- code in run.c to close all file descriptors,
except for specified ones is extracted to
int close_fds_except (int *fdv, size_t count);
- tokenizing and editing a string that consists
of comma-separated tokens (as done e.g. in
mount_param_to_flag() of contrib/fuse/mount.c)
is abstracted into the following API:
char *token_iter_init (char *str, char sep, token_iter_t *tit);
gf_boolean_t next_token (char **tokenp, token_iter_t *tit);
void drop_token (char *token, token_iter_t *tit);
Updates #153
Change-Id: I7cb5bda38f680f08882e2a7ef84f9142ffaa54eb
Signed-off-by: Csaba Henk <csaba@redhat.com>
Reviewed-on: https://review.gluster.org/17229
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
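
A hedged usage sketch for the extracted close_fds_except() helper, based only on the signature quoted above; the scenario and surrounding code are illustrative.

#include <stddef.h>
#include <unistd.h>

extern int close_fds_except (int *fdv, size_t count);

static void
prepare_child (int pipe_fd)
{
        int keep[] = { STDIN_FILENO, STDOUT_FILENO, STDERR_FILENO, pipe_fd };

        /* everything not listed in keep[] gets closed before exec */
        close_fds_except (keep, sizeof (keep) / sizeof (keep[0]));
}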
The FALLOCATE file operation is not implemented in the
existing EC code. This change set implements it
for EC.
BUG: 1448293
Change-Id: Id9ed914db984c327c16878a5b2304a0ea461b623
Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
Reviewed-on: https://review.gluster.org/15200
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Issue:
In nameless lookups/other fops, the parent inode will be NULL; when we try
to add the cache to the NULL inode, it causes a crash.
Hence, handle the scenario of nameless fops and do not cache/serve
the nameless fops.
Change-Id: I3b90f882ac89e6aaf3419db89e6f890797f37700
BUG: 1451588
Signed-off-by: Poornima G <pgurusid@redhat.com>
Reviewed-on: https://review.gluster.org/17316
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
With brick multiplexing enabled, if a brick instance is attached to a
process then a PARENT_UP event is needed so that it reaches all the way
to the posix layer, and then from posix a CHILD_UP event is sent back to
all the children.
Change-Id: Ic341086adb3bbbde0342af518e1b273dd2f669b9
BUG: 1447389
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: https://review.gluster.org/17225
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
This patch handles "." and ".." in file paths, which means these
special dentry names will be resolved before sending fops
on the path.
Change-Id: I5e92f6d1ad1412bf432eb2488e53fb7731edb013
BUG: 1447266
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: https://review.gluster.org/17177
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
reading the entire rpc message from the wire
Currently the socket is added back for future events after higher layers
(rpc, xlators, etc.) have processed the message. If message processing
involves significant delay (as in writev replies processed by Erasure
Coding), performance takes a hit. Hence this patch modifies
transport/socket to add back the socket for polling of events
immediately after reading the entire rpc message, but before
notification to higher layers.
Credits: Thanks to "Kotresh Hiremath Ravishankar"
<khiremat@redhat.com> for assistance in fixing a regression in
bitrot caused by this patch.
Change-Id: I04b6b9d0b51a1cfb86ecac3c3d87a5f388cf5800
BUG: 1448364
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-on: https://review.gluster.org/15036
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
The following sequence leaks a ctx:
1. thread1: client_ctx_get returns NULL
2. thread2: client_ctx_set(ctx1) succeeds
3. thread1: client_ctx_set(ctx2) succeeds
Now thread1 uses ctx2, thread2 uses ctx1, and ctx1 will leak.
Change-Id: I990b02905edd1b3179323ada56888f852d20f538
BUG: 1449232
Signed-off-by: Zhou Zhengping <johnzzpcrystal@gmail.com>
Reviewed-on: https://review.gluster.org/17219
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
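
A hedged, standalone sketch of the set-if-absent idea that removes this race: under a lock, the second setter discovers the value stored by the first and discards its own copy instead of overwriting it. This is an illustration, not the actual client_t ctx API.

#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t ctx_lock = PTHREAD_MUTEX_INITIALIZER;
static void           *stored_ctx;

/* returns the ctx that is actually in use afterwards */
static void *
ctx_set_if_absent (void *mine)
{
        void *winner = NULL;

        pthread_mutex_lock (&ctx_lock);
        if (stored_ctx == NULL)
                stored_ctx = mine;
        winner = stored_ctx;
        pthread_mutex_unlock (&ctx_lock);

        if (winner != mine)
                free (mine);     /* lost the race, discard our copy */

        return winner;
}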
Change-Id: I3d30bacc3d5d085220dd85a3919207deef8bd1dd
Signed-off-by: Kinglong Mee <mijinlong@open-fs.com>
Reviewed-on: https://review.gluster.org/17114
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Prashanth Pai <ppai@redhat.com>
Tested-by: Prashanth Pai <ppai@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Change-Id: Icb71ded6051afe44e07480e0499d2a39f05fac71
BUG: 1447826
Signed-off-by: Zhou Zhengping <johnzzpcrystal@gmail.com>
Reviewed-on: https://review.gluster.org/17171
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Problem: While brick-multiplexing is on, after restarting glusterd the CLI
does not show the pid of all brick processes in all volumes.
Solution: While brick-mux is on, all local brick processes communicate
through one UNIX socket, but as per the current code (glusterd_brick_start)
glusterd tries to communicate with a separate UNIX socket for each volume,
populated based on brick-name and vol-name. Because of the multiplexing
design only one UNIX socket is opened, so it throws a poller error and is
not able to fetch the correct status of the brick process through the cli
process.
To resolve the problem, a new function glusterd_set_socket_filepath_for_mux
is written, which is called by glusterd_brick_start to validate the
existence of the socket path.
To avoid the continuous EPOLLERR errors in the logs, the socket_connect
code is updated.
Test: To reproduce the issue, follow the steps below:
1) Create two distributed volumes (dist1 and dist2)
2) Set cluster.brick-multiplex to on
3) Kill glusterd
4) Run the command gluster v status
After applying the patch it shows the correct pid for all volumes.
BUG: 1444596
Change-Id: I5d10af69dea0d0ca19511f43870f34295a54a4d2
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Reviewed-on: https://review.gluster.org/17101
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Prashanth Pai <ppai@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
The original situation was as follows:
The function that validates xlator options indicating a size,
xlator_option_validate_sizet(), handles the case when the name
of the option is "cache-size" in a special way.
- Xlator options (things of type volume_option_t) has a
min and max attribute of type double.
- An xlator option is endowed with a gluster specific type (not
C type). An instance of an xlator option goes through a validation
process by a type specific validator function (which are collected
in option.c).
- Validators of numeric types - size being one of them - make use the
min and max attributes to perform a range check, except in one case:
if an option is defined with min = max = 0, then this option will be
exempt of range checking. (Note: the volume_option_t definition
features the following comments along the min, max fields:
double min; /* 0 means no range */
double max; /* 0 means no range */
which is slightly misleading as it lets one to conclude that
zeroing min or max buys exemption from low or high boundary check,
which is not true -- only *both* being zero buys exemption.)
- Besides this, the validator for options of size type,
xlator_option_validate_sizet() special cases options
named "cache-size" so that only min is enforced. (The only consequence
of a value exceeding max is that glusterd logs a warning about it, but
the cli user who makes such a setting gets no feedback on it.)
- This was introduced because a hard coded limit is not useful for
io-cache and quick-read. They rather use a runtime calculated
upper limit. (See changes
I7dd4d8c53051b89a293696abf1ee8dc237e39a20
I9c744b5ace10604d5a814e6218ca0d83c796db80
about the last two points.)
- As an unintended consequence, the upper limit check of
cache-size of write-behind, for which a conventional hard coded limit
is specified, is defeated.
What we do about it:
- Remove the special casing clause for cache-size in
xlator_option_validate_sizet. Thus the general range
check policy (as described above) will apply to
cache-size too.
- To implement a lower bound only check by the validator
for cache-size of io-cache and quick-read, change the
max attribute of these options to INFINITY.
The only behavioral difference is the omission of the warnings
about cache-size of io-cache and quick-read exceeding the former max
values. (They were rather heuristic anyway.)
BUG: 1445609
Change-Id: I0bd8bd391fa7d926f76e214a2178833fe4673b4a
Signed-off-by: Csaba Henk <csaba@redhat.com>
Reviewed-on: https://review.gluster.org/17125
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
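
A hedged sketch of the resulting range-check policy: min == max == 0 disables the check, otherwise both bounds are enforced, and a lower-bound-only option such as cache-size of io-cache/quick-read sets max to INFINITY. Names and the sample minimum are illustrative, not the real option.c code.

#include <math.h>

struct opt_def {
        double min;
        double max;
};

static int
validate_size_value (const struct opt_def *opt, double value)
{
        if (opt->min == 0 && opt->max == 0)
                return 0;                      /* no range defined */

        if (value < opt->min || value > opt->max)
                return -1;                     /* out of range */

        return 0;
}

/* lower-bound-only style definition: any value above the minimum passes,
 * because nothing is ever greater than INFINITY */
static const struct opt_def cache_size_opt = { .min = 4 * 1024 * 1024,
                                               .max = INFINITY };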
They snuck in with the HALO patch (07cc8679c)
Change-Id: I8ced6cbb0b49554fc9d348c453d4d5da00f981f6
BUG: 1447953
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: https://review.gluster.org/17174
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Change-Id: I136372b9929d3ecf243649b6945571a67bfd80eb
BUG: 1447828
Signed-off-by: Zhou Zhengping <johnzzpcrystal@gmail.com>
Reviewed-on: https://review.gluster.org/17172
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Amar Tumballi <amarts@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
The patch implements a part of the SELinux translator to support setting
SELinux contexts on files in a glusterfs volume.
URL: https://github.com/gluster/glusterfs-specs/blob/master/accepted/SELinux-client-support.md
Change-Id: Id8916bd8e064ccf74ba86225ead95f86dc5a1a25
BUG: 1318100
Fixes : #55
Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: https://review.gluster.org/13762
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Manikandan Selvaganesh <manikandancs333@gmail.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Summary:
Halo Geo-replication is a feature which allows Gluster or NFS clients to write
locally to their region (as defined by a latency "halo" or threshold if you
like), and have their writes asynchronously propagate from their origin to the
rest of the cluster. Clients can also write synchronously to the cluster
simply by specifying a halo-latency which is very large (e.g. 10seconds) which
will include all bricks.
In other words, it allows clients to decide at mount time if they desire
synchronous or asynchronous IO into a cluster and the cluster can support both
of these modes to any number of clients simultaneously.
There are a few new volume options due to this feature:
halo-shd-latency: The threshold below which self-heal daemons will
consider children (bricks) connected.
halo-nfsd-latency: The threshold below which NFS daemons will consider
children (bricks) connected.
halo-latency: The threshold below which all other clients will
consider children (bricks) connected.
halo-min-replicas: The minimum number of replicas which are to
be enforced regardless of latency specified in the above 3 options.
If the number of children falls below this threshold the next
best (chosen by latency) shall be swapped in.
New FUSE mount options:
halo-latency & halo-min-replicas: As described above.
This feature combined with multi-threaded SHD support (D1271745) results in
some pretty cool geo-replication possibilities.
Operational Notes:
- Global consistency is guaranteed for synchronous clients; this is provided by
the existing entry-locking mechanism.
- Asynchronous clients on the other hand are merely consistent within their region.
Writes & deletes will be protected via entry-locks as usual preventing
concurrent writes into files which are undergoing replication. Read operations
on the other hand should never block.
- Writes are allowed from _any_ region and propagated from the origin to all
other regions. The take away from this is care should be taken to ensure
multiple writers do not write the same files resulting in a gfid split-brain
which will require resolution via split-brain policies (majority, mtime &
size). Recommended method for preventing this is using the nfs-auth feature to
define which region for each share has RW permissions, tiers not in the origin
region should have RO perms.
TODO:
- Synchronous clients (including the SHD) should choose clients from their own
region as preferred sources for reads. Most of the plumbing is in place for
this via the child_latency array.
- Better GFID split brain handling & better dent type split brain handling
(i.e. create a trash can and move the offending files into it).
- Tagging in addition to latency as a means of defining which children you wish
to synchronously write to
Test Plan:
- The usual suspects, clang, gcc w/ address sanitizer & valgrind
- Prove tests
Reviewers: jackl, dph, cjh, meyering
Reviewed By: meyering
Subscribers: ethanr
Differential Revision: https://phabricator.fb.com/D1272053
Tasks: 4117827
Change-Id: I694a9ab429722da538da171ec528406e77b5e6d1
BUG: 1428061
Signed-off-by: Kevin Vigor <kvigor@fb.com>
Reviewed-on: http://review.gluster.org/16099
Reviewed-on: https://review.gluster.org/16177
Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
xlators can use a 'global' timer-wheel for scheduling events. This
timer-wheel is managed per glusterfs_ctx_t, but does not need to be
allocated for every graph. When an xlator wants to use the timer-wheel,
it will be instantiated on demand, and provided to xlators that request
it later on.
By adding a reference counter to the glusterfs_ctx_t for the
timer-wheel, the threads and structures can be cleaned up when the last
xlator does not have a need for it anymore. In general, the xlators
request the timer-wheel in init(), and they should return it in fini().
Because the timer-wheel is managed per glusterfs_ctx_t, the functions
can be added to ctx.c and do not need to live in their very minimal
tw.[ch] files.
Change-Id: I19d225b39aaa272d9005ba7adc3104c3764f1572
BUG: 1442788
Reported-by: Poornima G <pgurusid@redhat.com>
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: https://review.gluster.org/17068
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Zhou Zhengping <johnzzpcrystal@gmail.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
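
A hedged sketch of the reference-counted lifecycle described above; the structure and function names are simplified stand-ins for the glusterfs_ctx_t members and helpers.

#include <pthread.h>
#include <stddef.h>

struct ctx_tw {
        pthread_mutex_t lock;
        void           *timer_wheel;   /* opaque wheel handle */
        int             refcnt;
};

extern void *tw_create (void);         /* assumed wheel constructor */
extern void  tw_destroy (void *tw);    /* assumed wheel destructor */

void *
ctx_tw_get (struct ctx_tw *ctx)
{
        void *tw = NULL;

        pthread_mutex_lock (&ctx->lock);
        if (ctx->timer_wheel == NULL)
                ctx->timer_wheel = tw_create ();   /* first user: create */
        ctx->refcnt++;
        tw = ctx->timer_wheel;
        pthread_mutex_unlock (&ctx->lock);

        return tw;       /* typically taken in an xlator's init() */
}

void
ctx_tw_put (struct ctx_tw *ctx)
{
        void *tw = NULL;

        pthread_mutex_lock (&ctx->lock);
        if (--ctx->refcnt == 0) {
                tw = ctx->timer_wheel;             /* last user: tear down */
                ctx->timer_wheel = NULL;
        }
        pthread_mutex_unlock (&ctx->lock);

        if (tw)
                tw_destroy (tw);
}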
Inside rename, a lookup is done on the source name to make sure that
the file is there. But we used to do a gfid based lookup and hence,
even if the source name was renamed to a new name from some other client,
lookup will be successful as server3_3_lookup will fetch the new path
based on the gfid.
So even if the source file does not exist anymore, rename will carry on,
and as server3_3_link (in the scenario where the destination hashes to a
brick other than the source's cached one) also does a gfid based resolve,
it won't detect that the source name does not exist, and hardlink creation
will be successful (since the gfid based resolve will get the new dentry).
To solve this problem, do a name based lookup inside rename. So that
rename will fail right away if the source does not exist.
Change-Id: Ieba8bdd6675088dbf18de90ed4622df043d163bd
BUG: 1412135
Signed-off-by: Susant Palai <spalai@redhat.com>
Reviewed-on: https://review.gluster.org/16375
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: N Balachandran <nbalacha@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>