Commit messages
Since https://review.gluster.org/#/c/17452, the statistics are appended
to the same file instead of overwriting the previous stats. This was
causing the .t to fail, since it only checks for the presence of a
non-zero aggr.fop.write.count, assuming the latest statistics overwrite
the previous ones.
Fix it by checking that the latest value of aggr.fop.write.count is
non-zero.
Change-Id: I858011f343966a5d1c19d66dcc64b8cd26315df7
BUG: 1468432
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: https://review.gluster.org/17721
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
In the internal testing that was done, stat-prefetch did help
reduce the number of stats coming from qemu hitting the disk,
and thereby improved performance.
Change-Id: Icf1ce62ecf4e96b97e1946a77b30434157a7786a
BUG: 1468191
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: https://review.gluster.org/17713
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
We are storing the entire volfile and using this to check for volfile
changes. With brick multiplexing there will be a lot of graphs per
process, which will increase the memory footprint of the process. So
instead of storing the entire graph we can store a sha256 hash and
compare hashes to see whether a volfile change happened or not.
Also, with brick multiplexing, the direct comparison of the volfile is
not correct. There are two problems.
Problem 1:
We are currently storing one single graph (the last updated volfile),
whereas what we need is the entire graph with all attached bricks.
If we fix this issue, we have a second problem.
Problem 2:
With multiplexing we have a graph that contains multiple bricks. But
what we are checking as part of the reconfigure is comparing the entire
graph with one single graph, which will always fail.
Solution:
We create a list in glusterfs_ctx_t that stores the sha256 hash of each
individual brick graph. When a graph change happens we compare the
stored hash and the current hash. If the hashes match, there is no need
for a reconfigure. Otherwise we first do the reconfigure and then
update the hash.
For now, gfapi has not been changed this way: when a gfapi volfile
fetch or reconfigure happens, we still store and compare the entire
graph in memory. This is fine, because libgfapi will not load brick
graphs. But changing libgfapi the same way would make the code similar
in both glusterfsd-mgmt and api, and would also save some memory.
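A minimal sketch of the hash-compare idea described above (function and
parameter names are illustrative, not the actual glusterfsd-mgmt code;
assumes OpenSSL's SHA256()):

    #include <openssl/sha.h>
    #include <string.h>

    /* Returns 0 when the stored hash matches the freshly fetched
     * volfile, i.e. no reconfigure is needed; 1 when the caller must
     * reconfigure and then store the new hash. */
    static int
    volfile_changed (const unsigned char *stored_hash,
                     const char *volfile, size_t len)
    {
            unsigned char new_hash[SHA256_DIGEST_LENGTH];

            SHA256 ((const unsigned char *) volfile, len, new_hash);

            if (memcmp (stored_hash, new_hash, SHA256_DIGEST_LENGTH) == 0)
                    return 0;

            return 1;
    }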
Change-Id: I9df917a771a52b95622ab8f63af34ec390163a77
BUG: 1467986
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: https://review.gluster.org/17709
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
This commit introduces a new global option that can be set to limit
the number of multiplexed bricks in one process.
Usage:
`# gluster volume set all cluster.max-bricks-per-process <value>`
If this option is not set then multiplexing will happen for now
with no limitations set; i.e. a brick process will have as many
bricks multiplexed to it as possible. In other words the current
multiplexing behaviour won't change if this option isn't set to
any value.
This commit also introduces a brick process instance that contains
information about brick processes, like the number of bricks handled by
the process (which is 1 in non-multiplexing cases), the list of bricks,
and the port number, which also serves as a unique identifier for each
brick process instance. The brick process list is maintained in
'glusterd_conf_t'.
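A rough sketch of the shape of such a per-process record (field names
are illustrative, not the exact glusterd structure; it assumes the
list_head type from libglusterfs' list.h):

    #include "list.h"   /* libglusterfs list implementation */

    struct brick_proc {
            int              port;            /* unique identifier      */
            int              brick_count;     /* 1 without multiplexing */
            struct list_head bricks;          /* bricks it handles      */
            struct list_head brick_proc_list; /* chained in glusterd_conf_t */
    };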
Updates: #151
Change-Id: Ib987d14ab0a4f6034dac01b73a4b2839f7b0b695
Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
Reviewed-on: https://review.gluster.org/17469
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Solaris 10 uses WebNFS and not the MOUNT protocol. All permission checks
for allowing/denying clients to mount are done through the MNT handlers.
These handlers will not give out a filehandle to the NFS-client when
mounting is denied, which prevents such clients from mounting. However,
over WebNFS a well-known 'root-filehandle' is used directly with the
NFSv3 protocol.
When WebNFS was used, no permission checks (the "nfs.export-dir" option)
were applied. Now the WebNFS mount-handler in Gluster/NFS calls the
mnt3_parse_dir_exports() function that takes care of the permission
checking.
BUG: 1468291
Change-Id: Ic9dfd092473ba9c1c7b5fa38401cf9c0aa8395bb
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: https://review.gluster.org/17718
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: soumya k <skoduri@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
When blocking locks are used, a new frame is allocated that is used to
send the notification to the client once the lock becomes
available. In all other cases, the frame that contains the request from
the client will be used for the reply.
Because there was no way to track the different clients with their
requests (captured in the call-state), the call-state could be free'd
before the notification was sent to the client. This caused a
use-after-free of the call-state and could trigger segfaults of the
Gluster/NFS server or incorrect replies on (un)lock requests.
By introducing a nlm4_notify_args structure, the call-state and frame
can be tracked better. This prevents the possibility of segfaulting when
the call-state is used after being free'd.
BUG: 1467313
Change-Id: I285d2bc552f509e5145653b7a50afcff827cd612
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: https://review.gluster.org/17700
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>
In order to track down a potential use-after-free of the
nfs3_call_state_t structure in the NLM component, add reference counting
where the structure is used. This should prevent premature free'ing of
the structure.
Change-Id: Ib1f13b0463ab1e012b7b49a623c91f0f3e73e1fb
BUG: 1467313
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: https://review.gluster.org/17699
Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
When a reply on an NLM-procedure gets stuck, the NFS-client will resend
the request. This can happen through a re-connect in case the connection
was terminated (long delay in the reply on the initial request). Once
that happens, not all NLM-procedures are handled correctly.
Testing this is difficult and time-consuming. There still may be
problems with certain operations, but this definitely makes it behave
much better than before.
The problem occurred due to a bug in EC; change-id I18a782903ba
addressed the root cause.
Change-Id: I23b385568e27232951fa3fbd7198a0e5d775a8c2
BUG: 1467313
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: https://review.gluster.org/17698
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
We were taking an unref on the wrong dictionary, which results in
invalid memory access.
Change-Id: Ic25a6c209ecd72c9056dfcb79fabcfc650dd3c1e
BUG: 1467513
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: https://review.gluster.org/17691
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
.snaps is a virtual directory that doesn't exist on the backend. Even
though it is a special dentry, it doesn't have a dedicated inode, so
the inode number is always random. That means it will get a different
inode number whenever the snapd process restarts.
Now, with a Windows client, the show-directory feature requires a
lookup on the .snaps directory after a readdirp on root. If snapd
restarted after such a lookup, subsequent lookups will fail because the
linked inode will be stale.
This patch does a revalidate lookup with a new inode.
Change-Id: If97c07ecb307cefe7c86be8ebd05e28cbf678d1f
BUG: 1467513
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: https://review.gluster.org/17690
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
protocol/server expects a child-up event to successfully configure the
graph. In the actual brick graph, posix is the one that decides to
initiate the notification to the parent that the child is up.
But in the snapd graph there is no posix, hence the child-up
notification was missing.
Ideally, each xlator should initiate the child-up event whenever it
sees that it is the last child xlator.
Change-Id: Icccdb9fe920c265cadaf9f91c040a0831b4b78fc
BUG: 1467513
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: https://review.gluster.org/17689
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
The change consists of two parts: make sure it doesn't happen (in
glfs.c), and make it harmless if it does (in mem-pool.c).
Change-Id: Icb7dda7a45dd3d1ade2ee3991bb6a22c8ec88424
BUG: 1468863
Signed-off-by: Jeff Darcy <jdarcy@fb.com>
Reviewed-on: https://review.gluster.org/17728
Tested-by: Jeff Darcy <jeff@pl.atyp.us>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
Problem:
In a 4 + 2 EC volume configuration, if an untar of the Linux kernel is
going on and we kill a brick, indices will be created for the
files/dirs which need to be healed. ec_shd_index_sweep spawns threads
to scan these entries and start heal. If in the middle of this we kill
one more brick, we end up in a situation where we cannot heal an entry,
as only "ec->fragment" bricks are UP. However, the scan continues and
triggers heal for those entries.
Solution:
When a heal is triggered for an entry, check whether it *CAN* be healed
or not. If not, come out with ENOTCONN.
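A small sketch of that pre-heal check (illustrative, not the actual ec
xlator code; up_mask is assumed to carry one bit per UP brick and
fragments the minimum number of bricks needed to reconstruct data):

    #include <errno.h>
    #include <stdint.h>

    static int
    ec_can_heal (uintptr_t up_mask, int fragments)
    {
            int up = 0;

            while (up_mask != 0) {
                    up += (int) (up_mask & 1);   /* count UP bricks */
                    up_mask >>= 1;
            }

            /* fewer than "fragments" bricks up: the entry cannot be
             * healed right now */
            return (up >= fragments) ? 0 : -ENOTCONN;
    }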
Change-Id: I305be7701c289f36bd7bde22491b71074771424f
BUG: 1464359
Signed-off-by: Ashish Pandey <aspandey@redhat.com>
Reviewed-on: https://review.gluster.org/17692
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Reviewed-by: Sunil Kumar Acharya <sheggodu@redhat.com>
Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
Disabling this test right now so that master is green again. The patch
at https://review.gluster.org/#/c/17721/ will actually fix the test.
This patch makes master green again, unblocking other patches from
landing on master.
Change-Id: I77d177ce92eb6edcf5326b27a0f7fdbefdec007b
Signed-off-by: Nigel Babu <nigelb@redhat.com>
BUG: 1468432
Reviewed-on: https://review.gluster.org/17723
When glusterfs wants to retrieve the list of auxiliary gids
of a user, it typically allocates a sufficiently big gid_t
array on stack and calls getgrouplist(3) with it. However,
"sufficiently big" means to be of maximum supported gid list
size, which in GlusterFS is GF_MAX_AUX_GROUPS = 64k.
That means a 64k * sizeof(gid_t) = 256k allocation, which is
big enough to overflow the stack in certain cases.
A further observation is that stack allocation of the gid list
brings no gain, as in all cases the content of the gid list
eventually gets copied over to a heap allocated buffer.
So we add a convenience wrapper of getgrouplist to libglusterfs
called gf_getgrouplist which calls getgrouplist with a sufficiently
big heap allocated buffer (it takes care of the allocation too).
We are porting all the getgrouplist invocations to gf_getgrouplist
and thus eliminating the huge stack allocation.
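A hedged sketch of such a wrapper (the real gf_getgrouplist lives in
libglusterfs and may differ; this only illustrates the heap-allocation
idea on top of Linux getgrouplist(3)):

    #include <grp.h>
    #include <stdlib.h>
    #include <sys/types.h>

    /* Returns the number of gids and hands back a heap buffer that the
     * caller frees; -1 on failure. */
    int
    getgrouplist_on_heap (const char *user, gid_t group, gid_t **groups)
    {
            int    ngroups = 64;             /* small initial guess */
            gid_t *list = malloc (ngroups * sizeof (gid_t));

            if (!list)
                    return -1;

            if (getgrouplist (user, group, list, &ngroups) == -1) {
                    /* too small: ngroups now holds the required count */
                    gid_t *bigger = realloc (list, ngroups * sizeof (gid_t));

                    if (!bigger || getgrouplist (user, group, bigger,
                                                 &ngroups) == -1) {
                            free (bigger ? bigger : list);
                            return -1;
                    }
                    list = bigger;
            }

            *groups = list;
            return ngroups;
    }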
BUG: 1464327
Change-Id: Icea76d0d74dcf2f87d26cb299acc771ca3b32d2b
Signed-off-by: Csaba Henk <csaba@redhat.com>
Reviewed-on: https://review.gluster.org/17706
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
... instead of overwriting stats from the previous interval.
This is so that consumers of this feature do not have to worry about
monitoring when each 'ios-dump-interval' has passed and backing up the
resultant stats file well before the next interval has expired.
Change-Id: Ide897237bf4d38e5d759f09911f7d9c817019edf
BUG: 1458197
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: https://review.gluster.org/17452
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
When a nlm_clnt is getting free'd, the FDs associated with this client
should be unref'd as well.
Change-Id: Ifa4ea4b7ed45a454413cfc0c820f2516c534a9aa
BUG: 1467313
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: https://review.gluster.org/17697
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
There is no refcounting done of the nfs3_call_state_t structure, which
seems to result in use-after-free problems in the NLM part of
Gluster/NFS. The structure is initialized by two different functions;
it is easier to have a single place to do this.
The Gluster/NFS part will not use the refcounting, for now. This is
being added to make the NLM code more stable. nfs3_call_state_wipe()
will behave as before for Gluster/NFS, but cleanup is triggered through
the refcounting now. This prevents major changes to the stable part of
the NFS-server, and makes it possible to improve the NLM component
separately.
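A generic sketch of the refcounting pattern being introduced (purely
illustrative: the actual patch wires nfs3_call_state_t into the
refcounting helpers in libglusterfs, whose names are not reproduced
here):

    #include <stdatomic.h>
    #include <stdlib.h>

    typedef struct call_state_sketch {
            atomic_uint refcnt;   /* set to 1 by the allocator */
            /* ... request data ... */
    } call_state_sketch_t;

    static call_state_sketch_t *
    call_state_ref (call_state_sketch_t *cs)
    {
            atomic_fetch_add (&cs->refcnt, 1);
            return cs;
    }

    static void
    call_state_unref (call_state_sketch_t *cs)
    {
            /* the wipe/cleanup runs only when the last ref is dropped */
            if (atomic_fetch_sub (&cs->refcnt, 1) == 1)
                    free (cs);
    }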
Change-Id: I2e15bcf12af74e8a46c2727e4a160e9444d29ece
BUG: 1467313
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: https://review.gluster.org/17696
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>
The xmlTextWriter* APIs return either the number of bytes written
(possibly 0 because of buffering) or -1 in case of error. If the volume
of the XML data payload is not huge, then most of the time the data to
be written gets buffered; however, when the payload grows, these APIs
sometimes return the total number of bytes written. It then becomes
mandatory that every such call is followed by XML_RET_CHECK_AND_GOTO,
otherwise we may end up returning a non-zero ret code, which would
cause the overall XML generation to fail.
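The pattern being enforced, shown expanded for clarity (illustrative
only; the element name is made up and the real code uses the
XML_RET_CHECK_AND_GOTO macro):

    #include <libxml/xmlwriter.h>

    static int
    write_op_errno (xmlTextWriterPtr writer, int op_errno)
    {
            int ret = xmlTextWriterWriteFormatElement (writer,
                              (const xmlChar *) "opErrno", "%d", op_errno);

            if (ret < 0)
                    return -1;   /* real error */

            /* ret >= 0 means bytes written (0 when buffered): success.
             * Normalise to 0 so a byte count never leaks upward as a
             * failure code. */
            return 0;
    }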
Change-Id: I02ee7076e1d8c26cf654d3dc3e77b1eb17cbdab0
BUG: 1467841
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: https://review.gluster.org/17702
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Gaurav Yadav <gyadav@redhat.com>
When a SEEK_HOLE was issued near to the end of file, sometimes an
offset beyond the end of file was returned. Another problem was that
using some offsets greater than the end of file returned successfully
instead of failing with ENXIO.
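The semantics the fix enforces, as a plain POSIX sketch (not the ec
xlator code itself): a SEEK_HOLE offset past EOF must fail with ENXIO,
and a returned hole offset can never exceed EOF.

    #define _GNU_SOURCE
    #include <errno.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static void
    check_seek_hole (int fd)
    {
            struct stat st;
            off_t       hole;

            if (fstat (fd, &st) != 0)
                    return;

            hole = lseek (fd, st.st_size + 1, SEEK_HOLE);
            if (hole != -1 || errno != ENXIO)
                    fprintf (stderr, "offset past EOF must yield ENXIO\n");

            hole = lseek (fd, 0, SEEK_HOLE);
            if (hole > st.st_size)
                    fprintf (stderr, "hole offset beyond EOF: bug\n");
    }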
Change-Id: I238d2884ba02fd19a78116b0f8f8e8d6338fb3f5
BUG: 1449348
Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
Reviewed-on: https://review.gluster.org/17228
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
In a distributed volume on master, it can so happen that
the RMDIR followed by MKDIR is recorded in changelog on
a particular subvolume with same gfid and pargfid/bname
but not on all subvolumes as below.
E 61c67a2e-07f2-45a9-95cf-d8f16a5e9c36 RMDIR \
9cc51be8-91c3-4ef4-8ae3-17596fcfed40%2Ffedora2
E 61c67a2e-07f2-45a9-95cf-d8f16a5e9c36 MKDIR 16877 0 0 \
9cc51be8-91c3-4ef4-8ae3-17596fcfed40%2Ffedora2
While processing this changelog, geo-rep thinks RMDIR is
successful and does recursive rmdir on slave. But in the
master the directory still exists. This could lead to
data discrepancy between master and slave.
Cause:
The RMDIR-MKDIR pair gets recorded in the changelog this way when the
directory removal is successful on the cached subvolume but fails on
one of the hashed subvolumes for some reason (it may be down). In this
case, the directory is re-created on the cached subvolume, which gets
recorded as MKDIR again in the changelog.
Solution:
While processing RMDIR, geo-replication should stat the gfid on the
master and should not delete it if it is present.
Change-Id: If5da1d6462eb4d9ebe2e88b3a70cc454411a133e
BUG: 1467718
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: https://review.gluster.org/17695
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Aravinda VK <avishwan@redhat.com>
These are both needed because GFAPI clients are a bit unique. They can
be long-running, so they can potentially benefit from the memory
reclamation that mem_pools_init will enable. On the other hand, they
might completely tear down their Gluster environment well before the
process exits, so they need mem_pools_fini as well.
Our main and auxiliary daemons do need mem_pools_init but don't need to
call it themselves because that's handled for them in glusterfsd.c right
after they daemonize. They don't need mem_pools_fini, because the only
place they could *safely* call it is right before exit and then it won't
do them any good. Transient processes (e.g. gf_attach) don't benefit
from either because, well, they're transient. They'll be gone before
memory reclamation matters.
Change-Id: I611cf77d8b408b3ecfc00664e8da294edde9e57a
Signed-off-by: Jeff Darcy <jdarcy@fb.com>
Reviewed-on: https://review.gluster.org/17666
Smoke: Gluster Build System <jenkins@build.gluster.org>
Tested-by: Jeff Darcy <jeff@pl.atyp.us>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
Plus minor readability improvements.
Reported-by: pmatthaei@debian.org
Change-Id: I5393819a2fc9f240a19811143bb57b127df717cf
BUG: 1466785
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
Reviewed-on: https://review.gluster.org/17660
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Problem: In the posix xlator, posix_(f)getxattr calls the system call
(sys_lgetxattr) two times to fetch an xattr value.
Solution: By using an extra buffer for the first call, we can avoid the
second sys_lgetxattr call in posix_getxattr for most of the xattrs.
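A sketch of that single-syscall fast path (illustrative, not the exact
posix xlator code; plain lgetxattr(2) stands in for sys_lgetxattr):

    #include <errno.h>
    #include <stdlib.h>
    #include <sys/xattr.h>

    /* Try the caller-supplied scratch buffer first; only fall back to a
     * size query plus a second call when the value does not fit. */
    static ssize_t
    get_xattr_value (const char *path, const char *name,
                     char *scratch, size_t scratch_len, char **out)
    {
            ssize_t len = lgetxattr (path, name, scratch, scratch_len);

            if (len >= 0) {               /* common case: one syscall */
                    *out = scratch;
                    return len;
            }
            if (errno != ERANGE)
                    return -1;

            len = lgetxattr (path, name, NULL, 0);   /* required size */
            if (len < 0)
                    return -1;

            *out = malloc (len);
            if (*out == NULL)
                    return -1;

            return lgetxattr (path, name, *out, len);
    }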
BUG: 1460659
Change-Id: I0d8da776c5bc86653d874a4629a73bbf65c621b8
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Reviewed-on: https://review.gluster.org/17520
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Kinglong Mee
When external programs perform a dlopen("..so", RTLD_LAZY|RTLD_LOCAL)
on some shared objects like xlators, it can fail with dlerror set to
error string "undefined symbol <some-type>".
This was observed for the following shared objects: fuse.so, quota.so,
quotad.so, server.so, libgfrpc.so and socket.so
P.S: This was found while running a Go program which fetches the list
of xlator options (volume_option_t) from an xlator's shared object.
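The failing pattern, as a small dlfcn sketch (the xlator path and the
"options" symbol name are illustrative assumptions, not taken from the
patch):

    #include <dlfcn.h>
    #include <stdio.h>

    int
    main (void)
    {
            void *handle = dlopen ("/usr/lib64/glusterfs/xlators/features/quota.so",
                                   RTLD_LAZY | RTLD_LOCAL);

            if (!handle) {
                    /* before the fix this failed with
                     * "undefined symbol <some-type>" */
                    fprintf (stderr, "dlopen: %s\n", dlerror ());
                    return 1;
            }

            void *opts = dlsym (handle, "options");
            printf ("options table at %p\n", opts);

            dlclose (handle);
            return 0;
    }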
BUG: 1193929
Change-Id: I7b958409cf11fb67c2be32a3f85a96fb1260236b
Signed-off-by: Prashanth Pai <ppai@redhat.com>
Reviewed-on: https://review.gluster.org/17659
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
This also makes mem_pools_init and mem_pools_fini re-callable, so GFAPI
can go through infinite init/fini cycles if they want to. Not saying
that's a good idea, but at least it's safe.
Change-Id: I617913410bcff54568b802cb653f48bdd533bd65
Signed-off-by: Jeff Darcy <jdarcy@fb.com>
Reviewed-on: https://review.gluster.org/17662
Smoke: Gluster Build System <jenkins@build.gluster.org>
Tested-by: Jeff Darcy <jeff@pl.atyp.us>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
Problem:
For a file in gfid split-brain, the parent directory ('/' during
testing) was detected as possibly undergoing heal instead of split-brain
in `heal-info` output. Also, it was not being displayed in `info
split-brain` output for the same reason. The problem was that when `glfsheal`
was run, lookup on '/' triggered a background self-heal due to which processing
of '/' during `heal info` failed to acquire locks with errno=EAGAIN.
Fix:
Set background-self-heal-count to zero while launching glfsheal.
Change-Id: I153a7c75af71f213a4eefacf504a0f9806c528a5
BUG: 1318895
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: https://review.gluster.org/13772
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
This has been nuking just about everything. Kill it with fire.
Change-Id: I5e6e2e4d1568f118298fcf109077db001f87040d
Signed-off-by: Jeff Darcy <jdarcy@fb.com>
Reviewed-on: https://review.gluster.org/17652
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
The most common pattern, both in our code and elsewhere, is this:
struct _xyz {
...
};
typedef struct _xyz xyz_t;
These exceptions - especially call_frame/call_stack - have been slowing
down code navigation for years. By converging on a single pattern,
navigating from xyz_t in code to the actual definition of struct _xyz
(i.e. without having to visit the typedef first) might even be
automatable.
Change-Id: I0e5dd1f51f98e000173c62ef4ddc5b21d9ec44ed
Signed-off-by: Jeff Darcy <jdarcy@fb.com>
Reviewed-on: https://review.gluster.org/17650
Smoke: Gluster Build System <jenkins@build.gluster.org>
Tested-by: Jeff Darcy <jeff@pl.atyp.us>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
The patch contains 2 scripts:
log_accounting.sh does a du -h on the FS hierarchy and a quota list on
the hierarchy and interleaves the two outputs. We can then identify
which directories in the FS have caused the accounting to go bad and
try to investigate what fops happened on those directories. We can also
limit the set of directories on which we need to set the dirty xattr to
correct the accounting.
xattr_analysis.py reads all the xattrs of a brick and dumps them in a
human-readable form to ease debugging.
Change-Id: I2155561d10c08dc3ab9e8b09dbd258f0592b4d33
BUG: 1466188
Signed-off-by: Sanoj Unnikrishnan <sunnikri@redhat.com>
Reviewed-on: https://review.gluster.org/17649
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
xxhash is a faster non-cryptographic hash.
https://github.com/Cyan4973/xxHash
Release Taken: "xxHash v0.6.2"
--------------
Files added:
contrib/xxhash/xxhash.c
contrib/xxhash/xxhash.h
contrib/xxhash/xxhsum.c
Modifications to source:
------------------------
Following functions and data types got 'GF_' prefix
as below to avoid any form of name collisions in future.
---- Functions ----
GF_XXH_versionNumber
GF_XXH32
GF_XXH32_createState
GF_XXH32_freeState
GF_XXH32_copyState
GF_XXH32_reset
GF_XXH32_update
GF_XXH32_digest
GF_XXH32_canonicalFromHash
GF_XXH32_hashFromCanonical
GF_XXH64
GF_XXH64_createState
GF_XXH64_freeState
GF_XXH64_copyState
GF_XXH64_reset
GF_XXH64_update
GF_XXH64_digest
GF_XXH64_canonicalFromHash
GF_XXH64_hashFromCanonical
---- Data Types ----
GF_XXH_errorcode
GF_XXH32_state_t*
GF_XXH32_canonical_t*
GF_XXH32_hash_t
GF_XXH64_state_t*
GF_XXH64_canonical_t*
GF_XXH64_hash_t
It is linked with libglusterfs.so. A wrapper function is also added in
common-utils.c for easy usage.
xxhash can be used for all the use cases where a faster
non-cryptographic hash is required. The gfid-to-path infra would be
using this for now.
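A hedged usage sketch of the renamed API (it assumes GF_XXH64 keeps
upstream XXH64's signature of input pointer, length and seed; the
common-utils wrapper is not shown):

    #include <stdio.h>
    #include <string.h>
    #include "xxhash.h"   /* contrib/xxhash/xxhash.h */

    int
    main (void)
    {
            const char        *name = "some-gfid-or-path";
            unsigned long long seed = 0;

            unsigned long long hash = GF_XXH64 (name, strlen (name), seed);
            printf ("xxh64 = %016llx\n", hash);

            return 0;
    }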
NOTE:
----
The gluster coding guidelines check is ignored
as maintaining it further would be difficult.
Updates: #253
Change-Id: Ib143f90d91d4ee99864a10246d5983e92900173b
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: https://review.gluster.org/17641
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Problem:
The DELETE path is quoted before it reaches glusterfind. This wasn't
handled in the glusterfind code, leading to double quoting of the path
separator: '%2F' became '%252F', i.e. the '%' character in '%2F' itself
was quoted to '%25'.
Solution:
Unquote the deleted path before further processing.
Change-Id: I2dfbbd7792dc0f9da5c8e02093b0f1c031ff344a
BUG: 1465024
Signed-off-by: Milind Changire <mchangir@redhat.com>
Reviewed-on: https://review.gluster.org/17629
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Aravinda VK <avishwan@redhat.com>
Problem:
There is a race when the following two commands are executed on the mount in
parallel from two different terminals on a sharded volume,
which leads to use-after-free.
Terminal-1:
while true; do dd if=/dev/zero of=file1 bs=1M count=4; done
Terminal-2:
while true; do cat file1 > /dev/null; done
In the normal case this is the life-cycle of a shard-inode
1) Shard is added to LRU when it is first looked-up
2) For every operation on the shard it is moved up in LRU
3) When "unlink of the shard"/"LRU limit is hit" happens it is removed from LRU
But we are seeing a race where the inode stays in Shard LRU even after it is
forgotten which leads to Use-after-free and then some memory-corruptions.
These are the steps:
1) Shard is added to LRU when it is first looked-up
2) For every operation on the shard it is moved up in LRU
Reader handler:
1) Needs shard-x to be read.
2) In shard_common_resolve_shards(), it does inode_resolve() and that
   leads to a hit in the LRU, so it is going to call
   __shard_update_shards_inode_list() to move the inode to the top of
   the LRU.
Truncate handler (racing in between):
1) Truncate has just deleted shard-x.
2) shard-x gets unlinked from the itable and inode_forget(inode, 0) is
   called to make sure the inode can be purged upon the last unref.
Reader handler (continued):
3) When __shard_update_shards_inode_list() is called, it finds that the
   inode is not in the LRU, so it adds it back to the LRU list.
Both these operations complete and call inode_unref(shard-x) which leads to the inode
getting freed and forgotten, even when it is in Shard LRU list. When more inodes are
added to LRU, use-after-free will happen and it leads to undefined behaviors.
Fix:
I see that the inode can be removed from LRU even by the protocol layers like gfapi/gNFS
when LRU limit is reached. So it is better to add a check in shard_forget() to remove itself
from LRU list if it exists.
BUG: 1466037
Change-Id: Ia79c0c5c9d5febc56c41ddb12b5daf03e5281638
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: https://review.gluster.org/17644
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
Use a local variable to store the call count in the STACK_WIND for
loop. Using frame->local is dangerous, as it could be freed while the
loop is still being processed.
Change-Id: Ie65cdcfb7868509b4a83bc2a5b5d6304eabfbc8e
BUG: 1466110
Signed-off-by: N Balachandran <nbalacha@redhat.com>
Reviewed-on: https://review.gluster.org/17645
Smoke: Gluster Build System <jenkins@build.gluster.org>
Tested-by: Nigel Babu <nigelb@redhat.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
Wrappers for `gf_log` and `gf_msg` to add support for a structured
logging format.
Two new wrappers are available: `gf_slog` and `gf_smsg`.
Example 1: All static details

    gf_slog ("cli", GF_LOG_INFO, "Volume Set",
             "name=gv1",
             "option=changelog.changelog",
             "value=on",
             NULL);

    gf_smsg ("cli", GF_LOG_INFO, 0, MSGID_VOLUME_SET,
             "Volume Set",
             "name=gv1",
             "option=changelog.changelog",
             "value=on",
             NULL);

Example 2: Using format chars in key values

    gf_slog ("cli", GF_LOG_INFO, "Volume Set",
             "name=%s", volume_name,
             "option=%s", option_name,
             "value=%s", option_value,
             NULL);

    gf_smsg ("cli", GF_LOG_INFO, 0, MSGID_VOLUME_SET,
             "Volume Set",
             "name=%s", volume_name,
             "option=%s", option_name,
             "value=%s", option_value,
             NULL);

Formats as:

    <EVENT><TAB><KEY1=VALUE1><TAB><KEY2=VALUE2>...

Example:

    Volume Set  name=gv1  option=changelog.changelog  value=on
Updates: #240
Change-Id: I871727be16a39f681d41f363daa0029b8066fb52
Signed-off-by: Aravinda VK <avishwan@redhat.com>
Reviewed-on: https://review.gluster.org/17543
Reviewed-by: MOHIT AGRAWAL <moagrawa@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Amar Tumballi <amarts@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Change-Id: I55f707ae1e7c3ad7fc0545f7aa657584cead58f9
BUG: 1465214
Signed-off-by: Jeff Darcy <jdarcy@fb.com>
Reviewed-on: https://review.gluster.org/17636
Smoke: Gluster Build System <jenkins@build.gluster.org>
Tested-by: Jeff Darcy <jeff@pl.atyp.us>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Ji-Hyeon Gim
Reviewed-by: Amar Tumballi <amarts@redhat.com>
If an fd is opened on a file, the file is migrated, and the cached
subvol is updated in the inode_ctx before an fd-based fop is sent, then
the fop is sent to the dst subvol, on which the fd is not opened.
This causes the FOP to fail with EBADF.
Now, every fd-based fop will check that the fd has been opened on the
dst subvol before winding it down.
Change-Id: Id92ef5eb7a5b5226688e2d2868b15e383f5f240e
BUG: 1465075
Signed-off-by: N Balachandran <nbalacha@redhat.com>
Reviewed-on: https://review.gluster.org/17630
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-by: Susant Palai <spalai@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Problem:
"gluster v heal <volname> info" takes a long time to respond when a
brick is down.
RCA:
The heal info command does a virtual mount. EC waits for 10 seconds
before sending the UP call to the upper xlator, to get notifications
(DOWN or UP) from all the bricks.
Currently, we increase ec->xl_notify_count based on the current status
of the brick. So, if a DOWN event notification has come and the brick
is already down, we do not increase ec->xl_notify_count in
ec_handle_down.
Solution:
Handle the DOWN event as a notification irrespective of the current
status of the brick.
Change-Id: I0acac0db7ec7622d4c0584692e88ad52f45a910f
BUG: 1464091
Signed-off-by: Ashish Pandey <aspandey@redhat.com>
Reviewed-on: https://review.gluster.org/17606
Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
brickinfo's port & status should be filled in only when the attach
brick request is successful.
Change-Id: I68b181be37cb94d176f0f4692e8d9dac5493181c
BUG: 1465559
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: https://review.gluster.org/17640
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Samikshan Bairagya <samikshan@gmail.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
When we build GlusterFS with GF_DISABLE_MEMPOOL, the build fails due to
a macro condition in mem-pool.c:mem_get().
Change-Id: I03fe804f93d761ea3bfdc3b20f0253a03350a68f
BUG: 1465214
Signed-off-by: Ji-Hyeon Gim <potatogim@potatogim.net>
Reviewed-on: https://review.gluster.org/17633
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>
Tested-by: Ji-Hyeon Gim
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Problem: In a container environment, sometimes after deleting a gluster
pod and creating a new gluster pod, the brick process doesn't seem to
come up.
Solution: On the basis of the logs, it seems glusterd is trying to
attach to a non-glusterfs process. Change the code of the function
glusterd_get_sock_from_brick_pid to fetch the socket path from the
arguments of the running brick process.
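One way to read the socket path out of a running brick's argument list,
as a self-contained sketch (illustrative, not the glusterd code; it
assumes the brick process passes the socket file after a separate "-S"
argument):

    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>

    static int
    sock_from_cmdline (pid_t pid, char *sockpath, size_t len)
    {
            char   path[64], buf[4096];
            FILE  *fp;
            size_t n, i;

            snprintf (path, sizeof (path), "/proc/%d/cmdline", (int) pid);
            fp = fopen (path, "r");
            if (!fp)
                    return -1;

            n = fread (buf, 1, sizeof (buf) - 1, fp);
            fclose (fp);
            buf[n] = '\0';

            /* /proc/<pid>/cmdline separates arguments with NUL bytes */
            for (i = 0; i + 2 < n; i += strlen (buf + i) + 1) {
                    if (strcmp (buf + i, "-S") == 0) {
                            snprintf (sockpath, len, "%s", buf + i + 3);
                            return 0;
                    }
            }

            return -1;
    }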
BUG: 1464072
Change-Id: Ida6af00066341b683bbb4440d7a0d8042581656a
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Reviewed-on: https://review.gluster.org/17601
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Afr has introduced a new key, GF_XATTR_LIST_NODE_UUIDS_KEY, through
which rebalance will figure out its local subvolumes (reference bug-id
1463250).
The key GF_XATTR_NODE_UUID_KEY will continue to serve its old purpose
of returning the first afr child.
test: prove tests/basic/distribute/rebal-all-nodes-migrate.t
Change-Id: I4d602feda2a05b29d2210c712a07a4ac6b8bc112
BUG: 1463648
Signed-off-by: Susant Palai <spalai@redhat.com>
Signed-off-by: N Balachandran <nbalacha@redhat.com>
Reviewed-on: https://review.gluster.org/17595
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
The rebalance used to get the file count in the beginning
and not update it. This caused estimates to fail
if the number changed during the rebalance.
The rebalance now updates the file count periodically.
Change-Id: I1667ee69e8a1d7d6bc6bc2f060fad7f989d19ed4
BUG: 1464110
Signed-off-by: N Balachandran <nbalacha@redhat.com>
Reviewed-on: https://review.gluster.org/17607
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Problem:
The change in EC to return a list of node uuids for
GF_XATTR_NODE_UUID_KEY was causing problems with geo-rep.
Fix:
This patch will allow getting the single node uuid as before with the
key "GF_XATTR_NODE_UUID_KEY", and will also allow getting the list of
node uuids by using a new key "GF_XATTR_LIST_NODE_UUIDS_KEY". This will
solve the problem with geo-rep and any other features which depend on
this.
BUG: 1462790
Change-Id: I2d9214a9658d4a41a3d6de08600884d2bda5f3eb
Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
Reviewed-on: https://review.gluster.org/17594
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Enhance the script testing glfs_xreaddirplus functionality
and also measure the performance difference when compared to
using the older method.
Change-Id: I590d07c850994afab0a02eb5dccb8342224aa6b7
BUG: 1442950
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
Reviewed-on: https://review.gluster.org/17329
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
This is similar to the issue fixed for rename in
https://review.gluster.org/#/c/16016/
For hardlinks, if the cached and hashed subvolumes are different, then
a linkto file is first created on the hashed subvolume using root
permissions, but the actual hardlink creation fails with EACCES and the
stale linkto file is never removed. All follow-up hardlink calls with
the file name then result in ESTALE, because the linkto file creation
fails with EEXIST and the follow-up lookup on the linkto file returns a
gfid mismatch (old linkto file) and finally fails with ESTALE.
Steps to reproduce:
(From the link/00.t test in the posix-testsuite)
Steps executed in the script:
* create a file "abc" using root
* change the ownership of the file to a non-root user
* create hardlink "link" for "abc" as the non-root user; it fails with
  EACCES
* delete "abc"
* create directory "abc" using root
* again try to create hardlink "link" for "abc" as the non-root user;
  it fails with ESTALE
Also tried to fix other bugs in dht_linkfile_create_cbk() and
posix_lookup.
Thanks Susant for the help in debugging the issue and for the
suggestion for this patch.
Change-Id: I7a5a1899d3fd1fdb13578b37f9d52a084492e35d
BUG: 1452084
Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
Reviewed-on: https://review.gluster.org/17331
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: N Balachandran <nbalacha@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Reorganizing GlusterFS README contents.
BUG: 1461648
Change-Id: I3696e41963c536679d04934930d491f425ffe1ad
Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
Reviewed-on: https://review.gluster.org/17550
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
The buffer used to hold the basename was hard-coded to the size of
NAME_MAX (255). This might lead to buffer overflow crashes when the
basename sent is longer than NAME_MAX. Fixed the same.
Change-Id: I6c1cad3ccaeb8c55549b1d3c5f96a198f65ba2b7
BUG: 1463178
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: https://review.gluster.org/17579
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>
reboot
Reported-by: Hendrik Visage <hvjunk@gmail.com>
Change-Id: Ibcff56b00f45c8af54c1ae04974267c2180f5f63
BUG: 1452527
Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
Reviewed-on: https://review.gluster.org/17339
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
The patch https://review.gluster.org/#/c/17177 resolves "." and ".." to
the corresponding inodes and names before sending the request to the
backend server. But this only works if the inode and its parent are
linked properly. In case of a nameless lookup (applications like
ganesha), the inode of the parent can be NULL (only the gfid is sent).
So this patch will resolve "." and ".." only if a proper parent is
available.
Change-Id: I4c50258b0d896dabf000a547ab180b57df308a0b
BUG: 1460514
Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
Reviewed-on: https://review.gluster.org/17502
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Poornima G <pgurusid@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: soumya k <skoduri@redhat.com>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>