Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 210 (libglusterfsclient: Enhance logging)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=210
This ensures that the process using libglusterfsclient does
not exit before all the fops and calls have been replied to.
It helps to ensure that the backends are in a sane state when
the program exits.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 279 (File written with booster results in self-heal after dd exits)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=279
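
A minimal sketch of the "wait until everything has unwound" idea, using a
plain pthread counter and condition variable; frame_sent, frame_replied and
wait_for_pending_frames are hypothetical names, not the actual
libglusterfsclient symbols.

    #include <pthread.h>

    /* One counter of outstanding call frames, protected by a mutex,
     * plus a condition variable for fini/umount to wait on. */
    static pthread_mutex_t frame_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  frame_cond = PTHREAD_COND_INITIALIZER;
    static int             frames_pending = 0;

    static void
    frame_sent (void)                   /* call when a fop is wound */
    {
            pthread_mutex_lock (&frame_lock);
            frames_pending++;
            pthread_mutex_unlock (&frame_lock);
    }

    static void
    frame_replied (void)                /* call from the fop callback */
    {
            pthread_mutex_lock (&frame_lock);
            if (--frames_pending == 0)
                    pthread_cond_broadcast (&frame_cond);
            pthread_mutex_unlock (&frame_lock);
    }

    static void
    wait_for_pending_frames (void)      /* call from fini/umount */
    {
            pthread_mutex_lock (&frame_lock);
            while (frames_pending > 0)
                    pthread_cond_wait (&frame_cond, &frame_lock);
            pthread_mutex_unlock (&frame_lock);
    }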
This patch cleans up the umount and fini paths in preparation
for supporting waiting for all pending call frames to unwind.
Two miscellaneous fixes are included:
1. Avoid a deadlock in _libgf_umount by using
_libgf_vmp_search_entry instead of libgf_vmp_search_exact_entry,
since the latter tries to take a lock already held by
_libgf_umount.
2. Avoid a crash in _libgf_umount by deleting the vmp entry
from the list before it gets freed.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 279 (File written with booster results in self-heal after dd exits)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=279
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 210 (libglusterfsclient: Enhance logging)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=210
If the root inode is outdated, send a revalidate on it.
A revalidate on the root inode also reduces the window in which
an op can fail over distribute because the layout of the root
directory was not constructed when we sent the lookup on root in
glusterfs_init. That can happen when not all children of a
distribute volume were up at the time of glusterfs_init.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 256 (revalidates should be sent on '/' in libglusterfsclient.)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=256
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 275 (libglusterfsclient: Generic build failure bug for libglusterfsclient and booster)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=275
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 263 (files are not resolved to glusterfs when vmp is not terminated with a '/'.)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=263
distribute to initialize before sending lookup on '/'.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 255 (libglusterfsclient should wait till all the children of distribute are initialized before sending first lookup on '/')
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=255
We should check fdctx instead.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 253 (Global bug for libglusterfsclient NULL checks and CALLOC handling fixes)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=253
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 253 (Global bug for libglusterfsclient NULL checks and CALLOC handling fixes)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=253
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 253 (Global bug for libglusterfsclient NULL checks and CALLOC handling fixes)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=253
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 253 (Global bug for libglusterfsclient NULL checks and CALLOC handling fixes)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=253
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 210 (libglusterfsclient: Enhance logging)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=210
properly in glusterfs_glh_realpath.
- while building the realpath, if an intermediate path happens to be a
symbolic link, the content of the link was being appended to dirname (path)
instead of to the intermediate path.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 237 (If any of the path component other than the last one, happens to be a symbolic link glusterfs_glh_realpath does not construct correct path.)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=237
libgf_trim_to_prev_dir.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 236 (Stack overflow due to infinite recursion in glusterfs_glh_realpath)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=236
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 235 (Handle failures in glusterfs_glh_realpath appropriately)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=235
glusterfs_glh_realpath.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 235 (Handle failures in glusterfs_glh_realpath appropriately)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=235
- exclude symbolic links from the set of file types for which ENOTDIR is
returned, since a symbolic link can point to a directory.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 234 (Apache-2.2 on booster returns HTTP_FORBIDDEN for a directory which is present)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=234
glusterfs_glh_realpath.
- don't assume that the content returned by readlink while constructing the
realpath of a symbolic link contains the vmp as part of the path. This is
necessary for symbolic links whose targets are relative paths.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 233 (Crash in Apache running on booster when a client tries to access a symbolic link)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=233
When searching for an exact entry we need to compare the
component count of the candidate VMP with the component count of
the path being searched. This is the opposite of the current
behaviour, where we compare the component count of the VMP with
the component count of maxentry, which will always be the same.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 209 (VMP parsing through fstab has issues)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=209
Another attempt to enhance searching for VMP entries.
There was a problem of returning the longest prefix match from
all the VMPs without checking whether the number of matched
components was the same as the number of components in the
candidate VMP.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 209 (VMP parsing through fstab has issues)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=209
allocated memory.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 228 (Segmentation fault in glusterfs_getxattr)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=228
Some weeks back, I'd separated the big lock into vmplock and mountlock.
See commit 304e4274ca9b0339539581c5413e3339078c1182 in mainline.
At that time, we did not have a solution to the problem of when
to init the vmplist in a thread-safe manner, since there was no
lock to protect the vmplist specifically, and libgf_vmp_map_ghandle
was called inside glusterfs_mount, so the "lock" was already being
held.
Now that we have separate mount and vmp locks, the accesses can be
synced correctly.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 211 (libglusterfsclient: Race condition against vmplist in libgf_vmp_map_ghandle)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=211
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 210 (libglusterfsclient: Enhance logging)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=210
Till now, we've been doing a character-by-character comparison
between a given path and the VMP to search for the glusterfs
handle for the given path. This does not work for all cases and
has been a known bug. This commit changes the byte-by-byte
comparison into a more accurate component-based comparison to fix
search failures.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 209 (VMP parsing through fstab has issues)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=209
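
A self-contained sketch of component-based matching; vmp_is_prefix_of is a
hypothetical helper, not the actual libgf_vmp_* code. Every '/'-separated
component of the VMP must match the corresponding leading component of the
path, so "/mnt/gluster" no longer matches "/mnt/glusterfs/file".

    #include <string.h>

    /* Return 1 if every component of vmp matches the corresponding
     * leading component of path, 0 otherwise. */
    static int
    vmp_is_prefix_of (const char *vmp, const char *path)
    {
            while (*vmp) {
                    while (*vmp == '/')             /* skip slashes */
                            vmp++;
                    while (*path == '/')
                            path++;
                    if (!*vmp)
                            break;

                    size_t vlen = strcspn (vmp, "/");
                    size_t plen = strcspn (path, "/");

                    if (vlen != plen || strncmp (vmp, path, vlen) != 0)
                            return 0;               /* component mismatch */

                    vmp  += vlen;
                    path += plen;
            }
            return 1;                               /* all components matched */
    }

The two follow-up fixes for bug 209 listed earlier in this log refine the same
idea by also comparing the matched-component count with the component count of
the candidate VMP.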
When an fd_t is fd_create'd, we need to call fd_bind on it to
ensure that any fd_lookup on the inode returns this fd. We were
not doing this, so translators like write-behind were not able to
order path-based requests at all, resulting in fops like stat,
which could be issued after a writev, overtaking a previous
writev that was still being written behind.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 179 (fileop reports miscompares on read tests)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=179
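
The intended pattern, sketched with the libglusterfs fd calls as they existed
around this release (fd_create, fd_bind, fd_lookup); the helper below and the
exact signatures are assumptions, not a quote of the patch.

    #include <sys/types.h>
    #include "fd.h"          /* assumed: fd_t, fd_create, fd_bind, fd_lookup */

    /* Hypothetical helper showing the open path. */
    static fd_t *
    open_and_bind_fd (inode_t *inode, pid_t pid)
    {
            fd_t *fd = fd_create (inode, pid);
            if (!fd)
                    return NULL;

            /* Without fd_bind, a later fd_lookup (inode, pid) by a
             * translator such as write-behind cannot find this fd, so it
             * cannot order path-based fops (e.g. stat) against writes
             * still pending on the fd. */
            fd_bind (fd);

            /* ... wind the open/create fop using fd ... */
            return fd;
    }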
Earlier, we invalidated the iattr cache on writes. Now we need
to do so for reads also, so that we do not update the iattr cache
with the 0-filled stat received from io-cache.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 180 (fileop fails at chmod with stale file handle error over unfs3)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=180
Till now we've been creating an iovec, storing references in it
to the application data, and simply passing it on to the
translator tree. This means that the buffer being passed to the
translators is not at all associated with the memory ref'd by the
iobref argument to the write fop. This is a problem when
write-behind is a translator in the tree, since it assumes that
the memory in the iovecs passed to write fops is already
refcounted by the iobref, and so it simply copies the address of
the application data. The problem is that the application can
continue using this buffer, free it, or overwrite it, destroying
the data that write-behind may write at a later time.
The solution is to copy the application's write buffer into an
iobuf which is referred to by the iobref.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 178 (libglusterfsclient: Data corruption on using write-behind in translator tree)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=178
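
A sketch of the fix, assuming the iobuf/iobref helpers of that era (iobuf_get,
iobuf_ptr, iobref_new, iobref_add, iobuf_unref); the prepare_write_payload
helper and the exact names and signatures are assumptions, not the actual
patch.

    #include <string.h>
    #include <sys/uio.h>
    #include "iobuf.h"       /* assumed libglusterfs iobuf/iobref API */

    /* Copy the application's buffer into a refcounted iobuf before the
     * writev fop is wound, so write-behind can safely keep the data. */
    static int
    prepare_write_payload (struct iobuf_pool *pool, const void *app_buf,
                           size_t size, struct iovec *vec,
                           struct iobref **iobref_out)
    {
            struct iobuf  *iob    = iobuf_get (pool);
            struct iobref *iobref = iobref_new ();

            if (!iob || !iobref)
                    return -1;      /* cleanup of partial allocations elided */

            memcpy (iobuf_ptr (iob), app_buf, size); /* size <= iobuf size assumed */
            iobref_add (iobref, iob);
            iobuf_unref (iob);      /* the iobref now holds its own reference */

            vec->iov_base = iobuf_ptr (iob);         /* our copy, not app_buf */
            vec->iov_len  = size;
            *iobref_out   = iobref; /* unref once the fop has completed */
            return 0;
    }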
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 115 (./configure adds libglusterfsclient when it shouldn't)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=115
There seems to be a reproducible corruption, specifically of the
libglusterfs_client_local_t that is allocated for the read call.
The subsequent access to the fd inside local therefore leads to a
segfault. This is a temporary fix.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 164 (libglusterfsclient: Segfault due to memory corruption of frame local in libgf_client_read)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=164
In libgf_client_loc_fill, there is a possibility that ino, par
and name are all specified as non-NULL, non-zero args. So if an
inode is located in the itable using the ino, and the subsequent
search for the inode using the par ino and the file name does not
result in an inode being found, the current code overwrites the
inode that was found through the ino. The correct behaviour is to
stop further searches if an inode was already found using ino.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 161 (unfs3 crashes on link system call by fileop)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=161
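
A sketch of the corrected search order; loc_fill_inode is a hypothetical
helper, and inode_search is assumed to have the pre-GFID (table, ino, name)
signature of that era.

    #include <sys/types.h>
    #include "inode.h"       /* assumed: loc_t, inode_table_t, inode_search */

    static int
    loc_fill_inode (loc_t *loc, inode_table_t *itable,
                    ino_t ino, ino_t par, const char *name)
    {
            if (ino)
                    loc->inode = inode_search (itable, ino, NULL);

            /* Search by (parent, name) only if nothing was found by ino;
             * never overwrite an inode already found through ino. */
            if (!loc->inode && par && name)
                    loc->inode = inode_search (itable, par, name);

            return loc->inode ? 0 : -1;
    }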
In the loc_t of the link being created, we must fill in the inode
of the old/target loc since this is a link operation. The
inode_link to the new parent is called in libgf_client_link.
This fixes a crash while running fileop over a fully-loaded
dist-repl vol file.
Ref: Bugzilla 161
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 161 (unfs3 crashes on link system call by fileop)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=161
This is needed to work around the replicate behaviour of
possibly returning the device number for the same file from
different subvolumes.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 148 (replicate: Returns st_dev from different subvols resulting in ESTALE thru unfs3booster)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=148
The workaround for the DHT requirement of a lookup on /
needs to be done only once, when the xlator graph is inited.
Doing it on every path's lookup results in a major performance
penalty when using upwards of 16 distribute subvolumes, as
reported by Avati.
Ref: bug 152
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 152 (libglusterfsclient: DHT workaround is a major performance bottleneck)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=152
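
One way to express "only once, at graph init" is a once-guard; a generic
sketch with pthread_once, not the actual patch.

    #include <pthread.h>

    static pthread_once_t root_lookup_once = PTHREAD_ONCE_INIT;

    static void
    send_root_lookup (void)
    {
            /* ... send the lookup on '/' that distribute needs in order
             * to build the layout of the root directory ... */
    }

    /* Called from the per-path lookup code: the workaround now runs
     * exactly once instead of on every lookup. */
    static void
    maybe_send_root_lookup (void)
    {
            pthread_once (&root_lookup_once, send_root_lookup);
    }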
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 130 (build warnings)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=130
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 149 (libglusterfsclient interacts incorrectly with write-behind on writev)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=149
We weren't updating the attr (a.k.a. stat) cache on reads and
writes of files, so every stat on the file before the timeout was
returning stale attrs from the cache. Yuck!
This fixes it. It turns out there is a good aspect to unfs3's
notoriety when it comes to doing stat()s for every operation.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
Ref: http://www.gnu.org/s/libc/manual/html_node/Access-Modes.html
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
It is possible that the only translator in the libglusterfsclient
tree is posix. In that case, inside gluster_init, the graph init
routines will need to call lstat on the posix subdirectory. Since
even the glusterfs stack is running over booster, those calls will
also first require vmp searching. BUT the vmp lock is the same as
the mount lock that was already taken when we entered
glusterfs_mount, so a deadlock occurs.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
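
The deadlock is the usual self-deadlock on a non-recursive mutex: the same
thread re-acquires a lock it already holds. A generic illustration with
hypothetical function names, not the libglusterfsclient code.

    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void
    search_vmp (void)
    {
            pthread_mutex_lock (&lock);   /* blocks forever: the caller
                                             already holds 'lock' */
            /* ... search the vmp list ... */
            pthread_mutex_unlock (&lock);
    }

    static void
    mount_volume (void)
    {
            pthread_mutex_lock (&lock);   /* taken on entering the mount */
            search_vmp ();                /* graph init ends up here     */
            pthread_mutex_unlock (&lock);
    }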
This bug shows up while using unfs3 with replicate. The absence
of an inode_lookup on a looked-up/created inode results in it
getting pruned from the inode table. Consequently, a subsequent
lookup for the inode results in a different inode number being
returned by replicate. This breaks unfs3 because it tries to remember
the inode numbers returned by two different stat-family calls.
Resolves: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=11
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
- Generally, glusterfs_reset is called after fork in the child to
empty out the vmplist.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
stored in fd_ctx is used.
- this helps in implementing sendfile(2). The man page says that
"If offset is not NULL, then sendfile() does not modify the current
file offset of in_fd".
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
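
A sketch of the offset handling this enables, mirroring the sendfile(2)
wording quoted above; gf_fd_ctx, do_read and read_with_offset are
hypothetical stand-ins, not the actual libglusterfsclient code.

    #include <sys/types.h>

    struct gf_fd_ctx {
            off_t offset;                 /* current file offset for the fd */
    };

    /* Stand-in for the real read fop. */
    static ssize_t do_read (void *buf, size_t count, off_t off);

    static ssize_t
    read_with_offset (struct gf_fd_ctx *ctx, void *buf, size_t count,
                      const off_t *offset)
    {
            ssize_t ret;

            if (offset)                   /* explicit offset supplied */
                    return do_read (buf, count, *offset);
                    /* ctx->offset is deliberately left untouched */

            ret = do_read (buf, count, ctx->offset);
            if (ret > 0)
                    ctx->offset += ret;   /* advance only the stored offset */
            return ret;
    }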
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
- unmounts all the entries in the vmplist.
- this API helps booster clean up all the mounts in a single call.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
- this patch also checks for the presence of the vmp before adding
a vmp entry.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
We can avoid memory allocation, de-allocation and
data copies by just using the entries passed to us from
a lower layer and by de-linking the entries from the original
list.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
This improves the potential for pre-fetching a larger number of
dirents. Consider that, with 255 chars as the max name length for
each dirent, in the worst-case scenario where we actually have
files with such long names, we're not getting more than 4 entries
with the current block size of 1024. More generally, increasing
the size to 4k gives us a higher chance that directories with a
low to medium number of dirents will be pre-fetched in a single
readdir fop.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
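
Rough arithmetic behind the 4k choice; the 24-byte per-entry overhead is an
assumption used only for illustration.

    #include <stdio.h>

    int
    main (void)
    {
            const int name_max  = 255;    /* worst-case name length        */
            const int per_entry = 24;     /* assumed fixed per-entry bytes */

            for (int block = 1024; block <= 4096; block *= 4)
                    printf ("block %4d -> ~%d worst-case entries\n",
                            block, block / (name_max + per_entry));
            /* roughly 3 entries per 1024-byte block vs. 14 per 4096 */
            return 0;
    }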