Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 4 (mount --bind fails if run immediately after mounting GlusterFS)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=4
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 279 (File written with booster results in self-heal after dd exits)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=279
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 210 (libglusterfsclient: Enhance logging)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=210
This ensures that the process using libglusterfsclient does
not exit before all the fops and calls have been replied to.
It helps to ensure that the backends are in a sane state when
the program exits.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 279 (File written with booster results in self-heal after dd exits)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=279
This patch cleans up the umount and fini paths in preparation
to support waiting for unwind of all pending call frames.
Two misc fixes are:
1. Fix to avoid deadlock in _libgf_umount by
using _libgf_vmp_search_entry instead of
libgf_vmp_search_exact_entry since the latter tries to take a
lock already held by _libgf_umount.
2. Avoid a crash in _libgf_umount by deleting the vmp
entry from the list before it gets freed.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 279 (File written with booster results in self-heal after dd exits)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=279
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 279 (File written with booster results in self-heal after dd exits)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=279
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 210 (libglusterfsclient: Enhance logging)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=210
Internal users feel the amount of logging brought in
due to a previous logging enhancement patch is a bit too
aggressive for DEBUG, so this changes it to TRACE.
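For illustration, the change amounts to passing a lower level to gf_log; the message text here is made up:

    /* before: too chatty at DEBUG for day-to-day use */
    gf_log (this->name, GF_LOG_DEBUG, "resolved path %s", path);

    /* after: same message, now only visible at TRACE */
    gf_log (this->name, GF_LOG_TRACE, "resolved path %s", path);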
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 210 (libglusterfsclient: Enhance logging)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=210
If the root inode is outdated, send a revalidate on it.
A revalidate on the root inode also reduces the window in which an
op can fail on distribute because the layout of the root directory
was not constructed when the lookup on root was sent in
glusterfs_init. That can happen when not all children of a
distribute volume were up at the time of glusterfs_init.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 256 (revalidates should be sent on '/' in libglusterfsclient.)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=256
request->stub->fop.
- for non-write wind requests, the request structure outlives the stub.
The call stub is destroyed when the stack is wound, but the request is
destroyed only when the reply has come.
(For writes, both the stub and the request are destroyed when the refcount
becomes 0, which happens only when the write operation has been stack-unwound
and a reply for it has come from the underlying translators; for non-write
unwind requests, the request is destroyed before the stub is resumed.)
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 280 (simple stripe, with write-behind set up, when dbench is run client crashes.)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=280
from the kernel
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 223 (flush not sent)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=223
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 222 (Enhance Internal locks to support multiple domains and rewrite inodelks)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=222
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 222 (Enhance Internal locks to support multiple domains and rewrite inodelks)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=222
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 222 (Enhance Internal locks to support multiple domains and rewrite inodelks)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=222
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 4 (mount --bind fails if run immediately after mounting GlusterFS)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=4
of the mount arguments.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 204 (mount.glusterfs mounts to incorrect mount point)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=204
- this helps us avoid traversing the request list whenever we need the
currently aggregated data in the queue.
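As a rough sketch of the idea (names are illustrative, not the actual
write-behind structures), the counters are kept up to date as requests move
through the queue, so no traversal is needed to read them:

    #include <stddef.h>

    /* Illustrative only: running totals instead of list walks. */
    typedef struct {
            size_t aggregated_size;   /* queued but not yet wound  */
            size_t window_size;       /* wound but not yet replied */
    } wb_counters_t;

    static void wb_on_enqueue (wb_counters_t *c, size_t write_size)
    {
            c->aggregated_size += write_size;
    }

    static void wb_on_wind (wb_counters_t *c, size_t write_size)
    {
            c->aggregated_size -= write_size;
            c->window_size     += write_size;
    }

    static void wb_on_reply (wb_counters_t *c, size_t write_size)
    {
            c->window_size -= write_size;
    }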
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 276 (write behind needs to be optimized.)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=276
- this improves performance, since we don't have to traverse the request
list every time we need the current window size.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 276 (write behind needs to be optimized.)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=276
- the request structure now holds a write_size member, which is initialised
at request-creation time and used later.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 276 (write behind needs to be optimized.)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=276
option: data-self-heal-algorithm
type: string
default: "full"
This option allows the user to specify the algorithm to
be used for data self-heal. Currently supported values
are "full" and "diff".
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
The "diff" self-heal algorithm works as follows:
For each block:
Compute MD5 checksum on source and all sinks
If checksum on a sink differs from source:
Read block from source and write to sinks
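A compact C rendering of that loop (read_block, write_block and md5_of are
placeholder helpers, not the actual afr self-heal code; buffers and fds are
assumed to be set up already):

    /* Illustrative per-block "diff" loop. */
    for (off_t off = 0; off < file_size; off += blk) {
            size_t len = (file_size - off < blk) ? (file_size - off) : blk;

            read_block (source_fd, off, len, src_buf);
            md5_of (src_buf, len, src_md5);

            for (int i = 0; i < sink_count; i++) {
                    read_block (sink_fds[i], off, len, sink_buf);
                    md5_of (sink_buf, len, sink_md5);

                    if (memcmp (src_md5, sink_md5, 16))
                            write_block (sink_fds[i], off, len, src_buf);
            }
    }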
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
Abstract the read/write loop part of data self-heal. This
patch has support for the "full" (i.e., read and write entire
file) algorithm.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
Since a self-heal algorithm (e.g., rsync) might want to both read
and write from both the source and sink files, open them as O_RDWR.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
rchecksum (fd, offset, len): Calculates both the weak and strong
checksums for a block of {len} bytes at {offset} in {fd}.
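Conceptually (a sketch only; the exact gf_rsync_* signatures are assumed and
error handling is omitted) the operation boils down to:

    /* Read {len} bytes at {offset} and checksum that block both ways. */
    unsigned char *buf = malloc (len);
    ssize_t        rd  = pread (fd, buf, len, offset);

    uint32_t      weak = gf_rsync_weak_checksum (buf, rd);
    unsigned char strong[16];                    /* MD5 digest */
    gf_rsync_strong_checksum (buf, rd, strong);

    free (buf);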
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
gf_rsync_weak_checksum: Calculates a simple 32-bit checksum.
gf_rsync_strong_checksum: Calculates the MD5 checksum.
The strong checksum function makes use of Christophe Devine's
MD5 implementation (adapted from the rsync source code,
version 3.0.6. <http://www.samba.org/ftp/rsync/>).
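For reference, the weak checksum in rsync is two 16-bit running sums packed
into a 32-bit word; a self-contained sketch of that scheme (adapted from the
published algorithm, not copied from the glusterfs source):

    #include <stddef.h>
    #include <stdint.h>

    /* s1 accumulates the bytes, s2 accumulates the running values of s1,
     * so s2 ends up weighting earlier bytes more heavily (rsync's a/b sums). */
    static uint32_t
    weak_checksum (const unsigned char *buf, size_t len)
    {
            uint32_t s1 = 0, s2 = 0;

            for (size_t i = 0; i < len; i++) {
                    s1 += buf[i];
                    s2 += s1;
            }

            return (s1 & 0xffff) | (s2 << 16);
    }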
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 277 (running dd on booster returns EINVAL)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=277
- With this option enabled, writes are stack-wound even though not enough
data has been aggregated, provided there are no write requests that have
been stack-wound but whose reply is yet to come. The reason behind this
option is to make use of the network while it is relatively free (no writes
or replies in transit). However, with non-standard write block-sizes the
performance can actually degrade, hence this is made configurable.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 276 (write behind needs to be optimized.)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=276
- move all the decision-making code to __wb_can_wind.
- don't continue traversing the request list once we know any of the
following conditions is true (see the sketch below):
  * requests other than writes are present in the queue.
  * writes are happening at non-contiguous offsets.
  * there are no write requests that have been wound to the server but
    whose reply has not yet been received.
  * enough data has been aggregated for writing.
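A rough sketch of how such a check might look (flag and structure names are
illustrative, not the actual __wb_can_wind code):

    /* Illustrative: any one of these conditions ends the scan and
     * lets write-behind decide to wind what it has. */
    struct wind_decision {
            char other_fop_in_queue;    /* non-write request queued          */
            char non_contiguous;        /* writes at non-contiguous offsets  */
            char nothing_in_transit;    /* no wound-but-unreplied writes     */
            char enough_aggregated;     /* aggregate >= configured wind size */
    };

    static int
    __can_wind (struct wind_decision *d)
    {
            return d->other_fop_in_queue || d->non_contiguous ||
                   d->nothing_in_transit || d->enough_aggregated;
    }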
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 276 (write behind needs to be optimized.)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=276
- don't traverse the entire request list to get the window size; instead,
break as soon as the current window size exceeds the configured limit.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 276 (write behind needs to be optimized.)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=276
the wind list in wb_sync.
- no need to get total_count, the number of requests in the list;
even if there is a single request, we need to sync it.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 276 (write behind needs to be optimized.)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=276
single iobuf.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 276 (write behind needs to be optimized.)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=276
- remove wb_mark_wind_aggregate_size_aware, since wb_mark_wind_all does
the same work (with a check on whether the currently aggregated data size
is greater than the configured limit before calling it). Moreover,
wb_mark_wind_aggregate_size_aware called __wb_get_aggregate_size
redundantly, hurting performance, since for a large number of small
writes, traversing the list of requests takes a significant amount of time.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 276 (write behind needs to be optimized.)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=276
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 275 (libglusterfsclient: Generic build failure bug for libglusterfsclient and booster)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=275
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 275 (libglusterfsclient: Generic build failure bug for libglusterfsclient and booster)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=275
In posix_open(), posix_create(), and posix_close(), update
stats->nr_files only after the FOP has succeeded.
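A simplified sketch of the pattern (not the literal posix_open code):

    /* Count the file only once the open has actually succeeded. */
    _fd = open (real_path, flags, mode);
    if (_fd == -1) {
            op_ret   = -1;
            op_errno = errno;
            goto out;
    }

    stats->nr_files++;    /* moved after the success check */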
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 248 (Updating stats in posix is incorrect)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=248
- An extra vector was being allocated when the number of bytes being read
from the cache was equal to the iobuf size.
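A generic way such an off-by-one appears is in the vector-count computation;
as a sketch (not the literal booster/io-cache code), the count should be a
ceiling division rather than an unconditional "+ 1":

    /* buggy: one extra (empty) vector when size is an exact multiple */
    count = size / iobuf_size + 1;

    /* fixed: ceiling division */
    count = (size + iobuf_size - 1) / iobuf_size;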
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 274 (Memory corruption in Apache running on booster)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=274
- qr_lookup should not request file content during revalidates if the
cache is already present.
- flush the cache in qr_lookup_cbk if the cache is not in sync with the file.
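Schematically (placeholder names; not the actual quick-read code), the two
changes look like:

    /* qr_lookup: ask the server for content only when nothing is cached. */
    if (!qr_cache_present (this, loc->inode))
            dict_set_uint64 (xattr_req, content_key, conf->max_file_size);

    /* qr_lookup_cbk: drop a cache that no longer matches the file. */
    if (qr_cache_present (this, inode) && !qr_cache_in_sync (inode, buf))
            qr_cache_flush (this, inode);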
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 273 (Code review and optimize quick-read)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=273
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 273 (Code review and optimize quick-read)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=273
if not present should be atomic.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 273 (Code review and optimize quick-read)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=273
- a new size has to be set in xattr_req only
if (quick-read is configured with a maximum file size limit
&& ((xattr_req does not have a request key for getting content)
|| (the size requested in xattr_req is not equal to configured
size in quick-read)))
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 273 (Code review and optimize quick-read)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=273
- A global context pointer cannot be used with libglusterfsclient, since
there can be many contexts in a single process.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 271 (applications using booster protocol/client crash in client_setvolume_cbk.)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=271
opened on directories.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 260 (ls on booster VMP results in error: "File descriptor in bad state")
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=260
- In protocol/client, fdctx is accessed by two sets of procedures:
protocol_client_mark_fd_bad falls in one set, whereas the other set consists
of all fops which receive an fd as an argument. The way these fdctxs are
obtained differs between the two sets: in the former, fdctx is accessed
through conf->saved_fds, a list of fdctxs of fds representing opened/created
files; in the latter, fdctxs are obtained directly from the fd through
fd_ctx_get(). There can be race conditions between two threads each
executing a procedure from one of these sets. As an example, consider the
following scenario:
A flush operation times out, the polling thread executes
protocol_client_mark_fd_bad, and the fuse thread executes client_release.
This can happen because, immediately after a reply for flush is written to
fuse, a release on the same fd can be sent to glusterfs while the polling
thread might still be doing cleanup. Consider the following sequence of events:
1. The fuse thread does fd_ctx_get (fd).
2. The polling thread gets the same fdctx, but through conf->saved_fds.
3. Both threads go ahead, do list_del (fdctx) and eventually free fdctx.
In other situations the same sequence of events might occur, and threads
executing fops other than flush from the second set might access an fdctx
already freed in protocol_client_mark_fd_bad.
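One way to picture the required discipline (a sketch under the assumption
that both lookup paths can share one lock; lookup_fdctx, destroy_fdctx and
the list field are placeholders, not the actual protocol/client fix):

    /* Whichever thread unlinks fdctx under conf->lock owns it; the other
     * path finds nothing and must not free it a second time. */
    pthread_mutex_lock (&conf->lock);
    {
            fdctx = lookup_fdctx (conf, fd);       /* saved_fds or fd_ctx_get */
            if (fdctx)
                    list_del_init (&fdctx->list);  /* unlink exactly once     */
    }
    pthread_mutex_unlock (&conf->lock);

    if (fdctx)
            destroy_fdctx (fdctx);                 /* safe: sole owner        */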
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 127 (race-condition in accessing fdctx in protocol/client)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=127
glusterfs_get_ctx
- since glusterfs_get_ctx returns the global context pointer, there can be
problems in a multithreaded application running on libglusterfsclient that
does multiple glusterfs_inits. Hence use the context specific to the
current xlator tree, which is stored in each xlator object.
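In code terms the change is simply which context pointer is consulted
(sketch; assumes the xlator carries its own ctx pointer, as described above):

    /* before: process-global context, wrong once a process has done
     * more than one glusterfs_init */
    ctx = glusterfs_get_ctx ();

    /* after: the context of the xlator tree this call belongs to */
    ctx = this->ctx;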
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 240 (segmentation fault in qr_readv)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=240
It was already implemented but not assigned to .fxattrop.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 267 (Add fxattrop to iothreads)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=267
Added dumpop inodectx.
Support for dumpop inodectx added in dht, locks and client-protocol.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 213 (Support for process state dump)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=213
Changed prototype for inode_table_dump() and inode_dump()
Added support for dumpop inode in mount/fuse and protocol/server
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 213 (Support for process state dump)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=213