| Commit message | Author | Age | Files | Lines |

Change-Id: I7b4e7c467b833bc5896808e6e1d1b1a0322c4fdb
BUG: 3483
Reviewed-on: http://review.gluster.com/318
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amar@gluster.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>

Change-Id: I2d10f2be44f518f496427f257988f1858e888084
BUG: 3348
Reviewed-on: http://review.gluster.com/200
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@gluster.com>

Change-Id: I3914467611e573cccee0d22df93920cf1b2eb79f
BUG: 3348
Reviewed-on: http://review.gluster.com/182
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@gluster.com>

Signed-off-by: Anand Avati <avati@gluster.com>
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 2241 (GlusterFs Stat Actions Degrade During I/O)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=2241

Signed-off-by: Kaushik BV <kaushikbv@gluster.com>
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 1159 ()
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=1159

Signed-off-by: Vijay Bellur <vijay@gluster.com>
Signed-off-by: Vijay Bellur <vijay@dev.gluster.com>
BUG: 971 (dynamic volume management)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=971

Signed-off-by: Pranith Kumar K <pranithk@gluster.com>
Signed-off-by: Vijay Bellur <vijay@dev.gluster.com>
BUG: 1388 ()
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=1388

Memory accounting changes. Thanks to Vinayak Hegde and Csaba Henk for their
contributions.
Signed-off-by: Vijay Bellur <vijay@gluster.com>
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 329 (Replacing memory allocation functions with mem-type functions)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=329
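
The point of mem-type allocation is that every allocation is tagged with a
caller-supplied type so per-type usage can be accounted and reported. Below is
a minimal sketch of that idea only; all names are hypothetical and this is not
the GlusterFS allocation API itself.

    /* Illustration of per-type memory accounting; hypothetical names. */
    #include <stdlib.h>

    enum my_mem_types { MY_MT_REQUEST = 0, MY_MT_CONF, MY_MT_MAX };

    static size_t mem_acct[MY_MT_MAX];  /* bytes currently allocated per type */

    struct my_hdr {
        int    type;
        size_t size;
    };

    static void *
    my_calloc (size_t nmemb, size_t size, int type)
    {
        struct my_hdr *h = calloc (1, sizeof (*h) + nmemb * size);

        if (!h)
            return NULL;
        h->type = type;
        h->size = nmemb * size;
        mem_acct[type] += h->size;     /* account the allocation under its type */
        return h + 1;                  /* caller sees only the payload */
    }

    static void
    my_free (void *ptr)
    {
        struct my_hdr *h;

        if (!ptr)
            return;
        h = (struct my_hdr *)ptr - 1;  /* step back to the hidden header */
        mem_acct[h->type] -= h->size;
        free (h);
    }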

* conditional for scaling up threads was wrong
* ETIMEDOUT check was performed incorrectly
Signed-off-by: Anand V. Avati <avati@blackhole.gluster.com>
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 583 (filesystem access hangs while deleting large files)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=583
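
The commit does not show the corrected code, but the usual correct pattern for
a timed wait on a semaphore is to treat a -1 return from sem_timedwait with
errno set to ETIMEDOUT as the idle timeout, and to retry on EINTR. A sketch
under those assumptions, with illustrative names:

    /* Sketch of a correct ETIMEDOUT check around sem_timedwait();
     * IDLE_SECONDS and the helper name are illustrative. */
    #include <errno.h>
    #include <semaphore.h>
    #include <time.h>

    #define IDLE_SECONDS 120

    /* Returns 1 if a request was signalled, 0 if the worker idled out. */
    static int
    wait_for_request (sem_t *sem)
    {
        struct timespec ts;
        int             ret;

        clock_gettime (CLOCK_REALTIME, &ts);  /* sem_timedwait takes absolute time */
        ts.tv_sec += IDLE_SECONDS;

        do {
            ret = sem_timedwait (sem, &ts);
        } while (ret == -1 && errno == EINTR); /* interrupted: wait again */

        if (ret == 0)
            return 1;                 /* got a request notification */

        if (errno == ETIMEDOUT)       /* the check: -1 return, errno set */
            return 0;                 /* idle timeout: candidate for exit */

        return 0;                     /* any other failure: also treat as idle */
    }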

This patch lets io-threads work with a single queue and multiple
worker threads that pick the next request from the queue and process
it.
Whenever the number of pending requests in the queue doubles, a new
worker thread is spawned.
Workers expire after a (configurable) timeout of inactivity.
Signed-off-by: Anand V. Avati <avati@amp.gluster.com>
Signed-off-by: Anand V. Avati <avati@blackhole.gluster.com>
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 583 (filesystem access hangs while deleting large files)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=583
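
One way to read "whenever the number of pending requests doubles" is a
watermark that doubles after each spawn. The sketch below is only an
illustration of that reading; the struct, its fields and the return convention
are hypothetical, and the caller is assumed to hold the pool lock.

    /* Illustrative scale-up check, not the io-threads implementation. */
    struct pool_state {
        int queue_size;  /* requests currently queued                */
        int curr_count;  /* worker threads currently running         */
        int max_count;   /* configured upper bound on workers        */
        int scale_mark;  /* queue depth that triggers the next spawn */
    };

    /* Returns 1 when the caller should start one more worker thread. */
    static int
    should_scale_up (struct pool_state *p)
    {
        if (p->curr_count >= p->max_count)
            return 0;                /* already at the configured cap */

        if (p->queue_size >= p->scale_mark) {
            p->scale_mark *= 2;      /* next spawn only after the backlog
                                        doubles again */
            p->curr_count++;         /* caller starts the thread */
            return 1;
        }

        return 0;
    }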

Signed-off-by: Anand V. Avati <avati@dev.gluster.com>

We were performing a calculation for skewing idle time that could
leave the timespec.tv_nsec value larger than 1,000,000,000 or less
than 0, forcing sem_timedwait to return EINVAL instead of waiting
for a request notification from sem_post in iot_notify_worker().
The missed notification led to a hang, followed by a timeout
on the protocol/client side.
This commit avoids the overflow and underflow in tv_nsec by
skewing the tv_sec value instead.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
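
For reference, sem_timedwait() rejects a timespec whose tv_nsec lies outside
[0, 1000000000) with EINVAL, which is why any per-worker skew has to go into
tv_sec. A small sketch of the approach described above (names are
illustrative, not taken from the patch):

    /* Build an absolute deadline with the skew applied to tv_sec, so
     * tv_nsec can never overflow or underflow. Illustrative only. */
    #include <time.h>

    static void
    make_deadline (struct timespec *ts, int idle_secs, int skew_secs)
    {
        clock_gettime (CLOCK_REALTIME, ts);   /* absolute time for sem_timedwait */
        ts->tv_sec += idle_secs + skew_secs;  /* skew whole seconds only */
        /* ts->tv_nsec is left untouched and therefore stays valid. */
    }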

It seems that the use of mutexes is resulting in pretty high thread
sleep and wake-up cost. What is worse, if a worker thread has
acquired a lock, there is a possibility of the main glusterfs thread
being put to sleep. We change the use of mutexes into spinlocks.
At the same time, we can no longer use condvars for notification since
the condvar interface depends on mutexes itself. Semaphores come to
our rescue. Luckily, even the pthread semaphores have a timedwait
interface to allow our idle worker threads to make an exit decision.
Further, it is possible that spinlocks are not available on all systems,
so all of this is curtained behind #defines so we can fall back to
the mutex and condvar implementation.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
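
A compressed sketch of the kind of #define curtain described above, preferring
spinlocks and falling back to mutexes where they are unavailable; the macro
names and the HAVE_SPINLOCK guard are hypothetical, not the ones io-threads
actually uses.

    /* Hypothetical lock abstraction with a mutex fallback. */
    #include <pthread.h>

    #if defined(HAVE_SPINLOCK)
    typedef pthread_spinlock_t  queue_lock_t;
    #define QUEUE_LOCK_INIT(l)  pthread_spin_init (l, PTHREAD_PROCESS_PRIVATE)
    #define QUEUE_LOCK(l)       pthread_spin_lock (l)
    #define QUEUE_UNLOCK(l)     pthread_spin_unlock (l)
    #else  /* no spinlocks on this platform: fall back to a mutex */
    typedef pthread_mutex_t     queue_lock_t;
    #define QUEUE_LOCK_INIT(l)  pthread_mutex_init (l, NULL)
    #define QUEUE_LOCK(l)       pthread_mutex_lock (l)
    #define QUEUE_UNLOCK(l)     pthread_mutex_unlock (l)
    #endif

    /* Notification then uses a semaphore instead of a condvar (condvars
     * need a mutex): the producer sem_post()s after queueing a request,
     * and idle workers block in sem_timedwait() so they can still decide
     * to exit after a timeout. */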

We've had complaints from users who've used the autoscaling option
with the default settings for min and max threads, about high memory
consumption caused by the large default value for max-threads.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>

This commit brings in support for allocation of iot_request_t's
in io-threads through the use of the mem-pool. We're hoping
that the overhead of hundreds of thousands of small allocations
can be avoided this way.
The important point to note is that the memory pool is not
for the translator as a whole; instead, there is one small memory
pool for each worker thread. Not only does that help us
avoid malloc overheads for small allocations like iot_request_t,
but it also avoids contention on the heap data structures when
multiple threads want an iot_request_t from the pool.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
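
As a rough illustration of what a small per-worker pool buys, here is a
free-list sketch for fixed-size request objects. Everything here is
hypothetical; the actual patch uses the GlusterFS mem-pool rather than this
hand-rolled list.

    /* Per-worker free list for fixed-size request objects; illustrative. */
    #include <stdlib.h>

    struct req {
        struct req *next;        /* free-list linkage while pooled */
        /* ... fop arguments, call stub ... */
    };

    struct req_pool {
        struct req *free_list;   /* one small pool per worker thread */
    };

    static struct req *
    pool_get (struct req_pool *p)
    {
        struct req *r = p->free_list;

        if (r) {
            p->free_list = r->next;        /* reuse without touching the heap */
            return r;
        }
        return calloc (1, sizeof (*r));    /* pool empty: fall back to calloc */
    }

    static void
    pool_put (struct req_pool *p, struct req *r)
    {
        r->next = p->free_list;            /* return to this worker's pool */
        p->free_list = r;
    }

Keeping one pool per worker limits how many threads ever touch the same pool,
which is where the reduced contention described above comes from.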

Signed-off-by: Anand V. Avati <avati@dev.gluster.com>

This patch cleans up io-threads behaviour regarding the
range values that can be specified for min-threads
and max-threads. The major change is that min-threads
has been reduced to 2 to signify that io-threads needs a minimum
of two threads for its operation, while keeping the default number of
threads at 16. The idea is to decouple the default thread count
from the minimum thread count.
Note to Avati:
This applies over Raghu's indentation and logging take-3 patch.
Signed-off-by: Anand V. Avati <avati@amp.gluster.com>

Signed-off-by: Anand V. Avati <avati@amp.gluster.com>

Going by the memory usage of each thread, it is prudent to
have a lower number of threads by default and let users who understand
the memory consequences increase the thread count for themselves.
Signed-off-by: Anand V. Avati <avati@amp.gluster.com>

The default stack size on Linux is around 8 MiB for each
thread. This is clearly too high for our purpose. This commit reduces
the stack size to 1 MiB.
Signed-off-by: Anand V. Avati <avati@amp.gluster.com>
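
A minimal sketch of how a 1 MiB stack is requested at thread-creation time;
the wrapper name and the size constant are illustrative, not the patch's code.

    /* Start a worker with an explicit 1 MiB stack; illustrative wrapper. */
    #include <pthread.h>

    #define WORKER_STACK_SIZE  (1024 * 1024)   /* 1 MiB instead of the default */

    static int
    start_worker (pthread_t *tid, void *(*fn) (void *), void *arg)
    {
        pthread_attr_t attr;
        int            ret;

        pthread_attr_init (&attr);
        pthread_attr_setstacksize (&attr, WORKER_STACK_SIZE);
        ret = pthread_create (tid, &attr, fn, arg);
        pthread_attr_destroy (&attr);
        return ret;
    }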

This commit finally makes the autoscaling feature visible to the user.
Note that we're now using two separate thread-pools, one for data
requests, called the ordered thread-pool in io-threads, and the other
for meta-data requests, called the un-ordered thread-pool.
We do not expose this information to the user, to keep io-threads
simple. Consequently, when the user specifies min-threads and
max-threads values, the number of threads assigned to each pool
is equal, i.e. both pools start with their minimum threads set to half
of the option "min-threads" and both scale up their threads to at most
half of the option "max-threads".
Volfile options will be added to the wiki and user-guide.
Signed-off-by: Anand V. Avati <avati@amp.gluster.com>
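
Concretely, the division described above is a simple halving of the two
user-visible options. A hypothetical sketch of that arithmetic; how an odd
value is rounded here is a guess, not taken from the patch.

    /* Illustrative split of min/max threads across the two pools. */
    struct iot_limits {
        int min_threads;      /* user option */
        int max_threads;      /* user option */
        int ordered_min,   ordered_max;
        int unordered_min, unordered_max;
    };

    static void
    split_limits (struct iot_limits *l)
    {
        l->ordered_min   = l->min_threads / 2;
        l->ordered_max   = l->max_threads / 2;
        l->unordered_min = l->min_threads - l->ordered_min;  /* odd remainder */
        l->unordered_max = l->max_threads - l->ordered_max;
    }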

The default is also to provide no scaling. For both the ordered and
unordered request pools, when scaling is off, we maintain at least the
minimum number of threads specified in the volfile.
Signed-off-by: Anand V. Avati <avati@amp.gluster.com>

Now we have the remaining fops going through the ordered
thread-pool.
To route a request through the ordered thread-pool, we use
iot_schedule_ordered(..), and the worker thread for
ordered requests is iot_worker_ordered(..).
Signed-off-by: Anand V. Avati <avati@amp.gluster.com>
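
The message does not spell out how ordering is preserved. One common way to
build an "ordered" pool, and only a guess at what iot_schedule_ordered(..)
does, is to route every request on a given file to the same worker, so that
per-file FIFO order follows from that worker draining its queue in order.

    /* Hypothetical routing for the ordered pool; not the patch's code. */
    #include <stdint.h>

    struct worker;   /* per-thread queue and state, defined elsewhere */

    static struct worker *
    pick_ordered_worker (struct worker **workers, int nworkers, uint64_t ino)
    {
        /* The same inode always maps to the same worker, so requests on
         * one file are processed in the order they were queued. */
        return workers[ino % (uint64_t) nworkers];
    }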

This commit adds everything needed to:
a. Get un-ordered requests going through the un-ordered
thread-pool. This happens through iot_schedule_unordered(..).
The un-ordered thread-pool consists of threads running the
iot_worker_unordered(..) function.
b. Make threads in the un-ordered thread-pool start up
and exit depending on the thread state.
Note that at this point the requests that need
ordering are still going through iot_schedule(..).
Signed-off-by: Anand V. Avati <avati@amp.gluster.com>
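
Since un-ordered requests need no per-file ordering, any worker may pick up
any request. A plausible shape for that hand-off, purely as a sketch: one
shared FIFO plus a semaphore. None of these names come from the patch.

    /* Hypothetical hand-off of an un-ordered request to the pool. */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stddef.h>

    struct uo_request {
        struct uo_request *next;
        /* ... call stub, fop arguments ... */
    };

    struct uo_pool {
        pthread_mutex_t    lock;
        struct uo_request *head, *tail;   /* shared FIFO of pending requests */
        sem_t              more;          /* posted once per queued request  */
    };

    static void
    schedule_unordered (struct uo_pool *p, struct uo_request *req)
    {
        req->next = NULL;

        pthread_mutex_lock (&p->lock);
        if (p->tail)
            p->tail->next = req;
        else
            p->head = req;
        p->tail = req;
        pthread_mutex_unlock (&p->lock);

        sem_post (&p->more);              /* wake exactly one idle worker */
    }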

Signed-off-by: Anand V. Avati <avati@amp.gluster.com>

Worker threads were represented as a list in iot_conf_t
which made us traverse the list of workers in order to
decide which thread gets the request. Now we represent the
workers as a dynamically allocated array so that we can just index
into the array to schedule the file.
Signed-off-by: Anand V. Avati <avati@amp.gluster.com>
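
The gain is that picking a thread becomes a single index computation instead
of a list walk. An illustrative sketch of the array layout; the names and the
hashing key are hypothetical.

    /* Workers held in a dynamically allocated array; illustrative only. */
    #include <stdlib.h>

    struct worker;   /* per-thread state, defined elsewhere */

    struct worker_set {
        struct worker **workers;    /* calloc'd array of max_count slots */
        int             max_count;
    };

    static int
    worker_set_init (struct worker_set *s, int max_count)
    {
        s->workers   = calloc (max_count, sizeof (*s->workers));
        s->max_count = max_count;
        return s->workers ? 0 : -1;
    }

    static struct worker *
    pick_worker (struct worker_set *s, unsigned long key)
    {
        return s->workers[key % s->max_count];   /* O(1), no list traversal */
    }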

This patch changes the per-thread request queue from a custom circular
linked list to the standard list.h list, which is easier to
understand and has a cleaner interface.
Signed-off-by: Anand V. Avati <avati@amp.gluster.com>
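
list.h here is the kernel-style intrusive doubly linked list header carried in
libglusterfs. A small sketch of a request queue expressed with that interface;
the struct names are illustrative and the include path may differ.

    /* Request queue on the kernel-style list.h interface; illustrative. */
    #include "list.h"   /* struct list_head, INIT_LIST_HEAD, list_add_tail, ... */
    #include <stddef.h>

    struct req_queue {
        struct list_head reqs;    /* queue head */
    };

    struct queued_req {
        struct list_head list;    /* linkage into req_queue.reqs */
        /* ... call stub ... */
    };

    static void
    rq_init (struct req_queue *q)
    {
        INIT_LIST_HEAD (&q->reqs);
    }

    static void
    rq_push (struct req_queue *q, struct queued_req *r)
    {
        list_add_tail (&r->list, &q->reqs);              /* FIFO enqueue */
    }

    static struct queued_req *
    rq_pop (struct req_queue *q)
    {
        struct queued_req *r;

        if (list_empty (&q->reqs))
            return NULL;
        r = list_entry (q->reqs.next, struct queued_req, list);
        list_del (&r->list);                             /* FIFO dequeue */
        return r;
    }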

updated copyright header to include 2009.
Signed-off-by: Anand V. Avati <avati@amp.gluster.com>

The iot_queue() and iot_dequeue() functions were using an io-threads
translator-wide lock which would be contended for by every worker
thread waiting for IO requests.
This patch reduces the granularity by turning the
lock into a per-worker lock.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
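
In other words, the lock moves from iot_conf_t into each worker, so enqueue
and dequeue on one worker's queue never contend with another worker's. A
hypothetical sketch of the per-worker arrangement, not the actual code:

    /* Per-worker queue lock instead of one translator-wide lock. */
    #include <pthread.h>
    #include <stddef.h>

    struct wq_request {
        struct wq_request *next;
        /* ... fop arguments ... */
    };

    struct worker_queue {
        pthread_mutex_t    qlock;        /* was a single lock shared by all */
        struct wq_request *head, *tail;
    };

    static void
    wq_enqueue (struct worker_queue *w, struct wq_request *r)
    {
        r->next = NULL;
        pthread_mutex_lock (&w->qlock);  /* contends only within this worker */
        if (w->tail)
            w->tail->next = r;
        else
            w->head = r;
        w->tail = r;
        pthread_mutex_unlock (&w->qlock);
    }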

Since we're not dependent on this io-threads internal state (i.e.
cache_size and current_size) to rate-limit requests, we can remove
these two data members and the code that checks them.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>

Signed-off-by: Anand V. Avati <avati@dev.gluster.com>