This commit finally makes the autoscaling feature visible to the user.
Note that io-threads now uses two separate thread-pools: one for data
requests, called the ordered thread-pool, and one for metadata
requests, called the unordered thread-pool.
We do not expose this split to the user, to keep io-threads simple.
Consequently, when the user specifies min-threads and max-threads
values, the threads are divided equally between the two pools: each
pool starts with half of the "min-threads" option and scales up to at
most half of the "max-threads" option.
Volfile options will be added to the wiki and user-guide.
Signed-off-by: Anand V. Avati <avati@amp.gluster.com>
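
A hedged sketch of what such a configuration might look like in a
volfile (the "min-threads" and "max-threads" option names come from
this commit; the volume and subvolume names are illustrative):

```
volume iothreads
  type performance/io-threads
  option min-threads 8      # each pool starts with 4 threads (8 / 2)
  option max-threads 32     # each pool grows to at most 16 threads (32 / 2)
  subvolumes brick
end-volume
```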

Signed-off-by: Anand V. Avati <avati@amp.gluster.com>

The default is to provide no scaling. For both the ordered and
unordered request pools, when scaling is off we maintain at least the
minimum number of threads specified in the volfile.
Signed-off-by: Anand V. Avati <avati@amp.gluster.com>
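
A minimal sketch of that floor, assuming a pthread-based pool (the
struct and function below are illustrative, not the actual io-threads
code): a worker may exit only when scaling is enabled and the pool is
above its minimum, so with scaling off the thread count never drops
below min-threads.

```c
#include <pthread.h>

struct iot_pool {
        pthread_mutex_t lock;
        int             curr_threads;  /* threads currently alive  */
        int             min_threads;   /* floor from the volfile   */
        int             scaling;       /* 0 = off (the default)    */
};

/* Returns 1 if the calling worker may exit, 0 if it must keep going. */
static int
iot_worker_may_exit (struct iot_pool *pool)
{
        int may_exit = 0;

        pthread_mutex_lock (&pool->lock);
        if (pool->scaling && pool->curr_threads > pool->min_threads) {
                pool->curr_threads--;
                may_exit = 1;
        }
        pthread_mutex_unlock (&pool->lock);

        return may_exit;
}
```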

The remaining fops now also go through the ordered thread-pool.
To route a request through an ordered thread, we use
iot_schedule_ordered(..); the worker thread for ordered requests is
iot_worker_ordered(..).
Signed-off-by: Anand V. Avati <avati@amp.gluster.com>
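
A rough sketch of the idea, with illustrative types rather than the
real iot_schedule_ordered(..)/iot_worker_ordered(..) signatures: every
request on a given file is appended to one FIFO queue, and a single
worker drains that queue in arrival order.

```c
#include <pthread.h>
#include <stddef.h>

struct request {
        struct request *next;
        void          (*resume) (struct request *req); /* runs the fop */
};

struct ordered_worker {
        pthread_mutex_t  lock;
        pthread_cond_t   notify;
        struct request  *head;    /* FIFO queue of pending requests */
        struct request  *tail;
};

/* Append the request to the queue of the worker owning this file. */
static void
schedule_ordered (struct ordered_worker *w, struct request *req)
{
        req->next = NULL;
        pthread_mutex_lock (&w->lock);
        if (w->tail != NULL)
                w->tail->next = req;
        else
                w->head = req;
        w->tail = req;
        pthread_cond_signal (&w->notify);
        pthread_mutex_unlock (&w->lock);
}

/* Drain the queue strictly in arrival order, one request at a time. */
static void *
worker_ordered (void *arg)
{
        struct ordered_worker *w = arg;
        struct request        *req;

        for (;;) {
                pthread_mutex_lock (&w->lock);
                while (w->head == NULL)
                        pthread_cond_wait (&w->notify, &w->lock);
                req = w->head;
                w->head = req->next;
                if (w->head == NULL)
                        w->tail = NULL;
                pthread_mutex_unlock (&w->lock);

                req->resume (req);
        }
        return NULL;
}
```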

This commit adds everything needed to:
a. Get un-ordered requests going through the un-ordered thread-pool.
   This happens through iot_schedule_unordered(..). The unordered
   thread-pool consists of threads running the iot_worker_unordered(..)
   function.
b. Make threads in the un-ordered thread-pool start up and exit
   depending on the thread state.
Note that at this point the requests that need ordering still go
through iot_schedule(..).
Signed-off-by: Anand V. Avati <avati@amp.gluster.com>
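
A hedged sketch of the start-up/exit behaviour in (b), using
illustrative names: when a request arrives and no unordered worker is
idle, a new thread is spawned up to the pool's maximum; idle workers
later time out and exit, down to the minimum described in the scaling
commit above.

```c
#include <pthread.h>

struct unordered_pool {
        pthread_mutex_t lock;
        int curr;   /* threads currently alive       */
        int idle;   /* threads waiting for work      */
        int max;    /* upper bound from the volfile  */
};

static void *
unordered_worker (void *arg)
{
        /* dequeue and execute requests; exit when idle for too long
         * and the pool is still above its minimum */
        (void) arg;
        return NULL;
}

/* Called for each incoming metadata request before it is queued. */
static void
maybe_spawn_worker (struct unordered_pool *p)
{
        pthread_t tid;

        pthread_mutex_lock (&p->lock);
        if (p->idle == 0 && p->curr < p->max &&
            pthread_create (&tid, NULL, unordered_worker, p) == 0) {
                pthread_detach (tid);
                p->curr++;
        }
        pthread_mutex_unlock (&p->lock);
}
```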

The new io-threads will serve requests through two separate
thread-pools.
The first thread-pool is for requests that must be ordered on an open
file, so that the server processes them in the order they entered the
request queue, not in the order in which io-threads happen to be
scheduled by the OS. This can also be called the data-intensive ops
thread-pool.
The second thread-pool is for requests that don't care about ordering,
i.e. requests like lookup, open, create, mkdir, etc.
Signed-off-by: Anand V. Avati <avati@amp.gluster.com>
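
The split can be summed up in a small dispatch predicate. This is an
illustrative sketch, not GlusterFS's real fop table: data fops on an
open file go to the ordered pool, metadata fops to the unordered one.

```c
/* illustrative fop tags, not the real glusterfs fop enumeration */
enum fop {
        FOP_READ, FOP_WRITE, FOP_FSYNC, FOP_TRUNCATE,   /* data     */
        FOP_LOOKUP, FOP_OPEN, FOP_CREATE, FOP_MKDIR     /* metadata */
};

/* 1 = ordered (data-intensive) pool, 0 = unordered pool */
static int
needs_ordering (enum fop fop)
{
        switch (fop) {
        case FOP_READ:
        case FOP_WRITE:
        case FOP_FSYNC:
        case FOP_TRUNCATE:
                return 1;
        default:
                return 0;
        }
}
```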

Signed-off-by: Anand V. Avati <avati@amp.gluster.com>

Signed-off-by: Anand V. Avati <avati@amp.gluster.com>

Signed-off-by: Anand V. Avati <avati@amp.gluster.com>

Signed-off-by: Anand V. Avati <avati@amp.gluster.com>

Signed-off-by: Anand V. Avati <avati@amp.gluster.com>

Signed-off-by: Anand V. Avati <avati@amp.gluster.com>

Signed-off-by: Anand V. Avati <avati@amp.gluster.com>

Signed-off-by: Anand V. Avati <avati@amp.gluster.com>

Signed-off-by: Anand V. Avati <avati@amp.gluster.com>

Signed-off-by: Anand V. Avati <avati@amp.gluster.com>

Signed-off-by: Anand V. Avati <avati@amp.gluster.com>

Signed-off-by: Anand V. Avati <avati@amp.gluster.com>

flush.
This patch fixes a bug reported by Greg <greg@easyflirt.com> on
gluster-users@ with the subject 'glusterfsd crash'.
Signed-off-by: Anand V. Avati <avati@amp.gluster.com>

Signed-off-by: Anand V. Avati <avati@amp.gluster.com>

In wb_flush, there was a chance that wb_process_queue() was called
with a NULL frame, which causes a crash.
Signed-off-by: Anand V. Avati <avati@amp.gluster.com>
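
The shape of the fix, as a hedged sketch (the frame type is opaque
here and process_queue() merely stands in for wb_process_queue()):

```c
#include <stddef.h>

typedef struct call_frame call_frame_t;   /* opaque stand-in */

static void
process_queue (call_frame_t *frame)
{
        (void) frame;   /* stands in for wb_process_queue() */
}

static void
flush_sketch (call_frame_t *frame)
{
        if (frame == NULL)   /* the guard: never pass a NULL frame down */
                return;
        process_queue (frame);
}
```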

We must add a 'return' after a STACK_UNWIND due to a stub-creation
failure, because if we don't, we'll end up adding a NULL stub to the
worker thread's request queue.
Signed-off-by: Anand V. Avati <avati@amp.gluster.com>
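
The pattern this commit enforces, in sketch form (stand-in names; the
real code uses the STACK_UNWIND macro from stack.h):

```c
#include <errno.h>
#include <stddef.h>

typedef struct call_stub call_stub_t;   /* opaque stand-in */

static void
unwind_error (int op_errno)
{
        (void) op_errno;   /* stands in for STACK_UNWIND's error reply */
}

static void
queue_to_worker (call_stub_t *stub)
{
        (void) stub;       /* enqueue the stub for a worker thread */
}

static void
fop_entry_sketch (call_stub_t *stub)
{
        if (stub == NULL) {
                unwind_error (ENOMEM);
                return;    /* the missing 'return': without it we fall
                            * through and queue a NULL stub */
        }
        queue_to_worker (stub);
}
```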

'testing/' directory.
This way, users will be aware of which translators are in 'beta'
stage, and we can keep adding new translators (if any) seamlessly to
the stable codebase; once tested, they can be moved to their proper
places.
To use these translators, everyone will have to prefix 'testing/' to
the existing translator type (in the volfile), as in the example
below.
Signed-off-by: Anand V. Avati <avati@amp.gluster.com>
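
As an illustration (hedged: 'foo' is a placeholder, not a real
translator name), a volfile entry that previously used type
'features/foo' becomes:

```
volume foo
  type testing/features/foo
  subvolumes brick
end-volume
```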

Signed-off-by: Anand V. Avati <avati@amp.gluster.com>

This patch does two things:
1. Cleans up the request scheduling and queueing interface so that
   all fops need to call only iot_schedule, instead of iot_queue and,
   in some cases, iot_schedule.
2. Till now, open and create calls have gone through the main
   glusterfsd thread. This patch makes them go through the worker
   threads as well. But since open and create requests are not called
   with a valid inode number in the loc_t, these requests get assigned
   to the worker at index 0 (see the sketch below). This will be fixed
   RSN, when we introduce various techniques for distributing the
   inodes (..not requests..) over the worker threads.
Signed-off-by: Anand V. Avati <avati@amp.gluster.com>
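
A minimal sketch of the rule in (2), with illustrative names: requests
carrying a valid inode number can be spread over the workers, while
open and create fall back to worker 0.

```c
#include <stdint.h>
#include <stddef.h>

static size_t
pick_worker (uint64_t ino, size_t n_workers)
{
        if (ino == 0)          /* open/create: no valid inode number yet */
                return 0;      /* everything lands on worker 0           */
        return (size_t) (ino % n_workers);
}
```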

Worker threads were represented as a list in iot_conf_t, which made
us traverse the list of workers in order to decide which thread gets
the request. Now we represent the workers as a dynamically allocated
array, so that we can simply index into the array to schedule the
file.
Signed-off-by: Anand V. Avati <avati@amp.gluster.com>
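
A before/after sketch with illustrative types: a linked list forces an
O(n) walk to reach the n-th worker, whereas an array allocated at init
time makes scheduling a single index operation.

```c
#include <stdlib.h>

struct iot_worker {
        int id;    /* per-worker queue, lock, etc. elided */
};

struct iot_conf {
        struct iot_worker *workers;   /* calloc'd array; was a list */
        size_t             count;
};

static int
iot_conf_init (struct iot_conf *conf, size_t count)
{
        conf->workers = calloc (count, sizeof (*conf->workers));
        if (conf->workers == NULL)
                return -1;
        conf->count = count;
        return 0;
}

/* O(1) lookup: no list traversal to find the scheduling target. */
static struct iot_worker *
iot_worker_at (struct iot_conf *conf, size_t idx)
{
        return &conf->workers[idx % conf->count];
}
```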

This patch changes the per-thread request queue from a custom circular
linked list into the standard list.h list, which is easier to
understand and has a cleaner interface.
Signed-off-by: Anand V. Avati <avati@amp.gluster.com>
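
A sketch of what the switch buys, using the kernel-style intrusive
list that list.h provides (reimplemented inline here so the snippet is
self-contained): the list node is embedded in the request, and
enqueue/dequeue become one-liners.

```c
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

#define INIT_LIST_HEAD(h) do { (h)->next = (h)->prev = (h); } while (0)

static void
list_add_tail (struct list_head *item, struct list_head *head)
{
        item->prev       = head->prev;
        item->next       = head;
        head->prev->next = item;
        head->prev       = item;
}

static void
list_del (struct list_head *item)
{
        item->prev->next = item->next;
        item->next->prev = item->prev;
        item->next = item->prev = NULL;
}

struct iot_request {             /* illustrative per-request node  */
        struct list_head list;   /* links into the worker's queue  */
};

/* enqueue:  list_add_tail (&req->list, &queue);
 * dequeue:  take queue.next, then list_del() it  */
```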

- The execution order of fops like read, stat, fsync, truncate, etc.,
  whose results are affected by writes, is preserved.
Signed-off-by: Anand V. Avati <avati@amp.gluster.com>

ioc_create_cbk was holding inode->lock and calling inode_ctx_put,
which also acquires the same lock.
Signed-off-by: Anand V. Avati <avati@amp.gluster.com>
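
The shape of the bug, in a hedged sketch: with a non-recursive mutex,
taking inode->lock twice on the same thread self-deadlocks, so the fix
is to release the lock before calling the helper that locks again.

```c
#include <pthread.h>

struct inode_sketch {
        pthread_mutex_t lock;
        void           *ctx;
};

/* stands in for inode_ctx_put(): takes inode->lock internally */
static void
inode_ctx_put_sketch (struct inode_sketch *inode, void *value)
{
        pthread_mutex_lock (&inode->lock);
        inode->ctx = value;
        pthread_mutex_unlock (&inode->lock);
}

static void
create_cbk_sketch (struct inode_sketch *inode, void *value)
{
        pthread_mutex_lock (&inode->lock);
        /* ... work that genuinely needs the lock ... */
        pthread_mutex_unlock (&inode->lock);

        /* calling this while still holding inode->lock was the
         * deadlock; it is safe only after the unlock above */
        inode_ctx_put_sketch (inode, value);
}
```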

implementation is changed to hold inode->lock.
Signed-off-by: Anand V. Avati <avati@amp.gluster.com>

updated copyright header to include 2009.
Signed-off-by: Anand V. Avati <avati@amp.gluster.com>

Signed-off-by: Anand V. Avati <avati@amp.gluster.com>

The iot_queue() and iot_dequeue() functions were using an io-threads
translator-wide lock, which was contended for by every worker thread
waiting for IO requests.
This patch reduces the granularity by turning that lock into a
per-worker lock.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
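
A hedged sketch of the granularity change, with illustrative types:
the queue lock moves out of the shared conf and into each worker, so
workers touching their own queues no longer contend with one another.

```c
#include <pthread.h>

struct iot_request_s {
        struct iot_request_s *next;
};

struct iot_worker_s {
        pthread_mutex_t       qlock;   /* per-worker now; previously one
                                        * lock shared via iot_conf_t   */
        struct iot_request_s *queue;
};

/* Simple push for brevity (the real queue is FIFO): only this
 * worker's own enqueue/dequeue ever contend on qlock. */
static void
iot_queue_sketch (struct iot_worker_s *w, struct iot_request_s *req)
{
        pthread_mutex_lock (&w->qlock);
        req->next = w->queue;
        w->queue  = req;
        pthread_mutex_unlock (&w->qlock);
}
```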

Since we no longer depend on this io-threads internal state (i.e.
cache_size and current_size) to rate-limit requests, we can remove
these two data members and the code that checks for them.
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>

Signed-off-by: Anand V. Avati <avati@dev.gluster.com>