author | Xavi Hernandez <xhernandez@redhat.com> | 2019-01-24 18:44:06 +0100
---|---|---
committer | Amar Tumballi <amarts@redhat.com> | 2019-02-18 02:58:24 +0000
commit | dddcf52020004d98f688ebef968de51d76cbf9a6 (patch) |
tree | 01ee4c39a7859a76562e15aa7045c5bd86417a60 /contrib |
parent | ec273a46820ba17f46488c082c65cd1aa6739be3 (diff) |
core: implement a global thread pool
This patch implements a thread pool that is wait-free for adding jobs to
the queue and uses a very small locked region to get jobs, which makes it
possible to decrease contention drastically. It's based on the wfcqueue
structure provided by the userspace-rcu (liburcu) library.
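As a rough illustration of that design (a hypothetical sketch, not code
from this patch; 'struct job', 'submit' and 'take' are made-up names),
producers enqueue jobs wait-free while the consumer's locked region is
confined to the dequeue call:

    /* Sketch of a job queue on top of wfcqueue. With the contrib copy the
     * functions are static inline; with a system liburcu >= 0.10, link
     * with -lurcu-cds instead. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <urcu/wfcqueue.h>

    struct job {
        int id;                      /* illustrative payload */
        struct cds_wfcq_node node;   /* embedded queue linkage */
    };

    static struct cds_wfcq_head head;
    static struct cds_wfcq_tail tail;

    /* Wait-free: callable from any thread without taking a lock. */
    static void submit(int id)
    {
        struct job *j = malloc(sizeof(*j));
        if (!j)
            abort();
        j->id = id;
        cds_wfcq_node_init(&j->node);
        cds_wfcq_enqueue(&head, &tail, &j->node);
    }

    /* The queue's internal mutex is held only inside the dequeue call. */
    static struct job *take(void)
    {
        struct cds_wfcq_node *n = cds_wfcq_dequeue_blocking(&head, &tail);
        return n ? caa_container_of(n, struct job, node) : NULL;
    }

    int main(void)
    {
        struct job *j;

        cds_wfcq_init(&head, &tail);
        submit(1);
        submit(2);
        while ((j = take()) != NULL) {
            printf("running job %d\n", j->id);
            free(j);
        }
        return 0;
    }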
The pool automatically starts more threads when load demands it, and
stops them when they are no longer needed, up to a configurable maximum
number of threads.
Depending on the workload, the maximum number of threads plays an
important role, so it needs to be tuned for optimal performance.
Currently the thread pool doesn't self-adjust the maximum to the
workload, so this configuration needs to be changed manually.
For this reason, the global thread pool has been made optional, so that
volumes can still use the thread pool provided by io-threads.
To enable it for bricks, the following option needs to be set:
config.global-threading = on
This option has no effect if bricks are already running. A restart is
required to activate it. It's recommended to also enable the following
option when running bricks with the global thread pool:
performance.iot-pass-through = on
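For example, both options can be set with the gluster CLI before
restarting the bricks ('myvol' is an illustrative volume name):

    # gluster volume set myvol config.global-threading on
    # gluster volume set myvol performance.iot-pass-through on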
To enable it for a FUSE mount point, the option '--global-threading'
must be added to the mount command; changing it requires an unmount and
remount. It's recommended to disable the following option when using
global threading on a mount point:
performance.client-io-threads = off
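As an illustration, a client could be mounted with the glusterfs binary
directly (server name, volume name and mount point are placeholders):

    # gluster volume set myvol performance.client-io-threads off
    # glusterfs --global-threading --volfile-server=server1 \
          --volfile-id=myvol /mnt/myvol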
To enable it for services managed by glusterd, glusterd needs to be
started with the '--global-threading' option. In this case all daemons,
such as self-heal, will use the global thread pool.
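For instance, when starting the management daemon by hand (init-system
integration varies by distribution):

    # glusterd --global-threading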
Currently it can only be enabled for bricks, FUSE mounts and glusterd
services.
The maximum number of threads for clients and bricks can be configured
using the following options:
config.client-threads
config.brick-threads
These options can be applied online, and in most cases their effect is
immediate. If one of them is set to 0, the maximum number of threads
will be calculated as #cores * 2.
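For example, to raise the client maximum and let bricks use the
computed default ('myvol' and the values are illustrative):

    # gluster volume set myvol config.client-threads 16
    # gluster volume set myvol config.brick-threads 0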
Some distributions ship a very old userspace-rcu library (version 0.7).
For this reason, some header files from version 0.10 have been copied
into contrib/userspace-rcu and are used if the detected version is 0.7
or older.
An additional change has been made to io-threads to prevent threads
from being started when iot-pass-through is set.
Change-Id: I09d19e246b9e6d53c6247b29dfca6af6ee00a24b
updates: #532
Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
Diffstat (limited to 'contrib')
-rw-r--r-- | contrib/userspace-rcu/static-wfcqueue.h | 685
-rw-r--r-- | contrib/userspace-rcu/static-wfstack.h | 455
-rw-r--r-- | contrib/userspace-rcu/wfcqueue.h | 216
-rw-r--r-- | contrib/userspace-rcu/wfstack.h | 178
4 files changed, 1534 insertions, 0 deletions
diff --git a/contrib/userspace-rcu/static-wfcqueue.h b/contrib/userspace-rcu/static-wfcqueue.h new file mode 100644 index 00000000000..37d14ad674b --- /dev/null +++ b/contrib/userspace-rcu/static-wfcqueue.h @@ -0,0 +1,685 @@ +#ifndef _URCU_WFCQUEUE_STATIC_H +#define _URCU_WFCQUEUE_STATIC_H + +/* + * urcu/static/wfcqueue.h + * + * Userspace RCU library - Concurrent Queue with Wait-Free Enqueue/Blocking Dequeue + * + * TO BE INCLUDED ONLY IN LGPL-COMPATIBLE CODE. See urcu/wfcqueue.h for + * linking dynamically with the userspace rcu library. + * + * Copyright 2010-2012 - Mathieu Desnoyers <mathieu.desnoyers@efficios.com> + * Copyright 2011-2012 - Lai Jiangshan <laijs@cn.fujitsu.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this library; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ + +/* Copied from userspace-rcu 0.10 because version 0.7 doesn't contain it. */ + +#include <pthread.h> +#include <assert.h> +#include <poll.h> +#include <stdbool.h> +#include <urcu/compiler.h> +#include <urcu/uatomic.h> + +#ifdef __cplusplus +extern "C" { +#endif + +/* + * Concurrent queue with wait-free enqueue/blocking dequeue. + * + * This queue has been designed and implemented collaboratively by + * Mathieu Desnoyers and Lai Jiangshan. Inspired from + * half-wait-free/half-blocking queue implementation done by Paul E. + * McKenney. + * + * Mutual exclusion of cds_wfcq_* / __cds_wfcq_* API + * + * Synchronization table: + * + * External synchronization techniques described in the API below is + * required between pairs marked with "X". No external synchronization + * required between pairs marked with "-". + * + * Legend: + * [1] cds_wfcq_enqueue + * [2] __cds_wfcq_splice (destination queue) + * [3] __cds_wfcq_dequeue + * [4] __cds_wfcq_splice (source queue) + * [5] __cds_wfcq_first + * [6] __cds_wfcq_next + * + * [1] [2] [3] [4] [5] [6] + * [1] - - - - - - + * [2] - - - - - - + * [3] - - X X X X + * [4] - - X - X X + * [5] - - X X - - + * [6] - - X X - - + * + * Mutual exclusion can be ensured by holding cds_wfcq_dequeue_lock(). + * + * For convenience, cds_wfcq_dequeue_blocking() and + * cds_wfcq_splice_blocking() hold the dequeue lock. + * + * Besides locking, mutual exclusion of dequeue, splice and iteration + * can be ensured by performing all of those operations from a single + * thread, without requiring any lock. + */ + +#define WFCQ_ADAPT_ATTEMPTS 10 /* Retry if being set */ +#define WFCQ_WAIT 10 /* Wait 10 ms if being set */ + +/* + * cds_wfcq_node_init: initialize wait-free queue node. + */ +static inline void _cds_wfcq_node_init(struct cds_wfcq_node *node) +{ + node->next = NULL; +} + +/* + * cds_wfcq_init: initialize wait-free queue (with lock). Pair with + * cds_wfcq_destroy(). 
+ */ +static inline void _cds_wfcq_init(struct cds_wfcq_head *head, + struct cds_wfcq_tail *tail) +{ + int ret; + + /* Set queue head and tail */ + _cds_wfcq_node_init(&head->node); + tail->p = &head->node; + ret = pthread_mutex_init(&head->lock, NULL); + assert(!ret); +} + +/* + * cds_wfcq_destroy: destroy wait-free queue (with lock). Pair with + * cds_wfcq_init(). + */ +static inline void _cds_wfcq_destroy(struct cds_wfcq_head *head, + struct cds_wfcq_tail *tail) +{ + int ret = pthread_mutex_destroy(&head->lock); + assert(!ret); +} + +/* + * __cds_wfcq_init: initialize wait-free queue (without lock). Don't + * pair with any destroy function. + */ +static inline void ___cds_wfcq_init(struct __cds_wfcq_head *head, + struct cds_wfcq_tail *tail) +{ + /* Set queue head and tail */ + _cds_wfcq_node_init(&head->node); + tail->p = &head->node; +} + +/* + * cds_wfcq_empty: return whether wait-free queue is empty. + * + * No memory barrier is issued. No mutual exclusion is required. + * + * We perform the test on head->node.next to check if the queue is + * possibly empty, but we confirm this by checking if the tail pointer + * points to the head node because the tail pointer is the linearisation + * point of the enqueuers. Just checking the head next pointer could + * make a queue appear empty if an enqueuer is preempted for a long time + * between xchg() and setting the previous node's next pointer. + */ +static inline bool _cds_wfcq_empty(cds_wfcq_head_ptr_t u_head, + struct cds_wfcq_tail *tail) +{ + struct __cds_wfcq_head *head = u_head._h; + /* + * Queue is empty if no node is pointed by head->node.next nor + * tail->p. Even though the tail->p check is sufficient to find + * out of the queue is empty, we first check head->node.next as a + * common case to ensure that dequeuers do not frequently access + * enqueuer's tail->p cache line. + */ + return CMM_LOAD_SHARED(head->node.next) == NULL + && CMM_LOAD_SHARED(tail->p) == &head->node; +} + +static inline void _cds_wfcq_dequeue_lock(struct cds_wfcq_head *head, + struct cds_wfcq_tail *tail) +{ + int ret; + + ret = pthread_mutex_lock(&head->lock); + assert(!ret); +} + +static inline void _cds_wfcq_dequeue_unlock(struct cds_wfcq_head *head, + struct cds_wfcq_tail *tail) +{ + int ret; + + ret = pthread_mutex_unlock(&head->lock); + assert(!ret); +} + +static inline bool ___cds_wfcq_append(cds_wfcq_head_ptr_t u_head, + struct cds_wfcq_tail *tail, + struct cds_wfcq_node *new_head, + struct cds_wfcq_node *new_tail) +{ + struct __cds_wfcq_head *head = u_head._h; + struct cds_wfcq_node *old_tail; + + /* + * Implicit memory barrier before uatomic_xchg() orders earlier + * stores to data structure containing node and setting + * node->next to NULL before publication. + */ + old_tail = uatomic_xchg(&tail->p, new_tail); + + /* + * Implicit memory barrier after uatomic_xchg() orders store to + * q->tail before store to old_tail->next. + * + * At this point, dequeuers see a NULL tail->p->next, which + * indicates that the queue is being appended to. The following + * store will append "node" to the queue from a dequeuer + * perspective. + */ + CMM_STORE_SHARED(old_tail->next, new_head); + /* + * Return false if queue was empty prior to adding the node, + * else return true. + */ + return old_tail != &head->node; +} + +/* + * cds_wfcq_enqueue: enqueue a node into a wait-free queue. + * + * Issues a full memory barrier before enqueue. No mutual exclusion is + * required. + * + * Returns false if the queue was empty prior to adding the node. 
+ * Returns true otherwise. + */ +static inline bool _cds_wfcq_enqueue(cds_wfcq_head_ptr_t head, + struct cds_wfcq_tail *tail, + struct cds_wfcq_node *new_tail) +{ + return ___cds_wfcq_append(head, tail, new_tail, new_tail); +} + +/* + * CDS_WFCQ_WAIT_SLEEP: + * + * By default, this sleeps for the given @msec milliseconds. + * This is a macro which LGPL users may #define themselves before + * including wfcqueue.h to override the default behavior (e.g. + * to log a warning or perform other background work). + */ +#ifndef CDS_WFCQ_WAIT_SLEEP +#define CDS_WFCQ_WAIT_SLEEP(msec) ___cds_wfcq_wait_sleep(msec) +#endif + +static inline void ___cds_wfcq_wait_sleep(int msec) +{ + (void) poll(NULL, 0, msec); +} + +/* + * ___cds_wfcq_busy_wait: adaptative busy-wait. + * + * Returns 1 if nonblocking and needs to block, 0 otherwise. + */ +static inline bool +___cds_wfcq_busy_wait(int *attempt, int blocking) +{ + if (!blocking) + return 1; + if (++(*attempt) >= WFCQ_ADAPT_ATTEMPTS) { + CDS_WFCQ_WAIT_SLEEP(WFCQ_WAIT); /* Wait for 10ms */ + *attempt = 0; + } else { + caa_cpu_relax(); + } + return 0; +} + +/* + * Waiting for enqueuer to complete enqueue and return the next node. + */ +static inline struct cds_wfcq_node * +___cds_wfcq_node_sync_next(struct cds_wfcq_node *node, int blocking) +{ + struct cds_wfcq_node *next; + int attempt = 0; + + /* + * Adaptative busy-looping waiting for enqueuer to complete enqueue. + */ + while ((next = CMM_LOAD_SHARED(node->next)) == NULL) { + if (___cds_wfcq_busy_wait(&attempt, blocking)) + return CDS_WFCQ_WOULDBLOCK; + } + + return next; +} + +static inline struct cds_wfcq_node * +___cds_wfcq_first(cds_wfcq_head_ptr_t u_head, + struct cds_wfcq_tail *tail, + int blocking) +{ + struct __cds_wfcq_head *head = u_head._h; + struct cds_wfcq_node *node; + + if (_cds_wfcq_empty(__cds_wfcq_head_cast(head), tail)) + return NULL; + node = ___cds_wfcq_node_sync_next(&head->node, blocking); + /* Load head->node.next before loading node's content */ + cmm_smp_read_barrier_depends(); + return node; +} + +/* + * __cds_wfcq_first_blocking: get first node of a queue, without dequeuing. + * + * Content written into the node before enqueue is guaranteed to be + * consistent, but no other memory ordering is ensured. + * Dequeue/splice/iteration mutual exclusion should be ensured by the + * caller. + * + * Used by for-like iteration macros in urcu/wfqueue.h: + * __cds_wfcq_for_each_blocking() + * __cds_wfcq_for_each_blocking_safe() + * + * Returns NULL if queue is empty, first node otherwise. + */ +static inline struct cds_wfcq_node * +___cds_wfcq_first_blocking(cds_wfcq_head_ptr_t head, + struct cds_wfcq_tail *tail) +{ + return ___cds_wfcq_first(head, tail, 1); +} + + +/* + * __cds_wfcq_first_nonblocking: get first node of a queue, without dequeuing. + * + * Same as __cds_wfcq_first_blocking, but returns CDS_WFCQ_WOULDBLOCK if + * it needs to block. + */ +static inline struct cds_wfcq_node * +___cds_wfcq_first_nonblocking(cds_wfcq_head_ptr_t head, + struct cds_wfcq_tail *tail) +{ + return ___cds_wfcq_first(head, tail, 0); +} + +static inline struct cds_wfcq_node * +___cds_wfcq_next(cds_wfcq_head_ptr_t head, + struct cds_wfcq_tail *tail, + struct cds_wfcq_node *node, + int blocking) +{ + struct cds_wfcq_node *next; + + /* + * Even though the following tail->p check is sufficient to find + * out if we reached the end of the queue, we first check + * node->next as a common case to ensure that iteration on nodes + * do not frequently access enqueuer's tail->p cache line. 
+ */ + if ((next = CMM_LOAD_SHARED(node->next)) == NULL) { + /* Load node->next before tail->p */ + cmm_smp_rmb(); + if (CMM_LOAD_SHARED(tail->p) == node) + return NULL; + next = ___cds_wfcq_node_sync_next(node, blocking); + } + /* Load node->next before loading next's content */ + cmm_smp_read_barrier_depends(); + return next; +} + +/* + * __cds_wfcq_next_blocking: get next node of a queue, without dequeuing. + * + * Content written into the node before enqueue is guaranteed to be + * consistent, but no other memory ordering is ensured. + * Dequeue/splice/iteration mutual exclusion should be ensured by the + * caller. + * + * Used by for-like iteration macros in urcu/wfqueue.h: + * __cds_wfcq_for_each_blocking() + * __cds_wfcq_for_each_blocking_safe() + * + * Returns NULL if reached end of queue, non-NULL next queue node + * otherwise. + */ +static inline struct cds_wfcq_node * +___cds_wfcq_next_blocking(cds_wfcq_head_ptr_t head, + struct cds_wfcq_tail *tail, + struct cds_wfcq_node *node) +{ + return ___cds_wfcq_next(head, tail, node, 1); +} + +/* + * __cds_wfcq_next_blocking: get next node of a queue, without dequeuing. + * + * Same as __cds_wfcq_next_blocking, but returns CDS_WFCQ_WOULDBLOCK if + * it needs to block. + */ +static inline struct cds_wfcq_node * +___cds_wfcq_next_nonblocking(cds_wfcq_head_ptr_t head, + struct cds_wfcq_tail *tail, + struct cds_wfcq_node *node) +{ + return ___cds_wfcq_next(head, tail, node, 0); +} + +static inline struct cds_wfcq_node * +___cds_wfcq_dequeue_with_state(cds_wfcq_head_ptr_t u_head, + struct cds_wfcq_tail *tail, + int *state, + int blocking) +{ + struct __cds_wfcq_head *head = u_head._h; + struct cds_wfcq_node *node, *next; + + if (state) + *state = 0; + + if (_cds_wfcq_empty(__cds_wfcq_head_cast(head), tail)) { + return NULL; + } + + node = ___cds_wfcq_node_sync_next(&head->node, blocking); + if (!blocking && node == CDS_WFCQ_WOULDBLOCK) { + return CDS_WFCQ_WOULDBLOCK; + } + + if ((next = CMM_LOAD_SHARED(node->next)) == NULL) { + /* + * @node is probably the only node in the queue. + * Try to move the tail to &q->head. + * q->head.next is set to NULL here, and stays + * NULL if the cmpxchg succeeds. Should the + * cmpxchg fail due to a concurrent enqueue, the + * q->head.next will be set to the next node. + * The implicit memory barrier before + * uatomic_cmpxchg() orders load node->next + * before loading q->tail. + * The implicit memory barrier before uatomic_cmpxchg + * orders load q->head.next before loading node's + * content. + */ + _cds_wfcq_node_init(&head->node); + if (uatomic_cmpxchg(&tail->p, node, &head->node) == node) { + if (state) + *state |= CDS_WFCQ_STATE_LAST; + return node; + } + next = ___cds_wfcq_node_sync_next(node, blocking); + /* + * In nonblocking mode, if we would need to block to + * get node's next, set the head next node pointer + * (currently NULL) back to its original value. + */ + if (!blocking && next == CDS_WFCQ_WOULDBLOCK) { + head->node.next = node; + return CDS_WFCQ_WOULDBLOCK; + } + } + + /* + * Move queue head forward. + */ + head->node.next = next; + + /* Load q->head.next before loading node's content */ + cmm_smp_read_barrier_depends(); + return node; +} + +/* + * __cds_wfcq_dequeue_with_state_blocking: dequeue node from queue, with state. + * + * Content written into the node before enqueue is guaranteed to be + * consistent, but no other memory ordering is ensured. + * It is valid to reuse and free a dequeued node immediately. 
+ * Dequeue/splice/iteration mutual exclusion should be ensured by the + * caller. + */ +static inline struct cds_wfcq_node * +___cds_wfcq_dequeue_with_state_blocking(cds_wfcq_head_ptr_t head, + struct cds_wfcq_tail *tail, int *state) +{ + return ___cds_wfcq_dequeue_with_state(head, tail, state, 1); +} + +/* + * ___cds_wfcq_dequeue_blocking: dequeue node from queue. + * + * Same as __cds_wfcq_dequeue_with_state_blocking, but without saving + * state. + */ +static inline struct cds_wfcq_node * +___cds_wfcq_dequeue_blocking(cds_wfcq_head_ptr_t head, + struct cds_wfcq_tail *tail) +{ + return ___cds_wfcq_dequeue_with_state_blocking(head, tail, NULL); +} + +/* + * __cds_wfcq_dequeue_with_state_nonblocking: dequeue node, with state. + * + * Same as __cds_wfcq_dequeue_blocking, but returns CDS_WFCQ_WOULDBLOCK + * if it needs to block. + */ +static inline struct cds_wfcq_node * +___cds_wfcq_dequeue_with_state_nonblocking(cds_wfcq_head_ptr_t head, + struct cds_wfcq_tail *tail, int *state) +{ + return ___cds_wfcq_dequeue_with_state(head, tail, state, 0); +} + +/* + * ___cds_wfcq_dequeue_nonblocking: dequeue node from queue. + * + * Same as __cds_wfcq_dequeue_with_state_nonblocking, but without saving + * state. + */ +static inline struct cds_wfcq_node * +___cds_wfcq_dequeue_nonblocking(cds_wfcq_head_ptr_t head, + struct cds_wfcq_tail *tail) +{ + return ___cds_wfcq_dequeue_with_state_nonblocking(head, tail, NULL); +} + +/* + * __cds_wfcq_splice: enqueue all src_q nodes at the end of dest_q. + * + * Dequeue all nodes from src_q. + * dest_q must be already initialized. + * Mutual exclusion for src_q should be ensured by the caller as + * specified in the "Synchronisation table". + * Returns enum cds_wfcq_ret which indicates the state of the src or + * dest queue. + */ +static inline enum cds_wfcq_ret +___cds_wfcq_splice( + cds_wfcq_head_ptr_t u_dest_q_head, + struct cds_wfcq_tail *dest_q_tail, + cds_wfcq_head_ptr_t u_src_q_head, + struct cds_wfcq_tail *src_q_tail, + int blocking) +{ + struct __cds_wfcq_head *dest_q_head = u_dest_q_head._h; + struct __cds_wfcq_head *src_q_head = u_src_q_head._h; + struct cds_wfcq_node *head, *tail; + int attempt = 0; + + /* + * Initial emptiness check to speed up cases where queue is + * empty: only require loads to check if queue is empty. + */ + if (_cds_wfcq_empty(__cds_wfcq_head_cast(src_q_head), src_q_tail)) + return CDS_WFCQ_RET_SRC_EMPTY; + + for (;;) { + /* + * Open-coded _cds_wfcq_empty() by testing result of + * uatomic_xchg, as well as tail pointer vs head node + * address. + */ + head = uatomic_xchg(&src_q_head->node.next, NULL); + if (head) + break; /* non-empty */ + if (CMM_LOAD_SHARED(src_q_tail->p) == &src_q_head->node) + return CDS_WFCQ_RET_SRC_EMPTY; + if (___cds_wfcq_busy_wait(&attempt, blocking)) + return CDS_WFCQ_RET_WOULDBLOCK; + } + + /* + * Memory barrier implied before uatomic_xchg() orders store to + * src_q->head before store to src_q->tail. This is required by + * concurrent enqueue on src_q, which exchanges the tail before + * updating the previous tail's next pointer. + */ + tail = uatomic_xchg(&src_q_tail->p, &src_q_head->node); + + /* + * Append the spliced content of src_q into dest_q. Does not + * require mutual exclusion on dest_q (wait-free). + */ + if (___cds_wfcq_append(__cds_wfcq_head_cast(dest_q_head), dest_q_tail, + head, tail)) + return CDS_WFCQ_RET_DEST_NON_EMPTY; + else + return CDS_WFCQ_RET_DEST_EMPTY; +} + +/* + * __cds_wfcq_splice_blocking: enqueue all src_q nodes at the end of dest_q. + * + * Dequeue all nodes from src_q. 
+ * dest_q must be already initialized. + * Mutual exclusion for src_q should be ensured by the caller as + * specified in the "Synchronisation table". + * Returns enum cds_wfcq_ret which indicates the state of the src or + * dest queue. Never returns CDS_WFCQ_RET_WOULDBLOCK. + */ +static inline enum cds_wfcq_ret +___cds_wfcq_splice_blocking( + cds_wfcq_head_ptr_t dest_q_head, + struct cds_wfcq_tail *dest_q_tail, + cds_wfcq_head_ptr_t src_q_head, + struct cds_wfcq_tail *src_q_tail) +{ + return ___cds_wfcq_splice(dest_q_head, dest_q_tail, + src_q_head, src_q_tail, 1); +} + +/* + * __cds_wfcq_splice_nonblocking: enqueue all src_q nodes at the end of dest_q. + * + * Same as __cds_wfcq_splice_blocking, but returns + * CDS_WFCQ_RET_WOULDBLOCK if it needs to block. + */ +static inline enum cds_wfcq_ret +___cds_wfcq_splice_nonblocking( + cds_wfcq_head_ptr_t dest_q_head, + struct cds_wfcq_tail *dest_q_tail, + cds_wfcq_head_ptr_t src_q_head, + struct cds_wfcq_tail *src_q_tail) +{ + return ___cds_wfcq_splice(dest_q_head, dest_q_tail, + src_q_head, src_q_tail, 0); +} + +/* + * cds_wfcq_dequeue_with_state_blocking: dequeue a node from a wait-free queue. + * + * Content written into the node before enqueue is guaranteed to be + * consistent, but no other memory ordering is ensured. + * Mutual exclusion with cds_wfcq_splice_blocking and dequeue lock is + * ensured. + * It is valid to reuse and free a dequeued node immediately. + */ +static inline struct cds_wfcq_node * +_cds_wfcq_dequeue_with_state_blocking(struct cds_wfcq_head *head, + struct cds_wfcq_tail *tail, int *state) +{ + struct cds_wfcq_node *retval; + + _cds_wfcq_dequeue_lock(head, tail); + retval = ___cds_wfcq_dequeue_with_state_blocking(cds_wfcq_head_cast(head), + tail, state); + _cds_wfcq_dequeue_unlock(head, tail); + return retval; +} + +/* + * cds_wfcq_dequeue_blocking: dequeue node from queue. + * + * Same as cds_wfcq_dequeue_blocking, but without saving state. + */ +static inline struct cds_wfcq_node * +_cds_wfcq_dequeue_blocking(struct cds_wfcq_head *head, + struct cds_wfcq_tail *tail) +{ + return _cds_wfcq_dequeue_with_state_blocking(head, tail, NULL); +} + +/* + * cds_wfcq_splice_blocking: enqueue all src_q nodes at the end of dest_q. + * + * Dequeue all nodes from src_q. + * dest_q must be already initialized. + * Content written into the node before enqueue is guaranteed to be + * consistent, but no other memory ordering is ensured. + * Mutual exclusion with cds_wfcq_dequeue_blocking and dequeue lock is + * ensured. + * Returns enum cds_wfcq_ret which indicates the state of the src or + * dest queue. Never returns CDS_WFCQ_RET_WOULDBLOCK. 
+ */ +static inline enum cds_wfcq_ret +_cds_wfcq_splice_blocking( + struct cds_wfcq_head *dest_q_head, + struct cds_wfcq_tail *dest_q_tail, + struct cds_wfcq_head *src_q_head, + struct cds_wfcq_tail *src_q_tail) +{ + enum cds_wfcq_ret ret; + + _cds_wfcq_dequeue_lock(src_q_head, src_q_tail); + ret = ___cds_wfcq_splice_blocking(cds_wfcq_head_cast(dest_q_head), dest_q_tail, + cds_wfcq_head_cast(src_q_head), src_q_tail); + _cds_wfcq_dequeue_unlock(src_q_head, src_q_tail); + return ret; +} + +#ifdef __cplusplus +} +#endif + +#endif /* _URCU_WFCQUEUE_STATIC_H */ diff --git a/contrib/userspace-rcu/static-wfstack.h b/contrib/userspace-rcu/static-wfstack.h new file mode 100644 index 00000000000..29b81c3aac3 --- /dev/null +++ b/contrib/userspace-rcu/static-wfstack.h @@ -0,0 +1,455 @@ +#ifndef _URCU_STATIC_WFSTACK_H +#define _URCU_STATIC_WFSTACK_H + +/* + * urcu/static/wfstack.h + * + * Userspace RCU library - Stack with with wait-free push, blocking traversal. + * + * TO BE INCLUDED ONLY IN LGPL-COMPATIBLE CODE. See urcu/wfstack.h for + * linking dynamically with the userspace rcu library. + * + * Copyright 2010-2012 - Mathieu Desnoyers <mathieu.desnoyers@efficios.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this library; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ + +/* Adapted from userspace-rcu 0.10 because version 0.7 doesn't support a stack + * without mutex. */ + +#include <pthread.h> +#include <assert.h> +#include <poll.h> +#include <stdbool.h> +#include <urcu/compiler.h> +#include <urcu/uatomic.h> + +#ifdef __cplusplus +extern "C" { +#endif + +#define CDS_WFS_END ((void *) 0x1UL) +#define CDS_WFS_ADAPT_ATTEMPTS 10 /* Retry if being set */ +#define CDS_WFS_WAIT 10 /* Wait 10 ms if being set */ + +/* + * Stack with wait-free push, blocking traversal. + * + * Stack implementing push, pop, pop_all operations, as well as iterator + * on the stack head returned by pop_all. + * + * Wait-free operations: cds_wfs_push, __cds_wfs_pop_all, cds_wfs_empty, + * cds_wfs_first. + * Blocking operations: cds_wfs_pop, cds_wfs_pop_all, cds_wfs_next, + * iteration on stack head returned by pop_all. + * + * Synchronization table: + * + * External synchronization techniques described in the API below is + * required between pairs marked with "X". No external synchronization + * required between pairs marked with "-". + * + * cds_wfs_push __cds_wfs_pop __cds_wfs_pop_all + * cds_wfs_push - - - + * __cds_wfs_pop - X X + * __cds_wfs_pop_all - X - + * + * cds_wfs_pop and cds_wfs_pop_all use an internal mutex to provide + * synchronization. + */ + +/* + * cds_wfs_node_init: initialize wait-free stack node. + */ +static inline +void _cds_wfs_node_init(struct cds_wfs_node *node) +{ + node->next = NULL; +} + +/* + * __cds_wfs_init: initialize wait-free stack. Don't pair with + * any destroy function. 
+ */ +static inline void ___cds_wfs_init(struct __cds_wfs_stack *s) +{ + s->head = CDS_WFS_END; +} + +/* + * cds_wfs_init: initialize wait-free stack. Pair with + * cds_wfs_destroy(). + */ +static inline +void _cds_wfs_init(struct cds_wfs_stack *s) +{ + int ret; + + s->head = CDS_WFS_END; + ret = pthread_mutex_init(&s->lock, NULL); + assert(!ret); +} + +/* + * cds_wfs_destroy: destroy wait-free stack. Pair with + * cds_wfs_init(). + */ +static inline +void _cds_wfs_destroy(struct cds_wfs_stack *s) +{ + int ret = pthread_mutex_destroy(&s->lock); + assert(!ret); +} + +static inline bool ___cds_wfs_end(void *node) +{ + return node == CDS_WFS_END; +} + +/* + * cds_wfs_empty: return whether wait-free stack is empty. + * + * No memory barrier is issued. No mutual exclusion is required. + */ +static inline bool _cds_wfs_empty(cds_wfs_stack_ptr_t u_stack) +{ + struct __cds_wfs_stack *s = u_stack._s; + + return ___cds_wfs_end(CMM_LOAD_SHARED(s->head)); +} + +/* + * cds_wfs_push: push a node into the stack. + * + * Issues a full memory barrier before push. No mutual exclusion is + * required. + * + * Returns 0 if the stack was empty prior to adding the node. + * Returns non-zero otherwise. + */ +static inline +int _cds_wfs_push(cds_wfs_stack_ptr_t u_stack, struct cds_wfs_node *node) +{ + struct __cds_wfs_stack *s = u_stack._s; + struct cds_wfs_head *old_head, *new_head; + + assert(node->next == NULL); + new_head = caa_container_of(node, struct cds_wfs_head, node); + /* + * uatomic_xchg() implicit memory barrier orders earlier stores + * to node (setting it to NULL) before publication. + */ + old_head = uatomic_xchg(&s->head, new_head); + /* + * At this point, dequeuers see a NULL node->next, they should + * busy-wait until node->next is set to old_head. + */ + CMM_STORE_SHARED(node->next, &old_head->node); + return !___cds_wfs_end(old_head); +} + +/* + * Waiting for push to complete enqueue and return the next node. + */ +static inline struct cds_wfs_node * +___cds_wfs_node_sync_next(struct cds_wfs_node *node, int blocking) +{ + struct cds_wfs_node *next; + int attempt = 0; + + /* + * Adaptative busy-looping waiting for push to complete. + */ + while ((next = CMM_LOAD_SHARED(node->next)) == NULL) { + if (!blocking) + return CDS_WFS_WOULDBLOCK; + if (++attempt >= CDS_WFS_ADAPT_ATTEMPTS) { + (void) poll(NULL, 0, CDS_WFS_WAIT); /* Wait for 10ms */ + attempt = 0; + } else { + caa_cpu_relax(); + } + } + + return next; +} + +static inline +struct cds_wfs_node * +___cds_wfs_pop(cds_wfs_stack_ptr_t u_stack, int *state, int blocking) +{ + struct cds_wfs_head *head, *new_head; + struct cds_wfs_node *next; + struct __cds_wfs_stack *s = u_stack._s; + + if (state) + *state = 0; + for (;;) { + head = CMM_LOAD_SHARED(s->head); + if (___cds_wfs_end(head)) { + return NULL; + } + next = ___cds_wfs_node_sync_next(&head->node, blocking); + if (!blocking && next == CDS_WFS_WOULDBLOCK) { + return CDS_WFS_WOULDBLOCK; + } + new_head = caa_container_of(next, struct cds_wfs_head, node); + if (uatomic_cmpxchg(&s->head, head, new_head) == head) { + if (state && ___cds_wfs_end(new_head)) + *state |= CDS_WFS_STATE_LAST; + return &head->node; + } + if (!blocking) { + return CDS_WFS_WOULDBLOCK; + } + /* busy-loop if head changed under us */ + } +} + +/* + * __cds_wfs_pop_with_state_blocking: pop a node from the stack, with state. + * + * Returns NULL if stack is empty. 
+ * + * __cds_wfs_pop_blocking needs to be synchronized using one of the + * following techniques: + * + * 1) Calling __cds_wfs_pop_blocking under rcu read lock critical + * section. The caller must wait for a grace period to pass before + * freeing the returned node or modifying the cds_wfs_node structure. + * 2) Using mutual exclusion (e.g. mutexes) to protect + * __cds_wfs_pop_blocking and __cds_wfs_pop_all callers. + * 3) Ensuring that only ONE thread can call __cds_wfs_pop_blocking() + * and __cds_wfs_pop_all(). (multi-provider/single-consumer scheme). + * + * "state" saves state flags atomically sampled with pop operation. + */ +static inline +struct cds_wfs_node * +___cds_wfs_pop_with_state_blocking(cds_wfs_stack_ptr_t u_stack, int *state) +{ + return ___cds_wfs_pop(u_stack, state, 1); +} + +static inline +struct cds_wfs_node * +___cds_wfs_pop_blocking(cds_wfs_stack_ptr_t u_stack) +{ + return ___cds_wfs_pop_with_state_blocking(u_stack, NULL); +} + +/* + * __cds_wfs_pop_with_state_nonblocking: pop a node from the stack. + * + * Same as __cds_wfs_pop_with_state_blocking, but returns + * CDS_WFS_WOULDBLOCK if it needs to block. + * + * "state" saves state flags atomically sampled with pop operation. + */ +static inline +struct cds_wfs_node * +___cds_wfs_pop_with_state_nonblocking(cds_wfs_stack_ptr_t u_stack, int *state) +{ + return ___cds_wfs_pop(u_stack, state, 0); +} + +/* + * __cds_wfs_pop_nonblocking: pop a node from the stack. + * + * Same as __cds_wfs_pop_blocking, but returns CDS_WFS_WOULDBLOCK if + * it needs to block. + */ +static inline +struct cds_wfs_node * +___cds_wfs_pop_nonblocking(cds_wfs_stack_ptr_t u_stack) +{ + return ___cds_wfs_pop_with_state_nonblocking(u_stack, NULL); +} + +/* + * __cds_wfs_pop_all: pop all nodes from a stack. + * + * __cds_wfs_pop_all does not require any synchronization with other + * push, nor with other __cds_wfs_pop_all, but requires synchronization + * matching the technique used to synchronize __cds_wfs_pop_blocking: + * + * 1) If __cds_wfs_pop_blocking is called under rcu read lock critical + * section, both __cds_wfs_pop_blocking and cds_wfs_pop_all callers + * must wait for a grace period to pass before freeing the returned + * node or modifying the cds_wfs_node structure. However, no RCU + * read-side critical section is needed around __cds_wfs_pop_all. + * 2) Using mutual exclusion (e.g. mutexes) to protect + * __cds_wfs_pop_blocking and __cds_wfs_pop_all callers. + * 3) Ensuring that only ONE thread can call __cds_wfs_pop_blocking() + * and __cds_wfs_pop_all(). (multi-provider/single-consumer scheme). + */ +static inline +struct cds_wfs_head * +___cds_wfs_pop_all(cds_wfs_stack_ptr_t u_stack) +{ + struct __cds_wfs_stack *s = u_stack._s; + struct cds_wfs_head *head; + + /* + * Implicit memory barrier after uatomic_xchg() matches implicit + * memory barrier before uatomic_xchg() in cds_wfs_push. It + * ensures that all nodes of the returned list are consistent. + * There is no need to issue memory barriers when iterating on + * the returned list, because the full memory barrier issued + * prior to each uatomic_cmpxchg, which each write to head, are + * taking care to order writes to each node prior to the full + * memory barrier after this uatomic_xchg(). + */ + head = uatomic_xchg(&s->head, CDS_WFS_END); + if (___cds_wfs_end(head)) + return NULL; + return head; +} + +/* + * cds_wfs_pop_lock: lock stack pop-protection mutex. 
+ */ +static inline void _cds_wfs_pop_lock(struct cds_wfs_stack *s) +{ + int ret; + + ret = pthread_mutex_lock(&s->lock); + assert(!ret); +} + +/* + * cds_wfs_pop_unlock: unlock stack pop-protection mutex. + */ +static inline void _cds_wfs_pop_unlock(struct cds_wfs_stack *s) +{ + int ret; + + ret = pthread_mutex_unlock(&s->lock); + assert(!ret); +} + +/* + * Call __cds_wfs_pop_with_state_blocking with an internal pop mutex held. + */ +static inline +struct cds_wfs_node * +_cds_wfs_pop_with_state_blocking(struct cds_wfs_stack *s, int *state) +{ + struct cds_wfs_node *retnode; + + _cds_wfs_pop_lock(s); + retnode = ___cds_wfs_pop_with_state_blocking(s, state); + _cds_wfs_pop_unlock(s); + return retnode; +} + +/* + * Call _cds_wfs_pop_with_state_blocking without saving any state. + */ +static inline +struct cds_wfs_node * +_cds_wfs_pop_blocking(struct cds_wfs_stack *s) +{ + return _cds_wfs_pop_with_state_blocking(s, NULL); +} + +/* + * Call __cds_wfs_pop_all with an internal pop mutex held. + */ +static inline +struct cds_wfs_head * +_cds_wfs_pop_all_blocking(struct cds_wfs_stack *s) +{ + struct cds_wfs_head *rethead; + + _cds_wfs_pop_lock(s); + rethead = ___cds_wfs_pop_all(s); + _cds_wfs_pop_unlock(s); + return rethead; +} + +/* + * cds_wfs_first: get first node of a popped stack. + * + * Content written into the node before enqueue is guaranteed to be + * consistent, but no other memory ordering is ensured. + * + * Used by for-like iteration macros in urcu/wfstack.h: + * cds_wfs_for_each_blocking() + * cds_wfs_for_each_blocking_safe() + * + * Returns NULL if popped stack is empty, top stack node otherwise. + */ +static inline struct cds_wfs_node * +_cds_wfs_first(struct cds_wfs_head *head) +{ + if (___cds_wfs_end(head)) + return NULL; + return &head->node; +} + +static inline struct cds_wfs_node * +___cds_wfs_next(struct cds_wfs_node *node, int blocking) +{ + struct cds_wfs_node *next; + + next = ___cds_wfs_node_sync_next(node, blocking); + /* + * CDS_WFS_WOULDBLOCK != CSD_WFS_END, so we can check for end + * even if ___cds_wfs_node_sync_next returns CDS_WFS_WOULDBLOCK, + * and still return CDS_WFS_WOULDBLOCK. + */ + if (___cds_wfs_end(next)) + return NULL; + return next; +} + +/* + * cds_wfs_next_blocking: get next node of a popped stack. + * + * Content written into the node before enqueue is guaranteed to be + * consistent, but no other memory ordering is ensured. + * + * Used by for-like iteration macros in urcu/wfstack.h: + * cds_wfs_for_each_blocking() + * cds_wfs_for_each_blocking_safe() + * + * Returns NULL if reached end of popped stack, non-NULL next stack + * node otherwise. + */ +static inline struct cds_wfs_node * +_cds_wfs_next_blocking(struct cds_wfs_node *node) +{ + return ___cds_wfs_next(node, 1); +} + + +/* + * cds_wfs_next_nonblocking: get next node of a popped stack. + * + * Same as cds_wfs_next_blocking, but returns CDS_WFS_WOULDBLOCK if it + * needs to block. 
+ */ +static inline struct cds_wfs_node * +_cds_wfs_next_nonblocking(struct cds_wfs_node *node) +{ + return ___cds_wfs_next(node, 0); +} + +#ifdef __cplusplus +} +#endif + +#endif /* _URCU_STATIC_WFSTACK_H */ diff --git a/contrib/userspace-rcu/wfcqueue.h b/contrib/userspace-rcu/wfcqueue.h new file mode 100644 index 00000000000..0292585ac79 --- /dev/null +++ b/contrib/userspace-rcu/wfcqueue.h @@ -0,0 +1,216 @@ +#ifndef _URCU_WFCQUEUE_H +#define _URCU_WFCQUEUE_H + +/* + * urcu/wfcqueue.h + * + * Userspace RCU library - Concurrent Queue with Wait-Free Enqueue/Blocking Dequeue + * + * Copyright 2010-2012 - Mathieu Desnoyers <mathieu.desnoyers@efficios.com> + * Copyright 2011-2012 - Lai Jiangshan <laijs@cn.fujitsu.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this library; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ + +/* Adapted from userspace-rcu 0.10 because version 0.7 doesn't contain it. + * The non-LGPL section has been removed. */ + +#include <pthread.h> +#include <assert.h> +#include <stdbool.h> +#include <urcu/compiler.h> +#include <urcu/arch.h> + +#ifdef __cplusplus +extern "C" { +#endif + +/* + * Concurrent queue with wait-free enqueue/blocking dequeue. + * + * This queue has been designed and implemented collaboratively by + * Mathieu Desnoyers and Lai Jiangshan. Inspired from + * half-wait-free/half-blocking queue implementation done by Paul E. + * McKenney. + */ + +#define CDS_WFCQ_WOULDBLOCK ((struct cds_wfcq_node *) -1UL) + +enum cds_wfcq_ret { + CDS_WFCQ_RET_WOULDBLOCK = -1, + CDS_WFCQ_RET_DEST_EMPTY = 0, + CDS_WFCQ_RET_DEST_NON_EMPTY = 1, + CDS_WFCQ_RET_SRC_EMPTY = 2, +}; + +enum cds_wfcq_state { + CDS_WFCQ_STATE_LAST = (1U << 0), +}; + +struct cds_wfcq_node { + struct cds_wfcq_node *next; +}; + +/* + * Do not put head and tail on the same cache-line if concurrent + * enqueue/dequeue are expected from many CPUs. This eliminates + * false-sharing between enqueue and dequeue. + */ +struct __cds_wfcq_head { + struct cds_wfcq_node node; +}; + +struct cds_wfcq_head { + struct cds_wfcq_node node; + pthread_mutex_t lock; +}; + +#ifndef __cplusplus +/* + * The transparent union allows calling functions that work on both + * struct cds_wfcq_head and struct __cds_wfcq_head on any of those two + * types. + */ +typedef union { + struct __cds_wfcq_head *_h; + struct cds_wfcq_head *h; +} __attribute__((__transparent_union__)) cds_wfcq_head_ptr_t; + +/* + * This static inline is only present for compatibility with C++. It is + * effect-less in C. + */ +static inline struct __cds_wfcq_head *__cds_wfcq_head_cast(struct __cds_wfcq_head *head) +{ + return head; +} + +/* + * This static inline is only present for compatibility with C++. It is + * effect-less in C. + */ +static inline struct cds_wfcq_head *cds_wfcq_head_cast(struct cds_wfcq_head *head) +{ + return head; +} +#else /* #ifndef __cplusplus */ + +/* C++ ignores transparent union. 
*/ +typedef union { + struct __cds_wfcq_head *_h; + struct cds_wfcq_head *h; +} cds_wfcq_head_ptr_t; + +/* C++ ignores transparent union. Requires an explicit conversion. */ +static inline cds_wfcq_head_ptr_t __cds_wfcq_head_cast(struct __cds_wfcq_head *head) +{ + cds_wfcq_head_ptr_t ret = { ._h = head }; + return ret; +} +/* C++ ignores transparent union. Requires an explicit conversion. */ +static inline cds_wfcq_head_ptr_t cds_wfcq_head_cast(struct cds_wfcq_head *head) +{ + cds_wfcq_head_ptr_t ret = { .h = head }; + return ret; +} +#endif /* #else #ifndef __cplusplus */ + +struct cds_wfcq_tail { + struct cds_wfcq_node *p; +}; + +#include "static-wfcqueue.h" + +#define cds_wfcq_node_init _cds_wfcq_node_init +#define cds_wfcq_init _cds_wfcq_init +#define __cds_wfcq_init ___cds_wfcq_init +#define cds_wfcq_destroy _cds_wfcq_destroy +#define cds_wfcq_empty _cds_wfcq_empty +#define cds_wfcq_enqueue _cds_wfcq_enqueue + +/* Dequeue locking */ +#define cds_wfcq_dequeue_lock _cds_wfcq_dequeue_lock +#define cds_wfcq_dequeue_unlock _cds_wfcq_dequeue_unlock + +/* Locking performed within cds_wfcq calls. */ +#define cds_wfcq_dequeue_blocking _cds_wfcq_dequeue_blocking +#define cds_wfcq_dequeue_with_state_blocking \ + _cds_wfcq_dequeue_with_state_blocking +#define cds_wfcq_splice_blocking _cds_wfcq_splice_blocking +#define cds_wfcq_first_blocking _cds_wfcq_first_blocking +#define cds_wfcq_next_blocking _cds_wfcq_next_blocking + +/* Locking ensured by caller by holding cds_wfcq_dequeue_lock() */ +#define __cds_wfcq_dequeue_blocking ___cds_wfcq_dequeue_blocking +#define __cds_wfcq_dequeue_with_state_blocking \ + ___cds_wfcq_dequeue_with_state_blocking +#define __cds_wfcq_splice_blocking ___cds_wfcq_splice_blocking +#define __cds_wfcq_first_blocking ___cds_wfcq_first_blocking +#define __cds_wfcq_next_blocking ___cds_wfcq_next_blocking + +/* + * Locking ensured by caller by holding cds_wfcq_dequeue_lock(). + * Non-blocking: deque, first, next return CDS_WFCQ_WOULDBLOCK if they + * need to block. splice returns nonzero if it needs to block. + */ +#define __cds_wfcq_dequeue_nonblocking ___cds_wfcq_dequeue_nonblocking +#define __cds_wfcq_dequeue_with_state_nonblocking \ + ___cds_wfcq_dequeue_with_state_nonblocking +#define __cds_wfcq_splice_nonblocking ___cds_wfcq_splice_nonblocking +#define __cds_wfcq_first_nonblocking ___cds_wfcq_first_nonblocking +#define __cds_wfcq_next_nonblocking ___cds_wfcq_next_nonblocking + +/* + * __cds_wfcq_for_each_blocking: Iterate over all nodes in a queue, + * without dequeuing them. + * @head: head of the queue (struct cds_wfcq_head or __cds_wfcq_head pointer). + * @tail: tail of the queue (struct cds_wfcq_tail pointer). + * @node: iterator on the queue (struct cds_wfcq_node pointer). + * + * Content written into each node before enqueue is guaranteed to be + * consistent, but no other memory ordering is ensured. + * Dequeue/splice/iteration mutual exclusion should be ensured by the + * caller. + */ +#define __cds_wfcq_for_each_blocking(head, tail, node) \ + for (node = __cds_wfcq_first_blocking(head, tail); \ + node != NULL; \ + node = __cds_wfcq_next_blocking(head, tail, node)) + +/* + * __cds_wfcq_for_each_blocking_safe: Iterate over all nodes in a queue, + * without dequeuing them. Safe against deletion. + * @head: head of the queue (struct cds_wfcq_head or __cds_wfcq_head pointer). + * @tail: tail of the queue (struct cds_wfcq_tail pointer). + * @node: iterator on the queue (struct cds_wfcq_node pointer). 
+ * @n: struct cds_wfcq_node pointer holding the next pointer (used + * internally). + * + * Content written into each node before enqueue is guaranteed to be + * consistent, but no other memory ordering is ensured. + * Dequeue/splice/iteration mutual exclusion should be ensured by the + * caller. + */ +#define __cds_wfcq_for_each_blocking_safe(head, tail, node, n) \ + for (node = __cds_wfcq_first_blocking(head, tail), \ + n = (node ? __cds_wfcq_next_blocking(head, tail, node) : NULL); \ + node != NULL; \ + node = n, n = (node ? __cds_wfcq_next_blocking(head, tail, node) : NULL)) + +#ifdef __cplusplus +} +#endif + +#endif /* _URCU_WFCQUEUE_H */ diff --git a/contrib/userspace-rcu/wfstack.h b/contrib/userspace-rcu/wfstack.h new file mode 100644 index 00000000000..738fd1cfd33 --- /dev/null +++ b/contrib/userspace-rcu/wfstack.h @@ -0,0 +1,178 @@ +#ifndef _URCU_WFSTACK_H +#define _URCU_WFSTACK_H + +/* + * urcu/wfstack.h + * + * Userspace RCU library - Stack with wait-free push, blocking traversal. + * + * Copyright 2010-2012 - Mathieu Desnoyers <mathieu.desnoyers@efficios.com> + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this library; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ + +/* Adapted from userspace-rcu 0.10 because version 0.7 doesn't support a stack + * without mutex. The non-LGPL section has been removed. */ + +#include <pthread.h> +#include <assert.h> +#include <stdbool.h> +#include <urcu/compiler.h> + +#ifdef __cplusplus +extern "C" { +#endif + +/* + * Stack with wait-free push, blocking traversal. + * + * Stack implementing push, pop, pop_all operations, as well as iterator + * on the stack head returned by pop_all. + * + * Wait-free operations: cds_wfs_push, __cds_wfs_pop_all, cds_wfs_empty, + * cds_wfs_first. + * Blocking operations: cds_wfs_pop, cds_wfs_pop_all, cds_wfs_next, + * iteration on stack head returned by pop_all. + * + * Synchronization table: + * + * External synchronization techniques described in the API below is + * required between pairs marked with "X". No external synchronization + * required between pairs marked with "-". + * + * cds_wfs_push __cds_wfs_pop __cds_wfs_pop_all + * cds_wfs_push - - - + * __cds_wfs_pop - X X + * __cds_wfs_pop_all - X - + * + * cds_wfs_pop and cds_wfs_pop_all use an internal mutex to provide + * synchronization. + */ + +#define CDS_WFS_WOULDBLOCK ((void *) -1UL) + +enum cds_wfs_state { + CDS_WFS_STATE_LAST = (1U << 0), +}; + +/* + * struct cds_wfs_node is returned by __cds_wfs_pop, and also used as + * iterator on stack. It is not safe to dereference the node next + * pointer when returned by __cds_wfs_pop_blocking. + */ +struct cds_wfs_node { + struct cds_wfs_node *next; +}; + +/* + * struct cds_wfs_head is returned by __cds_wfs_pop_all, and can be used + * to begin iteration on the stack. 
"node" needs to be the first field of + * cds_wfs_head, so the end-of-stack pointer value can be used for both + * types. + */ +struct cds_wfs_head { + struct cds_wfs_node node; +}; + +struct __cds_wfs_stack { + struct cds_wfs_head *head; +}; + +struct cds_wfs_stack { + struct cds_wfs_head *head; + pthread_mutex_t lock; +}; + +/* + * The transparent union allows calling functions that work on both + * struct cds_wfs_stack and struct __cds_wfs_stack on any of those two + * types. + */ +typedef union { + struct __cds_wfs_stack *_s; + struct cds_wfs_stack *s; +} __attribute__((__transparent_union__)) cds_wfs_stack_ptr_t; + +#include "static-wfstack.h" + +#define cds_wfs_node_init _cds_wfs_node_init +#define cds_wfs_init _cds_wfs_init +#define cds_wfs_destroy _cds_wfs_destroy +#define __cds_wfs_init ___cds_wfs_init +#define cds_wfs_empty _cds_wfs_empty +#define cds_wfs_push _cds_wfs_push + +/* Locking performed internally */ +#define cds_wfs_pop_blocking _cds_wfs_pop_blocking +#define cds_wfs_pop_with_state_blocking _cds_wfs_pop_with_state_blocking +#define cds_wfs_pop_all_blocking _cds_wfs_pop_all_blocking + +/* + * For iteration on cds_wfs_head returned by __cds_wfs_pop_all or + * cds_wfs_pop_all_blocking. + */ +#define cds_wfs_first _cds_wfs_first +#define cds_wfs_next_blocking _cds_wfs_next_blocking +#define cds_wfs_next_nonblocking _cds_wfs_next_nonblocking + +/* Pop locking with internal mutex */ +#define cds_wfs_pop_lock _cds_wfs_pop_lock +#define cds_wfs_pop_unlock _cds_wfs_pop_unlock + +/* Synchronization ensured by the caller. See synchronization table. */ +#define __cds_wfs_pop_blocking ___cds_wfs_pop_blocking +#define __cds_wfs_pop_with_state_blocking \ + ___cds_wfs_pop_with_state_blocking +#define __cds_wfs_pop_nonblocking ___cds_wfs_pop_nonblocking +#define __cds_wfs_pop_with_state_nonblocking \ + ___cds_wfs_pop_with_state_nonblocking +#define __cds_wfs_pop_all ___cds_wfs_pop_all + +#ifdef __cplusplus +} +#endif + +/* + * cds_wfs_for_each_blocking: Iterate over all nodes returned by + * __cds_wfs_pop_all(). + * @head: head of the queue (struct cds_wfs_head pointer). + * @node: iterator (struct cds_wfs_node pointer). + * + * Content written into each node before enqueue is guaranteed to be + * consistent, but no other memory ordering is ensured. + */ +#define cds_wfs_for_each_blocking(head, node) \ + for (node = cds_wfs_first(head); \ + node != NULL; \ + node = cds_wfs_next_blocking(node)) + +/* + * cds_wfs_for_each_blocking_safe: Iterate over all nodes returned by + * __cds_wfs_pop_all(). Safe against deletion. + * @head: head of the queue (struct cds_wfs_head pointer). + * @node: iterator (struct cds_wfs_node pointer). + * @n: struct cds_wfs_node pointer holding the next pointer (used + * internally). + * + * Content written into each node before enqueue is guaranteed to be + * consistent, but no other memory ordering is ensured. + */ +#define cds_wfs_for_each_blocking_safe(head, node, n) \ + for (node = cds_wfs_first(head), \ + n = (node ? cds_wfs_next_blocking(node) : NULL); \ + node != NULL; \ + node = n, n = (node ? cds_wfs_next_blocking(node) : NULL)) + +#endif /* _URCU_WFSTACK_H */ |