author | Jeff Darcy <jdarcy@redhat.com> | 2016-10-14 10:04:07 -0400 |
---|---|---|
committer | Shyamsundar Ranganathan <srangana@redhat.com> | 2017-02-02 13:30:19 -0500 |
commit | ae47befebeda2de5fd2d706090cbacf4ef60c785 | |
tree | 4d0605aa4fd21c504341894d287e555dac455768 /libglusterfs/src/mem-pool.h | |
parent | 984c470159a68114be2a260412cfefe2c158ab99 | |
libglusterfs: make memory pools more thread-friendly
Early multiplexing tests revealed *massive* contention on certain
pools' global locks - especially for dictionaries and secondarily for
call stubs. For the thread counts that multiplexing can create, a
more lock-free solution is clearly needed. Also, the current mem-pool
implementation does a poor job releasing memory back to the system,
artificially inflating memory usage to match whatever the worst case
was since the process started. This is bad in general, but especially
so for multiplexing where there are more pools and a major point of
the whole exercise is to reduce memory consumption.
The basic ideas for the new design are these:
There is one pool, globally, for each power-of-two size range.
Every attempt to create a new pool within this range will instead
add a reference to the existing pool.
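A minimal sketch of this size-class idea (illustrative only, not the patch's code; the bounds, the `refcount` field, and `pool_get` are invented names):

```c
#include <pthread.h>
#include <stddef.h>

/* Assumed bounds for illustration; the real size classes live in the
 * implementation file, not in this header. */
#define POOL_SMALLEST 7                     /* 2^7  = 128 bytes */
#define POOL_LARGEST  20                    /* 2^20 = 1 MiB     */
#define NPOOLS        (POOL_LARGEST - POOL_SMALLEST + 1)

struct global_pool {
        unsigned int power_of_two;          /* size class: 2^power_of_two   */
        unsigned int refcount;              /* hypothetical reference count */
};

static struct global_pool pools[NPOOLS];
static pthread_mutex_t    pools_lock = PTHREAD_MUTEX_INITIALIZER;

/* "Creating" a pool just rounds the object size up to the next power
 * of two and takes a reference on the one global pool for that class. */
static struct global_pool *
pool_get (size_t sizeof_type)
{
        unsigned int p2 = POOL_SMALLEST;

        while (p2 <= POOL_LARGEST && ((size_t) 1 << p2) < sizeof_type)
                ++p2;
        if (p2 > POOL_LARGEST)
                return NULL;                /* oversized: use malloc instead */

        pthread_mutex_lock (&pools_lock);
        pools[p2 - POOL_SMALLEST].power_of_two = p2;
        pools[p2 - POOL_SMALLEST].refcount++;
        pthread_mutex_unlock (&pools_lock);
        return &pools[p2 - POOL_SMALLEST];
}
```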
Instead of adding pools for each translator within each multiplexed
brick (potentially infinite and quite possibly thousands), we
allocate one set of size-based pools per *thread* (hundreds at
worst).
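The per-thread lookup could work roughly as below. This is a hedged sketch with invented names (`thread_pools_key`, `my_pool_list`), though the pthread_setspecific approach itself is what the patch's header comments describe:

```c
#include <pthread.h>
#include <stdlib.h>

/* Placeholder for a thread's set of size-based pools; illustrative only. */
struct pool_list {
        struct pool_list *next;             /* sweeper's global list */
        int               poison;          /* set by the destructor  */
};

static pthread_key_t  thread_pools_key;
static pthread_once_t key_once = PTHREAD_ONCE_INIT;

static void
pool_list_destructor (void *arg)
{
        /* Don't free here: just mark the list so the sweeper thread
         * detaches and frees it after the next sweep. */
        ((struct pool_list *) arg)->poison = 1;
}

static void
make_key (void)
{
        pthread_key_create (&thread_pools_key, pool_list_destructor);
}

/* Each thread lazily creates its pool set on first use and finds it
 * again later without any global lookup. */
static struct pool_list *
my_pool_list (void)
{
        struct pool_list *pl;

        pthread_once (&key_once, make_key);
        pl = pthread_getspecific (thread_pools_key);
        if (pl == NULL) {
                pl = calloc (1, sizeof (*pl));
                if (pl != NULL) {
                        /* The global lock would be taken here to link pl
                         * into the sweeper's list (elided for brevity). */
                        pthread_setspecific (thread_pools_key, pl);
                }
        }
        return pl;
}
```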
Each per-thread pool is divided into hot and cold lists. Every
allocation first attempts to use the hot list, then the cold list.
When objects are freed, they always go on the hot list.
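In code, the fast path is roughly the following sketch (invented helper names, not the patch's actual functions; the caller is assumed to hold the per-thread lock):

```c
#include <stddef.h>

typedef struct obj_hdr {
        struct obj_hdr *next;
} obj_hdr_t;

struct thread_pool {
        obj_hdr_t *hot_list;                /* freed since the last sweep */
        obj_hdr_t *cold_list;               /* idle for one full sweep    */
};

/* Allocation prefers the hot list (best cache locality), falls back to
 * the cold list, and returns NULL to signal a malloc fallback. */
static obj_hdr_t *
pool_alloc (struct thread_pool *tp)
{
        obj_hdr_t *obj;

        if ((obj = tp->hot_list) != NULL)
                tp->hot_list = obj->next;
        else if ((obj = tp->cold_list) != NULL)
                tp->cold_list = obj->next;
        return obj;
}

/* Frees always push onto the hot list. */
static void
pool_free (struct thread_pool *tp, obj_hdr_t *obj)
{
        obj->next = tp->hot_list;
        tp->hot_list = obj;
}
```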
There is one global "pool sweeper" thread, which periodically
reclaims everything in each pool's cold list and then "demotes" the
current hot list to be the new cold list.
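A sketch of one sweep, reusing the `thread_pool`/`obj_hdr_t` types from the previous sketch (again illustrative, with the per-thread lock assumed held):

```c
#include <stdlib.h>

static void
sweep_pool (struct thread_pool *tp)
{
        obj_hdr_t *obj, *next;

        /* Reclaim everything that sat cold for a whole interval. */
        for (obj = tp->cold_list; obj != NULL; obj = next) {
                next = obj->next;
                free (obj);
        }
        /* Demote: what was hot gets one more interval to be reused. */
        tp->cold_list = tp->hot_list;
        tp->hot_list = NULL;
}
```

Because an object must sit untouched through two consecutive sweeps before it is freed, recently used memory stays pooled while idle memory drains back to the system instead of staying pinned at the historical peak.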
For normal allocation activity, only a per-thread lock need be
taken, and even that only to guard against very rare contention from
the pool sweeper. When threads start and stop, a global lock must
be taken to add them to the pool sweeper's list. Lock contention is
therefore extremely low, and the hot/cold lists also provide good
locality.
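The locking discipline can be sketched like this (invented names; only thread registration touches the global lock, while every allocation takes just its own spinlock):

```c
#include <pthread.h>

struct thread_pools {
        pthread_spinlock_t   lock;          /* guards this thread's lists */
        struct thread_pools *next;          /* sweeper's global list      */
};

static pthread_mutex_t      sweeper_list_lock = PTHREAD_MUTEX_INITIALIZER;
static struct thread_pools *all_thread_pools;

/* Rare: on thread start, link into the sweeper's list under the
 * global lock. */
static void
register_thread (struct thread_pools *tp)
{
        pthread_spin_init (&tp->lock, PTHREAD_PROCESS_PRIVATE);
        pthread_mutex_lock (&sweeper_list_lock);
        tp->next = all_thread_pools;
        all_thread_pools = tp;
        pthread_mutex_unlock (&sweeper_list_lock);
}

/* Common: every alloc/free takes only the per-thread spinlock, which
 * the sweeper contends for just once per sweep interval. */
static void
with_fast_path (struct thread_pools *tp, void (*op) (struct thread_pools *))
{
        pthread_spin_lock (&tp->lock);
        op (tp);                            /* e.g. pool_alloc/pool_free above */
        pthread_spin_unlock (&tp->lock);
}
```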
A more complete explanation (of a similar earlier design) can be found
here:
http://www.gluster.org/pipermail/gluster-devel/2016-October/051160.html
Change-Id: I5bc8a1ba57cfb553998f979a498886e0d006e665
BUG: 1385758
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: https://review.gluster.org/15645
Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
Diffstat (limited to 'libglusterfs/src/mem-pool.h')
-rw-r--r-- | libglusterfs/src/mem-pool.h | 67 |
1 file changed, 52 insertions, 15 deletions
diff --git a/libglusterfs/src/mem-pool.h b/libglusterfs/src/mem-pool.h
index 6cff7be94f4..0dc186341b2 100644
--- a/libglusterfs/src/mem-pool.h
+++ b/libglusterfs/src/mem-pool.h
@@ -209,24 +209,61 @@ out:
         return dup_mem;
 }
 
+typedef struct pooled_obj_hdr {
+        unsigned long                   magic;
+        struct pooled_obj_hdr          *next;
+        struct per_thread_pool_list    *pool_list;
+        unsigned int                    power_of_two;
+} pooled_obj_hdr_t;
+
+#define AVAILABLE_SIZE(p2)      ((1 << (p2)) - sizeof(pooled_obj_hdr_t))
+
+typedef struct per_thread_pool {
+        /* This never changes, so doesn't need a lock. */
+        struct mem_pool        *parent;
+        /* Everything else is protected by our own lock. */
+        pooled_obj_hdr_t       *hot_list;
+        pooled_obj_hdr_t       *cold_list;
+} per_thread_pool_t;
+
+typedef struct per_thread_pool_list {
+        /*
+         * These first two members are protected by the global pool lock. When
+         * a thread first tries to use any pool, we create one of these. We
+         * link it into the global list using thr_list so the pool-sweeper
+         * thread can find it, and use pthread_setspecific so this thread can
+         * find it. When the per-thread destructor runs, we "poison" the pool
+         * list to prevent further allocations. This also signals to the
+         * pool-sweeper thread that the list should be detached and freed after
+         * the next time it's swept.
+         */
+        struct list_head        thr_list;
+        unsigned int            poison;
+        /*
+         * There's really more than one pool, but the actual number is hidden
+         * in the implementation code so we just make it a single-element array
+         * here.
+         */
+        pthread_spinlock_t      lock;
+        per_thread_pool_t       pools[1];
+} per_thread_pool_list_t;
+
 struct mem_pool {
-        struct list_head  list;
-        int               hot_count;
-        int               cold_count;
-        gf_lock_t         lock;
-        unsigned long     padded_sizeof_type;
-        void             *pool;
-        void             *pool_end;
-        int               real_sizeof_type;
-        uint64_t          alloc_count;
-        uint64_t          pool_misses;
-        int               max_alloc;
-        int               curr_stdalloc;
-        int               max_stdalloc;
-        char             *name;
-        struct list_head  global_list;
+        unsigned int            power_of_two;
+        /*
+         * Updates to these are *not* protected by a global lock, so races
+         * could occur and the numbers might be slightly off. Don't expect
+         * them to line up exactly. It's the general trends that matter, and
+         * it's not worth the locked-bus-cycle overhead to make these precise.
+         */
+        unsigned long           allocs_hot;
+        unsigned long           allocs_cold;
+        unsigned long           allocs_stdc;
+        unsigned long           frees_to_list;
 };
 
+void mem_pools_init (void);
+
 struct mem_pool *
 mem_pool_new_fn (unsigned long sizeof_type, unsigned long count, char *name);
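One consequence of the header layout above is visible in `AVAILABLE_SIZE`: every pooled object is carved from a power-of-two block with a `pooled_obj_hdr_t` prepended, so the usable payload is the block size minus the header. A standalone illustration (struct copied from the diff; exact numbers depend on platform padding):

```c
#include <stdio.h>

typedef struct pooled_obj_hdr {
        unsigned long                   magic;
        struct pooled_obj_hdr          *next;
        struct per_thread_pool_list    *pool_list;
        unsigned int                    power_of_two;
} pooled_obj_hdr_t;

#define AVAILABLE_SIZE(p2)      ((1 << (p2)) - sizeof (pooled_obj_hdr_t))

int
main (void)
{
        unsigned int p2;

        /* On a typical LP64 build the header pads to 32 bytes, so the
         * 128-byte class loses a quarter of each block to the header. */
        for (p2 = 7; p2 <= 12; ++p2)
                printf ("2^%-2u block -> %zu usable bytes\n",
                        p2, AVAILABLE_SIZE (p2));
        return 0;
}
```

A caller asking for even one byte more than `AVAILABLE_SIZE(p2)` lands in the next class up, which is the usual space/speed trade-off of power-of-two pooling.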