<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/libglusterfs/src/mem-pool.c, branch v7.1</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>core: fix memory allocation issues</title>
<updated>2019-09-16T02:02:00+00:00</updated>
<author>
<name>Xavi Hernandez</name>
<email>xhernandez@redhat.com</email>
</author>
<published>2019-06-21T09:28:08+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=a2201d804d8e76ca82a9d086a6ee545cb26228a1'/>
<id>a2201d804d8e76ca82a9d086a6ee545cb26228a1</id>
<content type='text'>
Two problems have been identified that caused gluster's memory usage to
be twice as high as required.

1. An off-by-one error caused all objects allocated from the memory
   pools to be taken from a pool bigger than required. Since each pool
   corresponds to a size equal to a power of two, this wasted half of
   the available memory.

2. The header information used for accounting on each memory object was
   not taken into consideration when searching for a suitable memory
   pool. It was added later, when each individual block was allocated.
   This made that space "invisible" to memory accounting.

Credits: Thanks to Nithya Balachandran for identifying this problem and
         testing this patch.

&gt;Fixes: bz#1722802
Change-Id: I90e27ad795fe51ca11c13080f62207451f6c138c
&gt;Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
&gt;(cherry picked from commit 1716a907da1a835b658740f1325033d7ddd44952)

Fixes: bz#1748774
Change-Id: I90e27ad795fe51ca11c13080f62207451f6c138c
Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Two problems have been identified that caused gluster's memory usage to
be twice as high as required.

1. An off-by-one error caused all objects allocated from the memory
   pools to be taken from a pool bigger than required. Since each pool
   corresponds to a size equal to a power of two, this wasted half of
   the available memory.

2. The header information used for accounting on each memory object was
   not taken into consideration when searching for a suitable memory
   pool. It was added later, when each individual block was allocated.
   This made that space "invisible" to memory accounting.

Credits: Thanks to Nithya Balachandran for identifying this problem and
         testing this patch.

&gt;Fixes: bz#1722802
Change-Id: I90e27ad795fe51ca11c13080f62207451f6c138c
&gt;Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
&gt;(cherry picked from commit 1716a907da1a835b658740f1325033d7ddd44952)

Fixes: bz#1748774
Change-Id: I90e27ad795fe51ca11c13080f62207451f6c138c
Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>multiple files: another attempt to remove includes</title>
<updated>2019-06-14T16:50:32+00:00</updated>
<author>
<name>Yaniv Kaul</name>
<email>ykaul@redhat.com</email>
</author>
<published>2019-06-09T10:31:31+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=0a6fe8551ac9807a8b6ad62241ec8048cf9f9025'/>
<id>0a6fe8551ac9807a8b6ad62241ec8048cf9f9025</id>
<content type='text'>
There are many include statements that are not needed.
A previous, more ambitious attempt failed because of the *BSD platform
(see https://review.gluster.org/#/c/glusterfs/+/21929/ )

Now trying a more conservative reduction.
It does not solve all the circular dependencies that we have, but it
does reduce some of them. There is just too much to handle
reasonably (dht-common.h includes dht-lock.h, which includes
dht-common.h ...), but it does reduce the overall number of include
lines we need to look at in the future to understand and fix
the mess later on.

Change-Id: I550cd001bdefb8be0fe67632f783c0ef6bee3f9f
updates: bz#1193929
Signed-off-by: Yaniv Kaul &lt;ykaul@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
There are many include statements that are not needed.
A previous, more ambitious attempt failed because of the *BSD platform
(see https://review.gluster.org/#/c/glusterfs/+/21929/ )

Now trying a more conservative reduction.
It does not solve all the circular dependencies that we have, but it
does reduce some of them. There is just too much to handle
reasonably (dht-common.h includes dht-lock.h, which includes
dht-common.h ...), but it does reduce the overall number of include
lines we need to look at in the future to understand and fix
the mess later on.

Change-Id: I550cd001bdefb8be0fe67632f783c0ef6bee3f9f
updates: bz#1193929
Signed-off-by: Yaniv Kaul &lt;ykaul@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>libglusterfs: Fix compilation when --disable-mempool is used</title>
<updated>2019-05-07T05:11:40+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2019-05-06T10:27:16+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=291e5ec2a09a145ebb0f4c3ef0b78d616786532f'/>
<id>291e5ec2a09a145ebb0f4c3ef0b78d616786532f</id>
<content type='text'>
updates bz#1193929
Change-Id: I245c065b209bcce5db939b6a0a934ba6fd393b47
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
updates bz#1193929
Change-Id: I245c065b209bcce5db939b6a0a934ba6fd393b47
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>mem-pool.{c|h}: minor changes</title>
<updated>2019-05-06T13:59:55+00:00</updated>
<author>
<name>Yaniv Kaul</name>
<email>ykaul@redhat.com</email>
</author>
<published>2019-02-27T13:48:42+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=ecd1292eaea8873298357ab2d903110f713f7185'/>
<id>ecd1292eaea8873298357ab2d903110f713f7185</id>
<content type='text'>
1. Removed some code that was not needed. It did not really do anything.
2. CALLOC -&gt; MALLOC in one place.

Compile-tested only!
updates: bz#1193929
Signed-off-by: Yaniv Kaul &lt;ykaul@redhat.com&gt;

Change-Id: I4419161e1bb636158e32b5d33044b06f1eef2449
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
1. Removed some code that was not needed. It did not really do anything.
2. CALLOC -&gt; MALLOC in one place.

Compile-tested only!
updates: bz#1193929
Signed-off-by: Yaniv Kaul &lt;ykaul@redhat.com&gt;

Change-Id: I4419161e1bb636158e32b5d33044b06f1eef2449
</pre>
</div>
</content>
</entry>
<entry>
<title>core: avoid dynamic TLS allocation when possible</title>
<updated>2019-04-24T03:26:48+00:00</updated>
<author>
<name>Xavi Hernandez</name>
<email>xhernandez@redhat.com</email>
</author>
<published>2019-03-05T17:58:20+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=d8eadde7d498939c746ea8ddd9dc70a1029b4070'/>
<id>d8eadde7d498939c746ea8ddd9dc70a1029b4070</id>
<content type='text'>
Some interdependencies between the logging and memory management
functions make it impossible to use the logging framework before
initializing the memory subsystem, because they both depend on Thread
Local Storage allocated through pthread_key_create() during
initialization.

This causes a crash when we try to log something very early in the
initialization phase.

To prevent this, several dynamically allocated TLS structures have
been replaced by static TLS reserved at compile time using the
'__thread' keyword. This also reduces the number of error sources, making
initialization simpler.

Updates: bz#1193929
Change-Id: I8ea2e072411e30790d50084b6b7e909c7bb01d50
Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Some interdependencies between the logging and memory management
functions make it impossible to use the logging framework before
initializing the memory subsystem, because they both depend on Thread
Local Storage allocated through pthread_key_create() during
initialization.

This causes a crash when we try to log something very early in the
initialization phase.

To prevent this, several dynamically allocated TLS structures have
been replaced by static TLS reserved at compile time using the
'__thread' keyword. This also reduces the number of error sources, making
initialization simpler.

Updates: bz#1193929
Change-Id: I8ea2e072411e30790d50084b6b7e909c7bb01d50
Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>core: fix hang issue in __gf_free</title>
<updated>2019-04-22T15:56:49+00:00</updated>
<author>
<name>Susant Palai</name>
<email>spalai@redhat.com</email>
</author>
<published>2019-04-22T15:48:30+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=132f03a0b77980b25a8b499ad1fe1e4ab2aee40e'/>
<id>132f03a0b77980b25a8b499ad1fe1e4ab2aee40e</id>
<content type='text'>
Currently, GF_ASSERT is done under the mem_accounting lock in some places.
On a GF_ASSERT failure, gf_msg_callingfn is called, which calls gf_malloc
internally; gf_malloc takes the same mem_accounting lock, leading to a
deadlock.

This is a temporary fix to avoid any hang issue in master.
https://review.gluster.org/#/c/glusterfs/+/22589/ is being worked on
in the meantime so that GF_ASSERT can be used under the mem_accounting
lock.

Change-Id: I6d67f23979e7edd2695bdc6aab2997dae4a4060a
updates: bz#1700865
Signed-off-by: Susant Palai &lt;spalai@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Currently, GF_ASSERT is done under the mem_accounting lock in some places.
On a GF_ASSERT failure, gf_msg_callingfn is called, which calls gf_malloc
internally; gf_malloc takes the same mem_accounting lock, leading to a
deadlock.

This is a temporary fix to avoid any hang issue in master.
https://review.gluster.org/#/c/glusterfs/+/22589/ is being worked on
in the meantime so that GF_ASSERT can be used under the mem_accounting
lock.

Change-Id: I6d67f23979e7edd2695bdc6aab2997dae4a4060a
updates: bz#1700865
Signed-off-by: Susant Palai &lt;spalai@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>core: handle memory accounting correctly</title>
<updated>2019-04-22T03:54:17+00:00</updated>
<author>
<name>Xavi Hernandez</name>
<email>xhernandez@redhat.com</email>
</author>
<published>2019-04-12T11:40:59+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=b0fce72477d56eeca616ab089756eab4f4b4bf8e'/>
<id>b0fce72477d56eeca616ab089756eab4f4b4bf8e</id>
<content type='text'>
When a translator stops, memory accounting for that translator is not
destroyed (because there could still be allocated memory that references
it), but the mutexes that coordinate updates of memory accounting were
destroyed. This caused incorrect memory accounting and even crashes in
debug mode.

This patch also fixes some other things:

* Reduce the number of atomic operations needed to manage memory
  accounting.
* Correctly account memory when realloc() is used.
* Merge two critical sections into one.
* Clean up the code a bit.

Change-Id: Id5eaee7338729b9bc52c931815ca3ff1e5a7dcc8
Updates: bz#1659334
Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
When a translator stops, memory accounting for that translator is not
destroyed (because there could still be allocated memory that references
it), but the mutexes that coordinate updates of memory accounting were
destroyed. This caused incorrect memory accounting and even crashes in
debug mode.

This patch also fixes some other things:

* Reduce the number of atomic operations needed to manage memory
  accounting.
* Correctly account memory when realloc() is used.
* Merge two critical sections into one.
* Clean up the code a bit.

Change-Id: Id5eaee7338729b9bc52c931815ca3ff1e5a7dcc8
Updates: bz#1659334
Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>mem-pool: remove dead code.</title>
<updated>2019-03-26T06:23:20+00:00</updated>
<author>
<name>Yaniv Kaul</name>
<email>ykaul@redhat.com</email>
</author>
<published>2019-03-21T17:51:30+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=c9408e1bbfc4e2a85080bc6410bd2c9cdf259534'/>
<id>c9408e1bbfc4e2a85080bc6410bd2c9cdf259534</id>
<content type='text'>
Change-Id: I3bbda719027b45e1289db2e6a718627141bcbdc8
updates: bz#1193929
Signed-off-by: Yaniv Kaul &lt;ykaul@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Change-Id: I3bbda719027b45e1289db2e6a718627141bcbdc8
updates: bz#1193929
Signed-off-by: Yaniv Kaul &lt;ykaul@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>Revert "iobuf: Get rid of pre allocated iobuf_pool and use per thread mem pool"</title>
<updated>2019-01-08T11:16:03+00:00</updated>
<author>
<name>Amar Tumballi</name>
<email>amarts@redhat.com</email>
</author>
<published>2019-01-04T07:04:50+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=37653efdc7681d1b0f255054ec2f9c9ddd4c8b14'/>
<id>37653efdc7681d1b0f255054ec2f9c9ddd4c8b14</id>
<content type='text'>
This reverts commit b87c397091bac6a4a6dec4e45a7671fad4a11770.

There seems to be some performance regression with the patch, and hence it is recommended to revert it.

Updates: #325
Change-Id: Id85d6203173a44fad6cf51d39b3e96f37afcec09
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
This reverts commit b87c397091bac6a4a6dec4e45a7671fad4a11770.

There seems to be some performance regression with the patch, and hence it is recommended to revert it.

Updates: #325
Change-Id: Id85d6203173a44fad6cf51d39b3e96f37afcec09
</pre>
</div>
</content>
</entry>
<entry>
<title>iobuf: Get rid of pre allocated iobuf_pool and use per thread mem pool</title>
<updated>2018-12-18T09:35:24+00:00</updated>
<author>
<name>Poornima G</name>
<email>pgurusid@redhat.com</email>
</author>
<published>2018-11-21T06:39:39+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=b87c397091bac6a4a6dec4e45a7671fad4a11770'/>
<id>b87c397091bac6a4a6dec4e45a7671fad4a11770</id>
<content type='text'>
The current implementation of iobuf_pool has two problems:
- preallocation of 12.5 MB of memory, which limits the scale factor of
  the gluster processes due to RAM requirements
- lock contention, as the current implementation has one global
  iobuf_pool lock. Credit for debugging and addressing this goes to
  Krutika Dhananjay &lt;kdhananj@redhat.com&gt;. Issue: #410

Hence, the iobuf implementation is changed to use a per-thread mem pool.
This may theoretically appear to cause a perf dip, as there is no preallocation.
But a per-thread mem pool will not have a significant perf impact, as the last
allocated memory is kept alive for subsequent allocs for some time.
The worst case would be if the iobufs requested are of random sizes each time.
The best case is if we get iobuf requests of the same size. From the perf
tests, this patch did not seem to cause any perf decrease.

Note that, with this patch, the rdma performance is going to degrade
drastically. In one of the previous patchsets we had fixes to avoid
degrading rdma perf, but rdma is not supported and also not tested [1].
Hence, the decision was not to have untested and unsupported code in
rdma.

[1] https://lists.gluster.org/pipermail/gluster-users.old/2018-July/034400.html

Updates: #325
Change-Id: Ic2ef3bd498f9250dea25f25ba0c01fde19584b27
Signed-off-by: Poornima G &lt;pgurusid@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The current implementation of iobuf_pool has two problems:
- preallocation of 12.5 MB of memory, which limits the scale factor of
  the gluster processes due to RAM requirements
- lock contention, as the current implementation has one global
  iobuf_pool lock. Credit for debugging and addressing this goes to
  Krutika Dhananjay &lt;kdhananj@redhat.com&gt;. Issue: #410

Hence, the iobuf implementation is changed to use a per-thread mem pool.
This may theoretically appear to cause a perf dip, as there is no preallocation.
But a per-thread mem pool will not have a significant perf impact, as the last
allocated memory is kept alive for subsequent allocs for some time.
The worst case would be if the iobufs requested are of random sizes each time.
The best case is if we get iobuf requests of the same size. From the perf
tests, this patch did not seem to cause any perf decrease.

Note that, with this patch, the rdma performance is going to degrade
drastically. In one of the previous patchsets we had fixes to avoid
degrading rdma perf, but rdma is not supported and also not tested [1].
Hence, the decision was not to have untested and unsupported code in
rdma.

[1] https://lists.gluster.org/pipermail/gluster-users.old/2018-July/034400.html

Updates: #325
Change-Id: Ic2ef3bd498f9250dea25f25ba0c01fde19584b27
Signed-off-by: Poornima G &lt;pgurusid@redhat.com&gt;
</pre>
</div>
</content>
</entry>
</feed>
