<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/doc/debugging/statedump.md, branch v7.1</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>Fixing formatting errors in markdown files</title>
<updated>2019-06-08T05:45:35+00:00</updated>
<author>
<name>kshithijiyer</name>
<email>kshithij.ki@gmail.com</email>
</author>
<published>2019-06-05T14:10:29+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=2eaf8e846afd71c30f2a6ff6f863a39b1145b8b6'/>
<id>2eaf8e846afd71c30f2a6ff6f863a39b1145b8b6</id>
<content type='text'>
There are a lot of formatting errors in the markdown
files present under the /doc directory of the project.
This patch fixes those formatting errors.

Fixes: bz#1718273
Change-Id: I08f938088bbaaafddf634f73616ea0dbfe7aedf3
Signed-off-by: kshithijiyer &lt;kshithij.ki@gmail.com&gt;
</content>
</entry>
<entry>
<title>Revert "iobuf: Get rid of pre allocated iobuf_pool and use per thread mem pool"</title>
<updated>2019-01-08T11:16:03+00:00</updated>
<author>
<name>Amar Tumballi</name>
<email>amarts@redhat.com</email>
</author>
<published>2019-01-04T07:04:50+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=37653efdc7681d1b0f255054ec2f9c9ddd4c8b14'/>
<id>37653efdc7681d1b0f255054ec2f9c9ddd4c8b14</id>
<content type='text'>
This reverts commit b87c397091bac6a4a6dec4e45a7671fad4a11770.

There seems to be a performance regression with the patch, so it was recommended to revert it.

Updates: #325
Change-Id: Id85d6203173a44fad6cf51d39b3e96f37afcec09
</content>
</entry>
<entry>
<title>iobuf: Get rid of pre allocated iobuf_pool and use per thread mem pool</title>
<updated>2018-12-18T09:35:24+00:00</updated>
<author>
<name>Poornima G</name>
<email>pgurusid@redhat.com</email>
</author>
<published>2018-11-21T06:39:39+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=b87c397091bac6a4a6dec4e45a7671fad4a11770'/>
<id>b87c397091bac6a4a6dec4e45a7671fad4a11770</id>
<content type='text'>
The current implementation of iobuf_pool has two problems:
- prealloc of 12.5MB of memory, which limits the scale factor of the
  gluster processes due to RAM requirements
- lock contention, as the current implementation has one global
  iobuf_pool lock. Credit for debugging and addressing this goes to
  Krutika Dhananjay &lt;kdhananj@redhat.com&gt;. Issue: #410

Hence the iobuf implementation is changed to use a per-thread mem pool.
This may theoretically appear to cause a perf dip, as there is no
preallocation. But the per-thread mem pool will not have a significant
perf impact, because the last allocated memory is kept alive for
subsequent allocations for some time. The worst case is when the iobufs
requested are of random sizes each time; the best case is when every
iobuf request is of the same size. In the perf tests, this patch did
not seem to cause any perf decrease.
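
As a rough illustration of the idea (hypothetical names, not the
actual GlusterFS mem-pool code): each thread caches its most recently
freed buffer, so a repeat request of the same size is served without
malloc() and without taking any global lock.

    /* minimal per-thread buffer cache sketch (gcc/clang __thread) */
    #include &lt;stdlib.h&gt;

    static __thread void  *cached_buf  = NULL; /* one cached buffer per thread */
    static __thread size_t cached_size = 0;

    void *iobuf_get_sketch(size_t size)
    {
        if (cached_buf &amp;&amp; cached_size == size) { /* best case: size repeats */
            void *buf  = cached_buf;
            cached_buf = NULL;
            return buf;
        }
        return malloc(size); /* worst case: random sizes miss the cache */
    }

    void iobuf_put_sketch(void *buf, size_t size)
    {
        free(cached_buf); /* keep only the most recently freed buffer alive */
        cached_buf  = buf;
        cached_size = size;
    }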

Note that, with this patch, the rdma performance is going to degrade
drastically. One of the previous patchsets had fixes to avoid
degrading rdma perf, but rdma is neither supported nor tested [1].
Hence the decision was not to carry code in rdma that is untested
and unsupported.

[1] https://lists.gluster.org/pipermail/gluster-users.old/2018-July/034400.html

Updates: #325
Change-Id: Ic2ef3bd498f9250dea25f25ba0c01fde19584b27
Signed-off-by: Poornima G &lt;pgurusid@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: add a cli command to trigger a statedump on a client</title>
<updated>2017-01-24T00:13:15+00:00</updated>
<author>
<name>Poornima G</name>
<email>pgurusid@redhat.com</email>
</author>
<published>2016-01-22T16:44:21+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=f2e7b6800b812e8bbc9bdbcea4c400a1784e31dc'/>
<id>f2e7b6800b812e8bbc9bdbcea4c400a1784e31dc</id>
<content type='text'>
With this, we will be able to trigger statedumps on remote Gluster
clients, mainly targeted at applications using libgfapi.

Design:
The SIGUSR1 signal is the most common way of taking a statedump in
Gluster. But it cannot be used for libgfapi-based processes, as the
process loading the library might have already claimed the signal
for itself. Hence the command-based approach.

One has to issue a Gluster command to initiate a statedump on the
libgfapi-based client. The command takes hostname and PID as
arguments. All the glusterds in the cluster check whether they are
connected to the specified hostname, and send an RPC request to all
the connected clients from that hostname (via the mgmt connection).
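
For illustration, the two ways of triggering a dump might look as
follows (the exact CLI syntax here is an assumption based on the
description above, not taken verbatim from this patch):

    # signal way: works only if the process itself handles SIGUSR1
    kill -USR1 &lt;pid-of-gluster-process&gt;

    # command way: glusterd reaches the libgfapi client over the mgmt
    # connection and triggers the dump remotely
    gluster volume statedump &lt;VOLNAME&gt; client &lt;hostname&gt;:&lt;pid&gt;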

URL: http://review.gluster.org/16357
Change-Id: Icbe4d2f026b32a2c7d5535e1bfb2cdaaff042e91
BUG: 1169302
Signed-off-by: Poornima G &lt;pgurusid@redhat.com&gt;
[ndevos: minor fixes and split patch in smaller pieces]
Reviewed-on: https://review.gluster.org/9228
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Tested-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Samikshan Bairagya &lt;samikshan@gmail.com&gt;
</content>
</entry>
<entry>
<title>inode: Add per xlator ref count for inode</title>
<updated>2017-01-18T09:29:45+00:00</updated>
<author>
<name>Poornima G</name>
<email>pgurusid@redhat.com</email>
</author>
<published>2016-03-15T07:14:16+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=c9239db7961afd648f1fa3310e5ce9b8281c8ad2'/>
<id>c9239db7961afd648f1fa3310e5ce9b8281c8ad2</id>
<content type='text'>
Debugging inode ref leaks is very difficult, as there is no info
except for the ref count on the inode. Hence this patch is a first
step towards debugging inode ref leaks. With this patch, the
statedump carries additional info showing the ref count taken by
each xlator on the inode.
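
As a rough sketch of the idea (hypothetical types and names, not the
actual GlusterFS inode code): alongside the total ref count, the
inode keeps a per-xlator breakdown that the statedump can print,
showing which xlator holds the leaked references.

    #include &lt;string.h&gt;

    #define MAX_XLATORS 16

    struct xl_ref {
        const char *xl_name; /* xlator that took the refs */
        int         nref;    /* refs it currently holds */
    };

    struct inode_sketch {
        int           ref;                /* total ref count */
        struct xl_ref by_xl[MAX_XLATORS]; /* per-xlator breakdown */
    };

    void inode_ref_sketch(struct inode_sketch *ino, const char *xl_name)
    {
        ino-&gt;ref++;
        for (int i = 0; i &lt; MAX_XLATORS; i++) {
            if (ino-&gt;by_xl[i].xl_name == NULL) { /* first ref by this xlator */
                ino-&gt;by_xl[i].xl_name = xl_name;
                ino-&gt;by_xl[i].nref    = 1;
                return;
            }
            if (strcmp(ino-&gt;by_xl[i].xl_name, xl_name) == 0) {
                ino-&gt;by_xl[i].nref++;
                return;
            }
        } /* sketch: overflow beyond MAX_XLATORS slots is not handled */
    }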

Change-Id: I7802f7e7b13c04eb4d41fdf52d5469fd2c2a185a
BUG: 1325531
Signed-off-by: Poornima G &lt;pgurusid@redhat.com&gt;
Reviewed-on: http://review.gluster.org/13736
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>statedump: Print curr_stdalloc in mempool statedump ...</title>
<updated>2014-08-30T02:31:41+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2014-08-28T04:28:43+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=f52efa681b1a16c287ed00e2a79cc7f05e65fed1'/>
<id>f52efa681b1a16c287ed00e2a79cc7f05e65fed1</id>
<content type='text'>
...for it is curr_stdalloc that is incremented on every mem_get()
and decremented on every call to mem_put(), and so it can be used
to detect leaks when cold_count is 0.
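
As a minimal illustration (hypothetical names, not the actual
mem-pool code): a counter bumped on every allocation and dropped on
every free stays above zero exactly while objects are outstanding,
so a nonzero value in a statedump taken after all users should have
released their objects points at a leak.

    #include &lt;stdatomic.h&gt;
    #include &lt;stdlib.h&gt;

    static atomic_long curr_stdalloc; /* allocations not yet returned */

    void *mem_get_sketch(size_t size)
    {
        atomic_fetch_add(&amp;curr_stdalloc, 1);
        return malloc(size);
    }

    void mem_put_sketch(void *ptr)
    {
        atomic_fetch_sub(&amp;curr_stdalloc, 1);
        free(ptr);
    }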

Change-Id: I418a132c3ea4c0b99ea5c6840ff3024d8d19ddf4
BUG: 1134221
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/8557
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Humble Devassy Chirammal &lt;humble.devassy@gmail.com&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Tested-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
</content>
</entry>
<entry>
<title>doc: How to generate and read statedump</title>
<updated>2014-07-27T20:35:51+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2014-07-09T12:01:52+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=7b7f8533331f9478724e226a9c3a4a34dfe11228'/>
<id>7b7f8533331f9478724e226a9c3a4a34dfe11228</id>
<content type='text'>
Thanks to Poornima G for her help with the
iobuf section explanation.

Change-Id: I17737fdbd1f402914f7e67fb4047f5c26ea5c36c
BUG: 1118309
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
Reviewed-on: http://review.gluster.org/8288
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Reviewed-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Reviewed-by: Humble Devassy Chirammal &lt;humble.devassy@gmail.com&gt;
Reviewed-by: Prashanth Pai &lt;ppai@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
</feed>
