<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/glusterfsd/src/glusterfsd-mgmt.c, branch v4.1.5</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>glusterfsd: Do not process GLUSTERD_BRICK_XLATOR_OP if graph is not ready</title>
<updated>2018-07-02T10:12:14+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2018-06-29T06:00:37+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=8f6e437073ee20d8f1ec8efc57b9aac86d09269d'/>
<id>8f6e437073ee20d8f1ec8efc57b9aac86d09269d</id>
<content type='text'>
Problem:
If glustershd gets restarted by glusterd due to a node reboot, a volume
start force, or anything else that changes the shd graph (add/remove
brick), and an index heal is launched via the CLI, there is a chance
that shd receives this IPC before the graph is fully active. When it
then accesses glusterfsd_ctx-&gt;active, it crashes.

Fix:
Since glusterd does not wait for the daemons it spawns to be fully
initialized and can send the request as soon as rpc initialization has
succeeded, we handle this on the shd side: if glusterfs_graph_activate()
has not yet completed in shd when glusterd sends GD_OP_HEAL_VOLUME,
we fail the request.
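
A minimal sketch of the guard this describes (names follow the text
above; the exact error handling in the patch may differ):

    /* In glusterfs_handle_translator_op(): reject brick-ops until
     * glusterfs_graph_activate() has published an active graph. */
    glusterfs_ctx_t   *ctx    = glusterfsd_ctx;
    glusterfs_graph_t *active = ctx ? ctx-&gt;active : NULL;

    if (!active) {
            /* Graph not ready: fail GD_OP_HEAL_VOLUME instead of
             * dereferencing a NULL active graph and crashing. */
            ret = -1;
            goto out;
    }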

Change-Id: If6cc07bc5455c4ba03458a36c28b63664496b17d
fixes: bz#1597229
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
(cherry picked from commit 2f9e555f79ce69668950693ed2e09cfafd2b7ec1)
</content>
</entry>
<entry>
<title>Revert "glusterfsd: Memleak in glusterfsd process while  brick mux is on"</title>
<updated>2018-05-25T02:05:37+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawa@redhat.com</email>
</author>
<published>2018-05-23T03:40:11+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=b679fd4b73d9ec039029088769722887b61d750a'/>
<id>b679fd4b73d9ec039029088769722887b61d750a</id>
<content type='text'>
Updates: bz#1582286
This reverts commit 7c3cc485054e4ede1efb358552135b432fb7047a.
Change-Id: I831d646112bcfa13d0c2153482ad00ff1b23aa6c
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;
</content>
</entry>
<entry>
<title>Revert "gluster: Sometimes Brick process is crashed at the time of stopping brick"</title>
<updated>2018-05-25T02:05:37+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawa@redhat.com</email>
</author>
<published>2018-05-23T03:36:04+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=7b95d5a4b3988757bf8c91f82dcaf86ed3da6875'/>
<id>7b95d5a4b3988757bf8c91f82dcaf86ed3da6875</id>
<content type='text'>
Updates: bz#1582286
This reverts commit 0043c63f70776444f69667a4ef9596217ecb42b7.
Change-Id: Iab3b4f4a54e122c589e515add93c6effc966b3e0
</content>
</entry>
<entry>
<title>Revert "server: fix unresolved symbols by moving them to libglusterfs"</title>
<updated>2018-05-25T02:05:37+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawa@redhat.com</email>
</author>
<published>2018-05-23T03:34:41+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=57dd3692d1a10d446db7fe919497335984e2cd3f'/>
<id>57dd3692d1a10d446db7fe919497335984e2cd3f</id>
<content type='text'>
Updates: bz#1582286
This reverts commit 408a6d07ababde234ddeafe16687aacd2b810b42.
Change-Id: If8247d7980d698141f47130a3c532b942408ec2b
</content>
</entry>
<entry>
<title>glusterfsd: initiate pmap_signout for all detach brick requests</title>
<updated>2018-05-04T04:49:52+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2018-04-23T10:33:42+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=e2ee5cb2ad9f207d43bcacc4bb5dc460aac5df00'/>
<id>e2ee5cb2ad9f207d43bcacc4bb5dc460aac5df00</id>
<content type='text'>
In glusterfs_handle_terminate(), all bricks being detached need to
initiate a pmap_signout.
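
A hedged sketch of the signout call (glusterfs_mgmt_pmap_signout() is
glusterfsd's existing signout helper; the surrounding detach logic and
the victim variable are simplified/illustrative):

    /* While detaching a brick in glusterfs_handle_terminate(), sign
     * the brick out of the portmapper for every detached brick, not
     * just when the last brick leaves the process. */
    glusterfs_mgmt_pmap_signout (glusterfsd_ctx, victim-&gt;name);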

Change-Id: Iacbd6fcd49215fe6a5210df7dfed1260fde9179a
Fixes: bz#1570011
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>server: fix unresolved symbols by moving them to libglusterfs</title>
<updated>2018-04-20T07:55:29+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawa@redhat.com</email>
</author>
<published>2018-04-20T06:46:32+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=408a6d07ababde234ddeafe16687aacd2b810b42'/>
<id>408a6d07ababde234ddeafe16687aacd2b810b42</id>
<content type='text'>
Problem: The glusterd2 build fails due to undefined symbols
         (xlator_mem_cleanup, glusterfsd_ctx) in server.so.

Solution: Two changes resolve this (change 2 is sketched below):
          1) Move the xlator_mem_cleanup code from glusterfsd-mgmt.c
             to xlator.c so that it becomes part of libglusterfs.so.
          2) Replace glusterfsd_ctx with this-&gt;ctx, because the symbol
             glusterfsd_ctx is not part of server.so.
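
A hedged before/after sketch of change 2 (context lines illustrative):

    /* before: server.so references a global owned by the glusterfsd
     * binary, so the symbol is undefined when server.so is loaded by
     * anything other than glusterfsd */
    glusterfs_ctx_t *ctx = glusterfsd_ctx;

    /* after: every xlator already carries a ctx pointer via 'this' */
    glusterfs_ctx_t *ctx = this-&gt;ctx;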

BUG: 1544090
Change-Id: Ie5e6fba9ed458931d08eb0948d450aa962424ae5
fixes: bz#1544090
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;
</content>
</entry>
<entry>
<title>gluster: Sometimes Brick process is crashed at the time of stopping brick</title>
<updated>2018-04-19T04:31:51+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawa@redhat.com</email>
</author>
<published>2018-03-12T14:13:15+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=0043c63f70776444f69667a4ef9596217ecb42b7'/>
<id>0043c63f70776444f69667a4ef9596217ecb42b7</id>
<content type='text'>
Problem: Sometimes the brick process crashes while a brick is being
         stopped with brick mux enabled.

Solution: The brick process was crashing because rpc connections were
          not cleaned up properly while brick mux is enabled. With this
          patch, after sending the GF_EVENT_CLEANUP notification to the
          xlator (server), we wait until every rpc client connection of
          that specific xlator is destroyed. Once the rpc connections
          of all clients associated with that brick are destroyed in
          server_rpc_notify, xlator_mem_cleanup is called for the brick
          xlator as well as all its child xlators. To avoid races during
          cleanup, two new flags, cleanup_starting and call_cleanup, are
          introduced in each xlator.
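
A hedged sketch of the intended ordering (the two flag names come from
the description above; all_rpc_clients_destroyed() is a hypothetical
stand-in for the connection accounting done in server_rpc_notify):

    /* stop path: mark cleanup as started so racing code paths back off */
    this-&gt;cleanup_starting = 1;
    xlator_notify (this, GF_EVENT_CLEANUP, NULL);

    /* server_rpc_notify(): on each client disconnect, free memory only
     * after the brick's last rpc connection is gone */
    if (all_rpc_clients_destroyed (this)) {
            this-&gt;call_cleanup = 1;
            xlator_mem_cleanup (this);   /* brick xlator + children */
    }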

BUG: 1544090
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;

Note: All test cases were run in a separate build
      (https://review.gluster.org/#/c/19700/) with the same patch and
      with brick mux forcibly enabled; all test cases passed.

Change-Id: Ic4ab9c128df282d146cf1135640281fcb31997bf
updates: bz#1544090
</content>
</entry>
<entry>
<title>glusterd: volume inode/fd status broken with brick mux</title>
<updated>2018-04-19T02:54:50+00:00</updated>
<author>
<name>hari gowtham</name>
<email>hgowtham@redhat.com</email>
</author>
<published>2018-04-11T12:08:26+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=be26b0da2f1a7fe336400de6a1c016716983bd38'/>
<id>be26b0da2f1a7fe336400de6a1c016716983bd38</id>
<content type='text'>
Problem:
The inode/fd status values were populated from the ctx received
from the server xlator.
Without brick mux, every process contained a single brick from a
single volume, so finding the server xlator and populating the
values from it worked.

With brick mux, a number of bricks can be confined to a single
process, and these bricks can even belong to different volumes (if
the max-bricks-per-process option is used).
If they are from different volumes, using the server xlator to
populate the values causes problems.

Fix:
Use the brick itself to validate and populate the inode/fd status.
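
A hedged sketch of the changed lookup (helper use is illustrative, not
the exact code in the patch):

    /* With brick mux one process hosts many bricks, possibly from
     * several volumes, so resolve the brick xlator by the requested
     * brick's own name instead of via the lone server xlator. */
    xlator_t *brick_xl = xlator_search_by_name (active-&gt;first,
                                                brick_name);
    if (!brick_xl)
            return -1;   /* this process does not host that brick */

    /* then populate the inode/fd status from brick_xl's ctx */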

Signed-off-by: hari gowtham &lt;hgowtham@redhat.com&gt;

Change-Id: I2543fa5397ea095f8338b518460037bba3dfdbfd
fixes: bz#1566067
</content>
</entry>
<entry>
<title>cluster/afr: Make sure latency-arg is passed to afr</title>
<updated>2018-04-18T13:55:49+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2018-04-12T18:46:22+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=a7525c507eb682d317dafaa7d06ba59b2c50048d'/>
<id>a7525c507eb682d317dafaa7d06ba59b2c50048d</id>
<content type='text'>
xlator_notify() doesn't pass along the extra arguments that come into
the calling function, so the XLATOR_NOTIFY macro should be used instead
to forward those extra arguments to the notify function.
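
A hedged sketch of the difference (signatures abbreviated; the macro's
exact argument order in libglusterfs may differ):

    /* xlator_notify() forwards only (xl, event, data), so the latency
     * argument sent along with events such as GF_EVENT_CHILD_PING
     * never reaches afr. */
    ret = xlator_notify (child, event, data);           /* latency lost */

    /* the macro expands in place and forwards every extra argument */
    XLATOR_NOTIFY (ret, child, event, data, latency);   /* latency kept */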

BUG: 1567881
fixes: bz#1567881
Change-Id: Ic15b6c446638cbacf3149693147a754219037c47
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
</content>
</entry>
<entry>
<title>Revert "glusterd: handling brick termination in brick-mux"</title>
<updated>2018-03-29T14:58:27+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2018-03-29T10:48:32+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=3f9851db49ca6ac7a969817964a6ad216b10fd6f'/>
<id>3f9851db49ca6ac7a969817964a6ad216b10fd6f</id>
<content type='text'>
This reverts commit a60fc2ddc03134fb23c5ed5c0bcb195e1649416b.

That commit was causing multiple tests to time out when brick
multiplexing is enabled. On further debugging, it was found that even
though the volume stop transaction is converted into mgmt_v3 to allow
the remote nodes to follow the synctask framework when processing the
command, there are other callers of glusterd_brick_stop() which are
not synctask based.
Change-Id: I7aee687abc6bfeaa70c7447031f55ed4ccd64693
updates: bz#1545048
</content>
</entry>
</feed>
