<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/protocol/server/src/server.c, branch v4.1.0alpha</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>server/auth: add option for strict authentication</title>
<updated>2018-04-20T20:44:06+00:00</updated>
<author>
<name>Mohammed Rafi KC</name>
<email>rkavunga@redhat.com</email>
</author>
<published>2018-04-02T06:50:47+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=bca55ab1bfcd2889f8387ba8bcab27766e1b94ac'/>
<id>bca55ab1bfcd2889f8387ba8bcab27766e1b94ac</id>
<content type='text'>
When this option is enabled, we will check for a matching
username and password; if no match is found, the connection
will be rejected. This also does a checksum validation of the volfile.

The option does not apply when SSL/TLS is in use, at which point
the SSL/TLS certificate user name is used to validate and
hence authorize the right user. This expects the TLS allow rules
to be set up correctly rather than the default *.

This option is not settable; as a result, it cannot be enabled
for volumes using the CLI. It is used with the shared storage
volume to restrict access to it, in non-SSL/TLS environments,
to the gluster peers only.
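
For illustration only, a minimal C sketch of the strict check described
above (the struct and function names are hypothetical, not the server
xlator's actual code):

  #include &lt;string.h&gt;

  /* Hypothetical credential entry; not a glusterfs structure. */
  struct auth_entry {
          const char *user;
          const char *pass;
  };

  /* Return 0 (allow) only if an exact username/password match
   * exists; otherwise return -1 (reject the connection). */
  static int
  strict_auth_check (const struct auth_entry *allowed, int n,
                     const char *user, const char *pass)
  {
          int i;

          for (i = 0; i &lt; n; i++) {
                  if (strcmp (allowed[i].user, user) == 0 &amp;&amp;
                      strcmp (allowed[i].pass, pass) == 0)
                          return 0;
          }
          return -1;
  }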

Tested:
  ./tests/bugs/protocol/bug-1321578.t
  ./tests/features/ssl-authz.t
  - Ran tests on volumes with and without strict auth
    checking (as the brick vol file needed to be edited to test,
    or rather to enable the option)
  - Ran tests on volumes to ensure existing mounts are
    disconnected when we enable strict checking

Change-Id: I2ac4f0cfa5b59cc789cc5a265358389b04556b59
fixes: bz#1568844
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
Signed-off-by: ShyamsundarR &lt;srangana@redhat.com&gt;
</content>
</entry>
<entry>
<title>server: fix unresolved symbols by moving them to libglusterfs</title>
<updated>2018-04-20T07:55:29+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawa@redhat.com</email>
</author>
<published>2018-04-20T06:46:32+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=408a6d07ababde234ddeafe16687aacd2b810b42'/>
<id>408a6d07ababde234ddeafe16687aacd2b810b42</id>
<content type='text'>
Problem: the glusterd2 build fails due to undefined symbols
         (xlator_mem_cleanup, glusterfsd_ctx) in server.so

Solution: Resolve this with the following two changes:
          1) Move the xlator_mem_cleanup code from glusterfsd-mgmt.c
             to xlator.c so that it is part of libglusterfs.so
          2) Replace glusterfsd_ctx with this-&gt;ctx, because the symbol
             glusterfsd_ctx is not part of server.so
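
Schematically, change 2 looks like the following (illustrative
lines, not the actual diff):

  /* Before: server.so cannot resolve this global, because
   * glusterfsd_ctx is defined in the glusterfsd binary. */
  glusterfs_ctx_t *ctx = glusterfsd_ctx;

  /* After: every xlator_t carries a pointer to its context,
   * so the same ctx is reachable without the global symbol. */
  glusterfs_ctx_t *ctx = this-&gt;ctx;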

BUG: 1544090
Change-Id: Ie5e6fba9ed458931d08eb0948d450aa962424ae5
fixes: bz#1544090
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;
</content>
</entry>
<entry>
<title>gluster: Sometimes the brick process crashes at the time of stopping a brick</title>
<updated>2018-04-19T04:31:51+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawa@redhat.com</email>
</author>
<published>2018-03-12T14:13:15+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=0043c63f70776444f69667a4ef9596217ecb42b7'/>
<id>0043c63f70776444f69667a4ef9596217ecb42b7</id>
<content type='text'>
Problem: Sometimes the brick process crashes at the time
         of stopping a brick while brick mux is enabled.

Solution: The brick process was crashing because rpc connections
          were not cleaned up properly while brick mux is enabled. With
          this patch, after sending the GF_EVENT_CLEANUP notification to
          the xlator (server), we wait for all rpc client connections of
          that specific xlator to be destroyed. Once the rpc connections
          of all clients associated with that brick are destroyed in
          server_rpc_notify, xlator_mem_cleanup is called for the brick
          xlator as well as all child xlators. To avoid races at cleanup
          time, two new flags are introduced in each xlator:
          cleanup_starting and call_cleanup.
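
A minimal sketch of the flag-based race avoidance (the two field
names are taken from the message; the surrounding logic is
illustrative, not the actual patch):

  /* Illustrative only: ensure cleanup starts exactly once and
   * mark when it becomes safe to free the xlator's memory. */
  if (!xl-&gt;cleanup_starting) {
          xl-&gt;cleanup_starting = 1;  /* cleanup is in progress */
          /* ... wait for the xlator's rpc client connections
           * to be destroyed in server_rpc_notify ... */
          xl-&gt;call_cleanup = 1;      /* memory may now be freed */
          xlator_mem_cleanup (xl);
  }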

BUG: 1544090
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;

Note: Ran all test cases in a separate build (https://review.gluster.org/#/c/19700/)
      with the same patch after forcefully enabling brick mux; all test
      cases passed.

Change-Id: Ic4ab9c128df282d146cf1135640281fcb31997bf
updates: bz#1544090
</content>
</entry>
<entry>
<title>glusterd: volume inode/fd status broken with brick mux</title>
<updated>2018-04-19T02:54:50+00:00</updated>
<author>
<name>hari gowtham</name>
<email>hgowtham@redhat.com</email>
</author>
<published>2018-04-11T12:08:26+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=be26b0da2f1a7fe336400de6a1c016716983bd38'/>
<id>be26b0da2f1a7fe336400de6a1c016716983bd38</id>
<content type='text'>
Problem:
The values for inode/fd were populated from the ctx received
from the server xlator.
Without brick mux, every brick process contained a single
brick from a single volume.
So searching for the server xlator and populating from it worked.

With brick mux, a number of bricks can be confined to a single
process. These bricks can be from different volumes too (if
we use the max-bricks-per-process option).
If they are from different volumes, using the server xlator
to populate the status causes problems.

Fix:
Use the brick to validate and populate the inode/fd status.
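
Schematically (a hypothetical lookup, not the actual glusterd code;
graph_first_xlator and brick_name are assumed inputs), the status is
keyed on the brick rather than on whichever server xlator happens to
top the multiplexed graph:

  /* Hypothetical sketch: one process may host bricks from many
   * volumes, so find the requested brick by name instead of
   * populating from the single server xlator's ctx. */
  xlator_t *xl = NULL;

  for (xl = graph_first_xlator; xl; xl = xl-&gt;next) {
          if (strcmp (xl-&gt;name, brick_name) == 0)
                  break;  /* populate inode/fd status from xl */
  }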

Signed-off-by: hari gowtham &lt;hgowtham@redhat.com&gt;

Change-Id: I2543fa5397ea095f8338b518460037bba3dfdbfd
fixes: bz#1566067
</content>
</entry>
<entry>
<title>rpcsvc: correct event-thread scaling</title>
<updated>2018-03-12T08:48:42+00:00</updated>
<author>
<name>Milind Changire</name>
<email>mchangir@redhat.com</email>
</author>
<published>2018-03-09T10:47:38+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=0c3d984287d91d3fe1ffeef297252d912c08a410'/>
<id>0c3d984287d91d3fe1ffeef297252d912c08a410</id>
<content type='text'>
Problem:
The auto thread count, derived from the number of attaches and
detaches, was reset to 1 when server_reconfigure() was called.

Solution:
Avoid auto-thread-count reset to 1.
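
A hedged sketch of the intent (variable names are illustrative;
event_reconfigure_threads is the libglusterfs entry point for
resizing the event-thread pool): on reconfigure, keep the current
auto-scaled value instead of re-initializing it.

  /* Illustrative only: preserve the auto-scaled thread count. */
  int threads = conf-&gt;event_threads;  /* current scaled value */

  if (threads &lt; 1)
          threads = 1;                /* a floor, not a reset */
  event_reconfigure_threads (ctx-&gt;event_pool, threads);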

Change-Id: Ic00e86adb81ba3c828e354a6ccb638209ae58b3e
BUG: 1547888
Signed-off-by: Milind Changire &lt;mchangir@redhat.com&gt;
</content>
</entry>
<entry>
<title>build: address linkage issues</title>
<updated>2018-03-05T14:25:17+00:00</updated>
<author>
<name>Kaleb S. KEITHLEY</name>
<email>kkeithle@redhat.com</email>
</author>
<published>2018-03-02T22:04:49+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=2bb17551a597b382d77bb5ebc2671b45565cd542'/>
<id>2bb17551a597b382d77bb5ebc2671b45565cd542</id>
<content type='text'>
We have the following undefined symbol errors from protocol/server.so:

  glusterfs_mgmt_pmap_signout
  glusterfs_autoscale_threads

See https://review.gluster.org/19225 (bz#1532238)
and https://review.gluster.org/19657 (bz#1550895)

(why are there two different bzs for the same bug?)

IMO this is a cleaner solution, i.e. moving the above two functions
to libgfrpc (.../rpc/rpc-lib/...).
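
As a generic illustration of the failure mode (a toy example, not
the glusterfs build): a shared object must get its symbols from
libraries it links against; a symbol that lives only in the loading
executable can fail to resolve under stricter runtime linking.

  /* toyplugin.c: build with `cc -shared -fPIC toyplugin.c` */
  extern void helper (void);  /* defined... where? */

  void
  plugin_entry (void)
  {
          helper ();  /* if helper() exists only in the loading
                       * executable, dlopen() of this object can
                       * fail with "undefined symbol: helper";
                       * moving helper() into a shared library
                       * that both link against fixes it. */
  }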

I would also, for (foolish) consistency's sake, like to see
glusterfs_mgmt_pmap_signin() moved from glusterfsd to libgfrpc as
well.

This works on f28/rawhide, with its new, more restrictive run-time
link semantics. The smoke and regression tests on earlier fedora and
centos will confirm that it works on those platforms too.

Change-Id: I9cfbd1cc15e7ebd9fc31b56ac791287fa2c584de
BUG: 1550895
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterfsd: Memleak in glusterfsd process while brick mux is on</title>
<updated>2018-02-27T07:11:15+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawa@redhat.com</email>
</author>
<published>2018-02-10T06:55:15+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=7c3cc485054e4ede1efb358552135b432fb7047a'/>
<id>7c3cc485054e4ede1efb358552135b432fb7047a</id>
<content type='text'>
Problem: At the time of stopping the volume while brick multiplex is
         enabled, memory is not cleaned up from all server-side xlators.

Solution: To clean up memory for all server-side xlators, call fini
          in glusterfs_handle_terminate after sending the GF_EVENT_CLEANUP
          notification to the top xlator.
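
Schematic of the teardown order described above (illustrative; top
is the assumed top xlator of the brick's graph):

  /* Illustrative sketch: notify first, then release memory. */
  xlator_t *xl = NULL;

  xlator_notify (top, GF_EVENT_CLEANUP, top);
  for (xl = top; xl; xl = xl-&gt;next)
          if (xl-&gt;fini)
                  xl-&gt;fini (xl);  /* per-xlator memory release */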

BUG: 1544090
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;

Note: Ran all test cases in a separate build (https://review.gluster.org/19574)
      with the same patch after forcefully enabling brick mux; all test
      cases passed.

Change-Id: Ia10dc7f2605aa50f2b90b3fe4eb380ba9299e2fc
</content>
</entry>
<entry>
<title>rpcsvc: scale rpcsvc_request_handler threads</title>
<updated>2018-02-26T09:44:38+00:00</updated>
<author>
<name>Milind Changire</name>
<email>mchangir@redhat.com</email>
</author>
<published>2018-02-26T09:44:18+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=7d641313f46789ec0a7ba0cc04f504724c780855'/>
<id>7d641313f46789ec0a7ba0cc04f504724c780855</id>
<content type='text'>
Scale rpcsvc_request_handler threads to match the scaling of event
handler threads.

Please refer to https://bugzilla.redhat.com/show_bug.cgi?id=1467614#c51
for a discussion about why we need multi-threaded rpcsvc request
handlers.
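
A minimal pthreads sketch of the scaling idea (illustrative; program
and event_threads are assumed inputs, and rpcsvc_request_handler is
the handler routine named in the title):

  #include &lt;pthread.h&gt;

  /* Illustrative only: one request-handler thread per event
   * thread, instead of a single shared handler thread. */
  pthread_t handlers[64];
  int i;

  for (i = 0; i &lt; event_threads &amp;&amp; i &lt; 64; i++)
          pthread_create (&amp;handlers[i], NULL,
                          rpcsvc_request_handler, program);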

Change-Id: Ib6838fb8b928e15602a3d36fd66b7ba08999430b
Signed-off-by: Milind Changire &lt;mchangir@redhat.com&gt;
</content>
</entry>
<entry>
<title>Revert "glusterfsd: Memleak in glusterfsd process while brick mux is on"</title>
<updated>2018-02-19T19:30:56+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawa@redhat.com</email>
</author>
<published>2018-02-18T02:44:35+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=e3e7cdbde5463ff6d20af52329a784ca629c6aef'/>
<id>e3e7cdbde5463ff6d20af52329a784ca629c6aef</id>
<content type='text'>
There still remain some code paths where cleanup is required while
brick mux is on. I will upload a new patch after resolving all code paths.

This reverts commit b313d97faa766443a7f8128b6e19f3d2f1b267dd.

BUG: 1544090
Change-Id: I26ef1d29061092bd9a409c8933d5488e968ed90e
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterfsd: Memleak in glusterfsd process while brick mux is on</title>
<updated>2018-02-15T17:03:20+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawa@redhat.com</email>
</author>
<published>2018-02-10T06:55:15+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=b313d97faa766443a7f8128b6e19f3d2f1b267dd'/>
<id>b313d97faa766443a7f8128b6e19f3d2f1b267dd</id>
<content type='text'>
Problem: At the time of stopping the volume while brick multiplex is
         enabled, memory is not cleaned up from all server-side xlators.

Solution: To clean up memory for all server-side xlators, call fini
          in glusterfs_handle_terminate after sending the GF_EVENT_CLEANUP
          notification to the top xlator.

BUG: 1544090
Change-Id: Ifa1525e25b697371276158705026b421b4f81140
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;
</content>
</entry>
</feed>
