<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/protocol, branch v4.1.0alpha</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>feature/leases: fix bugs found while testing glfs_test.t</title>
<updated>2018-05-04T17:13:13+00:00</updated>
<author>
<name>Jiffin Tony Thottan</name>
<email>jthottan@redhat.com</email>
</author>
<published>2018-04-27T09:52:50+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=15866ac9773e89cd9e017e7d3bf8aa01a87edfd8'/>
<id>15866ac9773e89cd9e017e7d3bf8aa01a87edfd8</id>
<content type='text'>
Change-Id: Iee8f431601ecda184108a079f665e05902b0f78b
updates: #350
Signed-off-by: Jiffin Tony Thottan &lt;jthottan@redhat.com&gt;
</content>
</entry>
<entry>
<title>protocol/server: unwind as per op-version</title>
<updated>2018-05-03T07:16:21+00:00</updated>
<author>
<name>Ashish Pandey</name>
<email>aspandey@redhat.com</email>
</author>
<published>2018-04-25T11:18:56+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=112048652ab00d9650053396c1b28a49d1853783'/>
<id>112048652ab00d9650053396c1b28a49d1853783</id>
<content type='text'>
Change-Id: Id6717640ac14881b490e512c4682e45ffffa7f5b
fixes: bz#1570538
BUG: 1570538
Signed-off-by: Ashish Pandey &lt;aspandey@redhat.com&gt;
</content>
</entry>
<entry>
<title>server/resolver: don't trust inode-table for RESOLVE_NOT</title>
<updated>2018-04-30T04:58:08+00:00</updated>
<author>
<name>Raghavendra G</name>
<email>rgowdapp@redhat.com</email>
</author>
<published>2018-03-17T07:46:59+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=7c6d15af7da4cea510fc2844115730656900e117'/>
<id>7c6d15af7da4cea510fc2844115730656900e117</id>
<content type='text'>
There have been known races between fops that add a dentry (like
lookup, create, mknod, etc.) and fops that remove a dentry (like
rename, unlink, rmdir, etc.), due to which stale dentries are left in
the inode table even though the dentry doesn't exist on the backend.
For example, consider a lookup (parent/bname) and an unlink
(parent/bname) racing in the following order:

* lookup hits storage/posix and finds that the dentry exists
* unlink removes the dentry on storage/posix
* unlink reaches protocol/server, where the dentry (parent/bname) is
  unlinked from the inode
* lookup reaches protocol/server and creates a dentry (parent/bname)
  on the inode

Now we have a stale dentry (parent/bname) associated with the inode in
the itable. This situation is bad for fops like link, create, etc.,
which invoke the resolver with type RESOLVE_NOT. These fops fail with
EEXIST even though there is no such dentry on the backend fs. This
issue can be solved in two ways:

* Enable the "dentry fop serializer" xlator [1]:
  # gluster volume set &lt;volname&gt; features.sdfs on

* Make the resolver do a lookup on the backend when it finds a dentry
  in the itable, validating the itable's state:
   - If the dentry is not found on the backend, unlink the stale
     dentry from the itable and continue with the fop
   - If the dentry is found, fail the fop with EEXIST

This patch implements the second solution (sketched below), as sdfs
is not enabled by default in the brick xlator stack. Once sdfs is
enabled by default, this patch can be reverted.

[1] https://github.com/gluster/glusterfs/issues/397
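
A minimal sketch of the validation flow, assuming hypothetical helper
names (the real resolver works on gluster's internal inode_t/loc_t
types; declaration-only stubs stand in for the itable and backend
internals here):

    #include &lt;errno.h&gt;
    #include &lt;stdbool.h&gt;

    struct dentry;                 /* opaque for this sketch */
    struct dentry *itable_find(const char *parent, const char *bname);
    bool backend_lookup_exists(const char *parent, const char *bname);
    void itable_unlink_dentry(struct dentry *d);

    /* Returns 0 to continue the fop, -EEXIST to fail it. */
    static int
    resolve_not_validate(const char *parent, const char *bname)
    {
            struct dentry *d = itable_find(parent, bname);

            if (!d)
                    return 0;              /* nothing cached; proceed */

            if (backend_lookup_exists(parent, bname))
                    return -EEXIST;        /* dentry genuinely exists */

            itable_unlink_dentry(d);       /* stale; drop and proceed */
            return 0;
    }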

Change-Id: Ia8bb0cf97f97cb0e72639bce8aadb0f6d3f4a34a
updates: bz#1543279
BUG: 1543279
Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
</content>
</entry>
<entry>
<title>server/auth: add option for strict authentication</title>
<updated>2018-04-20T20:44:06+00:00</updated>
<author>
<name>Mohammed Rafi KC</name>
<email>rkavunga@redhat.com</email>
</author>
<published>2018-04-02T06:50:47+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=bca55ab1bfcd2889f8387ba8bcab27766e1b94ac'/>
<id>bca55ab1bfcd2889f8387ba8bcab27766e1b94ac</id>
<content type='text'>
When this option is enabled, we check for a matching username and
password; if no match is found, the connection is rejected. This also
does a checksum validation of the volfile.

The option is invalid when SSL/TLS is in use, at which point the
SSL/TLS certificate user name is used to validate and hence authorize
the right user. This expects TLS allow rules to be set up correctly
rather than the default *.

This option is not settable, and as a result cannot be enabled for
volumes using the CLI. It is used with the shared storage volume to
restrict access to it, in non-SSL/TLS environments, to the gluster
peers only.
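
As the option is not CLI-settable, enabling it means hand-editing the
brick volfile. A hedged sketch of such an edit (the option key and
the brick/user names here are assumptions, not taken from the patch):

    volume myvol-server
        type protocol/server
        option transport-type tcp
        option strict-auth-accept on          # assumed option key
        option auth.login./data/brick1.allow user1
        option auth.login.user1.password secret1
        subvolumes /data/brick1
    end-volume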

Tested:
  ./tests/bugs/protocol/bug-1321578.t
  ./tests/features/ssl-authz.t
  - Ran tests on volumes with and without strict auth
    checking (the brick volfile had to be edited to enable
    the option for testing)
  - Ran tests on volumes to ensure existing mounts are
    disconnected when we enable strict checking

Change-Id: I2ac4f0cfa5b59cc789cc5a265358389b04556b59
fixes: bz#1568844
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
Signed-off-by: ShyamsundarR &lt;srangana@redhat.com&gt;
</content>
</entry>
<entry>
<title>server: fix unresolved symbols by moving them to libglusterfs</title>
<updated>2018-04-20T07:55:29+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawa@redhat.com</email>
</author>
<published>2018-04-20T06:46:32+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=408a6d07ababde234ddeafe16687aacd2b810b42'/>
<id>408a6d07ababde234ddeafe16687aacd2b810b42</id>
<content type='text'>
Problem: The glusterd2 build fails due to undefined symbols
         (xlator_mem_cleanup, glusterfsd_ctx) in server.so

Solution: Two changes resolve this (see the sketch below):
          1) Move the xlator_mem_cleanup code from glusterfsd-mgmt.c
             to xlator.c so that it becomes part of libglusterfs.so
          2) Replace glusterfsd_ctx with this-&gt;ctx, because the symbol
             glusterfsd_ctx is not part of server.so
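
A hedged sketch of change 2, with a simplified xlator type standing
in for the real one from libglusterfs:

    typedef struct glusterfs_ctx glusterfs_ctx_t;
    typedef struct xlator {
            glusterfs_ctx_t *ctx;  /* every xlator carries its context */
    } xlator_t;

    extern glusterfs_ctx_t *glusterfsd_ctx;  /* owned by the glusterfsd
                                                binary, not server.so */

    static glusterfs_ctx_t *
    get_ctx(xlator_t *this)
    {
            /* before: return glusterfsd_ctx;  (undefined in server.so) */
            return this-&gt;ctx;                /* after: resolves locally */
    }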

BUG: 1544090
Change-Id: Ie5e6fba9ed458931d08eb0948d450aa962424ae5
fixes: bz#1544090
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;
</content>
</entry>
<entry>
<title>gluster: brick process sometimes crashes while stopping a brick</title>
<updated>2018-04-19T04:31:51+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawa@redhat.com</email>
</author>
<published>2018-03-12T14:13:15+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=0043c63f70776444f69667a4ef9596217ecb42b7'/>
<id>0043c63f70776444f69667a4ef9596217ecb42b7</id>
<content type='text'>
Problem: The brick process sometimes crashes while stopping a brick
         with brick multiplexing enabled.

Solution: The brick process was crashing because RPC connections were
          not cleaned up properly with brick mux enabled. With this
          patch, after sending the GF_EVENT_CLEANUP notification to the
          (server) xlator, we wait for all RPC client connections of
          that xlator to be destroyed. Once the connections of all
          clients associated with the brick are destroyed in
          server_rpc_notify, xlator_mem_cleanup is called for the brick
          xlator as well as all its child xlators. To avoid races
          during cleanup, two new per-xlator flags are introduced:
          cleanup_starting and call_cleanup (see the sketch below).
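
A hedged sketch of how such flags can guard the race (simplified; the
real flags sit on gluster's xlator_t and follow its locking rules):

    #include &lt;pthread.h&gt;
    #include &lt;stdbool.h&gt;

    typedef struct xlator {
            pthread_mutex_t lock;
            bool cleanup_starting;  /* cleanup has been initiated    */
            bool call_cleanup;      /* mem cleanup may actually run  */
    } xlator_t;

    /* Returns true for exactly one caller; later callers skip cleanup. */
    static bool
    mark_cleanup_starting(xlator_t *this)
    {
            bool first = false;

            pthread_mutex_lock(&amp;this-&gt;lock);
            if (!this-&gt;cleanup_starting) {
                    this-&gt;cleanup_starting = true;
                    first = true;
            }
            pthread_mutex_unlock(&amp;this-&gt;lock);

            return first;
    }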

BUG: 1544090
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;

Note: All test cases were run in a separate build
      (https://review.gluster.org/#/c/19700/) with the same patch
      after forcefully enabling brick mux; all test cases passed.

Change-Id: Ic4ab9c128df282d146cf1135640281fcb31997bf
updates: bz#1544090
</content>
</entry>
<entry>
<title>glusterd: volume inode/fd status broken with brick mux</title>
<updated>2018-04-19T02:54:50+00:00</updated>
<author>
<name>hari gowtham</name>
<email>hgowtham@redhat.com</email>
</author>
<published>2018-04-11T12:08:26+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=be26b0da2f1a7fe336400de6a1c016716983bd38'/>
<id>be26b0da2f1a7fe336400de6a1c016716983bd38</id>
<content type='text'>
Problem:
The values for inode/fd status were populated from the ctx received
from the server xlator.
Without brick mux, every brick process contained a single brick from
a single volume, so searching for the server xlator and populating
from it worked.

With brick mux, a number of bricks can be confined to a single
process. These bricks can be from different volumes too (if we use
the max-bricks-per-process option). If they are from different
volumes, using the server xlator to populate causes problems.

Fix:
Use the brick to validate and populate the inode/fd status, as
sketched below.
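
A hedged sketch of the idea, with hypothetical types and names (the
real code walks gluster's graph of loaded xlators):

    #include &lt;stddef.h&gt;
    #include &lt;string.h&gt;

    typedef struct brick {
            const char *name;    /* e.g. "host:/data/brick1"     */
            struct brick *next;  /* other bricks in this process */
            /* ... inode table, fd table ... */
    } brick_t;

    /* Select the brick by name instead of assuming a single server
     * xlator serves a single volume. */
    static brick_t *
    find_brick(brick_t *head, const char *brick_name)
    {
            for (brick_t *b = head; b; b = b-&gt;next)
                    if (strcmp(b-&gt;name, brick_name) == 0)
                            return b;  /* populate status from b */
            return NULL;
    }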

Signed-off-by: hari gowtham &lt;hgowtham@redhat.com&gt;

Change-Id: I2543fa5397ea095f8338b518460037bba3dfdbfd
fixes: bz#1566067
</content>
</entry>
<entry>
<title>rpcsvc: enable ownthread feature for glusterfs4_0_fop_prog</title>
<updated>2018-03-22T02:49:34+00:00</updated>
<author>
<name>Milind Changire</name>
<email>mchangir@redhat.com</email>
</author>
<published>2018-03-21T16:27:50+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=286871f550b9356025f964ca8af85aabf083f01d'/>
<id>286871f550b9356025f964ca8af85aabf083f01d</id>
<content type='text'>
The Ownthread feature needs to be enabled for glusterfs4_0_fop_prog
(see the sketch below).
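
A hedged sketch of what such a change looks like; the struct layout,
program name, and numbers below are placeholders, not copied from the
source:

    struct rpcsvc_program {
            const char *progname;
            int prognum;
            int progver;
            int ownthread;  /* handle requests on a dedicated thread */
    };

    static struct rpcsvc_program glusterfs4_0_fop_prog = {
            .progname  = "GlusterFS 4.x v1",  /* placeholder */
            .prognum   = 0,                   /* real values in the source */
            .progver   = 0,
            .ownthread = 1,                   /* the fix: enable Ownthread */
    };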

Change-Id: Idce63eb094ae0fdfcddbd52d0dee25aa0e074926
BUG: 1559075
Signed-off-by: Milind Changire &lt;mchangir@redhat.com&gt;
</content>
</entry>
<entry>
<title>rpcsvc: correct event-thread scaling</title>
<updated>2018-03-12T08:48:42+00:00</updated>
<author>
<name>Milind Changire</name>
<email>mchangir@redhat.com</email>
</author>
<published>2018-03-09T10:47:38+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=0c3d984287d91d3fe1ffeef297252d912c08a410'/>
<id>0c3d984287d91d3fe1ffeef297252d912c08a410</id>
<content type='text'>
Problem:
The auto thread count derived from the number of attaches and
detaches was reset to 1 when server_reconfigure() was called.

Solution:
Avoid resetting the auto thread count to 1 (see the sketch below).
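
A hedged sketch of the intent, with hypothetical names (the real
logic lives in the rpcsvc/server reconfigure path):

    /* Scaled up/down as bricks attach to and detach from the process. */
    static int auto_thread_count = 1;

    static void
    server_reconfigure_threads(int configured)
    {
            /* before: auto_thread_count = 1;  (wiped out the scaling) */
            if (configured &gt; auto_thread_count)
                    auto_thread_count = configured;  /* never reset to 1 */
    }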

Change-Id: Ic00e86adb81ba3c828e354a6ccb638209ae58b3e
BUG: 1547888
Signed-off-by: Milind Changire &lt;mchangir@redhat.com&gt;
</content>
</entry>
<entry>
<title>protocol: Fix 4.0 client, parsing older iatt in dict</title>
<updated>2018-03-11T04:12:48+00:00</updated>
<author>
<name>ShyamsundarR</name>
<email>srangana@redhat.com</email>
</author>
<published>2018-03-11T04:08:04+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=ece3f0f669dd5497f49c6169634b59b307d6e18b'/>
<id>ece3f0f669dd5497f49c6169634b59b307d6e18b</id>
<content type='text'>
In a mixed-mode cluster involving 4.0 and older 3.x bricks, if the
clients are newer, the iatt encoded in the dictionary can be in the
older iatt format, which a newer client will map incorrectly onto the
newer structure.

This causes failures in FOPs that depend on this iatt for some
functionality (seen as mkdir operations failing with EIO when DHT
hits its internal setxattr call).

The fix is to convert the iatt in the dict based on which RPC version
is used to communicate with the server (see the sketch below).

IOW, this is the reverse of the change in commit "b966c7790e".

Tested using a mixed-mode cluster (i.e. bricks in 3.12 and 4.0
versions) and a mixed set of clients, 3.12 and 4.0.

No regression test is provided, as this needs a mixed-mode cluster to
test and validate.
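
A hedged sketch of the conversion, with simplified structs (the real
iatt and dict handling live in libglusterfs and protocol/client):

    #include &lt;stdint.h&gt;
    #include &lt;string.h&gt;

    struct old_iatt { uint64_t ia_ino; uint32_t ia_nlink; /* ... */ };
    struct new_iatt { uint64_t ia_ino; uint64_t ia_flags;
                      uint32_t ia_nlink; /* ... */ };

    /* Decode an iatt blob from the dict according to the RPC version
     * negotiated with the server. */
    static void
    iatt_from_wire(struct new_iatt *dst, const void *blob,
                   int server_is_old)
    {
            if (server_is_old) {
                    const struct old_iatt *o = blob;  /* old layout */
                    memset(dst, 0, sizeof(*dst));
                    dst-&gt;ia_ino   = o-&gt;ia_ino;
                    dst-&gt;ia_nlink = o-&gt;ia_nlink;
                    /* ... remaining fields ... */
            } else {
                    memcpy(dst, blob, sizeof(*dst));  /* new layout */
            }
    }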

Change-Id: I454e54651ca836b9f7c28f45f51d5956106aefa9
BUG: 1554053
Signed-off-by: ShyamsundarR &lt;srangana@redhat.com&gt;
</content>
</entry>
</feed>
