<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators, branch v5.0rc1</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>cluster/afr: Make data eager-lock decision based on number of locks</title>
<updated>2018-10-05T14:37:01+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2018-09-18T06:45:57+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=45b5b48bc1e21099ab04cf0b1d432c6cb015cc05'/>
<id>45b5b48bc1e21099ab04cf0b1d432c6cb015cc05</id>
<content type='text'>
For both Virt and block workloads the file is opened multiple times,
which dynamically turns eager-lock off for the workload. Changing the
logic to depend on the number of inodelks instead of the number of
open fds gives better performance than the earlier logic. When an
eager-lock is held and the inodelk count is more than 1, we know
there is a conflicting lock, so we use that information to decide
whether the current transaction should go through delayed post-op
or not.
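
A minimal C sketch of the new decision; delayed_post_op_allowed is a
hypothetical helper, not the actual afr function, and the count is
assumed to come from the inodelk-count returned via fxattrop:

#include &lt;stdbool.h&gt;

/* Our own eager-lock accounts for one inodelk; a count above one
 * means a conflicting lock is queued behind us. */
static bool
delayed_post_op_allowed(int inodelk_count)
{
        return inodelk_count &lt;= 1;
}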

The locks xlator has no way to report the number of locks in
fxattrop in releases older than 3.10, so to keep things backward
compatible with 3.12, data transactions use the new logic whereas
fxattrop transactions use the old logic. I am planning to send one
more patch which makes metadata-domain locks also depend on the
inodelk count.

Profile info for a dd of 500MB to a file, with another fd opened
on the file using 'exec 250&gt;filename':

Without this patch:
 %-latency   Avg-latency   Min-latency   Max-latency   Calls   Fop
  0.14      67.41 us      16.72 us    3870.82 us  892 FINODELK
  0.59     279.87 us      95.71 us    2085.89 us  898 FXATTROP
  3.46     366.43 us      81.75 us    6952.79 us 4000 WRITE
 95.79  148733.99 us   50568.12 us  919127.86 us  273 FSYNC

With this patch:
 %-latency   Avg-latency   Min-latency   Max-latency   Calls   Fop
  0.00      51.01 us      38.07 us      80.16 us    4 FINODELK
  0.00     235.43 us     235.43 us     235.43 us    1 TRUNCATE
  0.00     125.07 us      56.80 us     193.33 us    2 GETXATTR
  0.00     135.86 us      62.13 us     209.59 us    2 INODELK
  0.00     197.88 us     155.39 us     253.90 us    4 FXATTROP
  0.00     450.59 us     394.28 us     506.89 us    2 XATTROP
  0.00      56.96 us      19.06 us     406.59 us   23 FLUSH
 37.81  273648.93 us      48.43 us 6017657.05 us   44 LOOKUP
 62.18    4951.86 us      93.80 us 1143154.75 us 3999 WRITE

The postgresql benchmark performance changed from ~1130 TPS to
~2300 TPS, and a random-I/O fio job inside an oVirt-based VM went
from ~600 IOPS to ~2000 IOPS.

fixes bz#1635972
Change-Id: If7f7388d2f08cf7f17ca517a4ea222560661dc36
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/afr: Batch writes in same lock even when multiple fds are open</title>
<updated>2018-10-05T14:37:01+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2018-09-06T09:39:42+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=320901d1dc9e1d4b2985e7277ca22816e7936f2f'/>
<id>320901d1dc9e1d4b2985e7277ca22816e7936f2f</id>
<content type='text'>
Problem:
When eager-lock is disabled because multiple fds are open and
application writes land on conflicting regions, the number of locks
grows very fast, and all the CPU is spent just locking and unlocking
while traversing huge queues in the locks xlator to grant locks.

Fix:
Reduce the number of locks in transit by bundling the writes in the
same lock, and disable the delayed piggy-back when we learn that
multiple fds are open on the file. This reduces the size of the
queues in the locks xlator and also reduces the number of network
calls such as inodelk/fxattrop.
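
A minimal C sketch of the policy; use_delayed_unlock is a
hypothetical helper, not the actual afr function:

#include &lt;stdbool.h&gt;

/* Writes are still batched under the current lock; only the delayed
 * (piggy-backed) unlock is turned off once several fds are open. */
static bool
use_delayed_unlock(bool eager_lock_enabled, int open_fd_count)
{
        return eager_lock_enabled &amp;&amp; open_fd_count &lt;= 1;
}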

Please note that this problem can still happen if eager-lock is
disabled, as the writes will then not be bundled in the same lock.

fixes bz#1635975
Change-Id: I8fd1cf229aed54ce5abd4e6226351a039924dd91
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
</content>
</entry>
<entry>
<title>mgmt/glusterd: use proper path to the volfile</title>
<updated>2018-10-05T14:24:35+00:00</updated>
<author>
<name>Raghavendra Bhat</name>
<email>raghavendra@redhat.com</email>
</author>
<published>2018-10-01T21:30:19+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=5254a4b0c15ff75502806cc61a9eb9d21b55d411'/>
<id>5254a4b0c15ff75502806cc61a9eb9d21b55d411</id>
<content type='text'>
Until now, glusterd was generating the volfile path for the snapshot
volume's bricks like this:

/snaps/&lt;snap name&gt;/&lt;brick volfile&gt;

But in reality, the path to the brick volfile for a snapshot volume is

/snaps/&lt;snap name&gt;/&lt;snap volume name&gt;/&lt;brick volfile&gt;

The above path was a workaround used to distinguish between a mount
command used to mount the snapshot volume and a brick of the snapshot
volume, so that glusterd could return the proper volfile based on
what is actually happening (the client volfile for the former and the
brick volfile for the latter). But this was causing problems for
snapshot restore when brick multiplexing is enabled, because with
brick multiplexing the brick looks up the volfile and sends the
GETSPEC rpc call to glusterd using the second style of path, i.e.

/snaps/&lt;snap name&gt;/&lt;snap volume name&gt;/&lt;brick volfile&gt;

So, when the (multiplexed) snapshot brick sent a GETSPEC rpc request
to glusterd to obtain the brick volume file, glusterd was returning
the client volume file of the snapshot volume instead of the brick
volume file.
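
A minimal C sketch of building the corrected path; the variable
names and values are illustrative, not the exact glusterd
identifiers:

#include &lt;limits.h&gt;
#include &lt;stdio.h&gt;

int main(void)
{
        const char *snap_name = "snap1";          /* illustrative */
        const char *snap_volname = "snap1-vol";   /* illustrative */
        const char *brick_volfile = "brick1.vol"; /* illustrative */
        char path[PATH_MAX];

        /* the brick volfile lives under the snap volume's directory */
        snprintf(path, sizeof(path), "/snaps/%s/%s/%s",
                 snap_name, snap_volname, brick_volfile);
        printf("%s\n", path);
        return 0;
}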

Change-Id: I28b2dfa5d9b379fe943db92c2fdfea879a6a594e
fixes: bz#1636162
Signed-off-by: Raghavendra Bhat &lt;raghavendra@redhat.com&gt;
(cherry picked from commit 83a89296a3d12a3fc2a643c0630be5ce659204ea)
</content>
</entry>
<entry>
<title>mdcache: Fix asan reported potential heap buffer overflow</title>
<updated>2018-10-02T18:41:39+00:00</updated>
<author>
<name>ShyamsundarR</name>
<email>srangana@redhat.com</email>
</author>
<published>2018-09-28T17:47:33+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=9eeca96885560ee555826ca0eb4294d226928728'/>
<id>9eeca96885560ee555826ca0eb4294d226928728</id>
<content type='text'>
The char pointer mdc_xattr_str in the function mdc_xattr_list_populate
is malloc'd, and doing a strcat into a freshly malloc'd region can
overflow it, because where the concatenation starts depends on the
prior contents of the memory region.

Added a NULL termination to the malloc'd region to prevent the
overflow, treating the fresh allocation as an empty string.
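
A minimal C illustration of the pattern; alloc_and_append is
hypothetical, and the buffer is assumed large enough for key:

#include &lt;stdlib.h&gt;
#include &lt;string.h&gt;

static char *
alloc_and_append(size_t size, const char *key)
{
        char *buf = malloc(size);
        if (buf) {
                buf[0] = '\0';    /* malloc does not zero memory;
                                     without this, strcat hunts for a
                                     '\0' that may lie past the end of
                                     the allocation */
                strcat(buf, key); /* now appends from offset 0 */
        }
        return buf;
}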

Change-Id: If0decab669551581230a8ede4c44c319ff04bac9
Updates: bz#1635373
Signed-off-by: ShyamsundarR &lt;srangana@redhat.com&gt;
(cherry picked from commit d00a2a1b398346bbdc5ac9b3ba4b09fb1ce1e699)
</content>
</entry>
<entry>
<title>ctime: Provide noatime option</title>
<updated>2018-10-02T12:45:23+00:00</updated>
<author>
<name>Kotresh HR</name>
<email>khiremat@redhat.com</email>
</author>
<published>2018-09-03T13:07:58+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=315b45f85ecba15d7fc8f2342468b89ee4747c48'/>
<id>315b45f85ecba15d7fc8f2342468b89ee4747c48</id>
<content type='text'>
Most applications are {c|m}time dependent and very few are atime
dependent, so provide a noatime option to avoid updating atime
when the ctime feature is enabled.

This option also has to be enabled along with the ctime feature to
avoid unnecessary self-heal: since AFR/EC reads data from a single
subvolume, atime is otherwise updated on only one subvolume,
triggering self-heal.
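
A minimal C sketch of the idea; the flag and helper names are
illustrative, not the utime xlator's identifiers:

#include &lt;stdbool.h&gt;

#define SET_ATTR_ATIME 0x1 /* illustrative flag bit */

/* With noatime enabled, drop atime from the times to be updated. */
static int
filter_time_flags(int flags, bool noatime)
{
        return noatime ? (flags &amp; ~SET_ATTR_ATIME) : flags;
}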

Backport of:
&gt; Patch: https://review.gluster.org/21073
&gt; BUG: 1593538
&gt; Change-Id: I085fb33c882296545345f5df194cde7b6cbc337e
&gt; Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
(cherry picked from commit 89636be4c73b12de2e11c75d8e59527bb243f147)

updates: bz#1633015
Change-Id: I085fb33c882296545345f5df194cde7b6cbc337e
Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: acquire lock to update volinfo structure</title>
<updated>2018-09-27T15:36:12+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2018-09-11T08:49:42+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=a39d7273eb8efecf88613a7ba284a9a3cd970be9'/>
<id>a39d7273eb8efecf88613a7ba284a9a3cd970be9</id>
<content type='text'>
Problem: With commit cb0339f92, we are using a separate synctask
for restart_bricks. Two threads can therefore access and update the
same volinfo structure at the same time, which can leave volinfo
with inconsistent values and cause assertion failures because of
the unexpected values.

Solution: While updating the volinfo structure, acquire a
store_volinfo_lock, and release the lock only when the thread has
completed its critical section.
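
A minimal C sketch using a plain pthread mutex; glusterd's actual
structures and locking primitives differ, and store_volinfo_lock
follows the name used in this message:

#include &lt;pthread.h&gt;

struct volinfo {                         /* illustrative stand-in */
        pthread_mutex_t store_volinfo_lock;
        /* ... volume fields ... */
};

static void
store_volinfo(struct volinfo *v)
{
        pthread_mutex_lock(&amp;v-&gt;store_volinfo_lock);
        /* critical section: update and persist the volinfo */
        pthread_mutex_unlock(&amp;v-&gt;store_volinfo_lock);
}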

&gt; BUG: bz#1627610
&gt; Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;

&gt; Change-Id: I545e4e2368e3285d8f7aa28081ff4448abb72f5d
(cherry picked from commit 484f417da945cf83afdbf136bb4817311862a8d2)

fixes: bz#1633552

Change-Id: I545e4e2368e3285d8f7aa28081ff4448abb72f5d
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: make sure that brickinfo-&gt;uuid is not null</title>
<updated>2018-09-27T06:18:39+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2018-09-25T18:06:48+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=7ab0f691f0bd08585d06a26f23dc24f62ddd6251'/>
<id>7ab0f691f0bd08585d06a26f23dc24f62ddd6251</id>
<content type='text'>
Problem: Upgrading from a version where the shared-brick-count
option is not present to a version which introduced this option
causes an issue at the mount point, i.e., the size of the volume
shown at the mount point is reduced by shared-brick-count times.

Cause: shared-brick-count is equal to the number of bricks that are
sharing the file system. gd_set_shared_brick_count() calculates the
shared-brick-count value based on the uuid of the node and the fsid
of the brick. https://review.gluster.org/#/c/glusterfs/+/19484
handles setting the fsid properly during an upgrade, but it assumed
that brickinfo-&gt;uuid is non-null when that code path is reached.
Since brickinfo-&gt;uuid was null for all the bricks, the patch
couldn't reach the code path that sets the fsid, so fsid remained 0
for all bricks, and the logic in gd_set_shared_brick_count() didn't
work as expected.

Solution: Before control reaches the code path added by
https://review.gluster.org/#/c/glusterfs/+/19484, check whether
brickinfo-&gt;uuid is null; if it is, call glusterd_resolve_brick to
set brickinfo-&gt;uuid to a proper value. With a proper uuid, the
fsid for the bricks is set properly and the shared-brick-count
value is calculated correctly.
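
A C sketch of the added guard, using the names from this message;
the exact call site and error handling are omitted:

if (gf_uuid_is_null(brickinfo-&gt;uuid))
        (void)glusterd_resolve_brick(brickinfo);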

Please see the bug https://bugzilla.redhat.com/show_bug.cgi?id=1632889
for the complete RCA.

Steps followed to test the fix:
1. Created a 2-node cluster running a binary which doesn't have the
shared-brick-count option
2. Created a 2x(2+1) volume and started it
3. Mounted the volume and checked the size of the volume using df
4. Upgraded to a version where shared-brick-count is introduced
(upgraded the nodes one by one, i.e., stop glusterd, upgrade the
node and start glusterd)
5. After upgrading both the nodes, bumped up the cluster.op-version
6. At the mount point, df shows the correct size for the volume

&gt; backport of https://review.gluster.org/#/c/glusterfs/+/21278/
&gt; Change-Id: Ib9f078aafb15e899a01086eae113270657ea916b
&gt; Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
(cherry picked from commit f1e9b878ce2067db83a0baa5f384eda87287719d)

fixes: bz#1633242
Change-Id: Ib9f078aafb15e899a01086eae113270657ea916b
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
</entry>
<entry>
<title>geo-rep: Fix issues related to config set</title>
<updated>2018-09-19T04:10:40+00:00</updated>
<author>
<name>Kotresh HR</name>
<email>khiremat@redhat.com</email>
</author>
<published>2018-09-14T07:42:26+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=112b50070861101be2d6cc8d8e96af75359a8ca3'/>
<id>112b50070861101be2d6cc8d8e96af75359a8ca3</id>
<content type='text'>
1. The '--ignore-missing-args' option for rsync is not
   being used even though the rsync version is
   greater than 3.1.0. Fixed the same.

2. The '--existing' option for rsync is also not being
   used. Fixed the same.

3. geo-rep config fails to set rsync-options when the
   value contains '--'. Interestingly, python argparse
   treats a value with '--' (e.g., --ignore-missing-args)
   as an option, but when it is passed as
   --value=--ignore-missing-args, it succeeds. Fixed the
   same.

Backport of:
 &gt; Patch: https://review.gluster.org/21191
 &gt; Change-Id: Iaeb838acaff1c2920fee9c7f920c99edce13a0a1
 &gt; Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
 &gt; BUG: 1629561
(cherry picked from commit b977b44dd0adfcd7a3b432844260de4b8d1c4adf)

Change-Id: Iaeb838acaff1c2920fee9c7f920c99edce13a0a1
Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
fixes: bz#1630673
</content>
</entry>
<entry>
<title>core: remove experimental xlators and associated tests</title>
<updated>2018-09-17T14:31:19+00:00</updated>
<author>
<name>ShyamsundarR</name>
<email>srangana@redhat.com</email>
</author>
<published>2018-09-13T18:05:02+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=c1314445cf008cf78a2157cb425bee836de5594c'/>
<id>c1314445cf008cf78a2157cb425bee836de5594c</id>
<content type='text'>
Experimental xlators and their associated tests have been removed
from 5.0.

Change-Id: I47219d8b95efc3d5875ec9224d1e79f8371e9f76
Updates: bz#1628620
Signed-off-by: ShyamsundarR &lt;srangana@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: Update op-version from 4.2 to 5.0</title>
<updated>2018-09-17T14:26:25+00:00</updated>
<author>
<name>ShyamsundarR</name>
<email>srangana@redhat.com</email>
</author>
<published>2018-09-13T16:50:30+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=afc9f3b8716e88410ba50a6ce8abbfa186ee7c46'/>
<id>afc9f3b8716e88410ba50a6ce8abbfa186ee7c46</id>
<content type='text'>
After the max op-version was changed to 4.2 following the release-4.1
branching, the decision was to go with increasing release numbers,
so the max op-version needs to change to 5.0.

This commit addresses the above change.
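
A sketch of the kind of change involved; the macro names are modeled
on glusterd's op-version constants and the value shown is
illustrative:

#define GD_OP_VERSION_5_0 50000 /* op-version for release 5.0 */

/* the cluster maximum now points at the 5.0 constant */
#define GD_OP_VERSION_MAX GD_OP_VERSION_5_0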

Fixes: bz#1628668
Change-Id: Ifcc0c6da90fdd51e4eceea40749511110a432cce
Signed-off-by: ShyamsundarR &lt;srangana@redhat.com&gt;
</content>
</entry>
</feed>
