<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/tests, branch v6.4</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>glusterd: add GF_TRANSPORT_BOTH_TCP_RDMA in glusterd_get_gfproxy_client_volfile</title>
<updated>2019-07-16T04:56:14+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2019-06-11T04:22:06+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=d06f676a3af0fc09074699ead25a4872d0a6020d'/>
<id>d06f676a3af0fc09074699ead25a4872d0a6020d</id>
<content type='text'>
... without which volume creation fails with "volume create: &lt;xyz&gt;: failed:
Failed to create volume files"

&gt;Fixes: bz#1716812
&gt;Change-Id: I2f4c2c6d5290f066b54e1c1db19e25db9937bedb
&gt;Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;

Fixes: bz#1721105
Change-Id: I2f4c2c6d5290f066b54e1c1db19e25db9937bedb
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>features/shard: Fix block-count accounting upon truncate to lower size</title>
<updated>2019-07-03T11:16:10+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2019-05-08T07:30:51+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=25e4a1249f3904a2a918194541566e3cda512c6b'/>
<id>25e4a1249f3904a2a918194541566e3cda512c6b</id>
<content type='text'>
Backport of:
&gt; BUG: bz#1705884
&gt; Change-Id: I9128a192e9bf8c3c3a959e96b7400879d03d7c53
&gt; Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;

The way delta_blocks is computed in shard is incorrect when a file
is truncated to a lower size: the accounting considers only the change
in size of the last of the truncated shards.

FIX:

Get the block count of each shard just before its unlink at posix,
returned in xdata. The summation of these, plus the change in size of
the last shard (from the actual truncate), is used to compute
delta_blocks, which is used in the xattrop for the size update.
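
A minimal sketch of that accounting, with hypothetical names (the
actual implementation is in the C shard translator):

def compute_delta_blocks(unlinked_shard_blocks, last_before, last_after):
    # Blocks freed by the fully unlinked shards, as returned by posix
    # in xdata just before each unlink.
    freed = sum(unlinked_shard_blocks)
    # Change in block count of the last shard, from the actual truncate.
    last_delta = last_after - last_before
    # Net change sent in the xattrop for the size update.
    return last_delta - freed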

Change-Id: I9128a192e9bf8c3c3a959e96b7400879d03d7c53
fixes: bz#1716871
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
(cherry picked from commit 400b66d568ad18fefcb59949d1f8368d487b9a80)
</content>
</entry>
<entry>
<title>tests: subdir-mount.t is failing for brick_mux regression</title>
<updated>2019-07-03T06:40:51+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawal@redhat.com</email>
</author>
<published>2019-06-17T05:40:42+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=b6abdb8f936f92345cc44dfdcb282691cfbd3db8'/>
<id>b6abdb8f936f92345cc44dfdcb282691cfbd3db8</id>
<content type='text'>
To avoid the failure, make the test case wait for the hook script
S13create-subdir-mounts.sh to finish after it executes the add-brick
command.
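
A generic sketch of the wait pattern (illustrative only; the actual
test is a shell .t script using the test framework's wait primitives):

import os
import time

def wait_for_hook_effect(path, attempts=60, interval=0.5):
    # Poll for the hook script's effect instead of racing the
    # asynchronous S13create-subdir-mounts.sh run after add-brick.
    for _ in range(attempts):
        if os.path.exists(path):
            return True
        time.sleep(interval)
    return False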

Change-Id: I063b6d0f86a550ed0a0527255e4dfbe8f0a8c02e
fixes: bz#1726327
&gt; fixes: bz#1720993
&gt; Signed-off-by: Mohit Agrawal &lt;moagrawal@redhat.com&gt;
&gt; (Cherry pick from commit 25ad5aca23b257cdd129cd1d4518b048fbba87bb)
&gt; (Reviewed on upstream link https://review.gluster.org/#/c/glusterfs/+/22877/)
</content>
</entry>
<entry>
<title>posix/ctime: Fix ctime upgrade issue</title>
<updated>2019-07-02T07:40:33+00:00</updated>
<author>
<name>Kotresh HR</name>
<email>khiremat@redhat.com</email>
</author>
<published>2019-06-13T10:53:21+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=b7b76714691d464b09a6363ccc2783080cb17ea2'/>
<id>b7b76714691d464b09a6363ccc2783080cb17ea2</id>
<content type='text'>
Problem:
On an EC volume, during upgrade from an older version where the
ctime feature is not enabled (or not present) to a newer version
where the ctime feature is available (enabled by default),
self-heal hangs and doesn't complete.

Cause:
The ctime feature has both client-side code (utime) and
server-side code (posix). The feature is driven from the client:
only if the client side sets the time in the frame should the
server side set the time attributes in the xattr. But posix
setattr/fsetattr was not checking that. When one of the server
nodes is updated, since ctime is enabled by default, it
starts setting the xattr on setattr/fsetattr on the updated node/brick.

On an EC volume the first two updated nodes (bricks) are not a
problem, because there are four other bricks with consistent data.
However, once the third brick is updated, the new attribute (mdata
xattr) causes a metadata inconsistency across three bricks, which
prevents the file from being repaired.

Fix:
Don't create the mdata xattr on the utimes/utimensat system call;
only update it if it is already present.
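
A sketch of the intended policy, with hypothetical names (the real
code is in the C posix translator; the xattr name is an assumption):

import errno
import os

MDATA_XATTR = "trusted.glusterfs.mdata"  # assumed xattr name

def update_mdata_if_present(path, value):
    # On setattr/fsetattr driven by utimes/utimensat, update the mdata
    # xattr only if it already exists; never create it during upgrade.
    try:
        os.getxattr(path, MDATA_XATTR)
    except OSError as e:
        if e.errno == errno.ENODATA:
            return  # not present: skip, don't create
        raise
    os.setxattr(path, MDATA_XATTR, value)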

Backport of:
 &gt; Patch: https://review.gluster.org/22858
 &gt; Change-Id: Ieacedecb8a738bb437283ef3e0f042fd49dc4c8c
 &gt; BUG: 1720201
 &gt; Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;

Change-Id: Ieacedecb8a738bb437283ef3e0f042fd49dc4c8c
fixes: bz#1722805
Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
</content>
</entry>
<entry>
<title>tests/utils: Fix py2/py3 util python scripts</title>
<updated>2019-06-27T06:40:27+00:00</updated>
<author>
<name>Kotresh HR</name>
<email>khiremat@redhat.com</email>
</author>
<published>2019-06-06T07:24:04+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=5de46c74b281e6d0cf7168f8869bd179e7fff489'/>
<id>5de46c74b281e6d0cf7168f8869bd179e7fff489</id>
<content type='text'>
The following files are fixed (a typical py2/py3 compatibility pattern
is sketched after the list):

tests/bugs/distribute/overlap.py
tests/utils/changelogparser.py
tests/utils/create-files.py
tests/utils/gfid-access.py
tests/utils/libcxattr.py
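
A typical py2/py3 compatibility pattern in such utility scripts
(illustrative, not the actual patch):

from __future__ import print_function, division

import sys

def ratio(total, count):
    # True division on both py2 and py3; use // where integer
    # division is actually intended.
    return total / count

def to_bytes(s):
    # Handle str/bytes explicitly when crossing the OS boundary.
    if isinstance(s, bytes):
        return s
    return s.encode("utf-8")

print(ratio(7, 2), to_bytes("gfid"), file=sys.stderr)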

Backport of:
&gt; Change-Id: I3db857cc19e19163d368d913eaec1269fbc37140
&gt; BUG: 1193929
&gt; Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;

Change-Id: I3db857cc19e19163d368d913eaec1269fbc37140
updates: bz#1679998
Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/ec: honor contention notifications for partially acquired locks</title>
<updated>2019-06-03T04:08:06+00:00</updated>
<author>
<name>Xavi Hernandez</name>
<email>xhernandez@redhat.com</email>
</author>
<published>2019-05-09T09:07:18+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=7a387f97315f55e1c741d6ad749fb5621f067de0'/>
<id>7a387f97315f55e1c741d6ad749fb5621f067de0</id>
<content type='text'>
EC was ignoring lock contention notifications received while a lock was
being acquired. When a lock is partially acquired (some bricks have
granted the lock but others have not yet), we can receive notifications
from the bricks that granted it; these should be honored, since we may
not receive further notifications after that.

Since EC was ignoring them, once the lock was acquired, it was not
released until the eager-lock timeout, causing unnecessary delays on
other clients.

This fix takes into consideration the notifications received before
the full lock acquisition has completed. After that, the lock will
be released as soon as possible.
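
A rough sketch of the behaviour in pseudo-Python, with hypothetical
names (the actual implementation is in the C EC translator):

class EagerLock:
    def __init__(self):
        self.fully_acquired = False
        self.contended = False  # notifications seen during acquisition

    def on_contention_notification(self):
        # Honor the notification even while acquisition is in progress.
        self.contended = True
        if self.fully_acquired:
            self.release_asap()

    def on_acquisition_complete(self):
        self.fully_acquired = True
        if self.contended:
            # Release immediately instead of holding until the
            # eager-lock timeout.
            self.release_asap()

    def release_asap(self):
        pass  # placeholder for the actual unlock path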

Backport of:
&gt; BUG: bz#1708156
&gt; Change-Id: I2a306dbdb29fb557dcab7788a258bd75d826cc12
&gt; Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;

Fixes: bz#1714172
Change-Id: I2a306dbdb29fb557dcab7788a258bd75d826cc12
Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
</content>
</entry>
<entry>
<title>geo-rep: Fix sync hang with tarssh</title>
<updated>2019-05-21T05:14:53+00:00</updated>
<author>
<name>Kotresh HR</name>
<email>khiremat@redhat.com</email>
</author>
<published>2019-05-08T05:56:06+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=60df33ab0b7d57e3945d70ed933a5091c4d0b86c'/>
<id>60df33ab0b7d57e3945d70ed933a5091c4d0b86c</id>
<content type='text'>
Problem:
Geo-rep sync hangs when tarssh is used as the sync
engine under heavy workload.

Analysis and Root cause:
It was found that the tar process was hung.
On debugging further, it was found that the stderr
buffer of the tar process on the master was full, i.e., 64k.
When the buffer was copied out to a file from /proc/pid/fd/2,
the hang was resolved.

This can happen when files picked by the tar process
for syncing no longer exist on the master. If this count
grows to around 1k, the stderr buffer fills up.

Fix:
The tar process is executed using Popen with stderr as PIPE.
The final execution is something like below.

tar | ssh &lt;args&gt; root@slave tar --overwrite -xf - -C &lt;path&gt;

The code was waiting on the ssh process first using communicate() and
then on tar. Note that communicate() reads stdout and stderr. So when
the stderr of the tar process fills up, nothing reads it until the
untar via ssh completes; that in turn cannot complete, leading to
deadlock. Hence we should wait on both processes in parallel, so that
stderr is read from both.
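
A minimal sketch of the idea, waiting on both processes in parallel so
tar's stderr is drained too (hypothetical paths; not the actual gsyncd
code):

import subprocess
import threading

tar = subprocess.Popen(["tar", "-cf", "-", "."],
                       stdout=subprocess.PIPE, stderr=subprocess.PIPE)
ssh = subprocess.Popen(["ssh", "root@slave", "tar", "--overwrite",
                        "-xf", "-", "-C", "/dest"],
                       stdin=tar.stdout, stderr=subprocess.PIPE)
tar.stdout.close()  # let tar receive SIGPIPE if ssh exits early

# Drain tar's stderr from a thread while waiting on ssh, instead of
# calling communicate() on ssh first and on tar only afterwards.
drainer = threading.Thread(target=tar.stderr.read)
drainer.start()
ssh_err = ssh.communicate()[1]
drainer.join()
tar.wait()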

Backport of:
 &gt; Patch: https://review.gluster.org/22684/
 &gt; Change-Id: I609c7cc5c07e210c504771115b4d551a2e891adf
 &gt; BUG: 1707728
 &gt; Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;

Change-Id: I609c7cc5c07e210c504771115b4d551a2e891adf
fixes: bz#1709738
Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
</content>
</entry>
<entry>
<title>tests/geo-rep: Fix arequal checksum comparison</title>
<updated>2019-05-21T05:14:53+00:00</updated>
<author>
<name>Kotresh HR</name>
<email>khiremat@redhat.com</email>
</author>
<published>2019-05-08T08:40:05+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=d6f523927bdd5d8914650aae7bd6e6f69e91b49f'/>
<id>d6f523927bdd5d8914650aae7bd6e6f69e91b49f</id>
<content type='text'>
The arequal checksum comparison was always returning success,
even when the checksums did not match. Fixed the same.

Backport of:
&gt; Patch: https://review.gluster.org/22682
&gt; Change-Id: I5083da25c0954126e452d06311d2d376f8540555
&gt; BUG: 1707742
&gt; Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
(cherry picked from commit 288cffd1ab7180cccfcdea36d0c469b9fa52108f)

Change-Id: I5083da25c0954126e452d06311d2d376f8540555
fixes: bz#1712220
Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
</content>
</entry>
<entry>
<title>geo-rep: Fix sync-method config</title>
<updated>2019-05-17T07:47:53+00:00</updated>
<author>
<name>Kotresh HR</name>
<email>khiremat@redhat.com</email>
</author>
<published>2019-05-08T05:26:31+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=072a21576a65b5b0b2597115280972376f076a91'/>
<id>072a21576a65b5b0b2597115280972376f076a91</id>
<content type='text'>
Problem:
When 'use_tarssh' is set to true, the command exits with a success
message, but the default 'rsync' is still used as the sync engine.
The new config 'sync-method' cannot be set from the CLI.

Analysis and Fix:
The 'use_tarssh' config is deprecated with the new
config framework, and 'sync-method' is the new
config to choose the sync method, i.e. tarssh or rsync.
This patch fixes the 'sync-method' config. The allowed
values are tarssh and rsync.
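
With the fix, the sync method can be chosen from the CLI, for example
(hypothetical volume and host names):

gluster volume geo-replication mastervol slavehost::slavevol config sync-method tarssh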

Backport of:
 &gt; Patch: https://review.gluster.org/22683
 &gt; Change-Id: I0edb0319cad0455b29e49f2f08a64ce324735e84
 &gt; BUG: 1707686
 &gt; Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;

Change-Id: I0edb0319cad0455b29e49f2f08a64ce324735e84
fixes: bz#1709737
Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
</content>
</entry>
<entry>
<title>geo-rep: Fix rename with existing destination with same gfid</title>
<updated>2019-05-17T07:47:53+00:00</updated>
<author>
<name>Sunny Kumar</name>
<email>sunkumar@redhat.com</email>
</author>
<published>2019-04-02T07:08:09+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=219c9bc92c721d49de78fd5a4d98aca7d3c66ad4'/>
<id>219c9bc92c721d49de78fd5a4d98aca7d3c66ad4</id>
<content type='text'>
Problem:
   Geo-rep fails to sync a rename properly if the destination exists.
The source then remains on the slave, resulting in extra files on the
slave. A heavy rename workload like logrotate also caused a lot of
ESTALE errors.

Cause:
   Geo-rep fails to sync a rename when the destination exists and the
creation of the source file falls into the same batch of changelogs
being processed. This is because, after fixing the problematic gfids
by verifying against the master, the CREATE was also re-processed while
re-processing the original entries, causing extra files on the slave
and the rename to fail.

Solution:
   Entries need to be removed from the retrial list after fixing the
problematic gfids on the slave, so that they are not re-created on the
slave. Also treat ESTALE as EEXIST, so that the error is properly
handled by verifying the op on the master volume.
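
A sketch of the error handling described above, with hypothetical names
(the actual logic lives in gsyncd's Python entry-processing code):

import errno

def entry_op_failed(err):
    # Treat ESTALE like EEXIST: verify the entry against the master
    # volume instead of reporting a sync failure, and drop it from the
    # retrial list once the problematic gfid is fixed on the slave.
    if err in (errno.EEXIST, errno.ESTALE):
        return "verify_on_master"  # hypothetical follow-up action
    return "report_failure"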

Backport of:
 &gt; Patch: https://review.gluster.org/22519/
 &gt; Change-Id: I50cf289e06b997adddff0552bf2466d9201dd1f9
 &gt; BUG: 1694820
 &gt; Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
 &gt; Signed-off-by: Sunny Kumar &lt;sunkumar@redhat.com&gt;

Change-Id: I50cf289e06b997adddff0552bf2466d9201dd1f9
fixes: bz#1709734
Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
</content>
</entry>
</feed>
