<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/cluster/dht/src/dht-common.h, branch release-3.13</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>cluster/dht: Add migration checks to dht_(f)xattrop</title>
<updated>2018-01-03T05:08:54+00:00</updated>
<author>
<name>N Balachandran</name>
<email>nbalacha@redhat.com</email>
</author>
<published>2018-01-03T05:06:58+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=cdb682572ce4a04d847f997dc5ea93e47d3223e3'/>
<id>cdb682572ce4a04d847f997dc5ea93e47d3223e3</id>
<content type='text'>
The dht_(f)xattrop implementation did not perform the
migration phase1/phase2 checks, which could cause issues
with rebalance on sharded volumes.
This does not solve the issue where fops may reach the target
out of order.
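
A minimal standalone model of the two checks, assuming the sticky/sgid
mode encoding that the IS_DHT_MIGRATION_PHASE1/PHASE2 macros in
dht-common.h key off (names and wiring here are illustrative, not the
patch itself):

    #include &lt;sys/stat.h&gt;

    /* phase 1: migration in progress; the fop must reach src and dst */
    static int
    migration_in_progress (const struct stat *st)
    {
            return S_ISREG (st-&gt;st_mode) &amp;&amp;
                   (st-&gt;st_mode &amp; S_ISVTX) &amp;&amp; (st-&gt;st_mode &amp; S_ISGID);
    }

    /* phase 2: migration done; only the linkto (sticky) bit is left,
     * so the fop must be re-wound to the new cached subvol */
    static int
    migration_complete (const struct stat *st)
    {
            return S_ISREG (st-&gt;st_mode) &amp;&amp;
                   ((st-&gt;st_mode &amp; 07777) == S_ISVTX);
    }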

&gt; Change-Id: I2416fc35115e60659e35b4b717fd51f20746586c
&gt; BUG: 1471031
&gt; Signed-off-by: N Balachandran &lt;nbalacha@redhat.com&gt;

Change-Id: I2416fc35115e60659e35b4b717fd51f20746586c
BUG: 1515434
Signed-off-by: N Balachandran &lt;nbalacha@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/dht: Serialize mds update code path with lookup unwind in selfheal</title>
<updated>2018-01-02T18:41:54+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawa@redhat.com</email>
</author>
<published>2017-10-06T09:43:02+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=fe1008455ddaa4e3f95a5fe3180e6947afdb6c75'/>
<id>fe1008455ddaa4e3f95a5fe3180e6947afdb6c75</id>
<content type='text'>
Problem: The test case ./tests/bugs/bug-1371806_1.t sometimes fails on
         centos due to a race condition between a fresh lookup and a
         setxattr fop.

Solution: In the selfheal code path we save the mds subvol in the
          inode_ctx, but this was not serialized with the lookup unwind.
          As a result, if the mds was not yet saved in the inode_ctx by
          the time the lookup unwound, any subsequent setxattr fop failed
          with ENOENT because no mds was found in the inode ctx. To
          resolve this, saving the mds in the inode ctx is now serialized
          with the lookup unwind.
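
A standalone model of the ordering fix (pthread-based and purely
illustrative; glusterfs uses its own locking primitives):

    #include &lt;pthread.h&gt;

    struct inode_ctx { void *mdsvol; };

    static pthread_mutex_t ctx_lock = PTHREAD_MUTEX_INITIALIZER;

    static void
    lookup_unwind (struct inode_ctx *ctx, void *mds)
    {
            pthread_mutex_lock (&amp;ctx_lock);
            ctx-&gt;mdsvol = mds;   /* publish the mds first... */
            pthread_mutex_unlock (&amp;ctx_lock);
            /* ...and only then reply to the lookup, so a racing
             * setxattr always finds the mds already set */
    }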

&gt; BUG: 1498966
&gt; Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;
Change-Id: I8d4bb40a6cbf0cec35d181ec0095cc7142b02e29
BUG: 1529055
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;
Signed-off-by: N Balachandran &lt;nbalacha@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/dht: make rebalance use truncate in case</title>
<updated>2017-12-11T04:53:59+00:00</updated>
<author>
<name>Susant Palai</name>
<email>spalai@redhat.com</email>
</author>
<published>2017-10-24T13:05:20+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=af59eedfb8185fefe4cc3d73e88211893da69d51'/>
<id>af59eedfb8185fefe4cc3d73e88211893da69d51</id>
<content type='text'>
..the brick file system does not support fallocate.
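
An illustrative fallback of the kind described, using plain syscalls
(the actual rebalance code path differs):

    #define _GNU_SOURCE
    #include &lt;errno.h&gt;
    #include &lt;fcntl.h&gt;
    #include &lt;unistd.h&gt;

    static int
    reserve_space (int fd, off_t len)
    {
            if (fallocate (fd, 0, 0, len) == 0)
                    return 0;
            if (errno == EOPNOTSUPP || errno == ENOSYS)
                    return ftruncate (fd, len);  /* fallback path */
            return -1;
    }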

&gt; Change-Id: Id76cda2d8bb3b223b779e5e7a34f17c8bfa6283c
&gt; BUG: 1488103
&gt; Signed-off-by: Susant Palai &lt;spalai@redhat.com&gt;

Change-Id: Id76cda2d8bb3b223b779e5e7a34f17c8bfa6283c
BUG: 1520232
Signed-off-by: Susant Palai &lt;spalai@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/dht: Don't store the entire uuid for subvols</title>
<updated>2017-10-10T08:58:39+00:00</updated>
<author>
<name>N Balachandran</name>
<email>nbalacha@redhat.com</email>
</author>
<published>2017-07-21T11:08:14+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=c4a608799a577a4f38139f6bb8a47da8efb0fec3'/>
<id>c4a608799a577a4f38139f6bb8a47da8efb0fec3</id>
<content type='text'>
Comparing the uuid string of the local node against the one stored in
the local_subvol information is inefficient, especially as it is done
for every file to be migrated. The code has now been changed to set
the value of info to 1 if the nodeuuid is that of the node making the
comparison, so the per-file check becomes an integer comparison.
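
A standalone model of the change (types and names illustrative):

    #include &lt;string.h&gt;

    struct subvol { char nodeuuid[37]; int info; };

    /* done once per subvol at setup: one string comparison */
    static void
    mark_local (struct subvol *s, const char *my_uuid)
    {
            s-&gt;info = (strcmp (s-&gt;nodeuuid, my_uuid) == 0);
    }

    /* the per-file hot path is then just: if (s-&gt;info) migrate () */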

Change-Id: I7491d59caad3b71dbf5facc94dcde0cd53962775
BUG: 1451434
Signed-off-by: N Balachandran &lt;nbalacha@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/dht : User xattrs are not healed after brick stop/start</title>
<updated>2017-10-04T09:55:35+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawa@redhat.com</email>
</author>
<published>2017-05-12T15:42:47+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=9b4de61a136b8e5ba7bf0e48690cdb1292d0dee8'/>
<id>9b4de61a136b8e5ba7bf0e48690cdb1292d0dee8</id>
<content type='text'>
Problem: In a distributed volume, the custom extended attribute value for a
         directory does not show the correct value after a brick is stopped
         and started, or after a brick is newly added. If any extended
         attribute (user|acl|quota) is set on a directory while a brick is
         stopped or newly added, the value is not updated on that brick
         after it starts.

Solution: First store the hashed subvol, or the subvol holding the internal
          xattr, in the inode ctx and treat it as the MDS subvol. When a
          custom xattr (user, quota, acl, selinux) is updated on a
          directory, first check for the mds in the inode ctx; if no mds is
          present in the inode ctx, return EINVAL to the application.
          Otherwise set the xattr on the MDS subvol, with the internal
          xattr value set to -1, and then try to update the attribute on
          the other, non-MDS subvols as well. If the mds subvol is down,
          return the error "Transport endpoint is not connected".
          In dht_dir_lookup_cbk|dht_revalidate_cbk|dht_discover_complete,
          call dht_call_dir_xattr_heal to heal the custom extended
          attributes.
          In the gnfs server case, if the hashed subvol cannot be found
          from the loc, wind the call to all subvols to update the xattr.

Fix (see the sketch after these steps):
        1) Save the MDS subvol in the inode ctx
        2) Check whether the mds subvol is present in the inode ctx
        3) If the mds subvol is down, unwind with the error ENOTCONN; if
           it is up, set the new xattr "GF_DHT_XATTR_MDS" to -1 and wind
           the call to the other subvols.
        4) If the setxattr fop succeeds on a non-mds subvol, increment the
           value of the internal xattr by +1
        5) At directory lookup time, check the value of the new xattr
           GF_DHT_XATTR_MDS
        6) If the value is not 0 in dht_lookup_dir_cbk (and the other cbk
           functions), call the heal function to heal the user xattrs
        7) syncop_setxattr on the hashed_subvol resets the xattr value to
           0 once the heal has succeeded on all subvols.
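
A standalone sketch of the lookup-time trigger in steps 5-6
(illustrative only, not the patch itself):

    /* a non-zero internal xattr value on the mds subvol means at
     * least one subvol missed an update, so the heal must run */
    static int
    dir_xattr_heal_needed (int mds_xattr_value)
    {
            return mds_xattr_value != 0;
    }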

Test : Steps to reproduce the issue
       1) Create a distributed volume and a mount point
       2) Create some directories from the mount point: mkdir tmp{1..5}
       3) Kill any one brick of the volume
       4) Set an extended attribute on the directories from the mount point:
          setfattr -n user.foo -v "abc" ./tmp{1..5}
          This throws "Transport endpoint is not connected" for the
          directories whose hashed subvol is down
       5) Start the volume with the force option to restart the brick
          process
       6) Run getfattr on the mount point for the directories
       7) Check the extended attribute on the brick:
          getfattr -n user.foo &lt;volume-location&gt;/tmp{1..5}
          It shows the correct value for the directories whose
          xattr fops completed successfully.

Note: The patch resolves the xattr healing problem only for fuse mounts,
      not for nfs mounts.

BUG: 1371806
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;

Change-Id: I4eb137eace24a8cb796712b742f1d177a65343d5
</content>
</entry>
<entry>
<title>cluster/dht: EBADF handling for fremovexattr and fsetxattr</title>
<updated>2017-08-09T02:44:41+00:00</updated>
<author>
<name>N Balachandran</name>
<email>nbalacha@redhat.com</email>
</author>
<published>2017-08-08T17:03:24+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=747a08d34e2a1e94d7fce68a3577370288bb1955'/>
<id>747a08d34e2a1e94d7fce68a3577370288bb1955</id>
<content type='text'>
Add EBADF handling for dht_fremovexattr and dht_fsetxattr.

Change-Id: Ide0d5812dae79655d2565157e5baabcd753b4309
BUG: 1476665
Signed-off-by: N Balachandran &lt;nbalacha@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17999
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/dht: Check for open fd only on EBADF</title>
<updated>2017-08-08T10:21:18+00:00</updated>
<author>
<name>N Balachandran</name>
<email>nbalacha@redhat.com</email>
</author>
<published>2017-08-04T09:16:38+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=cdca1cb26a0aba390c6d8485c0d6d95e22ffc8bd'/>
<id>cdca1cb26a0aba390c6d8485c0d6d95e22ffc8bd</id>
<content type='text'>
DHT fd based fops used to check if the fd was open
on the cached subvol before winding the call. However,
this introduced a performance regression of about
30% for reads.

This check was introduced to handle cases where files
were migrated while IOs were happening. As this is not
the common case, dht will now check if the fd is
open on the cached subvol only if the call fails
with EBADF.

This avoids the performance hit when a rebalance
is not running.
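
A standalone model of the new control flow (the stub stands in for the
hypothetical open-on-the-cached-subvol-and-retry path):

    #include &lt;errno.h&gt;

    /* stub for the slow path: open the fd on the cached subvol, retry */
    static int open_on_cached_and_retry (void) { return 0; }

    static int
    fd_fop (int (*wind) (void))
    {
            int op_errno = wind ();     /* optimistic: no pre-check */
            if (op_errno == EBADF)      /* pay the cost only here   */
                    return open_on_cached_and_retry ();
            return op_errno;
    }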

Change-Id: I2035a858d63c3fcd22bb634055bbb0ad01686808
BUG: 1476665
Signed-off-by: N Balachandran &lt;nbalacha@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17976
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Reviewed-by: Susant Palai &lt;spalai@redhat.com&gt;
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/rebalance: Fix hardlink migration failures</title>
<updated>2017-07-13T05:38:40+00:00</updated>
<author>
<name>Susant Palai</name>
<email>spalai@redhat.com</email>
</author>
<published>2017-07-12T06:31:40+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=0d75e39834d4880dce0cb3c79bef4b70bb32874d'/>
<id>0d75e39834d4880dce0cb3c79bef4b70bb32874d</id>
<content type='text'>
A brief overview of how hardlink migration works:
  - Different hardlinks (to the same file) may hash to different bricks,
but their cached subvol will be the same. Rebalance picks up the first
hardlink, calculates its hash (call it TARGET) and sets the hashed
subvolume as an xattr on the data file.
  - All the hardlinks that come after this fetch that xattr and create
linkto files on TARGET (all linkto files for the hardlinks will be
hardlinks to each other on TARGET).
  - When the number of hardlinks on the source equals the number of
hardlinks on TARGET, the data migration happens.

RACE:1
  Since rebalance is multi-threaded, the first lookup (which decides what
the TARGET subvol should be) can be issued by two hardlink migrations in
parallel, and they may end up creating linkto files on two different
TARGET subvols. Hence, the hardlinks won't be migrated.

Fix: Rely on the xattr response of lookup inside gf_defrag_handle_hardlink since it
is executed under synclock.

RACE:2
  The linkto files on TARGET can also be created by other clients if they
are doing lookups on the hardlinks. Consider a scenario where you have
100 hardlinks. When rebalance is migrating the 99th hardlink, as a result
of continuous lookups from another client, the linkcount on TARGET equals
the source linkcount, so rebalance migrates the data on the 99th hardlink
itself. On the 100th hardlink migration, the hardlink will have TARGET as
its cached subvolume. If its hash is also the same, a migration will be
triggered from TARGET to TARGET, leading to data loss.

Fix: Before the final data migration, make sure the source is not the
same as the destination (see the sketch below).
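
A standalone sketch of that guard (names illustrative):

    static int
    migrate_hardlink_data (const void *src_subvol, const void *dst_subvol)
    {
            if (src_subvol == dst_subvol)
                    return 0;   /* already on TARGET: nothing to move */
            /* ...proceed with the final data migration... */
            return 1;
    }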

RACE:3
  Since a hardlink can be migrating to a non-hashed subvolume, a lookup
from another client, or even from rebalance itself, might delete the
linkto file on TARGET, leading to hardlinks never getting migrated.

This will be addressed in a separate patch in the future.

Change-Id: If0f6852f0e662384ee3875a2ac9d19ac4a6cea98
BUG: 1469964
Signed-off-by: Susant Palai &lt;spalai@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17755
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: N Balachandran &lt;nbalacha@redhat.com&gt;
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/dht: Use size to calculate estimates</title>
<updated>2017-07-10T14:35:34+00:00</updated>
<author>
<name>N Balachandran</name>
<email>nbalacha@redhat.com</email>
</author>
<published>2017-07-03T07:43:35+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=9156a743aa76c955d18c9bfcb7c1a38ba00da890'/>
<id>9156a743aa76c955d18c9bfcb7c1a38ba00da890</id>
<content type='text'>
The earlier approach of using the number of files
to determine when the rebalance would complete did
not work well when file sizes differed widely.

The new approach uses the total data size to determine
how long the rebalance is expected to take.
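
A standalone model of the estimate (field names illustrative):

    #include &lt;stdint.h&gt;

    static uint64_t
    seconds_left (uint64_t total_bytes, uint64_t done_bytes,
                  uint64_t elapsed_secs)
    {
            if (elapsed_secs == 0 || done_bytes &lt; elapsed_secs)
                    return 0;                           /* no rate yet */
            uint64_t rate = done_bytes / elapsed_secs;  /* bytes/sec */
            return (total_bytes - done_bytes) / rate;
    }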

Change-Id: I84e80a0893efab72ff06130e4596fa71c9c8c868
BUG: 1467209
Signed-off-by: N Balachandran &lt;nbalacha@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17668
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: MOHIT AGRAWAL &lt;moagrawa@redhat.com&gt;
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/dht: Check if fd is opened on dst subvol</title>
<updated>2017-06-28T11:42:21+00:00</updated>
<author>
<name>N Balachandran</name>
<email>nbalacha@redhat.com</email>
</author>
<published>2017-06-26T15:42:56+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=91db0d47ca267aecfc6124a3f337a4e2f2c9f1e2'/>
<id>91db0d47ca267aecfc6124a3f337a4e2f2c9f1e2</id>
<content type='text'>
If an fd is opened on a file, and the file is then migrated
and the cached subvol updated in the inode_ctx before an
fd based fop is sent, the fop is sent to the dst subvol,
on which the fd is not opened.
This causes the FOP to fail with EBADF.

Now, every fd based fop checks that the fd has been
opened on the dst subvol before winding it down.
Change-Id: Id92ef5eb7a5b5226688e2d2868b15e383f5f240e
BUG: 1465075
Signed-off-by: N Balachandran &lt;nbalacha@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17630
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Reviewed-by: Susant Palai &lt;spalai@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
</feed>
