<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/tests/basic/afr, branch v3.11.0beta1</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>cluster/afr: GFID split brain resolution with favorite-child-policy</title>
<updated>2017-04-21T00:38:54+00:00</updated>
<author>
<name>karthik-us</name>
<email>ksubrahm@redhat.com</email>
</author>
<published>2017-03-09T12:38:28+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=799a2ff8299db6d6dc75f1533f4bd5a3bb72164d'/>
<id>799a2ff8299db6d6dc75f1533f4bd5a3bb72164d</id>
<content type='text'>
Problem:
Currently, automatic split brain resolution with the favorite child policy
does not resolve GFID split brains.

Fix:
When there is a GFID split brain and the favorite child policy is set to
size/mtime/ctime/majority, decide on the source and sinks based on the
policy. Delete the entry from the sinks and recreate it from the source.
Mark the appropriate pending attributes and resolve the GFID split brain.
When the heal takes place, it completes the pending heals and resets
the attributes.
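
For illustration (assuming the existing cluster.favorite-child-policy volume
option and a hypothetical volume named testvol), the policy is selected with:

  gluster volume set testvol cluster.favorite-child-policy mtime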

Change-Id: Ie30e5373f94ca6f276745d9c3ad662b8acca6946
BUG: 1430719
Signed-off-by: karthik-us &lt;ksubrahm@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16878
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Tested-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem:
Currently, automatic split brain resolution with the favorite child policy
does not resolve GFID split brains.

Fix:
When there is a GFID split brain and the favorite child policy is set to
size/mtime/ctime/majority, decide on the source and sinks based on the
policy. Delete the entry from the sinks and recreate it from the source.
Mark the appropriate pending attributes and resolve the GFID split brain.
When the heal takes place, it completes the pending heals and resets
the attributes.
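
For illustration (assuming the existing cluster.favorite-child-policy volume
option and a hypothetical volume named testvol), the policy is selected with:

  gluster volume set testvol cluster.favorite-child-policy mtime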

Change-Id: Ie30e5373f94ca6f276745d9c3ad662b8acca6946
BUG: 1430719
Signed-off-by: karthik-us &lt;ksubrahm@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16878
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Tested-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: disallow increasing replica count for arbiter volumes</title>
<updated>2017-03-06T13:26:53+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2017-02-27T08:21:09+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=b7ba77ab3ffb641d06223f7af5145d3d670b032a'/>
<id>b7ba77ab3ffb641d06223f7af5145d3d670b032a</id>
<content type='text'>
Problem: the add-brick command to increase the replica count in an arbiter
volume succeeds, causing undesirable effects like the 4th brick being
loaded with the arbiter xlator, the 3rd one losing the arbiter xlator
(when the brick process is restarted), the arbitration logic in afr
breaking, and so on.

Fix: An arbiter configuration should always be a replica 3 volume (of
which the 3rd brick is the arbiter). Hence, disallow increasing the
replica count for arbiter volume configurations.
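
For instance, an attempt like the following on an existing arbiter volume
(volume and brick names hypothetical) is now rejected:

  gluster volume add-brick testvol replica 4 server1:/bricks/b4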

Change-Id: I9fe4edac880d0f711e6d44324ad5562974e53e51
BUG: 1429200
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16845
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem: the add-brick command to increase the replica count in an arbiter
volume succeeds, causing undesirable effects like the 4th brick being
loaded with the arbiter xlator, the 3rd one losing the arbiter xlator
(when the brick process is restarted), the arbitration logic in afr
breaking, and so on.

Fix: An arbiter configuration should always be a replica 3 volume (of
which the 3rd brick is the arbiter). Hence, disallow increasing the
replica count for arbiter volume configurations.
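
For instance, an attempt like the following on an existing arbiter volume
(volume and brick names hypothetical) is now rejected:

  gluster volume add-brick testvol replica 4 server1:/bricks/b4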

Change-Id: I9fe4edac880d0f711e6d44324ad5562974e53e51
BUG: 1429200
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16845
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>tests: Fix and enable split-brain-healing mtime check</title>
<updated>2017-03-01T17:29:38+00:00</updated>
<author>
<name>Niklas Hambüchen</name>
<email>mail@nh2.me</email>
</author>
<published>2017-02-28T13:05:59+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=c78d07fb8efdd5f63284332473852619d5da4244'/>
<id>c78d07fb8efdd5f63284332473852619d5da4244</id>
<content type='text'>
This test was commented out in the belief that it depended
on utimensat() support, but in fact that support is not necessary because
`stat -c %Y` only outputs second resolution.

Simply re-enabling the test made it fail because it checked
the values *before* the heal, while the intention was to check them
*after* the heal. This commit fixes that.

Change-Id: I4194ac645b365a1f906a3ac9bcbbdb1f05000e27
BUG: 1422074
Signed-off-by: Niklas Hambüchen &lt;mail@nh2.me&gt;
Reviewed-on: https://review.gluster.org/16789
Reviewed-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Tested-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Niklas Hambüchen
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
This test was commented out in the belief that it depended
on utimensat() support, but in fact that support is not necessary because
`stat -c %Y` only outputs second resolution.

Simply re-enabling the test made it fail because it checked
the values *before* the heal, while the intention was to check them
*after* the heal. This commit fixes that.

Change-Id: I4194ac645b365a1f906a3ac9bcbbdb1f05000e27
BUG: 1422074
Signed-off-by: Niklas Hambüchen &lt;mail@nh2.me&gt;
Reviewed-on: https://review.gluster.org/16789
Reviewed-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Tested-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Niklas Hambüchen
</pre>
</div>
</content>
</entry>
<entry>
<title>core: run many bricks within one glusterfsd process</title>
<updated>2017-01-31T00:13:58+00:00</updated>
<author>
<name>Jeff Darcy</name>
<email>jdarcy@redhat.com</email>
</author>
<published>2016-12-08T21:24:15+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=1a95fc3036db51b82b6a80952f0908bc2019d24a'/>
<id>1a95fc3036db51b82b6a80952f0908bc2019d24a</id>
<content type='text'>
This patch adds support for multiple brick translator stacks running
in a single brick server process.  This reduces our per-brick memory usage by
approximately 3x, and our appetite for TCP ports even more.  It also creates
potential to avoid process/thread thrashing, and to improve QoS by scheduling
more carefully across the bricks, but realizing that potential will require
further work.

Multiplexing is controlled by the "cluster.brick-multiplex" global option.  By
default it's off, and bricks are started in separate processes as before.  If
multiplexing is enabled, then *compatible* bricks (mostly those with the same
transport options) will be started in the same process.
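
For example, as a global option it can be enabled cluster-wide with the
special volume name "all":

  gluster volume set all cluster.brick-multiplex on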

Change-Id: I45059454e51d6f4cbb29a4953359c09a408695cb
BUG: 1385758
Signed-off-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Reviewed-on: https://review.gluster.org/14763
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
This patch adds support for multiple brick translator stacks running
in a single brick server process.  This reduces our per-brick memory usage by
approximately 3x, and our appetite for TCP ports even more.  It also creates
potential to avoid process/thread thrashing, and to improve QoS by scheduling
more carefully across the bricks, but realizing that potential will require
further work.

Multiplexing is controlled by the "cluster.brick-multiplex" global option.  By
default it's off, and bricks are started in separate processes as before.  If
multiplexing is enabled, then *compatible* bricks (mostly those with the same
transport options) will be started in the same process.
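
For example, as a global option it can be enabled cluster-wide with the
special volume name "all":

  gluster volume set all cluster.brick-multiplex on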

Change-Id: I45059454e51d6f4cbb29a4953359c09a408695cb
BUG: 1385758
Signed-off-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Reviewed-on: https://review.gluster.org/14763
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>readdir-ahead : Perform STACK_UNWIND outside of mutex locks</title>
<updated>2017-01-10T04:52:28+00:00</updated>
<author>
<name>Poornima G</name>
<email>pgurusid@redhat.com</email>
</author>
<published>2017-01-09T04:25:26+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=c89a065af2adc11d5aca3a4500d2e8c1ea02ed28'/>
<id>c89a065af2adc11d5aca3a4500d2e8c1ea02ed28</id>
<content type='text'>
Currently STACK_UNWIND is performed within ctx-&gt;lock.
If readdir-ahead is loaded as a child of dht, then there
can be scenarios where the function calling STACK_UNWIND
becomes re-entrant. It is good practice not to call
STACK_WIND/UNWIND while holding local mutexes.

Change-Id: If4e869849d99ce233014a8aad7c4d5eef8dc2e98
BUG: 1401812
Signed-off-by: Poornima G &lt;pgurusid@redhat.com&gt;
Reviewed-on: http://review.gluster.org/16068
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Currently STACK_UNWIND is performed within ctx-&gt;lock.
If readdir-ahead is loaded as a child of dht, then there
can be scenarios where the function calling STACK_UNWIND
becomes re-entrant. It is good practice not to call
STACK_WIND/UNWIND while holding local mutexes.

Change-Id: If4e869849d99ce233014a8aad7c4d5eef8dc2e98
BUG: 1401812
Signed-off-by: Poornima G &lt;pgurusid@redhat.com&gt;
Reviewed-on: http://review.gluster.org/16068
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: Fail add-brick on replica count change, if brick is down</title>
<updated>2017-01-06T09:19:38+00:00</updated>
<author>
<name>karthik-us</name>
<email>ksubrahm@redhat.com</email>
</author>
<published>2017-01-05T08:36:21+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=c916a2ffc257b0cfa493410e31b6af28f428c53a'/>
<id>c916a2ffc257b0cfa493410e31b6af28f428c53a</id>
<content type='text'>
Problem:
1. Have a replica 2 volume with bricks b1 and b2
2. Before setting the layout, b1 goes down
3. Set the layout and write some data, which gets populated on b2
4. b2 goes down, then b1 comes up
5. Add another brick b3, and heal will take place from b1 to b3, which
   basically has no data
6. Write some data. Both b1 and b3 will mark b2 for pending writes
7. b1 goes down, and b2 comes up
8. b2 gets healed from b1. During the heal it removes the data which is already
   in b2, considering it stale data. This leads to data loss.

Solution:
1. In glusterd stage-op, while adding bricks, check whether the replica
   count is being increased
2. If yes, then check whether any of the bricks are down at that time
3. If yes, then fail the add-brick to avoid such data loss
4. Else continue the normal operation.

This check will work even when we convert a plain distribute volume to replicate.

Test:
1. Create a replica 2 volume
2. Kill one brick from the volume
3. Try adding a brick to the volume
4. It should fail with an "all bricks are not up" error
5. Create a distribute volume and kill one of the bricks
6. Try to convert it to a replicate volume by adding bricks.
7. This should also fail.
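
A rough sketch of test steps 2-4 above (volume, host and brick path are
hypothetical):

  gluster volume status testvol   # note the PID of one brick and kill it
  gluster volume add-brick testvol replica 3 server1:/bricks/b3
  # the add-brick is now expected to fail because a brick is down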

Change-Id: I9c8d2ab104263e4206814c94c19212ab914ed07c
BUG: 1406411
Signed-off-by: karthik-us &lt;ksubrahm@redhat.com&gt;
Reviewed-on: http://review.gluster.org/16330
Tested-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: N Balachandran &lt;nbalacha@redhat.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem:
1. Have a replica 2 volume with bricks b1 and b2
2. Before setting the layout, b1 goes down
3. Set the layout and write some data, which gets populated on b2
4. b2 goes down, then b1 comes up
5. Add another brick b3, and heal will take place from b1 to b3, which
   basically has no data
6. Write some data. Both b1 and b3 will mark b2 for pending writes
7. b1 goes down, and b2 comes up
8. b2 gets healed from b1. During the heal it removes the data which is already
   in b2, considering it stale data. This leads to data loss.

Solution:
1. In glusterd stage-op, while adding bricks, check whether the replica
   count is being increased
2. If yes, then check whether any of the bricks are down at that time
3. If yes, then fail the add-brick to avoid such data loss
4. Else continue the normal operation.

This check will work even when we convert a plain distribute volume to replicate.

Test:
1. Create a replica 2 volume
2. Kill one brick from the volume
3. Try adding a brick to the volume
4. It should fail with an "all bricks are not up" error
5. Create a distribute volume and kill one of the bricks
6. Try to convert it to a replicate volume by adding bricks.
7. This should also fail.
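
A rough sketch of test steps 2-4 above (volume, host and brick path are
hypothetical):

  gluster volume status testvol   # note the PID of one brick and kill it
  gluster volume add-brick testvol replica 3 server1:/bricks/b3
  # the add-brick is now expected to fail because a brick is down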

Change-Id: I9c8d2ab104263e4206814c94c19212ab914ed07c
BUG: 1406411
Signed-off-by: karthik-us &lt;ksubrahm@redhat.com&gt;
Reviewed-on: http://review.gluster.org/16330
Tested-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: N Balachandran &lt;nbalacha@redhat.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>tests: Fix split-brain-favorite-child-policy.t failures</title>
<updated>2017-01-03T06:13:51+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2016-12-29T12:10:00+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=76fff8cb2a164b596ca67e65c99623f5b68361fd'/>
<id>76fff8cb2a164b596ca67e65c99623f5b68361fd</id>
<content type='text'>
Problem:
In CentOS-7, the file was receiving an extra removexattr(security.ima)
FOP which changed its ctime, breaking the assumption that a particular brick
had the latest ctime based on the writevs done in the .t file.

Fix:
1. Compare the ctime of both files in the backend and pick the one with
the latest ctime for the fav-child policy. Also unmount the volume
before comparing, to avoid any further FOPS on the file that
can possibly modify the timestamps.

2. Added floating point handling in the stat function. Thanks to Pranith for
helping debug the regex.

Change-Id: I06041a0f39a29d2593b867af8685d65c7cd99150
BUG: 1408757
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Reviewed-on: http://review.gluster.org/16288
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem:
In CentOS-7, the file was receiving an extra removexattr(security.ima)
FOP which changed its ctime, breaking the assumption that a particular brick
had the latest ctime based on the writevs done in the .t file.

Fix:
1. Compare the ctime of both files in the backend and pick the one with
the latest ctime for the fav-child policy. Also unmount the volume
before comparing, to avoid any further FOPS on the file that
can possibly modify the timestamps.

2. Added floating point handling in the stat function. Thanks to Pranith for
helping debug the regex.

Change-Id: I06041a0f39a29d2593b867af8685d65c7cd99150
BUG: 1408757
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Reviewed-on: http://review.gluster.org/16288
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glfsheal: Explicitly enable self-heal xlator options</title>
<updated>2016-12-15T15:46:26+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2016-12-14T17:18:20+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=209c2d447be874047cb98d86492b03fa807d1832'/>
<id>209c2d447be874047cb98d86492b03fa807d1832</id>
<content type='text'>
Enable data, metadata and entry self-heal as xlator options so that glfs-heal.c
can heal split-brain files even if these options are disabled on the volume via
volume set commands.
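
For context, this keeps split-brain resolution commands such as the following
working even when the heal options are disabled (volume name and file path
hypothetical):

  gluster volume set testvol cluster.data-self-heal off
  gluster volume heal testvol split-brain bigger-file /file1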

Change-Id: Ic191a1017131db1ded94d97c932079d7bfd79457
BUG: 1234054
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11333
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Tested-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Enable data, metadata and entry self-heal as xlator options so that glfs-heal.c
can heal split-brain files even if these options are disabled on the volume via
volume set commands.
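
For context, this keeps split-brain resolution commands such as the following
working even when the heal options are disabled (volume name and file path
hypothetical):

  gluster volume set testvol cluster.data-self-heal off
  gluster volume heal testvol split-brain bigger-file /file1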

Change-Id: Ic191a1017131db1ded94d97c932079d7bfd79457
BUG: 1234054
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11333
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Tested-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>cluster/afr: CLI for granular entry heal enablement/disablement</title>
<updated>2016-11-28T11:56:33+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2016-09-22T11:18:54+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=6dfc90fcd36956dcc4f624b3912bfb8e9c95757f'/>
<id>6dfc90fcd36956dcc4f624b3912bfb8e9c95757f</id>
<content type='text'>
When there are already existing non-granular indices that are yet to be
healed, and the granular-entry-heal option is toggled from 'off' to 'on',
AFR self-heal, whenever it kicks in, will look for granular indices in
'entry-changes'. Because of the absence of name indices, the granular
entry healing logic will fail to heal these directories and, worse yet,
will unset pending extended attributes with the assumption that there
are no entries that need heal.

To get around this, a new CLI is introduced which invokes the glfsheal
program to figure out whether, at the time an attempt is made to enable
granular entry heal, there are pending heals on the volume OR one or
more bricks are down. If either is true, the command fails with the
appropriate error.

New CLI: gluster volume heal &lt;VOL&gt; granular-entry-heal {enable,disable}
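
Example usage (the volume name "testvol" is hypothetical):

  gluster volume heal testvol granular-entry-heal enable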

Change-Id: I1f4fe8162813b9068e198965d94169fee4adc099
BUG: 1370410
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15747
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
When there are already existing non-granular indices that are yet to be
healed, and the granular-entry-heal option is toggled from 'off' to 'on',
AFR self-heal, whenever it kicks in, will look for granular indices in
'entry-changes'. Because of the absence of name indices, the granular
entry healing logic will fail to heal these directories and, worse yet,
will unset pending extended attributes with the assumption that there
are no entries that need heal.

To get around this, a new CLI is introduced which invokes the glfsheal
program to figure out whether, at the time an attempt is made to enable
granular entry heal, there are pending heals on the volume OR one or
more bricks are down. If either is true, the command fails with the
appropriate error.

New CLI: gluster volume heal &lt;VOL&gt; granular-entry-heal {enable,disable}
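
Example usage (the volume name "testvol" is hypothetical):

  gluster volume heal testvol granular-entry-heal enable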

Change-Id: I1f4fe8162813b9068e198965d94169fee4adc099
BUG: 1370410
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15747
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>cluster/afr: Fix bugs in [f]inodelk/[f]entrylk</title>
<updated>2016-11-26T15:34:59+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2016-11-07T09:17:34+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=6be7bd936eb30aa8d2b908061f60e1534e797657'/>
<id>6be7bd936eb30aa8d2b908061f60e1534e797657</id>
<content type='text'>
Problems:
1) Inodelk is not taking quorum into account
2) finodelk, [f]entrylk are not implemented correctly
3) By default afr doesn't go for non-blocking parallel locks.

Fix:
Implemented a common framework that can be used by
[f]inodelk/[f]entrylk, with quorum taken into account.

Change-Id: I239f13875a065298630d266941df10cfa3addc85
BUG: 1369077
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15802
Tested-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Reviewed-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problems:
1) Inodelk is not taking quorum into account
2) finodelk, [f]entrylk are not implemented correctly
3) By default afr doesn't go for non-blocking parallel locks.

Fix:
Implemented a common framework that can be used by
[f]inodelk/[f]entrylk, with quorum taken into account.

Change-Id: I239f13875a065298630d266941df10cfa3addc85
BUG: 1369077
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15802
Tested-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Reviewed-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
</pre>
</div>
</content>
</entry>
</feed>
