<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/cluster, branch v3.7.16</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>cluster/ec: Use locks for opendir</title>
<updated>2016-09-15T19:13:05+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2016-08-24T15:31:05+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=36692a522ff99fe4c6127c359f4af1cc9aad8de8'/>
<id>36692a522ff99fe4c6127c359f4af1cc9aad8de8</id>
<content type='text'>
Problem:
In some cases we see that readdir keeps winding to the brick that doesn't have
any blocked locks, i.e. the first brick. This leads the client to assume that
there are no blocking locks on the inode, so it won't give away the lock. Other
clients end up blocked on the lock as if the command had hung.

Fix:
The proper way to fix this issue is to use the infrastructure present in
http://review.gluster.org/14736. This is a stop-gap fix where we start taking
inodelks in opendir, which goes to all the bricks; this will detect whether
there is any contention.
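
A minimal standalone sketch of the idea (toy code, not the actual ec
xlator; try_inodelk and the brick count are made up for illustration):
because opendir now sends an inodelk to every brick, a brick holding
blocked locks is always visited, so contention is detected even when
readdir alone would only ever hit the first brick.

    #include &lt;errno.h&gt;
    #include &lt;stdio.h&gt;

    #define NUM_BRICKS 6

    /* Stand-in for a non-blocking inodelk sent to one brick: 0 when
     * granted, EAGAIN when another client holds a conflicting lock. */
    static int
    try_inodelk(int brick)
    {
        return (brick == 3) ? EAGAIN : 0; /* pretend brick 3 is contended */
    }

    /* The fix, modelled: opendir takes inodelks on *all* bricks, so a
     * blocked lock anywhere in the subvolume is noticed. */
    static int
    opendir_detect_contention(void)
    {
        int contended = 0;
        int brick;

        for (brick = 0; brick &lt; NUM_BRICKS; brick++)
            if (try_inodelk(brick) == EAGAIN)
                contended = 1;
        return contended;
    }

    int
    main(void)
    {
        if (opendir_detect_contention())
            printf("contention seen: yield the lock to other clients\n");
        return 0;
    }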

cherry picked from commit f013335400d033a9677797377b90b968803135f4:

&gt;BUG: 1346719
&gt;Change-Id: I91109107a26f6535b945ac476338e9f21dc31eb9
&gt;Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
&gt;Reviewed-on: http://review.gluster.org/15309
&gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt;Reviewed-by: Ashish Pandey &lt;aspandey@redhat.com&gt;

Change-Id: I91109107a26f6535b945ac476338e9f21dc31eb9
BUG: 1373392
Signed-off-by: Ashish Pandey &lt;aspandey@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15406
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
</content>
</entry>
<entry>
<title>dht: "replica.split-brain-status" attribute value is not correct</title>
<updated>2016-09-13T04:49:55+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawa@redhat.com</email>
</author>
<published>2016-08-19T05:03:50+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=9723d353c6ca41aec620da21563a6ade79c71c59'/>
<id>9723d353c6ca41aec620da21563a6ade79c71c59</id>
<content type='text'>
Problem: In a distributed-replicate volume, the value of the
         "replica.split-brain-status" attribute does not report the
         split-brain condition even though the directory is in split-brain.
         If a directory is in split-brain on multiple replica pairs,
         it does not show the full list of replica pairs.

Solution: Update the dht_aggregate code to aggregate the xattr
          value in this specific condition.

Fix:      1) function getChoices returns the choices from the
             split-brain status string.
          2) function add_opt adds the choices to a local buffer
             for storing in the dictionary.
          3) For the key "replica.split-brain-status", function
             dht_aggregate calls dht_aggregate_split_brain_xattr to
             prepare the list (a toy model follows below).
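
A toy standalone model of the aggregation step (illustrative only; the
real dht_aggregate_split_brain_xattr works on dict_t values, and the
status-string format below is assumed): append each replica pair's
choices instead of letting the last response overwrite earlier ones.

    #include &lt;stdio.h&gt;
    #include &lt;string.h&gt;

    int
    main(void)
    {
        /* one hypothetical status string per replicate subvolume */
        const char *answers[] = {
            "metadata-split-brain:yes Choices:client-0,client-1",
            "metadata-split-brain:yes Choices:client-2,client-3",
        };
        char   aggregate[512] = "";
        size_t i;

        for (i = 0; i &lt; sizeof(answers) / sizeof(answers[0]); i++) {
            if (i)
                strncat(aggregate, " ",
                        sizeof(aggregate) - strlen(aggregate) - 1);
            strncat(aggregate, answers[i],
                    sizeof(aggregate) - strlen(aggregate) - 1);
        }
        printf("replica.split-brain-status = %s\n", aggregate);
        return 0;
    }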

Test:     To verify the patch, the following steps were performed:
          1) Create a distributed-replicate volume and a mount point
          2) Stop the heal daemon
          3) Create files and directories on the mount point
             mkdir test{1..5}; touch tmp{1..5}
          4) Kill the brick process on one of the replica sets
             pkill -9 glusterfsd
          5) Change permissions of the dirs on the mount point
             chmod 755 test{1..5}
          6) Restart the brick process on the node with the force option
          7) Kill the brick process on the other node in the same replica set
          8) Change permissions of the dirs again on the mount point
             chmod 766 test{1..5}
          9) Repeat steps 4-8 on the other replica set as well
          10) Checking the heal status on the server now shows the dirs
              are in split-brain on all replica sets
          11) Checking the replica.split-brain-status attr on the mount
              point shows the wrong split-brain status
          12) After applying the patch, the attribute shows the correct value

&gt; Change-Id: Icdfd72005a4aa82337c342762775a3d1761bbe4a
&gt; Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;
&gt; Reviewed-on: http://review.gluster.org/15201
&gt; Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt; CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
&gt; (cherry picked from commit c4e9ec653c946002ab6d4c71ee8e6df056438a04)

Change-Id: Ia5234e8a2291a7e8a7211c82368f4df1c99fa099
Backport of commit c4e9ec653c946002ab6d4c71ee8e6df056438a04
BUG: 1375099
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15466
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/afr: Prevent split-brain when bricks are brought off and on in cyclic order</title>
<updated>2016-08-22T10:05:08+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2016-07-28T15:59:59+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=febaa1e46d3a91a29c4786a17abf29cfc7178254'/>
<id>febaa1e46d3a91a29c4786a17abf29cfc7178254</id>
<content type='text'>
        Backport of: http://review.gluster.org/15080

When the bricks are brought offline and then online in cyclic
order while writes are in progress on a file, thanks to inode
refresh in write txns, AFR will mostly fail the write attempt
when the only good copy is offline. However, there is still a
remote possibility that the file will run into split-brain if
the brick that has the lone good copy goes offline *after* the
inode refresh but *before* the write txn completes (I call it
in-flight split-brain in the patch for ease of reference),
requiring intervention from admin to resolve the split-brain
before the IO can resume normally on the file. To get around this,
the patch does the following things:
i) retains the dirty xattrs on the file
ii) avoids marking the last of the good copies as bad (or accused)
    in case it is the one to go down during the course of a write.
iii) fails that particular write with the appropriate errno.

This way, we still have one good copy left despite the split-brain
situation; when it is back online, it will be chosen as the source for
the heal.
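
A toy model of the added decision (illustrative names only, not AFR's
real transaction code): when the write fails on the lone good copy,
fail the fop with EIO instead of accusing that copy, so it stays
eligible as the heal source.

    #include &lt;errno.h&gt;
    #include &lt;stdio.h&gt;

    #define SUBVOLS 2

    /* good[i]: copy i was good before the write; failed[i]: the write
     * did not reach copy i. Returns -EIO when marking failed subvols
     * as bad would leave no good copy behind. */
    static int
    post_op_decision(const int good[SUBVOLS], const int failed[SUBVOLS])
    {
        int surviving = 0;
        int i;

        for (i = 0; i &lt; SUBVOLS; i++)
            if (good[i] &amp;&amp; !failed[i])
                surviving++;
        if (surviving == 0)
            return -EIO; /* keep dirty xattrs, don't accuse, fail write */
        return 0;        /* safe to mark the failed subvols as bad */
    }

    int
    main(void)
    {
        int good[SUBVOLS]   = {0, 1}; /* only the second copy is good */
        int failed[SUBVOLS] = {0, 1}; /* ...and the write just lost it */

        printf("write returns %d\n", post_op_decision(good, failed));
        return 0;
    }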

Change-Id: I7c13c6ddd5b8fe88b0f2684e8ce5f4a9c3a24a08
BUG: 1367270
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15222
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Oleksandr Natalenko &lt;oleksandr@natalenko.name&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/afr: copy loc before passing to syncop</title>
<updated>2016-08-17T11:44:11+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2016-08-02T09:49:00+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=318aacabbc482bcc2e1686988a77ad0bc054837e'/>
<id>318aacabbc482bcc2e1686988a77ad0bc054837e</id>
<content type='text'>
Problem:
When io-threads is enabled on the client side, io-threads destroys the
call-stub in which the loc is stored as soon as the c-stack unwinds.
Because AFR creates the syncop with the address of the loc passed in
setxattr, by the time the syncop tries to access it, io-threads will have
already freed the call-stub. This leads to a crash.

Fix:
Copy the loc to frame-&gt;local and use its address.
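
A minimal standalone sketch of the fix (the real code uses AFR's
local_t and glusterfs' loc_copy(); the structs below are simplified
stand-ins): deep-copy the caller's loc into memory owned by our frame
so it outlives the call-stub that io-threads destroys.

    #include &lt;stdio.h&gt;
    #include &lt;stdlib.h&gt;
    #include &lt;string.h&gt;

    struct loc   { char *path; };     /* simplified loc_t */
    struct local { struct loc loc; }; /* simplified frame-&gt;local */

    /* modelled on loc_copy(): duplicate the fields we need to own */
    static int
    loc_copy_model(struct loc *dst, const struct loc *src)
    {
        dst-&gt;path = strdup(src-&gt;path);
        return dst-&gt;path ? 0 : -1;
    }

    int
    main(void)
    {
        struct loc    caller = { .path = strdup("/dir/file") };
        struct local *local  = calloc(1, sizeof(*local));

        if (!local)
            return 1;
        loc_copy_model(&amp;local-&gt;loc, &amp;caller);
        free(caller.path); /* io-threads destroys the call-stub here */

        /* ...but the syncop still sees valid memory */
        printf("syncop uses: %s\n", local-&gt;loc.path);
        free(local-&gt;loc.path);
        free(local);
        return 0;
    }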

&gt; Reviewed-on: http://review.gluster.org/15070
&gt; Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Ravishankar N &lt;ravishankar@redhat.com&gt;

BUG: 1367305
Change-Id: I16987e491e24b0b4e3d868a6968e802e47c77f7a
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
Signed-off-by: Oleksandr Natalenko &lt;oleksandr@natalenko.name&gt;
Reviewed-on: http://review.gluster.org/15168
Reviewed-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>cluster/afr: Bug fixes in txn codepath</title>
<updated>2016-08-17T10:22:53+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2016-08-05T06:48:05+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=e65e066c4f993aac626112e718ee66d35d15c6a8'/>
<id>e65e066c4f993aac626112e718ee66d35d15c6a8</id>
<content type='text'>
        Backport of: http://review.gluster.org/15145

AFR sets the transaction.pre_op[] array even before actually doing the
pre-op on disk. Therefore, AFR must consider not only the pre_op[] array
but also the failed_subvols[] information before setting the pre_op_done[]
flag. This patch fixes that.
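
A toy rendering of the corrected check (array names mirror the commit
message; this is not the actual AFR code): a subvol's pre-op counts as
done only if it was issued and did not fail.

    #include &lt;stdio.h&gt;

    #define SUBVOLS 3

    int
    main(void)
    {
        int pre_op[SUBVOLS]         = {1, 1, 1}; /* set optimistically */
        int failed_subvols[SUBVOLS] = {0, 1, 0}; /* pre-op lost brick 1 */
        int pre_op_done[SUBVOLS];
        int i;

        for (i = 0; i &lt; SUBVOLS; i++)
            pre_op_done[i] = pre_op[i] &amp;&amp; !failed_subvols[i];

        for (i = 0; i &lt; SUBVOLS; i++)
            printf("subvol %d: pre_op_done=%d\n", i, pre_op_done[i]);
        return 0;
    }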

Change-Id: I8163256a6de254be43a7a526c6d2f9dc30e0e1df
BUG: 1367270
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15162
Reviewed-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Anuradha Talur &lt;atalur@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/dht: initialize cbk before attempting inode-link</title>
<updated>2016-08-17T06:28:07+00:00</updated>
<author>
<name>Raghavendra G</name>
<email>rgowdapp@redhat.com</email>
</author>
<published>2016-06-13T06:56:24+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=2440dace036a4955ca60d833b2ae514bab679126'/>
<id>2440dace036a4955ca60d833b2ae514bab679126</id>
<content type='text'>
Otherwise, inode-link failures in the selfheal codepath will result in a
crash.

This regression was introduced in master as the fix to 1334164. However,
that patch never made it into 3.7. Hence, in essence, this patch is the
3.7 version of the fix to 1334164, minus the regression.
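
A toy illustration of the crash pattern being avoided (simplified
stand-ins, not the real DHT code): if the cbk state is filled in only
on the success path, an inode-link failure leaves the error path
reading uninitialized memory; initializing it up front makes that
path safe.

    #include &lt;stdio.h&gt;
    #include &lt;string.h&gt;

    struct cbk { int op_ret; int op_errno; void *inode; };

    /* pretend inode linking failed, as in the selfheal codepath */
    static int
    inode_link_model(void)
    {
        return -1;
    }

    int
    main(void)
    {
        struct cbk cbk;

        memset(&amp;cbk, 0, sizeof(cbk)); /* the fix: initialize first */
        cbk.op_ret = -1;

        if (inode_link_model() == 0)
            cbk.op_ret = 0;           /* success path fills it in */

        /* the failure path now reads defined values, not garbage */
        printf("op_ret=%d inode=%p\n", cbk.op_ret, cbk.inode);
        return 0;
    }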

&gt; Change-Id: I9061629ae9d1eb1ac945af5f448d0d8b397a5022
&gt; BUG: 1345748
&gt; Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
&gt; Reviewed-on: http://review.gluster.org/14707
&gt; Reviewed-by: N Balachandran &lt;nbalacha@redhat.com&gt;
&gt; Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Poornima G &lt;pgurusid@redhat.com&gt;
&gt; Reviewed-by: Susant Palai &lt;spalai@redhat.com&gt;
&gt; CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;

Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Change-Id: I9061629ae9d1eb1ac945af5f448d0d8b397a6022
BUG: 1366483
Reviewed-on: http://review.gluster.org/15163
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>dht/rebalance: allocate migrator thread pool dynamically</title>
<updated>2016-08-05T10:55:39+00:00</updated>
<author>
<name>Susant Palai</name>
<email>spalai@redhat.com</email>
</author>
<published>2016-08-04T07:01:24+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=59378892641a10ba0268a044310264e52afe8ea0'/>
<id>59378892641a10ba0268a044310264e52afe8ea0</id>
<content type='text'>
Problem: The maximum number of migrator threads created was statically
set to 40, while the number of these threads created in rebalance depends
on the number of cores the user has. If the number of cores exceeds 40, a
crash or memory corruption can be seen.

Fix: Make the migrator thread pool dynamic.
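
A minimal standalone sketch of the approach (illustrative; the real
pool lives in DHT's rebalance code and uses gluster's allocators):
size the thread array from the runtime core count instead of a fixed
40-slot array that overflows on larger machines.

    #include &lt;pthread.h&gt;
    #include &lt;stdio.h&gt;
    #include &lt;stdlib.h&gt;
    #include &lt;unistd.h&gt;

    int
    main(void)
    {
        long       ncores = sysconf(_SC_NPROCESSORS_ONLN);
        pthread_t *tids;

        if (ncores &lt; 1)
            ncores = 1;

        /* one slot per migrator thread, sized at runtime rather than
         * a fixed pthread_t tids[40] */
        tids = calloc((size_t)ncores, sizeof(*tids));
        if (!tids)
            return 1;

        printf("pool sized for %ld migrator threads\n", ncores);
        free(tids);
        return 0;
    }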

&gt; Change-Id: Ifbdac8a1a396363dd75e2f6bcb454070cfdbf839
&gt; BUG: 1362070
&gt; Reviewed-on: http://review.gluster.org/15000
&gt; Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt; CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
(cherry picked from commit b8e8bfc7e4d3eaf76bb637221bc6392ec10ca54b)

Change-Id: Ifbdac8a1a396363dd75e2f6bcb454070cfdbf839
BUG: 1362070
Signed-off-by: Susant Palai &lt;spalai@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15062
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/ec: Unlock stale locks when inodelk/entrylk/lk fails</title>
<updated>2016-07-30T01:04:22+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2016-06-11T13:13:42+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=4a49d3116b00811b7a4ba98f51bb27da0fa63d5c'/>
<id>4a49d3116b00811b7a4ba98f51bb27da0fa63d5c</id>
<content type='text'>
Thanks to Rafi for hinting a while back that he had once seen this kind
of problem. I didn't think the theory was valid; I could have caught this
earlier if I had tested it.

 &gt;Change-Id: Iac6ffcdba2950aa6f8cf94f8994adeed6e6a9c9b
 &gt;BUG: 1344836
 &gt;Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
 &gt;Reviewed-on: http://review.gluster.org/14703
 &gt;Reviewed-by: Xavier Hernandez &lt;xhernandez@datalab.es&gt;
 &gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
 &gt;Tested-by: mohammed rafi  kc &lt;rkavunga@redhat.com&gt;
 &gt;NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
 &gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;

BUG: 1361402
Change-Id: If9ccf0b3db7159b87ddcdc7b20e81cde8c3c76f0
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15040
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Xavier Hernandez &lt;xhernandez@datalab.es&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>afr: some coverity fixes</title>
<updated>2016-07-28T13:54:36+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2016-07-12T04:37:48+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=53b54f1a03a5fb93266f66453d228258598f8ef6'/>
<id>53b54f1a03a5fb93266f66453d228258598f8ef6</id>
<content type='text'>
Backport of http://review.gluster.org/#/c/14895/

Thanks to Krutika for a cleaner way to track inode refs in
afr_set_split_brain_choice().

Change-Id: I2d968d05b815ad764b7e3f8aa9ad95a792b3c1df
BUG: 1360549
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15017
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/ec: Handle absence of keys in some callback dict</title>
<updated>2016-07-27T07:00:04+00:00</updated>
<author>
<name>Ashish Pandey</name>
<email>aspandey@redhat.com</email>
</author>
<published>2016-06-17T12:22:56+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=1e3a8f47cd88c39c41519d143b001d45387eb4b8'/>
<id>1e3a8f47cd88c39c41519d143b001d45387eb4b8</id>
<content type='text'>
Problem: This issue arises when we do a rolling update
from 3.7.5 to 3.7.9.
For a 4+2 volume running 3.7.5, if we update 2 nodes
and, after heal completion, kill 2 older nodes, this
problem can be seen. After the update and the killing of
bricks, 2 nodes will return the inodelk count key in the
dict while the other 2 nodes will not have it in theirs.
This is also true for get-link-count.
During the dictionary match in ec_dict_compare, this will
lead to a mismatch of answers, and the file operation
on the mount point will fail with an IO error.

Solution:
Don't match the inode, entry and link count keys while
comparing two dictionaries. However, while combining the
data in ec_dict_combine, go through all the dictionaries
and select the maximum value received in the different
dicts for these keys.
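
A toy model of the combine step (illustrative; the real code operates
on dict_t in ec_dict_compare/ec_dict_combine): treat an absent count
key as contributing nothing and keep the maximum seen across bricks,
so mixed-version answers no longer fail the match.

    #include &lt;stdio.h&gt;

    #define BRICKS 4

    int
    main(void)
    {
        /* -1 models "key absent from this brick's answer", as with
         * not-yet-updated 3.7.5 bricks */
        int inodelk_count[BRICKS] = {1, 1, -1, -1};
        int combined = 0;
        int i;

        for (i = 0; i &lt; BRICKS; i++)
            if (inodelk_count[i] &gt; combined)
                combined = inodelk_count[i];

        printf("combined inodelk-count = %d\n", combined);
        return 0;
    }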

master-
http://review.gluster.org/#/c/14761/

Change-Id: I33546e3619fe8f909286ee48fb0df2009cd3d22f
BUG: 1360152
Signed-off-by: Ashish Pandey &lt;aspandey@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14761
Reviewed-by: Xavier Hernandez &lt;xhernandez@datalab.es&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Signed-off-by: Ashish Pandey &lt;aspandey@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15012
</content>
</entry>
</feed>
