<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/extras/ganesha, branch release-3.9</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>common-ha: unable to start HA, Connection Error</title>
<updated>2017-02-26T19:14:44+00:00</updated>
<author>
<name>Kaleb S. KEITHLEY</name>
<email>kkeithle@redhat.com</email>
</author>
<published>2017-02-20T15:41:51+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=333dd916ebe645230c302fd9a1c2645c519b1d6d'/>
<id>333dd916ebe645230c302fd9a1c2645c519b1d6d</id>
<content type='text'>
See BZ 1284404. pcsd behavior has changed and pcsd will not accept
connections until SSL certificates have fully propagated throughout
all the nodes.

The HA developers suggest a 12-second delay between the `pcs cluster
setup ...` and the `pcs cluster start --all`.
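
As an illustrative sketch only (the cluster name and node names are
placeholders, not the script's actual arguments), the sequencing
becomes:

 pcs cluster setup --name ganesha-ha node1 node2 node3 node4
 # allow pcsd's SSL certificates to propagate to all the nodes
 sleep 12
 pcs cluster start --all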

Change-Id: If94b6991a62f346dbead023c7e7f8282a995728c
BUG: 1425110
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16690
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: soumya k &lt;skoduri@redhat.com&gt;
</content>
</entry>
<entry>
<title>common-ha: All statd related files need to be owned by rpcuser</title>
<updated>2017-01-19T23:32:03+00:00</updated>
<author>
<name>Soumya Koduri</name>
<email>skoduri@redhat.com</email>
</author>
<published>2017-01-19T09:31:12+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=b17a00eb62d1120dc957e7f57ba8da3f9b31ad83'/>
<id>b17a00eb62d1120dc957e7f57ba8da3f9b31ad83</id>
<content type='text'>
The statd service is started as rpcuser by default. Hence the
files/directories it needs under '/var/lib/nfs' should be
owned by the same user.
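
For illustration only, the ownership fix amounts to something like
this (the exact set of paths the script touches may differ):

 # make statd's state directory and its contents owned by rpcuser
 chown -R rpcuser:rpcuser /var/lib/nfs/statd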

Note: This change is not in mainline, as the cluster bits
are being moved to the storehaug project:
http://review.gluster.org/#/c/16349/
http://review.gluster.org/#/c/16333/

Change-Id: I89fd06aa9700c5ce60026ac825da7c154d9f48fd
BUG: 1414665
Signed-off-by: Soumya Koduri &lt;skoduri@redhat.com&gt;
Reviewed-on: http://review.gluster.org/16433
Reviewed-by: jiffin tony Thottan &lt;jthottan@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>ganesha/scripts : Prevent removal of entries in ganesha.conf during deletion of a node</title>
<updated>2016-12-22T22:04:04+00:00</updated>
<author>
<name>Jiffin Tony Thottan</name>
<email>jthottan@redhat.com</email>
</author>
<published>2016-12-20T05:12:31+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=f3c2d3877de7c364e87c27fbbb08d4c797927444'/>
<id>f3c2d3877de7c364e87c27fbbb08d4c797927444</id>
<content type='text'>
Upstream reference:
&gt;Change-Id: Ia6c653eeb9bef7ff4107757f845218c2316db2e4
&gt;BUG: 1406249
&gt;Signed-off-by: Jiffin Tony Thottan &lt;jthottan@redhat.com&gt;
&gt;Reviewed-on: http://review.gluster.org/16209
&gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;Reviewed-by: soumya k &lt;skoduri@redhat.com&gt;
&gt;(cherry picked from commit 8b42e1b5688f8600086ecc0e33ac4abf5e7c2772)

Change-Id: Ia6c653eeb9bef7ff4107757f845218c2316db2e4
BUG: 1408111
Signed-off-by: Jiffin Tony Thottan &lt;jthottan@redhat.com&gt;
Reviewed-on: http://review.gluster.org/16268
Reviewed-by: soumya k &lt;skoduri@redhat.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>common-ha: Correct the VIP assigned to the new node added</title>
<updated>2016-12-22T15:50:02+00:00</updated>
<author>
<name>Soumya Koduri</name>
<email>skoduri@redhat.com</email>
</author>
<published>2016-12-20T12:52:02+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=5a65416b14da7b06aff9319f39c5c4f4e7c884fb'/>
<id>5a65416b14da7b06aff9319f39c5c4f4e7c884fb</id>
<content type='text'>
There is a regression introduced by patch #16115: an incorrect
VIP gets assigned to the new node being added to the cluster.
This patch fixes that.

This is a backport of the below mainline patch:

http://review.gluster.org/16213

&gt; Change-Id: I468c7d16bf7e4efa04692db83b1c5ee58fbb7d5f
&gt; BUG: 1406410
&gt; Signed-off-by: Soumya Koduri &lt;skoduri@redhat.com&gt;

Change-Id: Iccac83720280d823b36c1e47194b2e17226c91db
BUG: 1408110
Signed-off-by: Soumya Koduri &lt;skoduri@redhat.com&gt;
Reviewed-on: http://review.gluster.org/16269
Reviewed-by: jiffin tony Thottan &lt;jthottan@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>common-ha: add node create new node dirs in shared storage</title>
<updated>2016-12-22T14:20:19+00:00</updated>
<author>
<name>Kaleb S. KEITHLEY</name>
<email>kkeithle@redhat.com</email>
</author>
<published>2016-12-20T15:56:35+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=6e8ac80b29b37c215f18f646ee4a3f3fffccb3bc'/>
<id>6e8ac80b29b37c215f18f646ee4a3f3fffccb3bc</id>
<content type='text'>
When adding a node to the ganesha HA cluster, create the directory
tree in shared storage for the added node and create sets of symlinks
to match what is/was created for the other nodes. I.e., in a
four-node cluster the new node needs a set of links to the four
existing nodes:
 /run/gluster/shared/nfs-ganesha/$new/nfs/{ganesha,statd}/$e1 -&gt; e1
 /run/gluster/shared/nfs-ganesha/$new/nfs/{ganesha,statd}/$e2 -&gt; e2
 /run/gluster/shared/nfs-ganesha/$new/nfs/{ganesha,statd}/$e3 -&gt; e3
 /run/gluster/shared/nfs-ganesha/$new/nfs/{ganesha,statd}/$e4 -&gt; e4
and all the existing nodes need links added for the new node:
 /run/gluster/shared/nfs-ganesha/$e1/nfs/{ganesha,statd}/$new -&gt; new
 /run/gluster/shared/nfs-ganesha/$e2/nfs/{ganesha,statd}/$new -&gt; new
 /run/gluster/shared/nfs-ganesha/$e3/nfs/{ganesha,statd}/$new -&gt; new
 /run/gluster/shared/nfs-ganesha/$e4/nfs/{ganesha,statd}/$new -&gt; new

Likewise when deleting, remove the dir and symlinks.
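
A rough sketch of the add-node bookkeeping (using the placeholder
names from the listing above; the real script derives the node list
from the HA config):

 for dir in ganesha statd; do
     mkdir -p /run/gluster/shared/nfs-ganesha/$new/nfs/$dir
     for node in $e1 $e2 $e3 $e4; do
         # the new node gets a link for each existing node ...
         ln -s $node /run/gluster/shared/nfs-ganesha/$new/nfs/$dir/$node
         # ... and each existing node gets a link for the new node
         ln -s $new /run/gluster/shared/nfs-ganesha/$node/nfs/$dir/$new
     done
 done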

original change http://review.gluster.org/16036
original change release-3.9 http://review.gluster.org/16170
master change http://review.gluster.org/16216
master BZ 1400613

Change-Id: I52839046745728d06ab5a07f38081c032093bff6
BUG: 1405576
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-on: http://review.gluster.org/16217
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>common-ha: add node create new node dirs in shared storage</title>
<updated>2016-12-18T12:44:01+00:00</updated>
<author>
<name>Kaleb S. KEITHLEY</name>
<email>kkeithle@redhat.com</email>
</author>
<published>2016-12-16T18:21:03+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=88a096ed0952ee2ae8e8684c99aadd2c1f3f9e5e'/>
<id>88a096ed0952ee2ae8e8684c99aadd2c1f3f9e5e</id>
<content type='text'>
When adding a node to the ganesha HA cluster, create the directory
tree in shared storage for the added node and create sets of symlinks
to match what is/was created for the other nodes. I.e., in a
four-node cluster the new node needs a set of links to the four
existing nodes:
 /run/gluster/shared/nfs-ganesha/$new/nfs/{ganesha,statd}/$e1 -&gt; e1
 /run/gluster/shared/nfs-ganesha/$new/nfs/{ganesha,statd}/$e2 -&gt; e2
 /run/gluster/shared/nfs-ganesha/$new/nfs/{ganesha,statd}/$e3 -&gt; e3
 /run/gluster/shared/nfs-ganesha/$new/nfs/{ganesha,statd}/$e4 -&gt; e4
and all the existing nodes need links added for the new node:
 /run/gluster/shared/nfs-ganesha/$e1/nfs/{ganesha,statd}/$new -&gt; new
 /run/gluster/shared/nfs-ganesha/$e2/nfs/{ganesha,statd}/$new -&gt; new
 /run/gluster/shared/nfs-ganesha/$e3/nfs/{ganesha,statd}/$new -&gt; new
 /run/gluster/shared/nfs-ganesha/$e4/nfs/{ganesha,statd}/$new -&gt; new

Likewise when deleting, remove the dir and symlinks.
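
A rough sketch of the delete-side cleanup (placeholder names as in
the listing above):

 for node in $e1 $e2 $e3 $e4; do
     rm -f /run/gluster/shared/nfs-ganesha/$node/nfs/{ganesha,statd}/$new
 done
 rm -rf /run/gluster/shared/nfs-ganesha/$new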

master BZ: 1400613

Change-Id: Id2f78f70946f29c3503e1e6db141b66cb431e0ea
BUG: 1405576
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-on: http://review.gluster.org/16170
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>common-ha: explicitly set udpu transport for corosync</title>
<updated>2016-12-15T15:54:24+00:00</updated>
<author>
<name>Kaleb S. KEITHLEY</name>
<email>kkeithle@redhat.com</email>
</author>
<published>2016-12-15T11:16:49+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=fb937945ab344e7ba63fb04aaa9a0c71ce80f261'/>
<id>fb937945ab344e7ba63fb04aaa9a0c71ce80f261</id>
<content type='text'>
On RHEL7 corosync uses udpu (UDP unicast) by default. On RHEL6 the
default is (now) UDP multicast. In network environments that don't
support UDP multicast this causes the ever-growing lists of
[TOTEM ] Retransmit errors.

Always specifying --transport udpu is thus a no-op on RHEL7.
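
As a sketch, the setup invocation with the transport made explicit
(the cluster name and node list are placeholders):

 pcs cluster setup --name ganesha-ha --transport udpu node1 node2 node3 node4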

Using the same transport on both RHEL6 and RHEL7 may (or may not)
give similar behavior and performance; it's hard to say.

It remains a mystery why things have always worked on RHEL6 prior to
now. Further investigation is required to uncover why this is the
case.

main http://review.gluster.org/16122
main BZ 1404410

Change-Id: I4d0de97fe4425c47f249beaaf51aeca3e91731fa
BUG: 1405002
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-on: http://review.gluster.org/16139
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: soumya k &lt;skoduri@redhat.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>common-ha: Create portblock RA as part of add/delete-node</title>
<updated>2016-12-14T09:35:22+00:00</updated>
<author>
<name>Soumya Koduri</name>
<email>skoduri@redhat.com</email>
</author>
<published>2016-12-09T06:48:28+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=82694e7d8a8b63483b43afa4fa283f66fab90672'/>
<id>82694e7d8a8b63483b43afa4fa283f66fab90672</id>
<content type='text'>
When a node is added to or deleted from an existing nfs-ganesha
cluster, we need to create or clean up the portblock RA as well.
This patch addresses that. We also need to adjust the quorum
policy as the number of nodes in the cluster increases or decreases.
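
For illustration, creating one such block RA might look like this
(the resource name, VIP, and port are placeholders, not the script's
exact values):

 pcs resource create nfs_block ocf:heartbeat:portblock protocol=tcp \
     portno=2049 action=block ip=10.0.0.100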

&gt;Change-Id: I31a896715b9b7fc931009723d1570bf7aa4da9b6
&gt;BUG: 1403130
&gt;Signed-off-by: Soumya Koduri &lt;skoduri@redhat.com&gt;
&gt;Reviewed-on: http://review.gluster.org/16089
&gt;NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;Reviewed-by: jiffin tony Thottan &lt;jthottan@redhat.com&gt;
&gt;Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
&gt;(cherry picked from commit 885ecce6e2df6464b388f42c91211ed31e17654d)

Change-Id: I4572ddb8dee2596d81c33c3685c808bb9bf4d38f
BUG: 1404133
Signed-off-by: Soumya Koduri &lt;skoduri@redhat.com&gt;
Reviewed-on: http://review.gluster.org/16115
Reviewed-by: jiffin tony Thottan &lt;jthottan@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
</content>
</entry>
<entry>
<title>common-ha: IPaddr RA is not stopped when pacemaker quorum is lost</title>
<updated>2016-12-02T09:53:51+00:00</updated>
<author>
<name>Kaleb S. KEITHLEY</name>
<email>kkeithle@redhat.com</email>
</author>
<published>2016-12-01T14:31:38+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=07179c8188152d53e6be2c892bd24f3d8e6b70ea'/>
<id>07179c8188152d53e6be2c892bd24f3d8e6b70ea</id>
<content type='text'>
Ken Gaillot writes:
The other is pacemaker's no-quorum-policy cluster property. The
default (which has not changed) is "stop" (stop all resources).
Other values are "ignore" (act as if quorum was not lost),
"freeze" (continue running existing resources but don't recover
resources from unseen nodes) or "suicide" (shut down).

But on my four-node cluster:
% pcs property show no-quorum-policy
Cluster Properties:
%

i.e. shows nothing.

But:
% pcs property list --all
Cluster Properties:
...
no-quorum-policy: stop
...
%

So pcs seems to think it knows about it.

and then
% pcs property set no-quorum-policy=stop
% pcs property show no-quorum-policy
Cluster Properties:
 no-quorum-policy: stop
%

Which looks rather inconsistent. So we will try explicitly
setting it to "stop" when there are three or more nodes.
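
A minimal sketch of that (the node-count variable name is
illustrative):

 if [ ${num_servers} -ge 3 ]; then
     pcs property set no-quorum-policy=stop
 fi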

master bug 1400237
master patch http://review.gluster.org/#/c/15981/

Change-Id: I47fc7ee84fcd6ad52ccb776913511978a8d517b4
BUG: 1400572
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15996
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>common-ha: add cluster HA status to --status output for gdeploy</title>
<updated>2016-12-01T20:40:24+00:00</updated>
<author>
<name>Kaleb S. KEITHLEY</name>
<email>kkeithle@redhat.com</email>
</author>
<published>2016-11-18T18:07:50+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=5fbab30470004280627777c176326d358d23833c'/>
<id>5fbab30470004280627777c176326d358d23833c</id>
<content type='text'>
gdeploy desires a one-liner "health" assessment.

If all the VIP and port block/unblock RAs are located on their
preferred nodes and 'Started', then the cluster is deemed to be
good (healthy).

N.B. status originally only checked the "online" nodes obtained
from `pcs status`, but we really want to consider all the configured
nodes, whether they are online or not.

Also, one `pcs status` is enough.
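
A much-simplified sketch of the idea (the output strings and the
test are illustrative; the real check is stricter about
preferred-node placement):

 status=$(pcs status)
 # healthy only if no RA reports Stopped; one pcs status call suffices
 if echo "${status}" | grep -q Stopped; then
     echo "Cluster HA Status: BAD"
 else
     echo "Cluster HA Status: HEALTHY"
 fi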

master bug 1395648
master http://review.gluster.org/15882
Change-Id: Id0e0380b6982e23763edeb0488843b5363e370b8
BUG: 1395649
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-by: Arthy Loganathan &lt;aloganat@redhat.com&gt;
Reviewed-by: soumya k &lt;skoduri@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15991
Tested-by: soumya k &lt;skoduri@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
</feed>
