<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/mgmt/glusterd/src/glusterd-sm.c, branch v3.8.12</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>glusterd: Add a new event to handle multi-net probes</title>
<updated>2016-03-29T04:43:35+00:00</updated>
<author>
<name>Kaushal M</name>
<email>kaushal@redhat.com</email>
</author>
<published>2016-03-22T11:02:32+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=d0cb21b5e3dd90a851e43bcfac9b1b2edf3db9c2'/>
<id>d0cb21b5e3dd90a851e43bcfac9b1b2edf3db9c2</id>
<content type='text'>
This allows GlusterD to send updates to all other nodes when attaching
new addresses using multi-net peer probe.

Change-Id: I62846be750ab3721912e7b49656594347ea61723
BUG: 1320458
Signed-off-by: Kaushal M &lt;kaushal@redhat.com&gt;
Reviewed-on: http://review.gluster.org/13817
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
This allows GlusterD to send updates to all other nodes when attaching
new addresses using multi-net peer probe.

Change-Id: I62846be750ab3721912e7b49656594347ea61723
BUG: 1320458
Signed-off-by: Kaushal M &lt;kaushal@redhat.com&gt;
Reviewed-on: http://review.gluster.org/13817
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: fix a few memory leaks in glusterd</title>
<updated>2016-02-24T04:19:05+00:00</updated>
<author>
<name>Gaurav Kumar Garg</name>
<email>garg.gaurav52@gmail.com</email>
</author>
<published>2015-12-09T14:42:17+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=e38bf1bdeda3c7a89be3193ad62a72b9139358dd'/>
<id>e38bf1bdeda3c7a89be3193ad62a72b9139358dd</id>
<content type='text'>
The current glusterd code base has a memory leak: the memory
allocated by the dict_allocate_and_serialize function in
"gd_syncop_mgmt_v3_lock" and "gd_syncop_mgmt_v3_unlock"
is not freed before those functions exit.

The fix is to free the memory once those functions are done with it.

Thanks to Carlos and Roman for finding the issue and the fix.
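
A minimal sketch of the leak-and-fix pattern, assuming glusterfs's
GF_FREE allocator macro; apart from dict_allocate_and_serialize and
GF_FREE, the names here are illustrative rather than the actual patch:

  char  *dict_buf = NULL;
  u_int  dict_len = 0;
  int    ret      = -1;

  /* dict_allocate_and_serialize allocates dict_buf on success */
  ret = dict_allocate_and_serialize (dict, &amp;dict_buf, &amp;dict_len);
  if (ret)
          goto out;

  /* ... dict_buf is handed to the RPC layer as the request payload,
   * but was previously never released ... */

out:
  /* the fix: free the serialized buffer once it is no longer needed */
  GF_FREE (dict_buf);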

Change-Id: Id67aa794c84969830ca7ea8c2374f80c64d7a639
BUG: 1287517
Signed-off-by: Gaurav Kumar Garg &lt;ggarg@redhat.com&gt;
Signed-off-by: Carlos Chinea &lt;carlos.chinea@nokia.com&gt;
Signed-off-by: Roman Tereshonkov &lt;roman.tereshonkov@nokia.com&gt;
Reviewed-on: http://review.gluster.org/12927
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The current glusterd code base has a memory leak: the memory
allocated by the dict_allocate_and_serialize function in
"gd_syncop_mgmt_v3_lock" and "gd_syncop_mgmt_v3_unlock"
is not freed before those functions exit.

The fix is to free the memory once those functions are done with it.

Thanks to Carlos and Roman for finding the issue and the fix.
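
A minimal sketch of the leak-and-fix pattern, assuming glusterfs's
GF_FREE allocator macro; apart from dict_allocate_and_serialize and
GF_FREE, the names here are illustrative rather than the actual patch:

  char  *dict_buf = NULL;
  u_int  dict_len = 0;
  int    ret      = -1;

  /* dict_allocate_and_serialize allocates dict_buf on success */
  ret = dict_allocate_and_serialize (dict, &amp;dict_buf, &amp;dict_len);
  if (ret)
          goto out;

  /* ... dict_buf is handed to the RPC layer as the request payload,
   * but was previously never released ... */

out:
  /* the fix: free the serialized buffer once it is no longer needed */
  GF_FREE (dict_buf);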

Change-Id: Id67aa794c84969830ca7ea8c2374f80c64d7a639
BUG: 1287517
Signed-off-by: Gaurav Kumar Garg &lt;ggarg@redhat.com&gt;
Signed-off-by: Carlos Chinea &lt;carlos.chinea@nokia.com&gt;
Signed-off-by: Roman Tereshonkov &lt;roman.tereshonkov@nokia.com&gt;
Reviewed-on: http://review.gluster.org/12927
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: reduce friend update flood</title>
<updated>2015-12-23T03:52:19+00:00</updated>
<author>
<name>Kaushal M</name>
<email>kaushal@redhat.com</email>
</author>
<published>2015-12-17T05:43:36+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=f624abd6885752eeaa8d07101ff00f52af48de26'/>
<id>f624abd6885752eeaa8d07101ff00f52af48de26</id>
<content type='text'>
When in the befriended state, glusterd would broadcast friend updates
to all other peers whenever an ACC or LOCAL_ACC event occurred.

When a downed glusterd came back up and re-established connections,
this led to a flood of friend updates on the order of N^2 (where N is
the number of peers in the cluster).

In larger clusters this was problematic, and could lead to very long
times for the cluster to settle down when a peer came back up. Multiple
peers coming back up at the same time compounded the problem.

Broadcasting friend updates isn't of much use outside of a peer probe.
Instead of broadcasting friend updates on connection re-establishment,
updates can just be exchanged between the peers involved in the
connection.

This patch changes the glusterd friend state machine to send updates
only to the required peer for ACC or LOCAL_ACC events when in the
befriended state. The number of updates sent is now on the order of N.

For a 10-node cluster, the number of updates was reduced by about 5
times. When creating the 10-node cluster, the updates dropped from
~500 to ~150. When a glusterd restarted, the number of exchanges
dropped from ~160 to ~35.
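
A simplified sketch of the change (the helper send_friend_update_to is
assumed for illustration; glusterd_peerinfo_find and the peer list are
from the glusterd code base):

  /* before: broadcast an update to every connected peer */
  cds_list_for_each_entry (peerinfo, &amp;conf-&gt;peers, uuid_list) {
          if (peerinfo-&gt;connected)
                  send_friend_update_to (peerinfo);
  }

  /* after: when already befriended, update only the peer that the
   * ACC/LOCAL_ACC event refers to */
  peerinfo = glusterd_peerinfo_find (event-&gt;peerid, event-&gt;peername);
  if (peerinfo &amp;&amp; peerinfo-&gt;connected)
          send_friend_update_to (peerinfo);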

BUG: 1292749
Change-Id: Ib6072090c7069b081d018cdaa3dc878819ab1d18
Signed-off-by: Kaushal M &lt;kaushal@redhat.com&gt;
Reviewed-on: http://review.gluster.org/12999
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
When in the befriended state, glusterd would broadcast friend updates
to all other peers whenever an ACC or LOCAL_ACC event occurred.

When a downed glusterd came back up and re-established connections,
this led to a flood of friend updates on the order of N^2 (where N is
the number of peers in the cluster).

In larger clusters this was problematic, and could lead to very long
times for the cluster to settle down when a peer came back up. Multiple
peers coming back up at the same time compounded the problem.

Broadcasting friend updates isn't of much use outside of a peer probe.
Instead of broadcasting friend updates on connection re-establishment,
updates can just be exchanged between the peers involved in the
connection.

This patch changes the glusterd friend state machine to send updates
only to the required peer for ACC or LOCAL_ACC events when in the
befriended state. The number of updates sent is now on the order of N.

For a 10-node cluster, the number of updates was reduced by about 5
times. When creating the 10-node cluster, the updates dropped from
~500 to ~150. When a glusterd restarted, the number of exchanges
dropped from ~160 to ~35.
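
A simplified sketch of the change (the helper send_friend_update_to is
assumed for illustration; glusterd_peerinfo_find and the peer list are
from the glusterd code base):

  /* before: broadcast an update to every connected peer */
  cds_list_for_each_entry (peerinfo, &amp;conf-&gt;peers, uuid_list) {
          if (peerinfo-&gt;connected)
                  send_friend_update_to (peerinfo);
  }

  /* after: when already befriended, update only the peer that the
   * ACC/LOCAL_ACC event refers to */
  peerinfo = glusterd_peerinfo_find (event-&gt;peerid, event-&gt;peername);
  if (peerinfo &amp;&amp; peerinfo-&gt;connected)
          send_friend_update_to (peerinfo);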

BUG: 1292749
Change-Id: Ib6072090c7069b081d018cdaa3dc878819ab1d18
Signed-off-by: Kaushal M &lt;kaushal@redhat.com&gt;
Reviewed-on: http://review.gluster.org/12999
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: correctly stop daemon services upon peer detach</title>
<updated>2015-12-03T11:28:04+00:00</updated>
<author>
<name>Gaurav Kumar Garg</name>
<email>garg.gaurav52@gmail.com</email>
</author>
<published>2015-12-01T13:44:08+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=cae9512d60f5715459ea5883c657c679197982d9'/>
<id>cae9512d60f5715459ea5883c657c679197982d9</id>
<content type='text'>
Problem:
Currently glusterd stops all daemon services upon peer detach. If a
user has a multi-node cluster and detaches a node that hosts a
standalone volume, glusterd stops all the daemons on the detached
node, even though that node still has a running volume.

Fix:
Upon peer detach, glusterd should perform the peer detach cleanup
properly and stop only those daemons on the node that actually need
to be stopped.
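
A hedged sketch of the intended flow (both helper names here are
hypothetical, not taken from the patch):

  /* on peer detach: clean up the detached peer's configuration,
   * then stop only the daemons that no remaining volume requires */
  remove_configuration_of_detached_peer ();   /* hypothetical */
  stop_daemons_not_required_locally ();       /* hypothetical */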

Change-Id: I98b8099166f82e235ded6d02261f59a6511a003b
BUG: 1287455
Signed-off-by: Gaurav Kumar Garg &lt;ggarg@redhat.com&gt;
Reviewed-on: http://review.gluster.org/12838
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem:
Currently glusterd stops all daemon services upon peer detach. If a
user has a multi-node cluster and detaches a node that hosts a
standalone volume, glusterd stops all the daemons on the detached
node, even though that node still has a running volume.

Fix:
Upon peer detach, glusterd should perform the peer detach cleanup
properly and stop only those daemons on the node that actually need
to be stopped.
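
A hedged sketch of the intended flow (both helper names here are
hypothetical, not taken from the patch):

  /* on peer detach: clean up the detached peer's configuration,
   * then stop only the daemons that no remaining volume requires */
  remove_configuration_of_detached_peer ();   /* hypothetical */
  stop_daemons_not_required_locally ();       /* hypothetical */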

Change-Id: I98b8099166f82e235ded6d02261f59a6511a003b
BUG: 1287455
Signed-off-by: Gaurav Kumar Garg &lt;ggarg@redhat.com&gt;
Reviewed-on: http://review.gluster.org/12838
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>snapshot: clean up snaps during unprobe</title>
<updated>2015-08-26T10:47:48+00:00</updated>
<author>
<name>Mohammed Rafi KC</name>
<email>rkavunga@redhat.com</email>
</author>
<published>2015-03-17T14:27:47+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=e883e98998404a9e1ef18516d88520cfe2451b3f'/>
<id>e883e98998404a9e1ef18516d88520cfe2451b3f</id>
<content type='text'>
When doing an unprobe, any volume that does not contain a brick of
the particular node will be deleted. So the snapshots associated
with that volume should also be deleted.
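
A hedged sketch of the idea (the list walk assumes the snap_volumes
list on glusterd's volinfo structure; the removal helper is a
hypothetical stand-in):

  /* for a volume being removed during unprobe, remove the snapshots
   * taken of it as well */
  cds_list_for_each_entry_safe (snap_vol, tmp_vol,
                                &amp;volinfo-&gt;snap_volumes, snapvol_list) {
          remove_snap_of_volume (snap_vol);   /* hypothetical */
  }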

Change-Id: I9f3d23bd11b254ebf7d7722cc1e12455d6b024ff
BUG: 1203185
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
Reviewed-on: http://review.gluster.org/9930
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
When doing an unprobe, any volume that does not contain a brick of
the particular node will be deleted. So the snapshots associated
with that volume should also be deleted.
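
A hedged sketch of the idea (the list walk assumes the snap_volumes
list on glusterd's volinfo structure; the removal helper is a
hypothetical stand-in):

  /* for a volume being removed during unprobe, remove the snapshots
   * taken of it as well */
  cds_list_for_each_entry_safe (snap_vol, tmp_vol,
                                &amp;volinfo-&gt;snap_volumes, snapvol_list) {
          remove_snap_of_volume (snap_vol);   /* hypothetical */
  }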

Change-Id: I9f3d23bd11b254ebf7d7722cc1e12455d6b024ff
BUG: 1203185
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
Reviewed-on: http://review.gluster.org/9930
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: stop all the daemon services on peer detach</title>
<updated>2015-08-25T05:18:55+00:00</updated>
<author>
<name>Gaurav Kumar Garg</name>
<email>ggarg@redhat.com</email>
</author>
<published>2015-07-02T12:53:51+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=8e0bf30dc40fed45078c702dec750b5e8bbf5734'/>
<id>8e0bf30dc40fed45078c702dec750b5e8bbf5734</id>
<content type='text'>
Currently glusterd does not stop all the daemon services on peer
detach.

With this fix it performs the peer detach cleanup properly and stops
all the daemons that were running on the node before the peer detach.

Change-Id: Ifed403ed09187e84f2a60bf63135156ad1f15775
BUG: 1255386
Signed-off-by: Gaurav Kumar Garg &lt;ggarg@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11509
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Currently glusterd does not stop all the daemon services on peer
detach.

With this fix it performs the peer detach cleanup properly and stops
all the daemons that were running on the node before the peer detach.

Change-Id: Ifed403ed09187e84f2a60bf63135156ad1f15775
BUG: 1255386
Signed-off-by: Gaurav Kumar Garg &lt;ggarg@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11509
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: Send friend update even for EVENT_RCVD_ACC</title>
<updated>2015-07-11T07:18:09+00:00</updated>
<author>
<name>Kaushal M</name>
<email>kaushal@redhat.com</email>
</author>
<published>2015-07-10T09:20:29+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=93d8af231927d3476f8a966505a0e7fab7181385'/>
<id>93d8af231927d3476f8a966505a0e7fab7181385</id>
<content type='text'>
In a multi-network cluster, a new peer being probed into the cluster
will in some cases not get all the addresses of the peer that
initiated the probe, as it doesn't receive friend updates from the
other peers in the cluster.

This happens when the new peer establishes connections with the other
peers before the other peers connect to the new peer.

Assuming F is the initiator peer, O is one of the other peers in the
cluster and N is the new peer, the following series of events occurs
on O when N establishes the connection first. N is already in the
BEFRIENDED state on O, so the actions taken refer to the BEFRIENDED
state table.

  EVENT_RCVD_FRIEND_REQ -&gt; results in handle_friend_add_req being
                           called, which injects a LOCAL_ACC
  EVENT_RCVD_LOCAL_ACC  -&gt; results in send_friend_update being called,
                           which should have sent an update to N, but O
                           has still not established a connection to N,
                           so the update isn't sent
  EVENT_CONNECTED       -&gt; O now connects to N; this results in O
                           sending a friend_add req to N
  EVENT_RCVD_ACC        -&gt; friend_add_cbk injects this event, but the
                           event results in a NOOP when in BEFRIENDED

As a result, O doesn't receive all the addresses of F. If the cluster
contains any volumes with bricks attached to the missing addresses of
F, and O is restarted in this condition, GlusterD will fail to start
as it won't be able to resolve those bricks.

This commit changes the EVENT_RCVD_ACC action for the BEFRIENDED state
from a NOOP to send_friend_update. This makes sure that the new peer
receives the updates from the other existing peers, irrespective of
who establishes the connection first, thus solving the problem.
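
In glusterd-sm.c terms, the change amounts to swapping the actor for
EVENT_RCVD_ACC in the befriended state table (a sketch; the other
rows are elided):

  glusterd_sm_t  glusterd_state_befriended [] = {
          /* ... */
          /* EVENT_RCVD_ACC: previously glusterd_ac_none, a NOOP */
          {GD_FRIEND_STATE_BEFRIENDED, glusterd_ac_send_friend_update},
          /* ... */
  };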

Change-Id: Id807bc3032cf4cb13a5ba83819f2d50c96e76e96
BUG: 1241882
Signed-off-by: Kaushal M &lt;kaushal@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11625
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
In a multi-network cluster, a new peer being probed into the cluster
will in some cases not get all the addresses of the peer that
initiated the probe, as it doesn't receive friend updates from the
other peers in the cluster.

This happens when the new peer establishes connections with the other
peers before the other peers connect to the new peer.

Assuming F is the initiator peer, O is one of the other peers in the
cluster and N is the new peer, the following series of events occurs
on O when N establishes the connection first. N is already in the
BEFRIENDED state on O, so the actions taken refer to the BEFRIENDED
state table.

  EVENT_RCVD_FRIEND_REQ -&gt; results in handle_friend_add_req being
                           called, which injects a LOCAL_ACC
  EVENT_RCVD_LOCAL_ACC  -&gt; results in send_friend_update being called,
                           which should have sent an update to N, but O
                           has still not established a connection to N,
                           so the update isn't sent
  EVENT_CONNECTED       -&gt; O now connects to N; this results in O
                           sending a friend_add req to N
  EVENT_RCVD_ACC        -&gt; friend_add_cbk injects this event, but the
                           event results in a NOOP when in BEFRIENDED

As a result, O doesn't receive all the addresses of F. If the cluster
contains any volumes with bricks attached to the missing addresses of
F, and O is restarted in this condition, GlusterD will fail to start
as it won't be able to resolve those bricks.

This commit changes the EVENT_RCVD_ACC action for the BEFRIENDED state
from a NOOP to send_friend_update. This makes sure that the new peer
receives the updates from the other existing peers, irrespective of
who establishes the connection first, thus solving the problem.
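
In glusterd-sm.c terms, the change amounts to swapping the actor for
EVENT_RCVD_ACC in the befriended state table (a sketch; the other
rows are elided):

  glusterd_sm_t  glusterd_state_befriended [] = {
          /* ... */
          /* EVENT_RCVD_ACC: previously glusterd_ac_none, a NOOP */
          {GD_FRIEND_STATE_BEFRIENDED, glusterd_ac_send_friend_update},
          /* ... */
  };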

Change-Id: Id807bc3032cf4cb13a5ba83819f2d50c96e76e96
BUG: 1241882
Signed-off-by: Kaushal M &lt;kaushal@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11625
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: Porting left out log messages to new framework</title>
<updated>2015-06-27T06:32:01+00:00</updated>
<author>
<name>Nandaja Varma</name>
<email>nandaja.varma@gmail.com</email>
</author>
<published>2015-06-24T19:27:00+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=911e9228f31e89fe5df6e2282ce449b2a94c42b1'/>
<id>911e9228f31e89fe5df6e2282ce449b2a94c42b1</id>
<content type='text'>
Change-Id: I70d40ae3b5f49a21e1b93f82885cd58fa2723647
BUG: 1235538
Signed-off-by: Nandaja Varma &lt;nandaja.varma@gmail.com&gt;
Reviewed-on: http://review.gluster.org/11388
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Anand Nekkunti &lt;anekkunt@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Change-Id: I70d40ae3b5f49a21e1b93f82885cd58fa2723647
BUG: 1235538
Signed-off-by: Nandaja Varma &lt;nandaja.varma@gmail.com&gt;
Reviewed-on: http://review.gluster.org/11388
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Anand Nekkunti &lt;anekkunt@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>sm/glusterd: Porting messages to new logging framework</title>
<updated>2015-06-12T09:05:29+00:00</updated>
<author>
<name>Nandaja Varma</name>
<email>nandaja.varma@gmail.com</email>
</author>
<published>2015-03-18T09:47:45+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=72a7a6ea78289b2897f9846dc4e111f442dd2788'/>
<id>72a7a6ea78289b2897f9846dc4e111f442dd2788</id>
<content type='text'>
Change-Id: I391d1ac6a7b312461187c2e8c6f14d09a0238950
BUG: 1194640
Signed-off-by: Nandaja Varma &lt;nandaja.varma@gmail.com&gt;
Reviewed-on: http://review.gluster.org/9927
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Change-Id: I391d1ac6a7b312461187c2e8c6f14d09a0238950
BUG: 1194640
Signed-off-by: Nandaja Varma &lt;nandaja.varma@gmail.com&gt;
Reviewed-on: http://review.gluster.org/9927
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd/shared_storage: Provide a volume set option to create and mount the shared storage</title>
<updated>2015-06-04T09:37:19+00:00</updated>
<author>
<name>Avra Sengupta</name>
<email>asengupt@redhat.com</email>
</author>
<published>2015-05-14T09:30:59+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=402589f58cbb350dfedafa83e133664855ed37b2'/>
<id>402589f58cbb350dfedafa83e133664855ed37b2</id>
<content type='text'>
Introducing a global volume set option (cluster.enable-shared-storage)
which helps create and set up the shared storage meta volume.

gluster volume set all cluster.enable-shared-storage enable

On enabling this option, the system analyzes the number of peers in
the cluster which are currently connected, and chooses three such
peers (including the node the command is issued from). From these
peers a volume (gluster_shared_storage) is created. Depending on the
number of peers available, the volume is either a replica 3 volume (if
there are 3 connected peers) or a replica 2 volume (if there are 2
connected peers). "/var/run/gluster/ss_brick" serves as the brick path
on each node for the shared storage volume. We also mount the shared
storage at "/var/run/gluster/shared_storage" on all the nodes in the
cluster as part of enabling this option. If there is only one node in
the cluster, or only one node is up, the command will fail.

Once the volume is created and mounted, maintenance of the volume,
such as adding or removing bricks, is expected to be the onus of the
user.

On disabling the option, we give the user a warning, and on
affirmation from the user we stop the shared storage volume and
unmount it from all the nodes in the cluster.

gluster volume set all cluster.enable-shared-storage disable

Change-Id: Idd92d67b93f444244f99ede9f634ef18d2945dbc
BUG: 1222013
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/10793
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Introducing a global volume set option (cluster.enable-shared-storage)
which helps create and set up the shared storage meta volume.

gluster volume set all cluster.enable-shared-storage enable

On enabling this option, the system analyzes the number of peers in
the cluster which are currently connected, and chooses three such
peers (including the node the command is issued from). From these
peers a volume (gluster_shared_storage) is created. Depending on the
number of peers available, the volume is either a replica 3 volume (if
there are 3 connected peers) or a replica 2 volume (if there are 2
connected peers). "/var/run/gluster/ss_brick" serves as the brick path
on each node for the shared storage volume. We also mount the shared
storage at "/var/run/gluster/shared_storage" on all the nodes in the
cluster as part of enabling this option. If there is only one node in
the cluster, or only one node is up, the command will fail.

Once the volume is created and mounted, maintenance of the volume,
such as adding or removing bricks, is expected to be the onus of the
user.

On disabling the option, we give the user a warning, and on
affirmation from the user we stop the shared storage volume and
unmount it from all the nodes in the cluster.

gluster volume set all cluster.enable-shared-storage disable

Change-Id: Idd92d67b93f444244f99ede9f634ef18d2945dbc
BUG: 1222013
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/10793
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
</pre>
</div>
</content>
</entry>
</feed>
