<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/mgmt/glusterd/src/glusterd-messages.h, branch exp</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>snapshot: Fix restore rollback to reassign snap volume ids to bricks</title>
<updated>2016-12-17T07:07:31+00:00</updated>
<author>
<name>Avra Sengupta</name>
<email>asengupt@redhat.com</email>
</author>
<published>2016-12-13T06:25:19+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=d0528cf2408533b45383a796d419c49fa96e810b'/>
<id>d0528cf2408533b45383a796d419c49fa96e810b</id>
<content type='text'>
Added further checks to ensure we do not go beyond prevalidate
when trying to restore a snapshot which has an nfs-ganesha conf
file in a cluster where nfs-ganesha is not enabled.

The error message for this scenario is:
"Snapshot(&lt;snapname&gt;) has a nfs-ganesha export conf
file. cluster.enable-shared-storage and nfs-ganesha
should be enabled before restoring this snapshot."
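
For illustration, a minimal sketch of what such a prevalidate guard
could look like (the helper names glusterd_is_ganesha_conf_present()
and is_ganesha_enabled() are hypothetical, not the actual functions):

/* Illustrative sketch only; helper names are hypothetical. */
static int
snap_restore_prevalidate_ganesha (glusterd_snap_t *snap, char **op_errstr)
{
        int ret = 0;

        /* Fail in prevalidate if the snapshot carries a nfs-ganesha
         * export conf but nfs-ganesha is not enabled. */
        if (glusterd_is_ganesha_conf_present (snap) &amp;&amp;
            !is_ganesha_enabled ()) {
                gf_asprintf (op_errstr, "Snapshot(%s) has a nfs-ganesha "
                             "export conf file. cluster.enable-shared-storage "
                             "and nfs-ganesha should be enabled before "
                             "restoring this snapshot.", snap-&gt;snapname);
                ret = -1;
        }
        return ret;
}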

Change-Id: I1b87e9907e0a5e162f26ef1ca89fe76e8da8610f
BUG: 1404118
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/16116
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>glusterd: Remove duplicate values for glusterd message macros</title>
<updated>2016-12-01T05:37:48+00:00</updated>
<author>
<name>Samikshan Bairagya</name>
<email>samikshan@gmail.com</email>
</author>
<published>2016-11-30T10:13:26+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=5b809fa434368b7395b180c41b46bce1a38e0cf9'/>
<id>5b809fa434368b7395b180c41b46bce1a38e0cf9</id>
<content type='text'>
Both the GD_MSG_BRICK_CLEANUP_SUCCESS and GD_MSG_DAEMON_STATE_REQ_RCVD
macros were assigned the same value (GLUSTERD_COMP_BASE + 584). Also,
the total number of messages should be 588 instead of 587.
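
For context, the message-id macros in glusterd-messages.h follow the
pattern below; this sketch of the deduplicated layout is inferred
from the description above (which of the two macros actually gets
bumped is not spelled out here):

#define GD_MSG_DAEMON_STATE_REQ_RCVD   (GLUSTERD_COMP_BASE + 584)
#define GD_MSG_BRICK_CLEANUP_SUCCESS   (GLUSTERD_COMP_BASE + 585) /* was +584 */

#define GLFS_NUM_MESSAGES              588 /* was 587 */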

Change-Id: I015d32435c05ded1b14cd8ba11911af826bc956b
BUG: 1400026
Signed-off-by: Samikshan Bairagya &lt;samikshan@gmail.com&gt;
Reviewed-on: http://review.gluster.org/15980
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>cluster/tier: Adding compaction option for metadata databases</title>
<updated>2016-09-05T01:37:57+00:00</updated>
<author>
<name>Diogenes Nunez</name>
<email>dnunez@redhat.com</email>
</author>
<published>2016-07-27T15:09:47+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=261c035c7d0cd1639cc8bd0ead82c30efcc0e93f'/>
<id>261c035c7d0cd1639cc8bd0ead82c30efcc0e93f</id>
<content type='text'>
Problem: As metadata in the database fills up, querying the database
takes a long time. As a result, tier migration slows down. To
counteract this, we added a way to enable the compaction methods of
the underlying database. The goal is to reduce the size of the
underlying file by eliminating database fragmentation.

NOTE: There is currently a bug where a brick will sometimes
attempt to activate compaction even though compaction is already
turned on.

The cause has been narrowed down to the compact_mode_switch
flipping its value.

Changes: libglusterfs/src/gfdb - Added a gfdb function,
compact_db(), to compact the underlying database. This is a no-op
if the database has no such option.

- Added a compaction function for SQLite3 that does the following:

1) Changes the auto_vacuum pragma of the database
2) Compacts the database according to the type of compaction requested

- Compaction type can be changed by changing the macro
  GF_SQL_COMPACT_DEF to one of the 4 compaction types in
  gfdb_sqlite3.h

  It is currently set to GF_SQL_COMPACT_INCR, or incremental
  vacuuming.
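
  At the SQLite3 level, the incremental-vacuum path boils down to
  roughly the following (a simplified sketch; the real gfdb code goes
  through its own plugin layer and supports the other compaction
  types as well):

  #include &lt;sqlite3.h&gt;

  /* Sketch: switch to incremental auto_vacuum, then reclaim free
   * pages.  Changing the auto_vacuum mode only takes effect after a
   * full VACUUM rebuilds the database file. */
  static int
  compact_incremental (sqlite3 *db)
  {
          char *err = NULL;

          if (sqlite3_exec (db, "PRAGMA auto_vacuum = incremental;",
                            NULL, NULL, &amp;err) != SQLITE_OK)
                  goto fail;
          if (sqlite3_exec (db, "VACUUM;", NULL, NULL, &amp;err) != SQLITE_OK)
                  goto fail;
          if (sqlite3_exec (db, "PRAGMA incremental_vacuum;",
                            NULL, NULL, &amp;err) != SQLITE_OK)
                  goto fail;
          return 0;
  fail:
          sqlite3_free (err);
          return -1;
  }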

xlators/cluster/dht/src - Added the following command-line option to
enable SQLite3 compaction.

gluster volume set &lt;vol-name&gt; tier-compact &lt;off|on&gt;

- Added the following command-line options to change the frequency
  at which the hot and cold tiers are ordered to compact.

gluster volume set &lt;vol-name&gt; tier-hot-compact-frequency &lt;int&gt;
gluster volume set &lt;vol-name&gt; tier-cold-compact-frequency &lt;int&gt;

- The tier daemon periodically sends the (new)
  GFDB_IPC_CTR_SET_COMPACT_PRAGMA IPC to the CTR xlator. The IPC
  triggers compaction of the database.

  The inputs are both gf_boolean_t.

  IPC Input:

  compact_active: Is compaction currently on for the db.
  compact_mode_switched: Did we flip the compaction switch recently?

  IPC Output:

  0 if the compaction succeeds.
  Non-zero otherwise.
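
  The two inputs travel in the IPC args dict, roughly as follows
  (the key names here are illustrative):

  /* Illustrative: pack the two booleans into the IPC args dict. */
  dict_t *args = dict_new ();
  int ret = dict_set_int32 (args, "compact_active", compact_active);
  if (!ret)
          ret = dict_set_int32 (args, "compact_mode_switched",
                                compact_mode_switched);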

xlators/features/changetimerecorder/src/ - When the CTR gets the
compaction IPC, it launches a thread that will perform the
compaction. The IPC ends after the thread is launched. To avoid extra
allocations, the parameters are passed using static variables.

Change-Id: I5e1433becb9eeff2afe8dcb4a5798977bf5ba0dd
Signed-off-by: Diogenes Nunez &lt;dnunez@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15031
Reviewed-by: Milind Changire &lt;mchangir@redhat.com&gt;
Reviewed-by: Dan Lambright &lt;dlambrig@redhat.com&gt;
Tested-by: Dan Lambright &lt;dlambrig@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>glusterd : Introduce reset brick</title>
<updated>2016-08-30T02:55:53+00:00</updated>
<author>
<name>Anuradha Talur</name>
<email>atalur@redhat.com</email>
</author>
<published>2016-08-22T17:22:03+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=936f8aeac3252951e7fa0cdaa5d260fad3bd5ea0'/>
<id>936f8aeac3252951e7fa0cdaa5d260fad3bd5ea0</id>
<content type='text'>
This command essentially allows a replace-brick operation in which
the source and destination bricks are the same.

Usage:
gluster v reset-brick &lt;volname&gt; &lt;hostname:brick-path&gt; start
This command kills the brick to be reset. Once it is run, the
admin can perform whatever manual operations are needed, such as
configuring some options for the brick. Once this is done,
resetting the brick can be completed with the following command.

gluster v reset-brick &lt;vname&gt; &lt;hostname:brick&gt; &lt;hostname:brick&gt; commit {force}

This does the job of resetting the brick. The 'force' option should
be used when the brick already contains the volinfo id.

Problem: On doing a disk replacement of a brick in a replicate
volume, the following two scenarios may occur:

a) reads may be served from this replaced-disk brick, which leads to
empty reads;
b) potential data loss, if subsequent writes succeed only on the
replaced brick and the other bricks are then healed from it.

Solution: After a disk replacement, make sure the reset-brick command
is run for that brick, so that pending markers are set for it and it
is not chosen as the source for reads and heals. But as of now,
replace-brick for the same brick path is not allowed. In order to fix
the above-mentioned problem, same-brick-path replace-brick is needed.
With this patch, reset-brick commit {force} is allowed even when the
source and destination &lt;hostname:brickpath&gt; are identical, as long as
1) the destination brick is not alive, and
2) the source and destination bricks have the same brick uuid and path.
Also, after the replace the destination brick will use the same port
as the source brick.
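
In glusterd terms, the commit-force check boils down to something
like the following sketch (not the literal code; field and helper
names are approximate):

/* Sketch: same-path reset-brick is allowed only if the old brick
 * is down and both ends name the same peer uuid and path. */
if (!strcmp (src-&gt;path, dst-&gt;path)) {
        if (glusterd_is_brick_started (src))
                goto reject;    /* destination brick must not be alive */
        if (gf_uuid_compare (src-&gt;uuid, dst-&gt;uuid))
                goto reject;    /* must be the same brick uuid */
}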

Change-Id: I440b9e892ffb781ea4b8563688c3f85c7a7c89de
BUG: 1266876
Signed-off-by: Anuradha Talur &lt;atalur@redhat.com&gt;
Reviewed-on: http://review.gluster.org/12250
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Ashish Pandey &lt;aspandey@redhat.com&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd/cli: cli to get local state representation from glusterd</title>
<updated>2016-08-26T15:23:37+00:00</updated>
<author>
<name>Samikshan Bairagya</name>
<email>samikshan@gmail.com</email>
</author>
<published>2016-07-07T15:03:02+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=4a3454753f6e4ddc309c8d1cb11a6e4e432c1da6'/>
<id>4a3454753f6e4ddc309c8d1cb11a6e4e432c1da6</id>
<content type='text'>
Currently there is no CLI that can be used to get the local state
representation of the cluster, as maintained in glusterd, in a
readable as well as parseable format.

The CLI added has the following usage:

 # gluster get-state [daemon] [odir &lt;path/to/output/dir&gt;] [file &lt;filename&gt;]

This dumps data points that reflect the local state representation
of the cluster as maintained in glusterd (no other daemons are
supported as of now) to a file inside the specified output
directory. The default output directory and filename are
/var/run/gluster and glusterd_state_&lt;timestamp&gt;, respectively. The
option for specifying the daemon name leaves room to add support for
other daemons in the future. The following data points are captured
as of now to represent the state from the local glusterd point of
view:

 * Peer:
    - Primary hostname
    - uuid
    - state
    - connection status
    - List of hostnames

 * Volumes:
    - name, id, transport type, status
    - counts: bricks, snap, subvol, stripe, arbiter, disperse,
      redundancy
    - snapd status
    - quorum status
    - tiering related information
    - rebalance status
    - replace bricks status
    - snapshots

 * Bricks:
    - path, hostname (this info is shown for all bricks)
    - port, rdma port, status, mount options, filesystem type and
      signed-in status, for bricks running locally

 * Services:
    - name, online status for initialised services

 * Others:
    - Base port, last allocated port
    - op-version
    - MYUUID
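
An illustrative (not verbatim) excerpt of the dump, which is a flat,
section-wise "key: value" file:

[Global]
MYUUID: 4e2b4ef7-02e4-4f2d-a4e9-2a6e9d081cb5
op-version: 30900

[Peers]
Peer1.primary_hostname: host2.example.com
Peer1.state: Peer in Cluster
Peer1.connected: Connected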

Change-Id: I4a45cc5407ab92d8afdbbd2098ece851f7e3d618
BUG: 1353156
Signed-off-by: Samikshan Bairagya &lt;samikshan@gmail.com&gt;
Reviewed-on: http://review.gluster.org/14873
Reviewed-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: Improve mountbroker logs</title>
<updated>2016-08-26T04:21:50+00:00</updated>
<author>
<name>Kotresh HR</name>
<email>khiremat@redhat.com</email>
</author>
<published>2016-08-25T16:50:30+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=0343da57b51c8b83ae59743740632ec058a0d333'/>
<id>0343da57b51c8b83ae59743740632ec058a0d333</id>
<content type='text'>
When a mountbroker mount failed, it just returned EPERM or
EACCES without logging the exact failure. This patch improves
the logging by recording the exact failure.
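
For example, a failure path that used to just return the errno now
also logs it, along these lines (the message id shown is
illustrative):

/* Sketch: record the precise reason before returning the errno. */
gf_msg (this-&gt;name, GF_LOG_ERROR, EACCES, GD_MSG_MNT_BRKR_MNT_FAIL,
        "Mountbroker mount failed: user %s is not allowed to mount "
        "volume %s", user, volname);
ret = EACCES;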

Change-Id: I3cd905f95865153f70dfcc3bf1fa4dd19af16455
BUG: 1346138
Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15319
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: clean up old port and allocate new one on every restart</title>
<updated>2016-08-04T04:43:34+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2016-07-25T13:39:08+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=c3dee6d35326c6495591eb5bbf7f52f64031e2c4'/>
<id>c3dee6d35326c6495591eb5bbf7f52f64031e2c4</id>
<content type='text'>
GlusterD so far blindly assumed that a brick port which was already
allocated would be available for reuse, and that assumption is
simply wrong.

Solution: At first we thought GlusterD should check whether the
already allocated brick ports are free and, if not, allocate new ones
and pass them to the daemons. But with that approach there is a
possibility that, if a PMAP_SIGNOUT is missed, the stale port will be
handed back to clients, whose connections will then keep failing.
Given that port allocation always starts from base_port, even if a
new port is allocated for the daemons every time, the port range will
still be under control. So this fix cleans up the old port, if any,
using pmap_registry_remove () and then calls pmap_registry_alloc ().
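
Conceptually, the brick-start path becomes (a sketch with simplified
signatures):

/* Drop any stale registry entry for the old port, then always
 * allocate a fresh one. */
if (brickinfo-&gt;port)
        pmap_registry_remove (this, brickinfo-&gt;port, brickinfo-&gt;path,
                              GF_PMAP_PORT_BRICKSERVER, NULL);
port = pmap_registry_alloc (this);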

Change-Id: If54a055d01ab0cbc06589dc1191d8fc52eb2c84f
BUG: 1221623
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15005
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
</content>
</entry>
<entry>
<title>cli/glusterd: add/remove brick fixes for arbiter volumes</title>
<updated>2016-05-19T16:40:04+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2016-04-29T12:11:18+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=61c1b2cee973b11897a37d508910012e616033bc'/>
<id>61c1b2cee973b11897a37d508910012e616033bc</id>
<content type='text'>
1. Provide a command to convert replica 2 volumes to arbiter volumes.
Existing self-heal logic will automatically heal the file hierarchy into
the arbiter brick, the progress of which can be monitored using the
heal info command.

Syntax: gluster volume add-brick &lt;VOLNAME&gt; replica 3 arbiter 1
&lt;HOST:arbiter-brick-path&gt;

2. Add checks when removing bricks from arbiter volumes (see the
example below):
- When converting from an arbiter to a replica 2 volume, allow only
  the arbiter brick to be removed.
- When converting from an arbiter to a plain distribute volume, allow
  it only if the arbiter is one of the bricks being removed.
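
For example, the reverse conversion (arbiter back to replica 2) would
use a remove-brick of the form (illustrative):

gluster volume remove-brick &lt;VOLNAME&gt; replica 2
&lt;HOST:arbiter-brick-path&gt; force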

3. Some clean-up:
- Use GD_MSG_DICT_GET_SUCCESS instead of GD_MSG_DICT_GET_FAILED to
log messages that are not failures.
- Remove the unused variable `brick_list`.
- Move 'brickinfo-&gt;group'-related functions to glusterd-utils.

Change-Id: Ic87b8c7e4d7d3ab03f93e7b9f372b314d80947ce
BUG: 1318289
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14126
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd/geo-rep: slave volume uuid to identify a geo-rep session</title>
<updated>2016-05-13T06:22:59+00:00</updated>
<author>
<name>Saravanakumar Arumugam</name>
<email>sarumuga@redhat.com</email>
</author>
<published>2015-12-29T13:52:36+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=a9128cda34b1f696b717ba09fa0ac5a929be8969'/>
<id>a9128cda34b1f696b717ba09fa0ac5a929be8969</id>
<content type='text'>
Problem:
Currently, it is possible to create multiple geo-rep sessions from
the Master host to Slave host(s), where the Slave host(s) belong
to the same volume.

For example:
Consider Master Host M1 having volume tv1 and Slave volume tv2,
which spans across two Slave hosts S1 and S2.
Currently, it is possible to create geo-rep session from
M1(tv1) to S1(tv2) as well as from M1(tv1) to S2(tv2).

When only the Slave host is modified, it is identified as a new
geo-rep session (since the slave host and slave volume together
identify the Slave side).

Also, it is possible to create both root and non-root geo-rep
sessions between the same Master volume and Slave volume. This
should also be avoided.

Solution:
To avoid this multiple session creation, use the Slave volume uuid
to identify a Slave. This way, we can detect whether a session has
already been created for the same Slave volume and avoid creating
it again (via a different host).

When session creation is forced in the above scenario, the existing
geo-rep session directory is renamed to reflect the new Slave host.
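
Conceptually, the create path now does something like this (a sketch;
both helper names are illustrative):

/* Key sessions on (master volume, slave volume uuid) instead of
 * (master volume, slave host). */
ret = glusterd_get_slave_vol_uuid (slave_host, slave_vol, slave_uuid);
if (!ret &amp;&amp; session_exists_for_slave_uuid (master_vol, slave_uuid) &amp;&amp;
    !is_force)
        goto out;       /* refuse: session already exists via another host */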

Change-Id: I9239759cbc0d15dad63c48b8cf62950bb687c7c8
BUG: 1294813
Signed-off-by: Saravanakumar Arumugam &lt;sarumuga@redhat.com&gt;
Signed-off-by: Aravinda VK &lt;avishwan@redhat.com&gt;
Reviewed-on: http://review.gluster.org/13111
Reviewed-by: Kotresh HR &lt;khiremat@redhat.com&gt;
Tested-by: Kotresh HR &lt;khiremat@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
</entry>
<entry>
<title>glusterd: add defence mechanism to avoid brick port clashes</title>
<updated>2016-05-04T09:25:31+00:00</updated>
<author>
<name>Prasanna Kumar Kalever</name>
<email>prasanna.kalever@redhat.com</email>
</author>
<published>2016-04-27T13:42:19+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=89759de7e47d99eb1fca2763931b4f33ac765173'/>
<id>89759de7e47d99eb1fca2763931b4f33ac765173</id>
<content type='text'>
Intro:
Currently glusterd maintains the portmap registry, which tracks which
ports in the range 49152 - 65535 are free to use. This registry is
initialized once and updated as and when glusterd sees ports being
used.

Glusterd first looks in the portmap registry for a port marked FREE,
then checks whether that port is currently free using connect(), and
then passes it to the brick process, which has to bind to it.

Problem:
We see that there is a time gap between glusterd checking the port
with connect() and the brick process actually binding to it. In this
gap some other process may occupy the port, because of which the
brick will fail to bind and exit.

Case 1:
To avoid a gluster client process occupying the port supplied by
glusterd, we have separated the client portmap range from the brick
portmap range; see http://review.gluster.org/#/c/13998/

Case 2 (handled by this patch):
To avoid any other foreign process occupying the port supplied by
glusterd, this patch implements a mechanism to return the EADDRINUSE
error code to glusterd, upon which a new port is allocated and the
brick process is restarted with the newly allocated port.
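
Conceptually, the start path becomes a small retry loop (a sketch;
start_brick_process() is a placeholder, not the real function):

/* Illustrative retry: if the brick exits with EADDRINUSE, discard
 * the port and try again with a freshly allocated one. */
do {
        port = pmap_registry_alloc (this);      /* fresh port */
        ret  = start_brick_process (brickinfo, port);
} while (ret == EADDRINUSE);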

Note: In case of glusterd restarts, i.e. runner_run_nowait(), there
is no way to handle Case 2, because runner_run_nowait() does not wait
for the return/exit code of the executed command (the brick process).
Hence, in such a case we currently cannot know with what error the
brick failed.

This patch also fixes runner_end() to perform some cleanup w.r.t.
return values.

Change-Id: Iec52e7f5d87ce938d173f8ef16aa77fd573f2c5e
BUG: 1322805
Signed-off-by: Prasanna Kumar Kalever &lt;prasanna.kalever@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14043
Tested-by: Prasanna Kumar Kalever &lt;pkalever@redhat.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
</content>
</entry>
</feed>
