<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/libglusterfs/src, branch v3.10.0rc0</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>libglusterfs: make memory pools more thread-friendly</title>
<updated>2017-02-03T00:44:09+00:00</updated>
<author>
<name>Jeff Darcy</name>
<email>jdarcy@redhat.com</email>
</author>
<published>2016-10-14T14:04:07+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=1ed73ffa16cb7fe4415acbdb095da6a4628f711a'/>
<id>1ed73ffa16cb7fe4415acbdb095da6a4628f711a</id>
<content type='text'>
Early multiplexing tests revealed *massive* contention on certain
pools' global locks - especially for dictionaries and secondarily for
call stubs.  For the thread counts that multiplexing can create, a
solution with far less locking is clearly needed.  Also, the current mem-pool
implementation does a poor job releasing memory back to the system,
artificially inflating memory usage to match whatever the worst case
was since the process started.  This is bad in general, but especially
so for multiplexing where there are more pools and a major point of
the whole exercise is to reduce memory consumption.

The basic ideas for the new design are these (a rough C sketch follows
the list):

  There is one pool, globally, for each power-of-two size range.
  Every attempt to create a new pool within this range will instead
  add a reference to the existing pool.

  Instead of adding pools for each translator within each multiplexed
  brick (potentially infinite and quite possibly thousands), we
  allocate one set of size-based pools per *thread* (hundreds at
  worst).

  Each per-thread pool is divided into hot and cold lists.  Every
  allocation first attempts to use the hot list, then the cold list.
  When objects are freed, they always go on the hot list.

  There is one global "pool sweeper" thread, which periodically
  reclaims everything in each pool's cold list and then "demotes" the
  current hot list to be the new cold list.

  For normal allocation activity, only a per-thread lock need be
  taken, and even that only to guard against very rare contention from
  the pool sweeper.  When threads start and stop, a global lock must
  be taken to add them to the pool sweeper's list.  Lock contention is
  therefore extremely low, and the hot/cold lists also provide good
  locality.
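
A minimal sketch of the hot/cold mechanics in C (names here are
hypothetical, not the actual mem-pool API; the global per-size-class
pools and thread registration are omitted):

#include &lt;pthread.h&gt;
#include &lt;stdlib.h&gt;

/* obj_size must be &gt;= sizeof (pt_obj_t) so a freed object can carry
 * the list link. */
typedef struct pt_obj { struct pt_obj *next; } pt_obj_t;

typedef struct pt_pool {
        pthread_mutex_t  lock;      /* contended only by the sweeper */
        pt_obj_t        *hot;       /* recently freed objects */
        pt_obj_t        *cold;      /* untouched since the last sweep */
        size_t           obj_size;
} pt_pool_t;

void *pt_alloc (pt_pool_t *p)
{
        pt_obj_t *o = NULL;

        pthread_mutex_lock (&amp;p-&gt;lock);
        if (p-&gt;hot) {               /* try the hot list first ... */
                o = p-&gt;hot;
                p-&gt;hot = o-&gt;next;
        } else if (p-&gt;cold) {       /* ... then the cold list */
                o = p-&gt;cold;
                p-&gt;cold = o-&gt;next;
        }
        pthread_mutex_unlock (&amp;p-&gt;lock);

        return o ? (void *)o : malloc (p-&gt;obj_size);
}

void pt_free (pt_pool_t *p, void *ptr)
{
        pt_obj_t *o = ptr;

        pthread_mutex_lock (&amp;p-&gt;lock);
        o-&gt;next = p-&gt;hot;           /* frees always go on the hot list */
        p-&gt;hot = o;
        pthread_mutex_unlock (&amp;p-&gt;lock);
}

/* Run periodically against every registered pool by the one global
 * "pool sweeper" thread. */
void pt_sweep (pt_pool_t *p)
{
        pt_obj_t *doomed, *next;

        pthread_mutex_lock (&amp;p-&gt;lock);
        doomed = p-&gt;cold;           /* reclaim everything still cold */
        p-&gt;cold = p-&gt;hot;           /* demote hot to cold */
        p-&gt;hot = NULL;
        pthread_mutex_unlock (&amp;p-&gt;lock);

        while (doomed) {
                next = doomed-&gt;next;
                free (doomed);
                doomed = next;
        }
}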

A more complete explanation (of a similar earlier design) can be found
here:

 http://www.gluster.org/pipermail/gluster-devel/2016-October/051160.html

Backport of:
&gt; Change-Id: I5bc8a1ba57cfb553998f979a498886e0d006e665
&gt; BUG: 1385758
&gt; Reviewed-on: https://review.gluster.org/15645

BUG: 1418091
Change-Id: Id09bbea41f65fcd245822607bc204f3a34904dc2
Signed-off-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16531
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</content>
</entry>
<entry>
<title>performance/write-behind: access stub only if available during statedump</title>
<updated>2017-02-02T17:36:43+00:00</updated>
<author>
<name>Raghavendra G</name>
<email>rgowdapp@redhat.com</email>
</author>
<published>2017-01-20T10:39:13+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=16f51cd98c56bcc61209d7cbbc4061bbd3aa8e5b'/>
<id>16f51cd98c56bcc61209d7cbbc4061bbd3aa8e5b</id>
<content type='text'>
&gt;Change-Id: Ia5dd718458a5e32138012f81f014d13fc6b28be2
&gt;BUG: 1415115
&gt;Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
&gt;Reviewed-on: https://review.gluster.org/16440
&gt;Reviewed-by: N Balachandran &lt;nbalacha@redhat.com&gt;
&gt;NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;

Change-Id: Ia5dd718458a5e32138012f81f014d13fc6b28be2
BUG: 1418623
Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
(cherry picked from commit 85d7f1d1ee24ac400d4aa223478727643532693a)
Reviewed-on: https://review.gluster.org/16519
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</content>
</entry>
<entry>
<title>libglusterfs+transport+io-threads: fix 256KB stack abuse</title>
<updated>2017-02-02T17:32:49+00:00</updated>
<author>
<name>Jeff Darcy</name>
<email>jdarcy@redhat.com</email>
</author>
<published>2016-10-27T15:51:47+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=c10507ce75547a7a7899fbf36be650ddc89ba467'/>
<id>c10507ce75547a7a7899fbf36be650ddc89ba467</id>
<content type='text'>
Some functions were allocating 64K booleans, which are (crazily)
mapped to 4-byte ints, for a total of 256KB per call.  Changed to use
bitfields instead, so usage is now only 8KB per call.  This was the
impediment to changing the io-threads stack size, so that has been
adjusted too.
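
As a rough before/after sketch (array names are hypothetical, and the
macros are illustrative rather than the actual patch):

/* gf_boolean_t is glusterfs' int-backed boolean, hence 4 bytes. */
typedef int gf_boolean_t;

/* Before: 64K booleans, 4 bytes each = 256KB per call. */
gf_boolean_t seen[65536];

/* After: one bit per entry = 8KB per call. */
unsigned char seen_bits[65536 / 8];

#define BIT_SET(a, i)  ((a)[(i) / 8] |= (unsigned char)(1 &lt;&lt; ((i) % 8)))
#define BIT_TEST(a, i) ((a)[(i) / 8] &amp; (1 &lt;&lt; ((i) % 8)))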

Backport of:
&gt; Change-Id: I8781c4f2c8f2b830f4535e366995fac8dd0a8653
&gt; BUG: 1418095
&gt; Reviewed-on: https://review.gluster.org/15745

Change-Id: Ia5dada61703e6bea95f2511da71feb573fc9a429
BUG: 1418536
Signed-off-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16511
Reviewed-by: N Balachandran &lt;nbalacha@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>core: run many bricks within one glusterfsd process</title>
<updated>2017-02-02T00:54:58+00:00</updated>
<author>
<name>Jeff Darcy</name>
<email>jdarcy@redhat.com</email>
</author>
<published>2017-01-31T19:49:45+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=83803b4b2d70e9e6e16bb050d7ac8e49ba420893'/>
<id>83803b4b2d70e9e6e16bb050d7ac8e49ba420893</id>
<content type='text'>
This patch adds support for multiple brick translator stacks running in
a single brick server process.  This reduces our per-brick memory usage
by approximately 3x, and our appetite for TCP ports even more.  It also
creates potential to avoid process/thread thrashing, and to improve QoS
by scheduling more carefully across the bricks, but realizing that
potential will require further work.

Multiplexing is controlled by the "cluster.brick-multiplex" global
option.  By default it's off, and bricks are started in separate
processes as before.  If multiplexing is enabled, then *compatible*
bricks (mostly those with the same transport options) will be started in
the same process.
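
Since this is a global option, it is set cluster-wide rather than per
volume:

	# gluster volume set all cluster.brick-multiplex on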

Backport of:
&gt; Change-Id: I45059454e51d6f4cbb29a4953359c09a408695cb
&gt; BUG: 1385758
&gt; Reviewed-on: https://review.gluster.org/14763

Change-Id: I4bce9080f6c93d50171823298fdf920258317ee8
BUG: 1418091
Signed-off-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16496
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</content>
</entry>
<entry>
<title>core (3.10): correct max op version for 3.10</title>
<updated>2017-01-25T13:56:40+00:00</updated>
<author>
<name>Kaleb S. KEITHLEY</name>
<email>kkeithle@redhat.com</email>
</author>
<published>2017-01-24T15:17:01+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=34213d9f34df15dbec42e389f327a893799f1c75'/>
<id>34213d9f34df15dbec42e389f327a893799f1c75</id>
<content type='text'>
remove GD_OP_VERSION_4_0_0, set GD_OP_VERSION_MAX to 3_10_0
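
In essence (a sketch assuming the usual globals.h convention; 31000 is
the 3.10.0 op-version):

#define GD_OP_VERSION_3_10_0  31000
#define GD_OP_VERSION_MAX     GD_OP_VERSION_3_10_0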

BUG: 1415245
Change-Id: I37edef6ab67e4ef64adbd02266942a8e4c5484c5
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16465
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>inode: Add per xlator ref count for inode</title>
<updated>2017-01-18T09:29:45+00:00</updated>
<author>
<name>Poornima G</name>
<email>pgurusid@redhat.com</email>
</author>
<published>2016-03-15T07:14:16+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=c9239db7961afd648f1fa3310e5ce9b8281c8ad2'/>
<id>c9239db7961afd648f1fa3310e5ce9b8281c8ad2</id>
<content type='text'>
Debugging inode ref leaks is very difficult as there is no info
except for the ref count on the inode. Hence this patch is a first
step towards debugging inode ref leaks. With this patch, the
statedump carries additional info showing the ref count held by each
xlator on the inode.
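
For reference, the statedump that now carries these per-xlator ref
counts can be generated with:

	# gluster volume statedump &lt;VOLNAME&gt; inode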

Change-Id: I7802f7e7b13c04eb4d41fdf52d5469fd2c2a185a
BUG: 1325531
Signed-off-by: Poornima G &lt;pgurusid@redhat.com&gt;
Reviewed-on: http://review.gluster.org/13736
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>tier: Tier as a service</title>
<updated>2017-01-17T04:49:47+00:00</updated>
<author>
<name>hari gowtham</name>
<email>hgowtham@redhat.com</email>
</author>
<published>2016-07-12T11:10:28+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=3263d1c4f4b7efd1a018c17e1ba4dd9245094f48'/>
<id>3263d1c4f4b7efd1a018c17e1ba4dd9245094f48</id>
<content type='text'>
tierd is implemented by separating it from the rebalance process.

The commands affected (CLI forms are sketched after the list):

1) Attach tier will trigger this process instead of the old one.
2) tier start and tier start force will also trigger this process.
3) volume status [tier] will show the tier daemon as a process instead
of a task, and both normal tier status and tier detach status work.
4) tier stop is implemented.
5) detach tier is implemented separately, along with a new detach tier
status.
6) volume tier volname status will work with these changes.
7) volume set works.
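
For reference, the corresponding CLI forms named above are:

	# gluster volume tier &lt;VOLNAME&gt; start [force]
	# gluster volume tier &lt;VOLNAME&gt; status
	# gluster volume tier &lt;VOLNAME&gt; detach start
	# gluster volume tier &lt;VOLNAME&gt; detach status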

This patch has separated the tier translator from the legacy
DHT rebalance code. The CLI now sends its RPCs to glusterd
separately from the DHT rebalance code.
The daemon is now a service, similar to the snapshot daemon,
and can be viewed using the volume status command.

The code for the validation and commit phases is the same
as the earlier tier validation code in DHT rebalance.

The “brickop” phase has been changed so that the status
command can use this framework.

The service management framework is now used.
DHT rebalance does not use this framework.

This service framework takes care of:

*) spawning the daemon, killing it, and other such tasks.
*) volume set options, which are written to the volfile.
*) restart and reconfigure functions. Restart is to restart
the daemon at two points:
        1) after gluster goes down and comes up.
        2) to stop detach tier.
*) reconfigure is used to make immediate volfile changes.
By doing this, we don’t restart the daemon.
It also has the code to rewrite the volfile for topological
changes (which come into play during add-brick and remove-brick).

With this patch the log, pid, and volfile are separated
and put into their respective directories.

Change-Id: I3681d0d66894714b55aa02ca2a30ac000362a399
BUG: 1313838
Signed-off-by: hari gowtham &lt;hgowtham@redhat.com&gt;
Reviewed-on: http://review.gluster.org/13365
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: hari gowtham &lt;hari.gowtham005@gmail.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Dan Lambright &lt;dlambrig@redhat.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>libglusterfs: fix statvfs in FreeBSD</title>
<updated>2017-01-10T17:08:00+00:00</updated>
<author>
<name>Xavier Hernandez</name>
<email>xhernandez@datalab.es</email>
</author>
<published>2017-01-09T12:10:19+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=d6bc8da62f1b0d454fa5187687fdbf894403c7ce'/>
<id>d6bc8da62f1b0d454fa5187687fdbf894403c7ce</id>
<content type='text'>
FreeBSD interprets statvfs' f_bsize field in a different way than Linux.

This fix modifies the value returned by statvfs() on FreeBSD to match
the value Gluster expects.
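
Conceptually the change is tiny. A sketch (the wrapper name is
hypothetical; on FreeBSD the file system block size lives in f_frsize
while f_bsize holds the optimal I/O size):

#include &lt;sys/statvfs.h&gt;

int
gf_statvfs (const char *path, struct statvfs *buf)
{
        int ret = statvfs (path, buf);
#ifdef __FreeBSD__
        if (ret == 0)
                buf-&gt;f_bsize = buf-&gt;f_frsize;  /* what Gluster expects */
#endif
        return ret;
}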

Change-Id: I930dab6e895671157238146d333e95874ea28a08
BUG: 1356076
Signed-off-by: Xavier Hernandez &lt;xhernandez@datalab.es&gt;
Reviewed-on: http://review.gluster.org/16361
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</content>
</entry>
<entry>
<title>dht: use STACK_WIND_COOKIE where needed</title>
<updated>2017-01-09T18:37:02+00:00</updated>
<author>
<name>Poornima G</name>
<email>pgurusid@redhat.com</email>
</author>
<published>2016-12-06T09:13:10+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=a988741713752c2ec04a0680224d8fa4d42dc203'/>
<id>a988741713752c2ec04a0680224d8fa4d42dc203</id>
<content type='text'>
Issue:
frame has a void * cookie pointer.
In case of STACK_WIND_COOKIE, frame-&gt;cookie is assigned
to what is sent by the caller.
In case of STACK_WIND, frame-&gt;cookie is assigned to point
to the frame itself.

For ease of coding, at many places, the cookie in the cbk
is used to get the pointer to the next xl. This is
inconsistent when STACK_WIND_TAIL comes into the picture.

Eg: dht_setxattr () {
    for (i = 0; i &lt; conf-&gt;subvolume_cnt; i++) {
        STACK_WIND (..dht_checking_pathinfo_cbk,
                    conf-&gt;subvolumes[i] ..);
    }
}

dht_checking_pathinfo_cbk (...void *cookie...) {
    prev = cookie;
    ...
    for (i = 0; i &lt; conf-&gt;subvolume_cnt; i++) {
        if (conf-&gt;subvolumes[i] == prev-&gt;this) {
            ...
        }
    }
}

    Consider the below graph:
    dht (parent)
    readdir-ahead  =&gt; Doesn't define setxattr and uses STACK_WIND_TAIL
    protocol-client

    With this graph, when dht_checking_pathinfo_cbk is called, the
    cookie will hold the frame pointing to protocol-client,
    i.e. prev-&gt;this will be protocol-client. But dht was expecting
    it to be readdir-ahead, as that is what it stored in
    conf-&gt;subvolumes[i].

Solution:
    Hence, as a rule of thumb, if the cbk uses the cookie, then we
    explicitly call STACK_WIND_COOKIE.
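
    Applied to the example above, the wind becomes (a sketch; the
    extra argument is the cookie handed back to the cbk):

    for (i = 0; i &lt; conf-&gt;subvolume_cnt; i++) {
        STACK_WIND_COOKIE (frame, dht_checking_pathinfo_cbk,
                           conf-&gt;subvolumes[i],  /* cookie the cbk sees */
                           conf-&gt;subvolumes[i] ..);
    }

    Now prev = cookie in the cbk always yields the dht subvolume that
    was stored in conf-&gt;subvolumes[i], regardless of any
    STACK_WIND_TAIL below.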

Change-Id: I83aea1e24c809c5a91a0db7283e908e125471bd4
BUG: 1401812
Signed-off-by: Poornima G &lt;pgurusid@redhat.com&gt;
Reviewed-on: http://review.gluster.org/16039
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>glusterd: Get maximum supported op-version in a cluster</title>
<updated>2017-01-09T05:16:12+00:00</updated>
<author>
<name>Samikshan Bairagya</name>
<email>samikshan@gmail.com</email>
</author>
<published>2016-12-19T09:37:14+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=76fbeafbf56a61768c81f622b354e3c95a00e986'/>
<id>76fbeafbf56a61768c81f622b354e3c95a00e986</id>
<content type='text'>
gluster volume get &lt;VOLNAME&gt; cluster.op-version gives us the current
op-version on which the cluster is operating. There is no command
that lets the user know the maximum supported op-version that the
cluster can run on.

This patch adds a new global option cluster.max-op-version, that
can be used to retrieve the maximum supported op-version in a
cluster.

Usage:
	# gluster volume get all cluster.max-op-version

Example output:

Option                                  Value
------                                  -----
cluster.max-op-version                  30900

NOTE: The only way to test this feature for now is to set the
GD_OP_VERSION_MAX macro to different values (30800 for 3.8, 30900
for 3.9, and so on) and rebuild glusterd. Since the regression test
framework currently doesn't have support to simulate these tests,
there are no accompanying regression tests for this feature. It
should be possible to add tests once glusto comes in and makes it
easier to run a heterogeneous cluster.

Change-Id: I547480ee5e7912664784643e436feb198b6d16d0
BUG: 1365822
Signed-off-by: Samikshan Bairagya &lt;samikshan@gmail.com&gt;
Reviewed-on: http://review.gluster.org/16283
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
</feed>
