<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/libglusterfs/src/mem-types.h, branch v7.1</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>libglusterfs: Move devel headers under glusterfs directory</title>
<updated>2018-12-05T21:47:04+00:00</updated>
<author>
<name>ShyamsundarR</name>
<email>srangana@redhat.com</email>
</author>
<published>2018-11-29T19:08:06+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=20ef211cfa5b5fcc437484a879fdc5d4c66bbaf5'/>
<id>20ef211cfa5b5fcc437484a879fdc5d4c66bbaf5</id>
<content type='text'>
libglusterfs devel package headers are referenced in code using the
include semantics of a program. While this works, it can be done
better, especially when dealing with out-of-tree xlator builds or
out-of-tree devel package usage in general.

Towards this, the following changes are done:
- Moved all devel headers under a glusterfs directory
- Included these headers using the system header notation &lt;&gt; in all
code outside of libglusterfs
- Included these headers using the program's own notation "" within
libglusterfs

This change, although big, just moves the headers around and makes
their inclusion from other sources correct.

This helps us include the libglusterfs headers correctly, without
namespace conflicts.
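
For illustration, the two notations look like this (a minimal sketch,
using mem-types.h from this branch):

    /* outside of libglusterfs: system header notation */
    #include &lt;glusterfs/mem-types.h&gt;

    /* within libglusterfs itself: own program notation */
    #include "mem-types.h"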

Change-Id: Id2a98854e671a7ee5d73be44da5ba1a74252423b
Updates: bz#1193929
Signed-off-by: ShyamsundarR &lt;srangana@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
libglusterfs devel package headers are referenced in code using the
include semantics of a program. While this works, it can be done
better, especially when dealing with out-of-tree xlator builds or
out-of-tree devel package usage in general.

Towards this, the following changes are done:
- Moved all devel headers under a glusterfs directory
- Included these headers using the system header notation &lt;&gt; in all
code outside of libglusterfs
- Included these headers using the program's own notation "" within
libglusterfs

This change, although big, just moves the headers around and makes
their inclusion from other sources correct.

This helps us include the libglusterfs headers correctly, without
namespace conflicts.

Change-Id: Id2a98854e671a7ee5d73be44da5ba1a74252423b
Updates: bz#1193929
Signed-off-by: ShyamsundarR &lt;srangana@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>Land clang-format changes</title>
<updated>2018-09-12T11:52:48+00:00</updated>
<author>
<name>Gluster Ant</name>
<email>bugzilla-bot@gluster.org</email>
</author>
<published>2018-09-12T11:52:48+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=45a71c0548b6fd2c757aa2e7b7671a1411948894'/>
<id>45a71c0548b6fd2c757aa2e7b7671a1411948894</id>
<content type='text'>
Change-Id: I6f5d8140a06f3c1b2d196849299f8d483028d33b
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Change-Id: I6f5d8140a06f3c1b2d196849299f8d483028d33b
</pre>
</div>
</content>
</entry>
<entry>
<title>Fetch backup volfile servers from glusterd2</title>
<updated>2018-02-16T16:16:25+00:00</updated>
<author>
<name>Prashanth Pai</name>
<email>ppai@redhat.com</email>
</author>
<published>2018-02-09T03:57:03+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=664b946496368f625b5a15646b5aa791078055ef'/>
<id>664b946496368f625b5a15646b5aa791078055ef</id>
<content type='text'>
Clients will request a list of volfile servers from glusterd2 by
setting an optional flag in the GETSPEC RPC call. glusterd2 will check
for the presence of this flag and accordingly return a list of
glusterd2 servers in the GETSPEC RPC reply. Currently, the list of
servers returned only contains servers which have bricks belonging to
the volume.
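
A hedged sketch of how a client could request this (the dict key and
request layout here are illustrative, not the exact protocol):

    /* ask for the volfile-server list via the GETSPEC request xdata */
    ret = dict_set_int32 (dict, "list-volfile-servers", 1);
    if (!ret)
            ret = dict_allocate_and_serialize (dict, &amp;req.xdata.xdata_val,
                                               &amp;req.xdata.xdata_len);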

See:
https://github.com/gluster/glusterd2/issues/382
https://github.com/gluster/glusterfs/issues/351

Updates #351
Change-Id: I0eee3d0bf25a87627e562380ef73063926a16b81
Signed-off-by: Prashanth Pai &lt;ppai@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Clients will request a list of volfile servers from glusterd2 by
setting an optional flag in the GETSPEC RPC call. glusterd2 will check
for the presence of this flag and accordingly return a list of
glusterd2 servers in the GETSPEC RPC reply. Currently, the list of
servers returned only contains servers which have bricks belonging to
the volume.

See:
https://github.com/gluster/glusterd2/issues/382
https://github.com/gluster/glusterfs/issues/351

Updates #351
Change-Id: I0eee3d0bf25a87627e562380ef73063926a16b81
Signed-off-by: Prashanth Pai &lt;ppai@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd : introduce timer in mgmt_v3_lock</title>
<updated>2017-10-17T15:44:49+00:00</updated>
<author>
<name>Gaurav Yadav</name>
<email>gyadav@redhat.com</email>
</author>
<published>2017-10-05T18:14:46+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=614904fa7a31bf6f69074238b7e710a20e05e1bb'/>
<id>614904fa7a31bf6f69074238b7e710a20e05e1bb</id>
<content type='text'>
Problem:
In a multinode environment, if two op-sm transactions are
initiated on one of the receiver nodes at the same time,
there is a possibility that glusterd may end up holding a
stale lock.

Solution:
During mgmt_v3_lock, a timer is registered via gf_timer_call_after
which releases the lock after a certain period of time.
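
A minimal sketch of such a registration (the timeout value and the
callback name are illustrative):

    /* schedule an automatic unlock so the lock cannot go stale */
    struct timespec delta = {600, 0};    /* e.g. release after 600s */
    timer = gf_timer_call_after (this-&gt;ctx, delta,
                                 mgmt_v3_lock_timeout_cbk, key);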

Change-Id: I16cc2e5186a2e8a5e35eca2468b031811e093843
BUG: 1499004
Signed-off-by: Gaurav Yadav &lt;gyadav@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem:
In a multinode environment, if two op-sm transactions are
initiated on one of the receiver nodes at the same time,
there is a possibility that glusterd may end up holding a
stale lock.

Solution:
During mgmt_v3_lock, a timer is registered via gf_timer_call_after
which releases the lock after a certain period of time.

Change-Id: I16cc2e5186a2e8a5e35eca2468b031811e093843
BUG: 1499004
Signed-off-by: Gaurav Yadav &lt;gyadav@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>mem-pool: add tracking of mem_pool that requested the allocation</title>
<updated>2017-08-28T12:46:16+00:00</updated>
<author>
<name>Niels de Vos</name>
<email>ndevos@redhat.com</email>
</author>
<published>2017-08-04T14:29:51+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=2645e730b79b44fc035170657e43bb52f3e855c5'/>
<id>2645e730b79b44fc035170657e43bb52f3e855c5</id>
<content type='text'>
This renames the current 'struct mem_pool' to 'struct mem_pool_shared'.
The mem_pool_shared is globally allocated and not specific to
particular objects.

A new 'struct mem_pool' gets allocated when mem_pool_new() is called. It
points to the mem_pool_shared that handles the actual allocation
requests. The 'struct mem_pool' is only used for accounting of the
objects that the caller requested and free'd.
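
Conceptually, the relationship looks like this (field names are
illustrative, not the exact ones):

    struct mem_pool {
            struct mem_pool_shared *pool;  /* shared backend that allocates */
            unsigned long allocs;          /* accounting for this caller */
    };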

All of these changes will be used to collect all the memory pools a
glusterfs_ctx_t is consuming, so that statedumps can be collected per
context.

Updates: #307
Change-Id: I6355d3f0251c928e0bbfc71be3431307c6f3a3da
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: https://review.gluster.org/18073
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
This renames the current 'struct mem_pool' to 'struct mem_pool_shared'.
The mem_pool_shared is globally allocated and not specific to
particular objects.

A new 'struct mem_pool' gets allocated when mem_pool_new() is called. It
points to the mem_pool_shared that handles the actual allocation
requests. The 'struct mem_pool' is only used for accounting of the
objects that the caller requested and free'd.

All of these changes will be used to collect all the memory pools a
glusterfs_ctx_t is consuming, so that statedumps can be collected per
context.

Updates: #307
Change-Id: I6355d3f0251c928e0bbfc71be3431307c6f3a3da
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: https://review.gluster.org/18073
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>mem-pool: free objects from pools on mem_pools_fini()</title>
<updated>2017-07-20T11:35:23+00:00</updated>
<author>
<name>Niels de Vos</name>
<email>ndevos@redhat.com</email>
</author>
<published>2017-07-13T11:44:19+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=8a09d78076cf506f0750cccd63cc983496473cf3'/>
<id>8a09d78076cf506f0750cccd63cc983496473cf3</id>
<content type='text'>
When using a minimal gfapi application that only initializes a small
graph (sink, shard and meta xlators) the following memory leaks are
reported by Valgrind:

  HEAP SUMMARY:
      in use at exit: 322,976 bytes in 75 blocks
    total heap usage: 684 allocs, 609 frees, 2,092,116 bytes allocated

With this change, the mem-pools are cleaned up when mem_pools_fini()
is called and the objects in the pool are free'd.

  HEAP SUMMARY:
      in use at exit: 315,265 bytes in 58 blocks
    total heap usage: 684 allocs, 626 frees, 2,092,079 bytes allocated

This information was gathered with `./run-xlator.sh features/shard` that
comes with `gfapi-load-volfile` from gluster-debug-tools.

While working on the free'ing of the per_thread_pool_list_t structures,
it became apparent that GF_CALLOC() in mem_get_pool_list() gets
redirected to a standard calloc() without prepending the Gluster-specific
memory header. This is because mem_pools_init() gets called
before THIS-&gt;ctx is valid, so it is not possible to check if memory
accounting is enabled or not. Because of this, the GF_CALLOC() call in
mem_get_pool_list() has been replaced by CALLOC() to prevent potential
mismatches between the allocation/free'ing of per_thread_pool_list_t
structures.
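
In effect, the allocation and release are now paired at the raw libc
level (a minimal sketch):

    /* no Gluster memory-accounting header is prepended here */
    pool_list = CALLOC (1, sizeof (*pool_list));
    /* ... and the matching raw release on mem_pools_fini() */
    FREE (pool_list);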

Change-Id: Id6f558816f399b0c613d74df36deac2300b6dd98
BUG: 1470170
URL: https://github.com/gluster/gluster-debug-tools
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17768
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
Reviewed-by: soumya k &lt;skoduri@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
When using a minimal gfapi application that only initializes a small
graph (sink, shard and meta xlators) the following memory leaks are
reported by Valgrind:

  HEAP SUMMARY:
      in use at exit: 322,976 bytes in 75 blocks
    total heap usage: 684 allocs, 609 frees, 2,092,116 bytes allocated

With this change, the mem-pools are cleaned up when mem_pools_fini()
is called and the objects in the pool are free'd.

  HEAP SUMMARY:
      in use at exit: 315,265 bytes in 58 blocks
    total heap usage: 684 allocs, 626 frees, 2,092,079 bytes allocated

This information was gathered with `./run-xlator.sh features/shard` that
comes with `gfapi-load-volfile` from gluster-debug-tools.

While working on the free'ing of the per_thread_pool_list_t structures,
it became apparent that GF_CALLOC() in mem_get_pool_list() gets
redirected to a standard calloc() without prepending the Gluster-specific
memory header. This is because mem_pools_init() gets called
before THIS-&gt;ctx is valid, so it is not possible to check if memory
accounting is enabled or not. Because of this, the GF_CALLOC() call in
mem_get_pool_list() has been replaced by CALLOC() to prevent potential
mismatches between the allocation/free'ing of per_thread_pool_list_t
structures.

Change-Id: Id6f558816f399b0c613d74df36deac2300b6dd98
BUG: 1470170
URL: https://github.com/gluster/gluster-debug-tools
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17768
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
Reviewed-by: soumya k &lt;skoduri@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>mgtm/core : use sha hash function for volfile check</title>
<updated>2017-07-10T05:07:11+00:00</updated>
<author>
<name>Mohammed Rafi KC</name>
<email>rkavunga@redhat.com</email>
</author>
<published>2017-07-06T07:56:42+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=f2f3d74c835b68ad9ec63ec112870829a823a1fb'/>
<id>f2f3d74c835b68ad9ec63ec112870829a823a1fb</id>
<content type='text'>
We are storing the entire volfile and using this to check for
volfile changes. With brick multiplexing there will be a lot
of graphs per process, which will increase the memory
footprint of the process. So instead of storing the entire
graph we can use sha256 and compare the hashes to see
whether a volfile change happened or not.

Also with brick multiplexing, the direct comparison of the
volfile is not correct. There are two problems.

Problem 1:

We are currently storing one single graph (the last
updated volfile), whereas what we need is the entire
graph with all attached bricks.

If we fix this issue, we have a second problem.

Problem 2:
With multiplexing we have a graph that contains multiple
bricks. But what we are checking as part of the reconfigure
is comparing that entire graph with one single graph,
which will always fail.

Solution:
We create a list in glusterfs_ctx_t that stores the sha256 hash
of each individual brick graph. When a graph change happens
we compare the stored hash and the current hash. If the
hashes match, there is no need to reconfigure. Otherwise we
first do the reconfigure and then update the hash.
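
A minimal sketch of that comparison (helper and variable names are
illustrative):

    #include &lt;openssl/sha.h&gt;

    /* stored_hash: the previously saved digest for this brick graph */
    unsigned char hash[SHA256_DIGEST_LENGTH];
    SHA256 ((unsigned char *) volfile, size, hash);
    if (memcmp (stored_hash, hash, sizeof (hash)) != 0) {
            reconfigure_graph (volfile);   /* illustrative helper */
            memcpy (stored_hash, hash, sizeof (hash));
    }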

For now, gfapi has not been changed this way. Meaning, when a gfapi
volfile fetch or reconfigure happens, we still store the
entire graph and compare it in memory.

This is fine, because libgfapi will not load brick graphs.
But changing libgfapi the same way would make the code similar
in both glusterfsd-mgmt and api. It would also help to reduce
some memory.

Change-Id: I9df917a771a52b95622ab8f63af34ec390163a77
BUG: 1467986
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17709
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
We are storing the entire volfile and using this to check for
volfile changes. With brick multiplexing there will be a lot
of graphs per process, which will increase the memory
footprint of the process. So instead of storing the entire
graph we can use sha256 and compare the hashes to see
whether a volfile change happened or not.

Also with brick multiplexing, the direct comparison of the
volfile is not correct. There are two problems.

Problem 1:

We are currently storing one single graph (the last
updated volfile), whereas what we need is the entire
graph with all attached bricks.

If we fix this issue, we have a second problem.

Problem 2:
With multiplexing we have a graph that contains multiple
bricks. But what we are checking as part of the reconfigure
is comparing that entire graph with one single graph,
which will always fail.

Solution:
We create a list in glusterfs_ctx_t that stores the sha256 hash
of each individual brick graph. When a graph change happens
we compare the stored hash and the current hash. If the
hashes match, there is no need to reconfigure. Otherwise we
first do the reconfigure and then update the hash.

For now, gfapi has not been changed this way. Meaning, when a gfapi
volfile fetch or reconfigure happens, we still store the
entire graph and compare it in memory.

This is fine, because libgfapi will not load brick graphs.
But changing libgfapi the same way would make the code similar
in both glusterfsd-mgmt and api. It would also help to reduce
some memory.

Change-Id: I9df917a771a52b95622ab8f63af34ec390163a77
BUG: 1467986
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17709
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>Halo Replication feature for AFR translator</title>
<updated>2017-05-02T10:23:53+00:00</updated>
<author>
<name>Kevin Vigor</name>
<email>kvigor@fb.com</email>
</author>
<published>2017-03-21T15:23:25+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=07cc8679cdf3b29680f4f105d0222da168d8bfc1'/>
<id>07cc8679cdf3b29680f4f105d0222da168d8bfc1</id>
<content type='text'>
Summary:
Halo Geo-replication is a feature which allows Gluster or NFS clients to write
locally to their region (as defined by a latency "halo" or threshold if you
like), and have their writes asynchronously propagate from their origin to the
rest of the cluster.  Clients can also write synchronously to the cluster
simply by specifying a halo-latency which is very large (e.g. 10 seconds), which
will include all bricks.

In other words, it allows clients to decide at mount time if they desire
synchronous or asynchronous IO into a cluster and the cluster can support both
of these modes to any number of clients simultaneously.

There are a few new volume options due to this feature:
  halo-shd-latency:  The threshold below which self-heal daemons will
  consider children (bricks) connected.

  halo-nfsd-latency: The threshold below which NFS daemons will consider
  children (bricks) connected.

  halo-latency: The threshold below which all other clients will
  consider children (bricks) connected.

  halo-min-replicas: The minimum number of replicas which are to
  be enforced regardless of latency specified in the above 3 options.
  If the number of children falls below this threshold the next
  best (chosen by latency) shall be swapped in.

New FUSE mount options:
  halo-latency &amp; halo-min-replicas: As described above.
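
For example, a client could opt into a region at mount time with
something like this (the exact option syntax and values are
illustrative):

    mount -t glusterfs -o halo-latency=10,halo-min-replicas=2 \
          server:/volname /mnt/halo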

This feature combined with multi-threaded SHD support (D1271745) results in
some pretty cool geo-replication possibilities.

Operational Notes:
- Global consistency is guaranteed for synchronous clients; this is provided by
  the existing entry-locking mechanism.
- Asynchronous clients, on the other hand, are merely consistent to their region.
  Writes &amp; deletes will be protected via entry-locks as usual preventing
  concurrent writes into files which are undergoing replication.  Read operations
  on the other hand should never block.
- Writes are allowed from _any_ region and propagated from the origin to all
  other regions.  The takeaway from this is that care should be taken to ensure
  multiple writers do not write the same files, resulting in a gfid split-brain
  which will require resolution via split-brain policies (majority, mtime &amp;
  size).  The recommended method for preventing this is to use the nfs-auth
  feature to define which region for each share has RW permissions; tiers not
  in the origin region should have RO perms.

TODO:
- Synchronous clients (including the SHD) should choose clients from their own
  region as preferred sources for reads.  Most of the plumbing is in place for
  this via the child_latency array.
- Better GFID split brain handling &amp; better dentry type split brain handling
  (i.e. create a trash can and move the offending files into it).
- Tagging in addition to latency as a means of defining which children you wish
  to synchronously write to

Test Plan:
- The usual suspects, clang, gcc w/ address sanitizer &amp; valgrind
- Prove tests

Reviewers: jackl, dph, cjh, meyering

Reviewed By: meyering

Subscribers: ethanr

Differential Revision: https://phabricator.fb.com/D1272053

Tasks: 4117827

Change-Id: I694a9ab429722da538da171ec528406e77b5e6d1
BUG: 1428061
Signed-off-by: Kevin Vigor &lt;kvigor@fb.com&gt;
Reviewed-on: http://review.gluster.org/16099
Reviewed-on: https://review.gluster.org/16177
Tested-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Summary:
Halo Geo-replication is a feature which allows Gluster or NFS clients to write
locally to their region (as defined by a latency "halo" or threshold if you
like), and have their writes asynchronously propagate from their origin to the
rest of the cluster.  Clients can also write synchronously to the cluster
simply by specifying a halo-latency which is very large (e.g. 10 seconds), which
will include all bricks.

In other words, it allows clients to decide at mount time if they desire
synchronous or asynchronous IO into a cluster and the cluster can support both
of these modes to any number of clients simultaneously.

There are a few new volume options due to this feature:
  halo-shd-latency:  The threshold below which self-heal daemons will
  consider children (bricks) connected.

  halo-nfsd-latency: The threshold below which NFS daemons will consider
  children (bricks) connected.

  halo-latency: The threshold below which all other clients will
  consider children (bricks) connected.

  halo-min-replicas: The minimum number of replicas which are to
  be enforced regardless of latency specified in the above 3 options.
  If the number of children falls below this threshold the next
  best (chosen by latency) shall be swapped in.

New FUSE mount options:
  halo-latency &amp; halo-min-replicas: As described above.

This feature combined with multi-threaded SHD support (D1271745) results in
some pretty cool geo-replication possibilities.

Operational Notes:
- Global consistency is guaranteed for synchronous clients; this is provided by
  the existing entry-locking mechanism.
- Asynchronous clients, on the other hand, are merely consistent to their region.
  Writes &amp; deletes will be protected via entry-locks as usual preventing
  concurrent writes into files which are undergoing replication.  Read operations
  on the other hand should never block.
- Writes are allowed from _any_ region and propagated from the origin to all
  other regions.  The takeaway from this is that care should be taken to ensure
  multiple writers do not write the same files, resulting in a gfid split-brain
  which will require resolution via split-brain policies (majority, mtime &amp;
  size).  The recommended method for preventing this is to use the nfs-auth
  feature to define which region for each share has RW permissions; tiers not
  in the origin region should have RO perms.

TODO:
- Synchronous clients (including the SHD) should choose clients from their own
  region as preferred sources for reads.  Most of the plumbing is in place for
  this via the child_latency array.
- Better GFID split brain handling &amp; better dentry type split brain handling
  (i.e. create a trash can and move the offending files into it).
- Tagging in addition to latency as a means of defining which children you wish
  to synchronously write to

Test Plan:
- The usual suspects, clang, gcc w/ address sanitizer &amp; valgrind
- Prove tests

Reviewers: jackl, dph, cjh, meyering

Reviewed By: meyering

Subscribers: ethanr

Differential Revision: https://phabricator.fb.com/D1272053

Tasks: 4117827

Change-Id: I694a9ab429722da538da171ec528406e77b5e6d1
BUG: 1428061
Signed-off-by: Kevin Vigor &lt;kvigor@fb.com&gt;
Reviewed-on: http://review.gluster.org/16099
Reviewed-on: https://review.gluster.org/16177
Tested-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>core: make the per glusterfs_ctx_t timer-wheel refcounted</title>
<updated>2017-05-01T09:30:14+00:00</updated>
<author>
<name>Niels de Vos</name>
<email>ndevos@redhat.com</email>
</author>
<published>2017-04-17T10:20:07+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=73fcf3a874b2049da31d01b8363d1ac85c9488c2'/>
<id>73fcf3a874b2049da31d01b8363d1ac85c9488c2</id>
<content type='text'>
xlators can use a 'global' timer-wheel for scheduling events. This
timer-wheel is managed per glusterfs_ctx_t, but does not need to be
allocated for every graph. When an xlator wants to use the timer-wheel,
it will be instantiated on demand, and provided to xlators that request
it later on.

By adding a reference counter to the glusterfs_ctx_t for the
timer-wheel, the threads and structures can be cleaned up when the last
xlator does not have a need for it anymore. In general, the xlators
request the timer-wheel in init(), and they should return it in fini().
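
A minimal sketch of that pairing in an xlator (the get/put helper
names are illustrative of the refcounted API):

    int32_t
    init (xlator_t *this)
    {
            /* takes a reference on the shared timer-wheel */
            priv-&gt;timer_wheel = glusterfs_ctx_tw_get (this-&gt;ctx);
            return priv-&gt;timer_wheel ? 0 : -1;
    }

    void
    fini (xlator_t *this)
    {
            /* drops the reference; the last user triggers cleanup */
            glusterfs_ctx_tw_put (this-&gt;ctx);
    }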

Because the timer-wheel is managed per glusterfs_ctx_t, the functions
can be added to ctx.c and do not need to live in their very minimal
tw.[ch] files.

Change-Id: I19d225b39aaa272d9005ba7adc3104c3764f1572
BUG: 1442788
Reported-by: Poornima G &lt;pgurusid@redhat.com&gt;
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17068
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Reviewed-by: Zhou Zhengping &lt;johnzzpcrystal@gmail.com&gt;
Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
xlators can use a 'global' timer-wheel for scheduling events. This
timer-wheel is managed per glusterfs_ctx_t, but does not need to be
allocated for every graph. When an xlator wants to use the timer-wheel,
it will be instantiated on demand, and provided to xlators that request
it later on.

By adding a reference counter to the glusterfs_ctx_t for the
timer-wheel, the threads and structures can be cleaned up when the last
xlator does not have a need for it anymore. In general, the xlators
request the timer-wheel in init(), and they should return it in fini().

Because the timer-wheel is managed per glusterfs_ctx_t, the functions
can be added to ctx.c and do not need to live in their very minimal
tw.[ch] files.

Change-Id: I19d225b39aaa272d9005ba7adc3104c3764f1572
BUG: 1442788
Reported-by: Poornima G &lt;pgurusid@redhat.com&gt;
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17068
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Reviewed-by: Zhou Zhengping &lt;johnzzpcrystal@gmail.com&gt;
Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>dht/rebalance: allocate migrator thread pool dynamically</title>
<updated>2016-07-28T12:46:30+00:00</updated>
<author>
<name>Susant Palai</name>
<email>spalai@redhat.com</email>
</author>
<published>2016-07-21T12:47:21+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=b8e8bfc7e4d3eaf76bb637221bc6392ec10ca54b'/>
<id>b8e8bfc7e4d3eaf76bb637221bc6392ec10ca54b</id>
<content type='text'>
Problems: The maximum number of migrator threads created was statically
set to 40, while the number of threads created by rebalance depends on
the number of cores the user has. If the number of cores exceeds 40, a
crash or memory corruption can be seen.

Fix: Make the migrator thread pool dynamic.
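
A hedged sketch of sizing the pool at runtime rather than against a
fixed cap (the memory type constant is illustrative):

    /* size the migrator pool from the online core count */
    long ncores = sysconf (_SC_NPROCESSORS_ONLN);
    pthread_t *tids = GF_CALLOC (ncores, sizeof (*tids),
                                 gf_dht_mt_migrate_thread_t);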

Change-Id: Ifbdac8a1a396363dd75e2f6bcb454070cfdbf839
BUG: 1359711
Signed-off-by: Susant Palai &lt;spalai@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15000
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problems: The maximum number of migrator threads created was statically
set to 40, while the number of threads created by rebalance depends on
the number of cores the user has. If the number of cores exceeds 40, a
crash or memory corruption can be seen.

Fix: Make the migrator thread pool dynamic.

Change-Id: Ifbdac8a1a396363dd75e2f6bcb454070cfdbf839
BUG: 1359711
Signed-off-by: Susant Palai &lt;spalai@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15000
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
</pre>
</div>
</content>
</entry>
</feed>
