<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/glusterfsd/src/glusterfsd.c, branch v7.0rc2</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>event: rename event_XXX with gf_ prefix</title>
<updated>2019-08-21T06:13:38+00:00</updated>
<author>
<name>Xiubo Li</name>
<email>xiubli@redhat.com</email>
</author>
<published>2019-07-26T04:34:52+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=acbabe3d916d763a0bb13e7df876cac61ca5b160'/>
<id>acbabe3d916d763a0bb13e7df876cac61ca5b160</id>
<content type='text'>
I hit a crash when using libgfapi.

libgfapi calls glfs_poller() --&gt; event_dispatch() in
api/src/glfs.c:721, and this event_dispatch() is defined locally by
libglusterfs. The problem is that its name is exactly the same as that
of the event_dispatch() exported by the OS's libevent package.

For example, with an executable program Foo that uses and links both
libevent and libgfapi at the same time, I can hit a crash like:

kernel: glfs_glfspoll[68486]: segfault at 1c0 ip 00007fef006fd2b8 sp
00007feeeaffce30 error 4 in libevent-2.0.so.5.1.9[7fef006ed000+46000]

Foo is linked with:
lib_foo_LADD = -levent $(GFAPI_LIBS)
and it crashes.

This is because glfs_poller() ends up calling the event_dispatch() from
libevent, not the one from libglusterfs.

The gfapi link flags are:
GFAPI_LIBS = -lacl -lgfapi -lglusterfs -lgfrpc -lgfxdr -luuid

If I instead link Foo like:
lib_foo_LADD = $(GFAPI_LIBS) -levent
it works well without any problem.

And if Foo loads a private library such as handler_glfs.so via
dlopen(), where handler_glfs.so links GFAPI_LIBS directly while Foo
does not, then the crash is hit every time.

The link flags in that case are:
foo_LADD = -levent
libhandler_glfs_LIBADD = $(GFAPI_LIBS)

I can avoid the crash temporarily by linking GFAPI_LIBS in Foo too, like:
foo_LADD = $(GFAPI_LIBS) -levent
libhandler_glfs_LIBADD = $(GFAPI_LIBS)

But this is ugly, since Foo itself doesn't use any APIs from GFAPI_LIBS.

And when the --as-needed link option is added (on many distributions it
is the default), the crash comes back and the above workaround no
longer works.

Backport of:
&gt; https://review.gluster.org/#/c/glusterfs/+/23110/
&gt; Change-Id: I38f0200b941bd1cff4bf3066fca2fc1f9a5263aa
&gt; Fixes: #699
&gt; Signed-off-by: Xiubo Li &lt;xiubli@redhat.com&gt;

Change-Id: I38f0200b941bd1cff4bf3066fca2fc1f9a5263aa
updates: bz#1740519
Signed-off-by: Xiubo Li &lt;xiubli@redhat.com&gt;
(cherry picked from commit 799edc73c3d4f694c365c6a7c27c9ab8eed5f260)
</content>
</entry>
<entry>
<title>tests: add tests for different signal handling</title>
<updated>2019-05-30T07:32:34+00:00</updated>
<author>
<name>Amar Tumballi</name>
<email>amarts@redhat.com</email>
</author>
<published>2019-04-12T02:46:56+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=71caf1423fe5f0eed652210f8158c95fc8a10219'/>
<id>71caf1423fe5f0eed652210f8158c95fc8a10219</id>
<content type='text'>
Also some cleanup:
* old-protocol.t was actually added to make sure we have line coverage.
* first-test.t should have been removed as per the comment; it doesn't do anything.
* add statvfs to rpc-coverage so we can cover statvfs in a few xlators.

updates: bz#1693692
Change-Id: Ie8651ce007de484c4abced16b4de765aa5e517be
Signed-off-by: Amar Tumballi &lt;amarts@redhat.com&gt;
</content>
</entry>
<entry>
<title>Fix some "Null pointer dereference" coverity issues</title>
<updated>2019-05-26T13:59:13+00:00</updated>
<author>
<name>Xavi Hernandez</name>
<email>xhernandez@redhat.com</email>
</author>
<published>2019-05-22T15:46:19+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=5d88111a142b3c37e92bdd36699a04fd054d27f4'/>
<id>5d88111a142b3c37e92bdd36699a04fd054d27f4</id>
<content type='text'>
This patch fixes the following CIDs:

  * 1124829
  * 1274075
  * 1274083
  * 1274128
  * 1274135
  * 1274141
  * 1274143
  * 1274197
  * 1274205
  * 1274210
  * 1274211
  * 1288801
  * 1398629

Change-Id: Ia7c86cfab3245b20777ffa296e1a59748040f558
Updates: bz#789278
Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
</content>
</entry>
<entry>
<title>core: avoid dynamic TLS allocation when possible</title>
<updated>2019-04-24T03:26:48+00:00</updated>
<author>
<name>Xavi Hernandez</name>
<email>xhernandez@redhat.com</email>
</author>
<published>2019-03-05T17:58:20+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=d8eadde7d498939c746ea8ddd9dc70a1029b4070'/>
<id>d8eadde7d498939c746ea8ddd9dc70a1029b4070</id>
<content type='text'>
Some interdependencies between logging and memory management functions
make it impossible to use the logging framework before initializing the
memory subsystem, because both depend on Thread Local Storage allocated
through pthread_key_create() during initialization.

This causes a crash when we try to log something very early in the
initialization phase.

To prevent this, several dynamically allocated TLS structures have been
replaced by static TLS reserved at compile time using the '__thread'
keyword. This also reduces the number of error sources, making
initialization simpler.
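
A rough sketch of the difference (the variable and function names here
are hypothetical, not the actual glusterfs identifiers):

    #include &lt;pthread.h&gt;

    /* Before: dynamic TLS. The key only exists after some init function
     * has run, so logging during early startup uses an invalid key and
     * crashes. */
    static pthread_key_t log_key;                /* hypothetical name */

    static void tls_init(void)
    {
        pthread_key_create(&amp;log_key, NULL);      /* must run first */
    }

    /* After: static TLS reserved at compile time via '__thread'. It is
     * usable from the very first instruction, with no initialization
     * step that could be missed. */
    static __thread void *log_tls_data;          /* hypothetical name */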

Updates: bz#1193929
Change-Id: I8ea2e072411e30790d50084b6b7e909c7bb01d50
Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
</content>
</entry>
<entry>
<title>core: Log level changes do not take effect on running client process</title>
<updated>2019-04-15T04:30:43+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawal@redhat.com</email>
</author>
<published>2019-04-04T04:26:11+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=798aadbe51a9a02dd98a0f861cc239ecf7c8ed57'/>
<id>798aadbe51a9a02dd98a0f861cc239ecf7c8ed57</id>
<content type='text'>
Problem: commit c34e4161f3cb6539ec83a9020f3d27eb4759a975 set the log-level
         per xlator during reconfigure only for a brick process, not for
         the client process.

Solution: Change the per-xlator log-level only if brick_mux is enabled. To
          make brick multiplexing detectable, introduce a brick_mux flag
          in ctx-&gt;cmd_args.

Note: There are two other changes in this patch:
      1) Ignore the client-log-level option when attaching a brick to an
         already running brick process if brick_mux is enabled.
      2) Add a log message printing the pid of the running process to
         make debugging easier.

Change-Id: I39e85de778e150d0685cd9a79425ce8b4783f9c9
Signed-off-by: Mohit Agrawal &lt;moagrawal@redhat.com&gt;
Fixes: bz#1696046
</content>
</entry>
<entry>
<title>mgmt/shd: Implement multiplexing in self heal daemon</title>
<updated>2019-04-01T03:44:23+00:00</updated>
<author>
<name>Mohammed Rafi KC</name>
<email>rkavunga@redhat.com</email>
</author>
<published>2019-02-25T04:35:32+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=bc3694d7cfc868a2ed6344ea123faf19fce28d13'/>
<id>bc3694d7cfc868a2ed6344ea123faf19fce28d13</id>
<content type='text'>
Problem:

The shd daemon is per node, which means it creates one graph with all
volumes on it. While this is great for utilizing resources, it is not
so good in terms of performance and manageability.

Self-heal daemons don't have the capability to automatically
reconfigure their graphs, so each time a configuration change happens
on a volume (replicate/disperse), we need to restart shd to bring the
change into the graph.

Because of this, ongoing heals for all other volumes have to be stopped
in the middle and restarted all over again.

Solution:

This change makes shd a per-volume daemon, so that a graph is generated
for each volume.

When we want to start/reconfigure shd for a volume, we first search for
an existing shd running on the node; if there is none, we start a new
process. If an shd daemon is already running, we simply detach the
volume's graph and reattach the updated graph for that volume. This
doesn't touch any ongoing operations for other volumes on the shd
daemon.

Example of a per-volume shd graph:

                           graph
                     -----------------------
                     |     debug-iostat    |
                     -----------------------
                    /         |             \
                   /          |              \
              ---------    ---------      ----------
              | AFR-1 |    | AFR-2 |      |  AFR-3 |
              ---------    ---------      ----------

A running shd daemon with 3 volumes will look like:

                           graph
                     -----------------------
                     |     debug-iostat    |
                     -----------------------
                    /           |           \
                   /            |            \
              ------------   ------------  ------------
              | volume-1 |   | volume-2 |  | volume-3 |
              ------------   ------------  ------------

Change-Id: Idcb2698be3eeb95beaac47125565c93370afbd99
fixes: bz#1659708
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterfsd: Multiple shd processes are spawned on brick_mux environment</title>
<updated>2019-03-12T04:54:50+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawal@redhat.com</email>
</author>
<published>2019-03-01T08:11:24+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=66986594a9023c49e61b32769b7e6b260b600626'/>
<id>66986594a9023c49e61b32769b7e6b260b600626</id>
<content type='text'>
Problem: Multiple shd processes are spawned while starting volumes in a
         loop in a brick_mux environment. glusterd spawns a process
         based on a pidfile, and the shd daemon takes some time to
         update its pid in the pidfile; because of that, glusterd is
         not able to get the shd pid.

Solution: Commit cd249f4cb783f8d79e79468c455732669e835a4f changed the
          code to update the pidfile in the parent for any gluster
          daemon after getting the status of the forked child. To
          resolve this, correct the condition so that the pidfile is
          updated in the parent only for glusterd; for the rest of the
          daemons it is updated in the child.

Change-Id: Ifd14797fa949562594a285ec82d58384ad717e81
fixes: bz#1684404
Signed-off-by: Mohit Agrawal &lt;moagrawal@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: adding a comment for code readability</title>
<updated>2019-02-19T09:28:11+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2019-02-04T08:47:16+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=862e6409cb6929e1eb235a9156fbf6cbc4719236'/>
<id>862e6409cb6929e1eb235a9156fbf6cbc4719236</id>
<content type='text'>
Adding a comment in the source code so that anyone reading it will
better understand the changes done by d4fa29.

fixes: bz#1654270

Change-Id: I75aff4243420c434c47d69a4b310f77bf161bb29
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
</entry>
<entry>
<title>core: implement a global thread pool</title>
<updated>2019-02-18T02:58:24+00:00</updated>
<author>
<name>Xavi Hernandez</name>
<email>xhernandez@redhat.com</email>
</author>
<published>2019-01-24T17:44:06+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=dddcf52020004d98f688ebef968de51d76cbf9a6'/>
<id>dddcf52020004d98f688ebef968de51d76cbf9a6</id>
<content type='text'>
This patch implements a thread pool that is wait-free for adding jobs to
the queue and uses a very small locked region to get jobs. This makes it
possible to decrease contention drastically. It's based on the wfcqueue
structure provided by the urcu library.
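
For illustration, a minimal sketch of this pattern against the classic
urcu wfcqueue API (the job structure and pool_*() names are
hypothetical, not the actual glusterfs code):

    #include &lt;urcu/wfcqueue.h&gt;
    #include &lt;urcu/compiler.h&gt;
    #include &lt;stddef.h&gt;

    struct job {
        void (*fn)(void *);
        void *data;
        struct cds_wfcq_node node;
    };

    static struct cds_wfcq_head head;
    static struct cds_wfcq_tail tail;

    void pool_init(void)
    {
        cds_wfcq_init(&amp;head, &amp;tail);
    }

    /* Producer side: enqueue is wait-free, so any number of threads
     * can add jobs without blocking each other. */
    void pool_add(struct job *j)
    {
        cds_wfcq_node_init(&amp;j-&gt;node);
        cds_wfcq_enqueue(&amp;head, &amp;tail, &amp;j-&gt;node);
    }

    /* Consumer side: the _blocking variant takes a short internal
     * lock, the "very small locked region" mentioned above. */
    struct job *pool_get(void)
    {
        struct cds_wfcq_node *n = cds_wfcq_dequeue_blocking(&amp;head, &amp;tail);
        return n ? caa_container_of(n, struct job, node) : NULL;
    }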

It automatically enables more threads when load demands it, and stops
them when not needed. There's a maximum number of threads that can be
used, and this value can be configured.

Depending on the workload, the maximum number of threads plays an
important role, so it needs to be configured for optimal performance.
Currently the thread pool doesn't self-adjust the maximum for the
workload, so this configuration needs to be changed manually.

For this reason, the global thread pool has been made optional, so that
volumes can still use the thread pool provided by io-threads.

To enable it for bricks, the following option needs to be set:

   config.global-threading = on

This option has no effect if bricks are already running. A restart is
required to activate it. It's recommended to also enable the following
option when running bricks with the global thread pool:

   performance.iot-pass-through = on

To enable it for a FUSE mount point, the option '--global-threading'
must be added to the mount command. To change it, an umount and remount
is needed. It's recommended to disable the following option when using
global threading on a mount point:

   performance.client-io-threads = off

To enable it for services managed by glusterd, glusterd needs to be
started with option '--global-threading'. In this case all daemons, like
self-heal, will be using the global thread pool.

Currently it can only be enabled for bricks, FUSE mounts and glusterd
services.

The maximum number of threads for clients and bricks can be configured
using the following options:

   config.client-threads
   config.brick-threads

These options can be applied online, and their effect is immediate most
of the time. If one of them is set to 0, the maximum number of threads
will be calculated as #cores * 2.

Some distributions ship a very old userspace-rcu library (version 0.7).
For this reason, some header files from version 0.10 have been copied
into contrib/userspace-rcu and are used if the detected version is 0.7
or older.

An additional change has been made to io-threads to prevent threads
from being started when iot-pass-through is set.

Change-Id: I09d19e246b9e6d53c6247b29dfca6af6ee00a24b
updates: #532
Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
</content>
</entry>
<entry>
<title>fuse: reflect the actual default for lru-limit option</title>
<updated>2019-02-11T13:39:20+00:00</updated>
<author>
<name>Amar Tumballi</name>
<email>amarts@redhat.com</email>
</author>
<published>2019-02-09T07:30:21+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=ecd1b4f700ea7a32cc4b46c633f88db7901ff320'/>
<id>ecd1b4f700ea7a32cc4b46c633f88db7901ff320</id>
<content type='text'>
in both the `--help` text and the man page

updates: bz#1193929
Change-Id: I9aa9367c6863ac8e2403255280697c9e6be26cf0
Signed-off-by: Amar Tumballi &lt;amarts@redhat.com&gt;
</content>
</entry>
</feed>
