<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/libglusterfs/src/libglusterfs.sym, branch v7.1</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>ctime: Fix incorrect realtime passed to frame-&gt;root-&gt;ctime</title>
<updated>2019-08-28T05:49:36+00:00</updated>
<author>
<name>Kotresh HR</name>
<email>khiremat@redhat.com</email>
</author>
<published>2019-08-20T10:19:40+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=b85d550a552d485f4a7f1eedbc00bdf1f67d6263'/>
<id>b85d550a552d485f4a7f1eedbc00bdf1f67d6263</id>
<content type='text'>
On systems that don't support "timespec_get" (e.g., CentOS 6), it
was using "clock_gettime" with "CLOCK_MONOTONIC" to get the Unix
epoch time, which is incorrect. This patch introduces
"timespec_now_realtime", which uses "clock_gettime" with
"CLOCK_REALTIME" and fixes the issue.

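A minimal sketch of what such a wrapper looks like (the real helper
lives in libglusterfs; this standalone form is only for
illustration):

#include &lt;time.h&gt;

/* Fill *ts with the current wall-clock (Unix epoch) time.
 * CLOCK_REALTIME is required here: CLOCK_MONOTONIC counts from an
 * arbitrary point (e.g., boot) and is unrelated to the epoch. */
static inline void
timespec_now_realtime(struct timespec *ts)
{
    clock_gettime(CLOCK_REALTIME, ts);
}
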
Backport of:
 &gt; Patch: https://review.gluster.org/23274/
 &gt; Change-Id: I57be35ce442d7e05319e82112b687eb4f28d7612
 &gt; Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
 &gt; BUG: 1743652
(cherry picked from commit d14d0749340d9cb1ef6fc4b35f2fb3015ed0339d)

Change-Id: I57be35ce442d7e05319e82112b687eb4f28d7612
Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
fixes: bz#1746145
</content>
</entry>
<entry>
<title>event: rename event_XXX with gf_ prefixed</title>
<updated>2019-08-21T06:13:38+00:00</updated>
<author>
<name>Xiubo Li</name>
<email>xiubli@redhat.com</email>
</author>
<published>2019-07-26T04:34:52+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=acbabe3d916d763a0bb13e7df876cac61ca5b160'/>
<id>acbabe3d916d763a0bb13e7df876cac61ca5b160</id>
<content type='text'>
I hit a crash when using libgfapi.

libgfapi calls glfs_poller() --&gt; event_dispatch() in
api/src/glfs.c:721, and event_dispatch() is defined locally by
libglusterfs. The problem is that the name event_dispatch() is
exactly the same as the one exported by the OS's libevent package.

For example, with an executable program Foo that uses and links
both libevent and libgfapi at the same time, I can hit the crash,
like:

kernel: glfs_glfspoll[68486]: segfault at 1c0 ip 00007fef006fd2b8 sp
00007feeeaffce30 error 4 in libevent-2.0.so.5.1.9[7fef006ed000+46000]

With Foo linked as:
lib_foo_LADD = -levent $(GFAPI_LIBS)
it will crash.

This is because glfs_poller() is calling the event_dispatch() from
libevent, not the one from libglusterfs.

The gfapi link info:
GFAPI_LIBS = -lacl -lgfapi -lglusterfs -lgfrpc -lgfxdr -luuid

If I link Foo like:
lib_foo_LADD = $(GFAPI_LIBS) -levent
it works well without any problem.

And if Foo calls a private lib, such as handler_glfs.so, which
links the GFAPI_LIBS directly while Foo doesn't and instead
dlopen()s handler_glfs.so, then the crash is hit every time.

The link info will be:
foo_LADD = -levent
libhandler_glfs_LIBADD = $(GFAPI_LIBS)

I can avoid the crash temporarily by linking the GFAPI_LIBS in Foo too, like:
foo_LADD = $(GFAPI_LIBS) -levent
libhandler_glfs_LIBADD = $(GFAPI_LIBS)

But this is ugly, since Foo doesn't use any APIs from the GFAPI_LIBS.

And in some cases, when the --as-needed link option is added (on
many distributions it is the default), the crash comes back and the
above workaround won't work.

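A minimal sketch of the renaming pattern this patch applies (the
full list of renamed functions is in the patch itself; the
declarations below are only illustrative):

/* Before: a generic name that collides with libevent's export. */
int event_dispatch(struct event_pool *event_pool);

/* After: the gf_ prefix keeps the symbol in glusterfs's own
 * namespace, so the dynamic linker can no longer resolve a call
 * meant for libglusterfs to libevent's event_dispatch(). */
int gf_event_dispatch(struct event_pool *event_pool);
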
Backport of:
&gt; https://review.gluster.org/#/c/glusterfs/+/23110/
&gt; Change-Id: I38f0200b941bd1cff4bf3066fca2fc1f9a5263aa
&gt; Fixes: #699
&gt; Signed-off-by: Xiubo Li &lt;xiubli@redhat.com&gt;

Change-Id: I38f0200b941bd1cff4bf3066fca2fc1f9a5263aa
updates: bz#1740519
Signed-off-by: Xiubo Li &lt;xiubli@redhat.com&gt;
(cherry picked from commit 799edc73c3d4f694c365c6a7c27c9ab8eed5f260)
</content>
</entry>
<entry>
<title>ctime: Set mdata xattr on legacy files</title>
<updated>2019-08-19T11:27:14+00:00</updated>
<author>
<name>Kotresh HR</name>
<email>khiremat@redhat.com</email>
</author>
<published>2019-06-24T07:36:49+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=8d2aebf93baed6f8555cd02545d6f95da59cc7f3'/>
<id>8d2aebf93baed6f8555cd02545d6f95da59cc7f3</id>
<content type='text'>
Problem:
Files created before ctime was enabled would not have the
"trusted.glusterfs.mdata" xattr (which stores the time attributes).
Upon fops that modify either ctime or mtime, the xattr gets created
with the latest ctime, mtime and atime, which is incorrect. It
should update only the corresponding time attribute and take the
rest from the backend.

Solution:
Creating the xattr with values from the brick is not possible, as
each brick of a replica set would have different times. So create
the xattr upon a successful lookup if it does not already exist.

Note To Reviewers:
The time attributes used to set the xattr are obtained from a
successful lookup. Instead of sending the whole iatt over the wire
via setxattr, a structure called mdata_iatt is sent. The mdata_iatt
contains only the time attributes.

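A rough sketch of such a structure (field names follow iatt's
naming convention and are an assumption here; the actual definition
is in the patch):

/* Unlike the full iatt, this carries only the time attributes, so
 * much less data crosses the wire in the setxattr call. */
typedef struct mdata_iatt {
    int64_t ia_atime;        /* last access time */
    uint32_t ia_atime_nsec;
    int64_t ia_mtime;        /* last modification time */
    uint32_t ia_mtime_nsec;
    int64_t ia_ctime;        /* last status change time */
    uint32_t ia_ctime_nsec;
} mdata_iatt_t;
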
Backport of:
 &gt; Patch:  https://review.gluster.org/22936
 &gt; Change-Id: I5e535631ddef04195361ae0364336410a2895dd4
 &gt; BUG: 1593542
 &gt; Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;

Change-Id: I5e535631ddef04195361ae0364336410a2895dd4
updates: bz#1739430
Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: Optimize code to copy dictionary in handshake code path</title>
<updated>2019-05-31T14:20:25+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawal@redhat.com</email>
</author>
<published>2019-05-17T13:56:48+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=f8f09178bb890924a8050b466cc2e7a0a30e35a7'/>
<id>f8f09178bb890924a8050b466cc2e7a0a30e35a7</id>
<content type='text'>
Problem: When a high number of volumes (around 2000) is configured,
         glusterd hits a bottleneck during the handshake at the
         time of copying the dictionary.

Solution: To avoid the bottleneck, serialize the dictionary instead
          of copying key-value pairs one by one.

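A rough sketch of the pattern using libglusterfs's dict
serialization helpers (src here is a placeholder for the handshake
dictionary; the exact call sites are in the patch):

char *buf = NULL;
u_int len = 0;
dict_t *copy = dict_new();

/* Serialize once and unserialize once, instead of dict_copy()'s
 * per-key walk. */
if (dict_allocate_and_serialize(src, &amp;buf, &amp;len) == 0) {
    if (dict_unserialize(buf, len, &amp;copy) != 0) {
        /* fall back to a plain copy or log the failure */
    }
    GF_FREE(buf);
}
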
Change-Id: I9fb332f432e4f915bc3af8dcab38bed26bda2b9a
fixes: bz#1711297
Signed-off-by: Mohit Agrawal &lt;moagrawal@redhat.com&gt;
</content>
</entry>
<entry>
<title>ec/fini: Fix race with ec_fini and ec_notify</title>
<updated>2019-05-21T12:57:39+00:00</updated>
<author>
<name>Mohammed Rafi KC</name>
<email>rkavunga@redhat.com</email>
</author>
<published>2019-05-21T11:45:07+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=43ade7abac6c1d648ef337ace92194d36c8894a4'/>
<id>43ade7abac6c1d648ef337ace92194d36c8894a4</id>
<content type='text'>
During a graph cleanup, we first send a PARENT_DOWN event and wait
for a CHILD_DOWN to ultimately free the xlator and the graph.

In the ec xlator, we clean up the threads when we get a PARENT_DOWN
event. But a racing event like CHILD_UP, or an xl_op event, may
trigger healing threads after the thread cleanup.

So there is a chance that the threads might access a freed private
variable.

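A minimal sketch of the kind of guard that closes such a race (the
flag and helper names are illustrative, not the patch's actual
code):

/* Spawn healer threads only if the xlator is not shutting down.
 * The flag is set under the same lock in the PARENT_DOWN path, so
 * a racing CHILD_UP cannot start a thread after the cleanup. */
pthread_mutex_lock(&amp;ec-&gt;lock);
if (!ec-&gt;shutdown)
    ec_launch_heal_threads(ec);
pthread_mutex_unlock(&amp;ec-&gt;lock);
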
Change-Id: I252d10181bb67b95900c903d479de707a8489532
fixes: bz#1703948
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
</content>
</entry>
<entry>
<title>libglusterfs: Remove decompunder helper routines from symbol export</title>
<updated>2019-05-11T11:13:17+00:00</updated>
<author>
<name>Anoop C S</name>
<email>anoopcs@redhat.com</email>
</author>
<published>2019-05-10T06:51:31+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=da4601d536da761ce908a2461a0930857f99f171'/>
<id>da4601d536da761ce908a2461a0930857f99f171</id>
<content type='text'>
The decompounder and related sources were removed via the following commits:

https://review.gluster.org/#/c/glusterfs/+/22627/
https://review.gluster.org/#/c/glusterfs/+/22629/

Therefore, remove the symbol exports for those routines.

Change-Id: I2ef99a318de1e4b512cabd2fa923225c5b79b1e5
updates: bz#1193929
Signed-off-by: Anoop C S &lt;anoopcs@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd/store: store all key-values in one shot</title>
<updated>2019-05-08T06:46:24+00:00</updated>
<author>
<name>Yaniv Kaul</name>
<email>ykaul@redhat.com</email>
</author>
<published>2019-04-28T19:05:44+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=1fa089e7a2b180e0bdcc1e7e09a63934a2a0c0ef'/>
<id>1fa089e7a2b180e0bdcc1e7e09a63934a2a0c0ef</id>
<content type='text'>
Instead of saving each key-value pair separately, which is slow
(especially as we fflush() after each one!), store them all as one
string and write them all together.

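A minimal sketch of the batching idea (buffer handling is
simplified and the names are illustrative, not the patch's actual
code):

char buf[8192];
size_t off = 0;

/* Build all the "key=value\n" lines into one buffer... */
for (int i = 0; i &lt; n; i++)
    off += snprintf(buf + off, sizeof(buf) - off, "%s=%s\n",
                    keys[i], values[i]);

/* ...then write and flush once, instead of once per key. */
fwrite(buf, 1, off, fp);
fflush(fp);
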
Implements https://github.com/gluster/glusterfs/issues/629

Change-Id: Ie77a272446b0b6785584b710a4fdd9c613dd9578
updates: bz#1193929
Signed-off-by: Yaniv Kaul &lt;ykaul@redhat.com&gt;
</content>
</entry>
<entry>
<title>core: avoid dynamic TLS allocation when possible</title>
<updated>2019-04-24T03:26:48+00:00</updated>
<author>
<name>Xavi Hernandez</name>
<email>xhernandez@redhat.com</email>
</author>
<published>2019-03-05T17:58:20+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=d8eadde7d498939c746ea8ddd9dc70a1029b4070'/>
<id>d8eadde7d498939c746ea8ddd9dc70a1029b4070</id>
<content type='text'>
Some interdependencies between logging and memory management
functions make it impossible to use the logging framework before
initializing the memory subsystem, because both depend on Thread
Local Storage allocated through pthread_key_create() during
initialization.

This causes a crash when we try to log something very early in the
initialization phase.

To prevent this, several dynamically allocated TLS structures have
been replaced by static TLS reserved at compile time using the
'__thread' keyword. This also reduces the number of error sources,
making initialization simpler.

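A minimal illustration of the two approaches (a generic sketch, not
the patch's actual variables):

#include &lt;pthread.h&gt;

/* Dynamic TLS: usable only after pthread_key_create() has run,
 * which creates initialization-order problems between subsystems
 * and adds a call that can fail. */
static pthread_key_t log_key;

/* Static TLS: reserved at compile/load time via '__thread', so it
 * is usable immediately, with no setup step that can fail. */
static __thread char log_buf[256];
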
Updates: bz#1193929
Change-Id: I8ea2e072411e30790d50084b6b7e909c7bb01d50
Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
</content>
</entry>
<entry>
<title>core: handle memory accounting correctly</title>
<updated>2019-04-22T03:54:17+00:00</updated>
<author>
<name>Xavi Hernandez</name>
<email>xhernandez@redhat.com</email>
</author>
<published>2019-04-12T11:40:59+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=b0fce72477d56eeca616ab089756eab4f4b4bf8e'/>
<id>b0fce72477d56eeca616ab089756eab4f4b4bf8e</id>
<content type='text'>
When a translator stops, memory accounting for that translator is
not destroyed (because there could still be allocated memory that
references it), but the mutexes that coordinate updates of the
memory accounting were destroyed. This caused incorrect memory
accounting and even crashes in debug mode.

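A sketch of the general shape of the fix (names are illustrative;
the real accounting structures live in libglusterfs):

/* Destroy the accounting locks only when the last reference to the
 * translator's accounting data is gone, not when the xlator stops. */
if (GF_ATOMIC_DEC(mem_acct-&gt;refcnt) == 0) {
    for (uint32_t i = 0; i &lt; mem_acct-&gt;num_types; i++)
        LOCK_DESTROY(&amp;mem_acct-&gt;rec[i].lock);
    free(mem_acct);
}
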
This patch also fixes some other things:

* Reduce the number of atomic operations needed to manage memory
  accounting.
* Correctly account memory when realloc() is used.
* Merge two critical sections into one.
* Clean up the code a bit.

Change-Id: Id5eaee7338729b9bc52c931815ca3ff1e5a7dcc8
Updates: bz#1659334
Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
</content>
</entry>
<entry>
<title>mgmt/shd: Implement multiplexing in self heal daemon</title>
<updated>2019-04-01T03:44:23+00:00</updated>
<author>
<name>Mohammed Rafi KC</name>
<email>rkavunga@redhat.com</email>
</author>
<published>2019-02-25T04:35:32+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=bc3694d7cfc868a2ed6344ea123faf19fce28d13'/>
<id>bc3694d7cfc868a2ed6344ea123faf19fce28d13</id>
<content type='text'>
Problem:

The shd daemon is per node, which means it creates one graph with
all volumes on it. While this is great for utilizing resources, it
is not so good in terms of performance and manageability.

Self-heal daemons don't have the capability to automatically
reconfigure their graphs, so each time any configuration change
happens to the volumes (replicate/disperse), we need to restart shd
to bring the changes into the graph.

Because of this, all ongoing heals for all other volumes have to be
stopped in the middle and restarted all over again.

Solution:

This change makes shd a per-volume daemon, so that a graph is
generated for each volume.

When we want to start/reconfigure shd for a volume, we first search
for an existing shd running on the node; if there is none, we start
a new process. If an shd daemon is already running, we simply
detach the volume's graph and reattach the updated graph for that
volume. This doesn't touch any ongoing operations for any other
volume on the shd daemon.

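A sketch of that start/reconfigure decision (the helper names are
illustrative pseudo-code, not the patch's actual functions):

/* Pseudo-flow for starting/reconfiguring shd for one volume. */
if (!shd_daemon_running(node)) {
    start_shd_process(node, volume);  /* fresh daemon, fresh graph */
} else {
    detach_volume_graph(shd, volume); /* drop the stale graph */
    attach_volume_graph(shd, volume); /* attach the regenerated one */
    /* heals for other volumes on this daemon keep running */
}
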
Example of an shd graph when it is per volume

                           graph
                     -----------------------
                     |     debug-iostat    |
                     -----------------------
                    /         |             \
                   /          |              \
              ---------    ---------      ----------
              | AFR-1 |    | AFR-2 |      |  AFR-3 |
              ---------    ---------      ----------

A running shd daemon with 3 volumes will look like this:

                           graph
                     -----------------------
                     |     debug-iostat    |
                     -----------------------
                    /           |           \
                   /            |            \
              ------------   ------------  ------------
              | volume-1 |   | volume-2 |  | volume-3 |
              ------------   ------------  ------------

Change-Id: Idcb2698be3eeb95beaac47125565c93370afbd99
fixes: bz#1659708
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
</content>
</entry>
</feed>
