<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/protocol/client/src, branch v7.0alpha</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>across: coverity fixes</title>
<updated>2019-06-03T04:00:39+00:00</updated>
<author>
<name>Amar Tumballi</name>
<email>amarts@redhat.com</email>
</author>
<published>2019-05-17T05:34:45+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=899b2170945c6023b0037fed70b19aa3cc680a22'/>
<id>899b2170945c6023b0037fed70b19aa3cc680a22</id>
<content type='text'>
* locks/posix.c: key was not freed in one of the cases.
* locks/common.c: lock was being freed out of context.
* nfs/exports: handle case of missing free.
* protocol/client: handle case of entry not freed.
* storage/posix: handle possible case of double free.

CID: 1398628, 1400731, 1400732, 1400756, 1124796, 1325526

updates: bz#789278
Change-Id: Ieeaca890288bc4686355f6565f853dc8911344e8
Signed-off-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Signed-off-by: Sheetal Pamecha &lt;spamecha@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
* locks/posix.c: key was not freed in one of the cases.
* locks/common.c: lock was being freed out of context.
* nfs/exports: handle case of missing free.
* protocol/client: handle case of entry not freed.
* storage/posix: handle possible case of double free.

CID: 1398628, 1400731, 1400732, 1400756, 1124796, 1325526

updates: bz#789278
Change-Id: Ieeaca890288bc4686355f6565f853dc8911344e8
Signed-off-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Signed-off-by: Sheetal Pamecha &lt;spamecha@redhat.com&gt;
</pre>
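<pre>
A minimal sketch of the double-free guard pattern behind fixes like
the storage/posix one above (illustrative only; 'safe_free' is a
hypothetical helper, not the actual GlusterFS code):

    #include &lt;stdlib.h&gt;

    /* Free once, then clear the pointer, so that a second cleanup
     * path turns into a harmless free(NULL) no-op. */
    static void
    safe_free(void **ptr)
    {
        if (ptr != NULL) {
            free(*ptr);
            *ptr = NULL;
        }
    }
</pre>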
</div>
</content>
</entry>
<entry>
<title>Fix some "Null pointer dereference" coverity issues</title>
<updated>2019-05-26T13:59:13+00:00</updated>
<author>
<name>Xavi Hernandez</name>
<email>xhernandez@redhat.com</email>
</author>
<published>2019-05-22T15:46:19+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=5d88111a142b3c37e92bdd36699a04fd054d27f4'/>
<id>5d88111a142b3c37e92bdd36699a04fd054d27f4</id>
<content type='text'>
This patch fixes the following CIDs:

  * 1124829
  * 1274075
  * 1274083
  * 1274128
  * 1274135
  * 1274141
  * 1274143
  * 1274197
  * 1274205
  * 1274210
  * 1274211
  * 1288801
  * 1398629

Change-Id: Ia7c86cfab3245b20777ffa296e1a59748040f558
Updates: bz#789278
Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
This patch fixes the following CIDs:

  * 1124829
  * 1274075
  * 1274083
  * 1274128
  * 1274135
  * 1274141
  * 1274143
  * 1274197
  * 1274205
  * 1274210
  * 1274211
  * 1288801
  * 1398629

Change-Id: Ia7c86cfab3245b20777ffa296e1a59748040f558
Updates: bz#789278
Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>protocol: remove compound fop</title>
<updated>2019-04-29T05:29:44+00:00</updated>
<author>
<name>Amar Tumballi</name>
<email>amarts@redhat.com</email>
</author>
<published>2019-04-20T06:25:12+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=48d160756813875cf1889b4ce96493d25f96c726'/>
<id>48d160756813875cf1889b4ce96493d25f96c726</id>
<content type='text'>
Compound fops were kept on the wire for backward compatibility with
older AFR modules. The AFR modules in releases beyond 4.x no longer
use compound fops, hence this patch removes compound fops from the
protocol code.

Note that compound fops were already an 'option' in AFR, and have
been completely removed since the 4.1.x releases.

The point to note is that, with this change, there are two ways to
upgrade when clients of the 3.x series are present:

i) Set the 'use-compound-fops' option to 'false' on any volume of
   replica type, and then upgrade the servers.

ii) Do a two-step upgrade: first from the current version (which will
    already be EOL if it is using compound fops) to a version in the
    4.1-6.x range, and then an upgrade to 7.x.

Considering that the amount of code removed along with this option is
quite high, I believe it is worth it.

updates: bz#1693692
Signed-off-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Change-Id: I0a8876d0367a15e1410ec845f251d5d3097ee593
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Compound fops were kept on the wire for backward compatibility with
older AFR modules. The AFR modules in releases beyond 4.x no longer
use compound fops, hence this patch removes compound fops from the
protocol code.

Note that compound fops were already an 'option' in AFR, and have
been completely removed since the 4.1.x releases.

The point to note is that, with this change, there are two ways to
upgrade when clients of the 3.x series are present:

i) Set the 'use-compound-fops' option to 'false' on any volume of
   replica type, and then upgrade the servers.

ii) Do a two-step upgrade: first from the current version (which will
    already be EOL if it is using compound fops) to a version in the
    4.1-6.x range, and then an upgrade to 7.x.

Considering that the amount of code removed along with this option is
quite high, I believe it is worth it.

updates: bz#1693692
Signed-off-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Change-Id: I0a8876d0367a15e1410ec845f251d5d3097ee593
</pre>
</div>
</content>
</entry>
<entry>
<title>Replace memdup() with gf_memdup()</title>
<updated>2019-04-12T13:08:28+00:00</updated>
<author>
<name>Vijay Bellur</name>
<email>vbellur@redhat.com</email>
</author>
<published>2019-02-28T00:02:43+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=c261eb6c1fa41b6a0eadabbb3a2f64dc194ec254'/>
<id>c261eb6c1fa41b6a0eadabbb3a2f64dc194ec254</id>
<content type='text'>
memdup() and gf_memdup() have the same implementation. Removed one API
as the presence of both can be confusing.

Change-Id: I562130c668457e13e4288e592792872d2e49887e
updates: bz#1193929
Signed-off-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
memdup() and gf_memdup() have the same implementation. Removed one API
as the presence of both can be confusing.

Change-Id: I562130c668457e13e4288e592792872d2e49887e
updates: bz#1193929
Signed-off-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</pre>
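<pre>
For reference, the duplicated behavior both helpers provided is
essentially the following (a sketch, not the exact libglusterfs
implementation):

    #include &lt;stdlib.h&gt;
    #include &lt;string.h&gt;

    /* Allocate a new buffer of 'size' bytes and copy 'src' into it;
     * the caller owns (and must free) the returned buffer. */
    static void *
    dup_buffer(const void *src, size_t size)
    {
        void *dup = malloc(size);
        if (dup != NULL)
            memcpy(dup, src, size);
        return dup;
    }
</pre>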
</div>
</content>
</entry>
<entry>
<title>client/fini: return fini after rpc cleanup</title>
<updated>2019-04-11T03:47:41+00:00</updated>
<author>
<name>Mohammed Rafi KC</name>
<email>rkavunga@redhat.com</email>
</author>
<published>2019-04-01T09:14:20+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=0496523d3c1ac874eeecbb0ecb6516d88438d3c9'/>
<id>0496523d3c1ac874eeecbb0ecb6516d88438d3c9</id>
<content type='text'>
There is a race condition between the rpc_transport layer
and client fini.

Sequence of events leading to the race condition:
1) When we want to destroy a graph, we first send a parent-down
   event.
2) Once parent-down is received on a client xlator, we
   initiate an rpc disconnect.
3) This in turn generates a child-down event.
4) When we process child-down, we first run fini for
   every xlator.
5) On successful return of fini, we delete the graph.

After step 5, there is a chance that the fini
on the client has not actually finished, because an rpc_transport
ref can race with the above sequence.

So we have to wait until all rpcs are successfully freed
before returning from the client's fini.

Change-Id: I20145662d71fb837e448a4d3210d1fcb2855f2d4
fixes: bz#1659708
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
There is a race condition between the rpc_transport layer
and client fini.

Sequence of events leading to the race condition:
1) When we want to destroy a graph, we first send a parent-down
   event.
2) Once parent-down is received on a client xlator, we
   initiate an rpc disconnect.
3) This in turn generates a child-down event.
4) When we process child-down, we first run fini for
   every xlator.
5) On successful return of fini, we delete the graph.

After step 5, there is a chance that the fini
on the client has not actually finished, because an rpc_transport
ref can race with the above sequence.

So we have to wait until all rpcs are successfully freed
before returning from the client's fini.

Change-Id: I20145662d71fb837e448a4d3210d1fcb2855f2d4
fixes: bz#1659708
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
</pre>
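<pre>
A minimal sketch of the "wait for the last rpc ref before returning
from fini" idea (hypothetical names and a plain counter; the actual
client xlator bookkeeping differs):

    #include &lt;pthread.h&gt;

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
    static int rpc_refs; /* outstanding rpc_transport refs */

    /* Called whenever an rpc ref is dropped. */
    void
    rpc_unref(void)
    {
        pthread_mutex_lock(&amp;lock);
        if (--rpc_refs == 0)
            pthread_cond_broadcast(&amp;cond);
        pthread_mutex_unlock(&amp;lock);
    }

    /* fini blocks until every in-flight ref is gone, so deleting
     * the graph afterwards cannot race with a live transport. */
    void
    client_fini(void)
    {
        pthread_mutex_lock(&amp;lock);
        while (rpc_refs &gt; 0)
            pthread_cond_wait(&amp;cond, &amp;lock);
        pthread_mutex_unlock(&amp;lock);
    }
</pre>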
</div>
</content>
</entry>
<entry>
<title>protocol: add an option to force using old-protocol</title>
<updated>2019-04-10T04:42:00+00:00</updated>
<author>
<name>Amar Tumballi</name>
<email>amarts@redhat.com</email>
</author>
<published>2019-03-29T03:00:49+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=22e848f88e2cb9510e82fb4875c182240fff3303'/>
<id>22e848f88e2cb9510e82fb4875c182240fff3303</id>
<content type='text'>
The protocol implements every fop and, in general, a large part of
the codebase. Considering that our regression tests mostly run on one
machine, there was no way of forcing the client to use the old
protocol while the new one is available. With this patch, a new
'testing' option is provided which forces the client to use the old
protocol if it is found.

This should help increase the code coverage by at least 10k lines overall.

updates: bz#1693692
Change-Id: Ie45256f7dea250671b689c72b4b6f25037cef948
Signed-off-by: Amar Tumballi &lt;amarts@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The protocol implements every fop and, in general, a large part of
the codebase. Considering that our regression tests mostly run on one
machine, there was no way of forcing the client to use the old
protocol while the new one is available. With this patch, a new
'testing' option is provided which forces the client to use the old
protocol if it is found.

This should help increase the code coverage by at least 10k lines overall.

updates: bz#1693692
Change-Id: Ie45256f7dea250671b689c72b4b6f25037cef948
Signed-off-by: Amar Tumballi &lt;amarts@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>mgmt/shd: Implement multiplexing in self heal daemon</title>
<updated>2019-04-01T03:44:23+00:00</updated>
<author>
<name>Mohammed Rafi KC</name>
<email>rkavunga@redhat.com</email>
</author>
<published>2019-02-25T04:35:32+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=bc3694d7cfc868a2ed6344ea123faf19fce28d13'/>
<id>bc3694d7cfc868a2ed6344ea123faf19fce28d13</id>
<content type='text'>
Problem:

The shd daemon is per node, which means it creates one graph with
all volumes on it. While this is great for utilizing resources, it
is not so good in terms of performance and manageability.

Self-heal daemons do not have the capability to automatically
reconfigure their graphs, so each time any configuration change
happens to a volume (replicate/disperse), we need to restart
shd to bring the changes into the graph.

Because of this, all ongoing heals for all other volumes have to be
stopped midway and restarted all over again.

Solution:

This change makes shd a per-volume daemon, so that a graph
is generated for each volume.

When we want to start/reconfigure shd for a volume, we first search
for an existing shd running on the node; if there is none, we
start a new process. If a shd daemon is already running, then
we simply detach the volume's graph and reattach the updated
graph for the volume. This does not touch any ongoing operations
for any other volumes on the shd daemon.

Example of an shd graph when it is per volume:

                           graph
                     -----------------------
                     |     debug-iostat    |
                     -----------------------
                    /         |             \
                   /          |              \
              ---------    ---------      ----------
              | AFR-1 |    | AFR-2 |      |  AFR-3 |
              ---------    ---------      ----------

A running shd daemon with 3 volumes will look like this:

                           graph
                     -----------------------
                     |     debug-iostat    |
                     -----------------------
                    /           |           \
                   /            |            \
              ------------   ------------  ------------
              | volume-1 |   | volume-2 |  | volume-3 |
              ------------   ------------  ------------

Change-Id: Idcb2698be3eeb95beaac47125565c93370afbd99
fixes: bz#1659708
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem:

The shd daemon is per node, which means it creates one graph with
all volumes on it. While this is great for utilizing resources, it
is not so good in terms of performance and manageability.

Self-heal daemons do not have the capability to automatically
reconfigure their graphs, so each time any configuration change
happens to a volume (replicate/disperse), we need to restart
shd to bring the changes into the graph.

Because of this, all ongoing heals for all other volumes have to be
stopped midway and restarted all over again.

Solution:

This change makes shd a per-volume daemon, so that a graph
is generated for each volume.

When we want to start/reconfigure shd for a volume, we first search
for an existing shd running on the node; if there is none, we
start a new process. If a shd daemon is already running, then
we simply detach the volume's graph and reattach the updated
graph for the volume. This does not touch any ongoing operations
for any other volumes on the shd daemon.

Example of an shd graph when it is per volume:

                           graph
                     -----------------------
                     |     debug-iostat    |
                     -----------------------
                    /         |             \
                   /          |              \
              ---------    ---------      ----------
              | AFR-1 |    | AFR-2 |      |  AFR-3 |
              ---------    ---------      ----------

A running shd daemon with 3 volumes will look like this:

                           graph
                     -----------------------
                     |     debug-iostat    |
                     -----------------------
                    /           |           \
                   /            |            \
              ------------   ------------  ------------
              | volume-1 |   | volume-2 |  | volume-3 |
              ------------   ------------  ------------

Change-Id: Idcb2698be3eeb95beaac47125565c93370afbd99
fixes: bz#1659708
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
</pre>
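<pre>
A sketch of the attach-or-spawn decision described above (all names
hypothetical; the real glusterd/shd code is more involved):

    /* Start or reconfigure shd for one volume without disturbing
     * heals that are in progress for other volumes. */
    int
    shd_start_or_reconfigure(volume_t *vol)
    {
        shd_proc_t *proc = find_shd_process_on_node();

        if (proc == NULL) /* first volume on this node: new daemon */
            return spawn_shd_process(vol);

        /* daemon already running: swap only this volume's graph */
        detach_volume_graph(proc, vol);
        return attach_volume_graph(proc, build_shd_graph(vol));
    }
</pre>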
</div>
</content>
</entry>
<entry>
<title>protocol/client: Do not fallback to anon-fd if fd is not open</title>
<updated>2019-03-31T02:56:59+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2019-03-28T12:25:54+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=92ae26ae8039847e38c738ef98835a14be9d4296'/>
<id>92ae26ae8039847e38c738ef98835a14be9d4296</id>
<content type='text'>
If an open comes on a file while a brick is down, and a fop comes on
the fd after the brick comes back up, the client xlator would still
wind the fop on an anon-fd, leading to wrong behavior of the fops in
some cases.

Example:
If an lk fop is issued on the fd just after the brick is up in the
scenario above, the lk fop will be sent on the anon-fd instead of
being failed on that client xlator. This lock will never be freed
upon close of the fd, as flush on an anon-fd is invalid and is not
wound below the server xlator.

As a fix, fail the fop unless the fd has the FALLBACK_TO_ANON_FD flag.

Change-Id: I77692d056660b2858e323bdabdfe0a381807cccc
fixes: bz#1390914
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
If an open comes on a file while a brick is down, and a fop comes on
the fd after the brick comes back up, the client xlator would still
wind the fop on an anon-fd, leading to wrong behavior of the fops in
some cases.

Example:
If an lk fop is issued on the fd just after the brick is up in the
scenario above, the lk fop will be sent on the anon-fd instead of
being failed on that client xlator. This lock will never be freed
upon close of the fd, as flush on an anon-fd is invalid and is not
wound below the server xlator.

As a fix, fail the fop unless the fd has the FALLBACK_TO_ANON_FD flag.

Change-Id: I77692d056660b2858e323bdabdfe0a381807cccc
fixes: bz#1390914
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
</pre>
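<pre>
A sketch of the fixed lookup (hypothetical names; the real client
xlator resolves the remote fd from its fd context):

    #include &lt;errno.h&gt;

    /* Map a local fd to the fd number known by this brick. Fail the
     * fop instead of silently winding it on an anonymous fd, unless
     * the caller explicitly opted in to the fallback. */
    int
    resolve_remote_fd(my_fd_t *fd, long *remote_fd)
    {
        *remote_fd = lookup_remote_fd(fd); /* -1: never opened here */
        if (*remote_fd == -1) {
            if (!(fd-&gt;flags &amp; FALLBACK_TO_ANON_FD))
                return -EBADFD; /* unwind with an error instead */
            *remote_fd = ANON_FD; /* explicit opt-in only */
        }
        return 0;
    }
</pre>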
</div>
</content>
</entry>
<entry>
<title>client-rpc: Fix the payload being sent on the wire</title>
<updated>2019-03-29T02:21:05+00:00</updated>
<author>
<name>Poornima G</name>
<email>pgurusid@redhat.com</email>
</author>
<published>2019-03-24T04:10:50+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=6ae36418834ddcd9f68fe7ef9b09b077cdf68ca3'/>
<id>6ae36418834ddcd9f68fe7ef9b09b077cdf68ca3</id>
<content type='text'>
The fops allocate 3 kinds of payload (buffers) in the client xlator:
- fop payload: the buffer allocated by the write and put fops
- rsphdr payload: the buffer required by the reply cbk of
  some fops like lookup and readdir
- rsp_payload: the buffer required by the reply cbk of fops like
  readv

Currently, in the lookup and readdir fops the rsphdr is sent as
request payload, hence the allocated rsphdr buffer is also sent on
the wire, needlessly increasing the bandwidth consumption.

With this patch, the issue is fixed.

Fixes: bz#1692093
Change-Id: Ie8158921f4db319e60ad5f52d851fa5c9d4a269b
Signed-off-by: Poornima G &lt;pgurusid@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The fops allocate 3 kinds of payload (buffers) in the client xlator:
- fop payload: the buffer allocated by the write and put fops
- rsphdr payload: the buffer required by the reply cbk of
  some fops like lookup and readdir
- rsp_payload: the buffer required by the reply cbk of fops like
  readv

Currently, in the lookup and readdir fops the rsphdr is sent as
request payload, hence the allocated rsphdr buffer is also sent on
the wire, needlessly increasing the bandwidth consumption.

With this patch, the issue is fixed.

Fixes: bz#1692093
Change-Id: Ie8158921f4db319e60ad5f52d851fa5c9d4a269b
Signed-off-by: Poornima G &lt;pgurusid@redhat.com&gt;
</pre>
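<pre>
Conceptually (with a hypothetical 'transport_submit' API, not the
actual rpc code), the difference is which vector the reply buffer
goes into:

    struct iovec rsphdr = { .iov_base = hdr_buf, .iov_len = hdr_len };

    /* Before: rsphdr passed as request payload, so its (empty)
     * contents were transmitted along with the request. */
    transport_submit(req, /* payload */ &amp;rsphdr, 1,
                     /* rsp vec */ NULL, 0);

    /* After: rsphdr only registered locally to receive the reply;
     * nothing extra goes out on the wire. */
    transport_submit(req, /* payload */ NULL, 0,
                     /* rsp vec */ &amp;rsphdr, 1);
</pre>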
</div>
</content>
</entry>
<entry>
<title>clang: Fix various missing checks for empty list</title>
<updated>2018-12-14T04:33:15+00:00</updated>
<author>
<name>ShyamsundarR</name>
<email>srangana@redhat.com</email>
</author>
<published>2018-12-12T21:45:09+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=bfe2b5e1530efd364af7a175c8a1c89bc4cab0bb'/>
<id>bfe2b5e1530efd364af7a175c8a1c89bc4cab0bb</id>
<content type='text'>
When using the list_for_each_entry(_safe) functions, care needs
to be taken that the list passed in is not empty, as these
functions are not empty-list safe.

A clang scan reported various points where this pattern
could occur, and this patch fixes them.

Additionally, the following changes are present in this patch:
- Added an explicit op_ret setting in the error case of the
MAKE_INODE_HANDLE macro, to address another reported clang issue
- Minor refactoring of some functions in the quota code, to address
possible allocation failures in certain functions (which in turn
could cause empty lists to be passed around)

Change-Id: I1e761a8d218708f714effb56fa643df2a3ea2cc7
Updates: bz#1622665
Signed-off-by: ShyamsundarR &lt;srangana@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
When using the list_for_each_entry(_safe) functions, care needs
to be taken that the list passed in is not empty, as these
functions are not empty-list safe.

A clang scan reported various points where this pattern
could occur, and this patch fixes them.

Additionally, the following changes are present in this patch:
- Added an explicit op_ret setting in the error case of the
MAKE_INODE_HANDLE macro, to address another reported clang issue
- Minor refactoring of some functions in the quota code, to address
possible allocation failures in certain functions (which in turn
could cause empty lists to be passed around)

Change-Id: I1e761a8d218708f714effb56fa643df2a3ea2cc7
Updates: bz#1622665
Signed-off-by: ShyamsundarR &lt;srangana@redhat.com&gt;
</pre>
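<pre>
A sketch of the defensive pattern applied (using the libglusterfs
list macros; surrounding declarations and type names are
hypothetical):

    /* As noted above, these macros are not empty-list safe here,
     * so guard them with an explicit emptiness check. */
    if (!list_empty(&amp;priv-&gt;pending)) {
        list_for_each_entry_safe(entry, tmp, &amp;priv-&gt;pending, list)
        {
            list_del_init(&amp;entry-&gt;list);
            process_entry(entry);
        }
    }
</pre>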
</div>
</content>
</entry>
</feed>
