<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/tests/bugs, branch v6.2</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>glusterd: define dumpops in the xlator_api of glusterd</title>
<updated>2019-05-08T13:57:24+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2019-04-26T16:58:53+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=3fc7d08c1d5dd9ac9079ee60ea734342eafb42d0'/>
<id>3fc7d08c1d5dd9ac9079ee60ea734342eafb42d0</id>
<content type='text'>
Problem: statedump is not capturing information related to glusterd

Solution: statedump is not capturing glusterd info because
trav-&gt;dumpops is NULL in gf_proc_dump_single_xlator_info (),
where trav is the glusterd xlator object. trav-&gt;dumpops is NULL
because dumpops was never defined in the xlator_api of glusterd.
Defining dumpops in the xlator_api of glusterd fixes the issue.
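
A minimal sketch of the shape of the fix, assuming the usual
xlator_api_t layout from libglusterfs (the dump callback name here is
illustrative, not the actual glusterd one):

    /* Wire a dumpops table into xlator_api; without it,
     * trav-&gt;dumpops stays NULL and statedump skips glusterd. */
    struct xlator_dumpops dumpops = {
        .priv = glusterd_dump_priv, /* hypothetical dump callback */
    };

    xlator_api_t xlator_api = {
        .init       = init,
        .fini       = fini,
        .fops       = &amp;fops,
        .cbks       = &amp;cbks,
        .options    = options,
        .dumpops    = &amp;dumpops, /* the previously missing field */
        .identifier = "glusterd",
    };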

fixes: bz#1703759
Change-Id: If85429ecb1ef580aced8d5b88d09fc15258bfc4c
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
(cherry picked from commit 5d866c13efdcdeddf184f012aa88a652e90ff22e)
</content>
</entry>
<entry>
<title>extras/hooks: syntax errors in SELinux hooks fixed, script logic improved</title>
<updated>2019-05-08T13:56:20+00:00</updated>
<author>
<name>Milan Zink</name>
<email>mzink@redhat.com</email>
</author>
<published>2018-02-05T14:04:37+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=2aa9898720e3be2a203bec960103afc2558ddff6'/>
<id>2aa9898720e3be2a203bec960103afc2558ddff6</id>
<content type='text'>
Fixes: bz#1701818
Change-Id: Ia5fa1df81bbaec3a84653d136a331c76b457f42c
Signed-off-by: Milan Zink &lt;zeten30@gmail.com&gt;
(cherry picked from commit 1ad201a9fd6748d7ef49fb073fcfe8c6858d557d)
</content>
</entry>
<entry>
<title>cluster/ec: fix fd reopen</title>
<updated>2019-05-08T13:54:59+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2019-04-16T08:49:47+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=bf69fa4727f2b4432d7d54312bc0177cbcf44936'/>
<id>bf69fa4727f2b4432d7d54312bc0177cbcf44936</id>
<content type='text'>
Currently EC tries to reopen fds that have been opened while a brick
was down. This is done as part of regular write operations, just after
having acquired the locks, and it's sent as a sub-fop of the main write
fop.

There were two problems:

1. The reopen was attempted on all UP bricks, even if a previous lock
didn't succeed. This is incorrect because the open will most likely
fail.

2. If reopen is sent and fails, the error is propagated to the main
operation, causing it to fail when it shouldn't.

To fix this, we only attempt reopens on bricks where the current fop
owns a lock, and we prevent any error from being propagated to the
main fop.

To implement this behaviour, the argument used to indicate the minimum
number of required answers has been overloaded to also carry some
flags. To make the change consistent, it has been necessary to rename
the argument, which means that a lot of files have been changed.
However, there are no functional changes.
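
To illustrate the overloading, a hypothetical encoding (the actual
masks and argument names in the EC sources may differ):

    /* Pack control flags into the high bits of the old "minimum
     * answers" argument so call sites only need a rename. */
    #define EC_MINIMUM_MASK              0x000000ff /* required answers */
    #define EC_FLAG_NO_ERROR_PROPAGATION 0x00000100 /* hypothetical bit */

    static int
    ec_fop_minimum(uint32_t fop_flags)
    {
        return (int)(fop_flags &amp; EC_MINIMUM_MASK);
    }

    static gf_boolean_t
    ec_fop_has_flag(uint32_t fop_flags, uint32_t flag)
    {
        return (fop_flags &amp; flag) != 0;
    }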

This change has also uncovered a problem in the discard code, which
didn't correctly process small requests because no real discard fop was
being processed, only a write of zeros over some region. In this case
some fields of the fop remained uninitialized or held incorrect values.
To fix this, a new function has been created to simulate success on a
fop, and it is used in the discard case.

Thanks to Pranith for providing a test script that has also detected an
issue in this patch. This patch includes a small modification of this
script to force data to be written into bricks before stopping them.

Backport of:
&gt; Change-Id: If272343873369186c2fb8f43c1d9c52c3ea304ec
&gt; BUG: bz#1699866
&gt; Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;

Change-Id: If272343873369186c2fb8f43c1d9c52c3ea304ec
Fixes: bz#1699917
Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/afr: Remove local from owners_list on failure of lock-acquisition</title>
<updated>2019-04-16T11:29:03+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2019-04-04T10:01:56+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=7ec3a8527f2fb341fc1f6e54ded36b157e4904fe'/>
<id>7ec3a8527f2fb341fc1f6e54ded36b157e4904fe</id>
<content type='text'>
When eager-lock acquisition fails because of, say, a network failure,
the local is not removed from owners_list. This leads to an
accumulation of waiting frames, and the application hangs because the
waiting frames assume that another transaction is still in the process
of acquiring the lock, since owners_list is not empty. This patch
handles that case as well, and adds asserts to make such problems
easier to find in future.
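
A minimal sketch of the cleanup path, assuming simplified names for
the AFR structures (the real code sits in afr's transaction logic):

    /* On lock-acquisition failure, drop the local from owners_list
     * under the same lock that guards the list, so waiting frames
     * don't stall behind a transaction that will never proceed. */
    LOCK(&amp;priv-&gt;lock);
    {
        list_del_init(&amp;local-&gt;transaction.owner_list);
        GF_ASSERT(list_empty(&amp;local-&gt;transaction.wait_list));
    }
    UNLOCK(&amp;priv-&gt;lock);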

fixes bz#1699731
Change-Id: I3101393265e9827755725b1f2d94a93d8709e923
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
</content>
</entry>
<entry>
<title>core: Log level changes do not take effect on a running client process</title>
<updated>2019-04-16T10:59:36+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawal@redhat.com</email>
</author>
<published>2019-04-15T05:04:34+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=d2de3f66393d4f737e21eedab4200563a60f8bcc'/>
<id>d2de3f66393d4f737e21eedab4200563a60f8bcc</id>
<content type='text'>
Problem: commit c34e4161f3cb6539ec83a9020f3d27eb4759a975 set the
         per-xlator log-level during reconfigure only for a brick
         process, not for the client process.

Solution: Change the per-xlator log-level only if brick_mux is
          enabled. To detect brick multiplexing, introduce a brick_mux
          flag in ctx-&gt;cmd_args (sketched below).

Note: Two other changes are done with this patch:
      1) Ignore the client-log-level option when attaching a brick to
         an already running brick if brick_mux is enabled.
      2) Add a log message printing the pid of the running process to
         make debugging easier.
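
A small sketch of the gating logic from the Solution above, with
hypothetical names around the documented ctx-&gt;cmd_args.brick_mux flag:

    /* During reconfigure: override each xlator's log level only when
     * bricks are multiplexed; otherwise keep the global level. */
    if (ctx-&gt;cmd_args.brick_mux) {
        for (trav = graph-&gt;first; trav; trav = trav-&gt;next)
            trav-&gt;loglevel = new_level; /* per-xlator override */
    } else {
        gf_log_set_loglevel(ctx, new_level); /* process-wide level */
    }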

&gt; Change-Id: I39e85de778e150d0685cd9a79425ce8b4783f9c9
&gt; Signed-off-by: Mohit Agrawal &lt;moagrawal@redhat.com&gt;
&gt; Fixes: bz#1696046
&gt; (Cherry picked from commit 798aadbe51a9a02dd98a0f861cc239ecf7c8ed57)
&gt; (Reviewed on upstream link https://review.gluster.org/#/c/glusterfs/+/22495/)

Change-Id: If91682830f894ab8f6857f19dcb1797fc15ca64c
Fixes: bz#1699715
Signed-off-by: Mohit Agrawal &lt;moagrawal@redhat.com&gt;
</content>
</entry>
<entry>
<title>core: Brick is not able to detach successfully in brick_mux environment</title>
<updated>2019-04-16T10:55:51+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawal@redhat.com</email>
</author>
<published>2019-04-11T15:08:53+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=88ecd64604ce01c5bc2c2ca88e33b8dcf57d5ee8'/>
<id>88ecd64604ce01c5bc2c2ca88e33b8dcf57d5ee8</id>
<content type='text'>
Problem: In a brick_mux environment, while volumes are stopped in a
         loop, bricks are not detached successfully. Bricks are not
         detached because xprtrefcnt does not reach 0 for the detached
         brick. When a brick detach is initiated, server_notify saves
         xprtrefcnt for the brick being detached, and once the counter
         reaches 0, server_rpc_notify spawns server_graph_janitor_threads
         to clean up the brick's resources. xprtrefcnt never reaches 0
         because the socket framework stops working after 0 is assigned
         as the socket's fd: commit dc25d2c1eeace91669052e3cecc083896e7329b2
         changed changelog fini to close htime_fd if htime_fd is not
         negative, but htime_fd defaults to 0, so fd 0 was closed as well.

Solution: Initialize htime_fd to -1 immediately after allocating
          changelog_priv with GF_CALLOC.
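
A minimal sketch of the init/fini pairing, with simplified access to
the changelog private structure:

    /* init: -1 marks "never opened", so fini can tell a real fd apart
     * from the 0 left behind by GF_CALLOC (fd 0 is a valid fd). */
    priv = GF_CALLOC(1, sizeof(*priv), gf_changelog_mt_priv_t);
    if (!priv)
        return -1;
    priv-&gt;htime_fd = -1;

    /* fini: close only descriptors that were actually opened. */
    if (priv-&gt;htime_fd &gt;= 0) {
        sys_close(priv-&gt;htime_fd);
        priv-&gt;htime_fd = -1;
    }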

&gt; Fixes: bz#1699025
&gt; Change-Id: I5f7ca62a0eb1c0510c3e9b880d6ab8af8d736a25
&gt; Signed-off-by: Mohit Agrawal &lt;moagrawal@redhat.com&gt;
&gt; (cherry picked from commit b777d83001d8006420b6c7d2d88fe68950aa7e00)

Change-Id: I7a2b6fc2d36405d51990376333e093661be48475
Fixes: bz#1699714
Signed-off-by: Mohit Agrawal &lt;moagrawal@redhat.com&gt;
</content>
</entry>
<entry>
<title>protocol/client: Do not fallback to anon-fd if fd is not open</title>
<updated>2019-04-16T10:52:32+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2019-03-28T12:25:54+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=fbba6e397f5ce473eddca710d593a6c2b62906d5'/>
<id>fbba6e397f5ce473eddca710d593a6c2b62906d5</id>
<content type='text'>
If an open comes on a file while a brick is down, and a fop then comes
on that fd after the brick is back up, the client xlator would still
wind the fop on an anon-fd, leading to wrong behavior of the fop in
some cases.

Example:
If an lk fop is issued on the fd just after the brick is up in the
scenario above, the lk fop will be sent on the anon-fd instead of
failing on that client xlator. This lock will never be freed upon
close of the fd, as flush on an anon-fd is invalid and is not wound
below the server xlator.

As a fix, fail the fop unless the fd has the FALLBACK_TO_ANON_FD flag.
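
A rough sketch of the check in the client xlator, with a hypothetical
helper around the documented FALLBACK_TO_ANON_FD flag:

    /* Resolve the brick-side fd; refuse the anon-fd fallback unless
     * the fd was explicitly opened with FALLBACK_TO_ANON_FD. */
    ret = client_resolve_remote_fd(this, fd, &amp;remote_fd); /* hypothetical */
    if (remote_fd == -1) {
        if (fd-&gt;flags &amp; FALLBACK_TO_ANON_FD)
            remote_fd = GF_ANON_FD_NO; /* wind on the anonymous fd */
        else
            return -EBADFD; /* fd not open on this brick: fail the fop */
    }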

Change-Id: I77692d056660b2858e323bdabdfe0a381807cccc
fixes bz#1699198
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
(cherry picked from commit 92ae26ae8039847e38c738ef98835a14be9d4296)
</content>
</entry>
<entry>
<title>afr: thin-arbiter read txn fixes</title>
<updated>2019-04-16T10:51:51+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2019-03-07T11:32:36+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=74db82dd5d5cd47c59afb99b44a8b3d698c64167'/>
<id>74db82dd5d5cd47c59afb99b44a8b3d698c64167</id>
<content type='text'>
- Fixes afr_ta_read_txn() to handle failures in the inode refresh
code path.
- Fixes a double free of a dict (sketched below).
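
A generic sketch of the usual double-unref guard, assuming the dict
hangs off afr's local structure (the concrete call sites are in afr):

    /* Drop our reference exactly once and NULL the pointer so a later
     * cleanup path cannot unref the same dict again. */
    if (local-&gt;dict) {
        dict_unref(local-&gt;dict);
        local-&gt;dict = NULL;
    }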

Note: This patch addresses post-merge review comments for commit
69532c141be160b3fea03c1579ae4ac13018dcdf

fixes: bz#1693992
Change-Id: Id5299b45b68569d47df6b73755918237a1592cb4
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
(cherry picked from commit 500bd0014128e6727e83b6cb77e8ac94304b8f4a)
</content>
</entry>
<entry>
<title>cluster/ec: Don't enqueue an entry if it is already healing</title>
<updated>2019-04-16T10:50:49+00:00</updated>
<author>
<name>Ashish Pandey</name>
<email>aspandey@redhat.com</email>
</author>
<published>2018-11-28T05:52:52+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=f792fd01aab1e50b4f662305e9c417064bd37c30'/>
<id>f792fd01aab1e50b4f662305e9c417064bd37c30</id>
<content type='text'>
Problem:
1 - heal-wait-qlength is 128 by default. If shd is disabled
and files need healing, client-side heal is needed, and
accessing those files triggers the heal.
However, it has been observed that a file can be enqueued
multiple times in the heal wait queue, which in turn causes the
queue to fill up and prevents other files from being enqueued.

2 - While a file is going through healing, a write fop from the
mount sends the write to all the bricks, including the one being
healed. At the end it updates version and size on all the
bricks. However, it does not unset the dirty flag on all the bricks,
even if the write fop was successful on all of them.
After healing completes, this dirty flag remains set and never
gets cleaned up if SHD is disabled.

Solution:
1 - If an entry is already in the queue or going through the heal
process, don't enqueue another client-side request to heal the same
file (see the sketch after this list).

2 - Unset dirty on all the bricks at the end if the fop has succeeded
on all the bricks, even if some of the bricks are going through heal.
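
A simplified sketch of the dedup check from solution 1, using
hypothetical names for the inode context and the wait queue:

    /* Enqueue for client-side heal only if this inode is neither
     * already waiting in the queue nor currently being healed. */
    LOCK(&amp;priv-&gt;lock);
    {
        if (!ctx-&gt;heal_queued &amp;&amp; !ctx-&gt;healing) {
            ctx-&gt;heal_queued = 1;
            list_add_tail(&amp;ctx-&gt;heal_list, &amp;priv-&gt;heal_waiting);
            priv-&gt;heal_waiters++;
        }
    }
    UNLOCK(&amp;priv-&gt;lock);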

Change-Id: Ia61ffe230c6502ce6cb934425d55e2f40dd1a727
updates: bz#1693223
Signed-off-by: Ashish Pandey &lt;aspandey@redhat.com&gt;
(cherry picked from commit 313dcefe7a62bd16cd794040df068f9bec9c6927)
</content>
</entry>
<entry>
<title>cluster/afr: Send truncate on arbiter brick from SHD</title>
<updated>2019-03-12T20:51:47+00:00</updated>
<author>
<name>karthik-us</name>
<email>ksubrahm@redhat.com</email>
</author>
<published>2019-03-07T16:56:49+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=9b58cfc83c26aa09eb1de8187cc65ed0c3390e97'/>
<id>9b58cfc83c26aa09eb1de8187cc65ed0c3390e97</id>
<content type='text'>
Problem:
In an arbiter volume configuration, SHD will not send any writes to the
arbiter brick even if there is a data-pending marker for it. If we have
an arbiter setup on the geo-rep master and there are data-pending
markers for files on the arbiter brick, SHD will not mark any data
changelog during healing. While syncing data from master to slave, if
the arbiter brick is considered ACTIVE, there is a chance the slave
will miss some data; if the arbiter brick is being newly added or
replaced, the slave may miss all the data during sync.

Fix:
If there is a data-pending marker for the arbiter brick, send a
truncate to the arbiter brick during heal, so that the truncate is
recorded as the data transaction in the changelog.
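
A high-level sketch of the idea, with hypothetical names (the real
logic lives in AFR's data self-heal path):

    /* The arbiter stores no data; wind a truncate to the healed size
     * instead of data writes so the changelog still records a data
     * transaction for geo-replication to pick up. */
    if (priv-&gt;arbiter_count &amp;&amp; sink == ARBITER_BRICK_INDEX) {
        STACK_WIND(frame, afr_sh_arbiter_truncate_cbk, /* hypothetical */
                   priv-&gt;children[sink],
                   priv-&gt;children[sink]-&gt;fops-&gt;ftruncate,
                   fd, healed_size, NULL);
    }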

Change-Id: I3242ba6cea6da495c418ef860d9c3359c5459dec
fixes: bz#1687672
Signed-off-by: karthik-us &lt;ksubrahm@redhat.com&gt;
</content>
</entry>
</feed>
