path: root/tests/functional/glusterd
Commit message (Author, Age, Files, Lines +added/-removed)
* [Test]: Add tc to check increase in glusterd memory consumption (nik-redhat, 2021-02-10, 1 file, +207/-0)
    Test Steps:
    1) Enable brick-multiplex and set max-bricks-per-process to 3 in the cluster
    2) Get the glusterd memory consumption
    3) Perform create, start, stop and delete operations for 100 volumes
    4) Check glusterd memory consumption; it should not increase by more than 50MB
    5) Repeat steps 3-4 two more times
    6) Check glusterd memory consumption; it should not increase by more than 10MB
    Upstream issue link: https://github.com/gluster/glusterfs/issues/2142
    Change-Id: I54d5e337513671d569267fa23fe78b6d3410e944
    Signed-off-by: nik-redhat <nladha@redhat.com>
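The memory check in steps 2, 4 and 6 can be done straight from /proc. A minimal sketch (not the glusto-tests implementation) that samples glusterd's resident memory, assuming it runs as root on a server node with glusterd up:

```python
#!/usr/bin/env python3
"""Sketch: sample glusterd resident memory (VmRSS) from /proc."""
import subprocess


def get_glusterd_rss_kb():
    """Return the VmRSS of glusterd in kB, or None if it is not running."""
    pids = subprocess.run(["pidof", "glusterd"], capture_output=True,
                          text=True).stdout.split()
    if not pids:
        return None
    with open("/proc/{}/status".format(pids[0])) as status:
        for line in status:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])   # value is reported in kB
    return None


if __name__ == "__main__":
    before = get_glusterd_rss_kb()
    assert before is not None, "glusterd is not running"
    # ... perform the 100 volume create/start/stop/delete cycles here ...
    after = get_glusterd_rss_kb()
    assert after - before <= 50 * 1024, "glusterd grew by more than 50MB"
```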
* [Test] Add test to verify df -h output after replace, expand and shrink ops (Pranav, 2021-01-18, 1 file, +171/-0)
    Test to verify the df -h output when, for a given volume, the bricks are replaced, the volume size is shrunk and the volume size is expanded.
    Steps:
    - Take the output of df -h.
    - Replace any one brick for the volumes.
    - Wait till the heal is completed
    - Repeat steps 1, 2 and 3 for all bricks for all volumes.
    - Check if there are any inconsistencies in the output of df -h
    - Remove bricks from the volume and check the output of df -h
    - Add bricks to the volume and check the output of df -h
    The size of the mount points should remain unchanged during a replace op, and the sizes should vary according to the shrink or expand op performed on the volume.
    Change-Id: I323da4938767cad1976463c2aefb6c41f355ac57
    Signed-off-by: Pranav <prprakas@redhat.com>
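The "size of the mount point" comparison is the same number df -h reports; a small sketch of that check, with the mount path being a hypothetical example:

```python
#!/usr/bin/env python3
"""Sketch: compare the total mount size before and after a brick op."""
import os

MOUNT = "/mnt/testvol"          # hypothetical fuse mount of the volume


def mount_size_bytes(path):
    """Total capacity of the filesystem backing `path`, as df sees it."""
    st = os.statvfs(path)
    return st.f_blocks * st.f_frsize


size_before = mount_size_bytes(MOUNT)
# ... replace-brick on the volume and wait for heal to complete ...
size_after = mount_size_bytes(MOUNT)
assert size_before == size_after, "mount size changed after replace-brick"
```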
* [TestFix+Lib] Add steps to validate glusterd logs (Pranav, 2021-01-18, 1 file, +23/-43)
    Adding additional checks to verify the glusterd logs for `Responded to` and `Received ACC` while performing a glusterd restart.
    Replacing reboot with network interface down to validate the peer probe scenarios.
    Adding lib to bring down network interface.
    Change-Id: Ifb01d53f67835224d828f531e7df960c6cb0a0ba
    Signed-off-by: Pranav <prprakas@redhat.com>
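A rough sketch of the log validation, assuming the default glusterd log location; it restarts the daemon and looks for the two handshake messages named above in the recent log tail:

```python
#!/usr/bin/env python3
"""Sketch: confirm peer handshake messages appear after a glusterd restart."""
import subprocess

GLUSTERD_LOG = "/var/log/glusterfs/glusterd.log"   # default log location

subprocess.run(["systemctl", "restart", "glusterd"], check=True)

with open(GLUSTERD_LOG) as log:
    tail = log.readlines()[-500:]          # only look at recent entries

for marker in ("Received ACC", "Responded to"):
    assert any(marker in line for line in tail), (
        "'{}' not found in glusterd log after restart".format(marker))
```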
* [Test] Rebalance should start successfully if name of volume more than 108 chars (“Milind, 2021-01-08, 1 file, +173/-0)
    1. On node N1, add "transport.socket.bind-address N1" in /etc/glusterfs/glusterd.vol
    2. Create replicate (1x3) and disperse (4+2) volumes with names of more than 108 chars
    3. Mount both volumes using node N1, where the "transport.socket.bind-address" was added, and start IO (like untar)
    4. Perform add-brick on the replicate volume (3 bricks)
    5. Start rebalance on the replicate volume
    6. Perform add-brick on the disperse volume (6 bricks)
    7. Start rebalance on the disperse volume
    Change-Id: Ibc57f18b84d21439bbd65a665b31d45b9036ca05
    Signed-off-by: “Milind <“mwaykole@redhat.com”>
* [Test] Change the reserve limits to lower and higher while rebal in-progress (“Milind, 2020-12-21, 1 file, +127/-0)
    1) Create a distributed-replicated volume and start it.
    2) Enable the storage.reserve option on the volume using the command below:
       gluster volume set storage.reserve 50
    3) Mount the volume on a client
    4) Add some data on the mount point (should be within reserve limits)
    5) Now, add-brick and trigger rebalance. While rebalance is in progress, change the reserve limit to a lower value, say 30
    6) Stop the rebalance
    7) Reset the storage.reserve value to 50 as in step 2
    8) Trigger rebalance
    9) While rebalance is in progress, change the reserve limit to a higher value, say 70
    Change-Id: I1b2e449f74bb75392a25af7b7088e7ebb95d2860
    Signed-off-by: “Milind <“mwaykole@redhat.com”>
* [TestFix] Adding cluster options reset in TC (srijan-sivakumar, 2020-12-17, 1 file, +5/-5)
    The cluster options are reset post the TC run so that they don't persist through the other TC runs.
    Change-Id: Id55bb64ded09e113cdc0fc512a17857195619e41
    Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
* [Test]: Add tc to detect drop of outbound traffic as network failure in glusterd (nik-redhat, 2020-12-17, 1 file, +115/-0)
    Test Steps:
    1) Create a volume and start it.
    2) Add an iptables rule to drop outbound glusterd traffic
    3) Check if the rule is added in the iptables list
    4) Execute a few Gluster CLI commands like volume status, peer status
    5) Gluster CLI commands should fail with a suitable error message
    Change-Id: Ibc5717659e65f0df22ea3cec098bf7d1932bef9d
    Signed-off-by: nik-redhat <nladha@redhat.com>
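Steps 2 and 3 boil down to an iptables OUTPUT rule for glusterd's management port (24007). A hedged sketch, to be run as root on one server:

```python
#!/usr/bin/env python3
"""Sketch: drop outbound glusterd management traffic and verify the rule."""
import subprocess

# Drop everything leaving this node for the glusterd management port.
subprocess.run(["iptables", "-A", "OUTPUT", "-p", "tcp",
                "--dport", "24007", "-j", "DROP"], check=True)

# Verify the rule shows up in the OUTPUT chain listing.
listing = subprocess.run(["iptables", "-L", "OUTPUT", "-n"],
                         capture_output=True, text=True, check=True).stdout
assert "24007" in listing, "drop rule for glusterd traffic not found"

# Gluster CLI commands such as `gluster volume status` are now expected
# to fail with a network/timeout error; clean up afterwards with:
#   iptables -D OUTPUT -p tcp --dport 24007 -j DROP
```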
* [Test]: Add tc to test reserved port range for gluster (nik-redhat, 2020-12-17, 1 file, +152/-0)
    Test Steps:
    1) Set the max-port option in the glusterd.vol file to 49200
    2) Restart glusterd on one of the nodes
    3) Create 50 volumes in a loop
    4) Try to start the 50 volumes in a loop
    5) Confirm that the 50th volume failed to start
    6) Confirm the error message due to which the volume failed to start
    7) Set the max-port option in the glusterd.vol file back to the default value
    8) Restart glusterd on the same node
    9) Starting the 50th volume should succeed now
    Change-Id: I084351db20cc37e3391061b7b313a18896cc90b1
    Signed-off-by: nik-redhat <nladha@redhat.com>
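Step 1 amounts to editing /etc/glusterfs/glusterd.vol on the target node and restarting glusterd. A minimal sketch, assuming the stock file layout where options live inside the 'volume management' block (run as root):

```python
#!/usr/bin/env python3
"""Sketch: set `option max-port 49200` in glusterd.vol and restart glusterd."""
import re
import subprocess

GLUSTERD_VOL = "/etc/glusterfs/glusterd.vol"

with open(GLUSTERD_VOL) as f:
    conf = f.read()

pattern = re.compile(r"^\s*#?\s*option max-port\s+\d+\s*$", re.M)
if pattern.search(conf):
    # Replace an existing (possibly commented-out) max-port line.
    conf = pattern.sub("    option max-port 49200", conf)
else:
    # Otherwise add it just before the block's closing 'end-volume'.
    conf = conf.replace("end-volume",
                        "    option max-port 49200\nend-volume", 1)

with open(GLUSTERD_VOL, "w") as f:
    f.write(conf)

subprocess.run(["systemctl", "restart", "glusterd"], check=True)
```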
* [Test] Memory crash - stop and start gluster processes multiple times (srijan-sivakumar, 2020-12-09, 1 file, +123/-0)
    Steps-
    1. Create a gluster volume.
    2. Kill all gluster related processes.
    3. Start glusterd service.
    4. Verify that all gluster processes are up.
    5. Repeat the above steps 5 times.
    Change-Id: If01788ae8bcdd75cdb55261715c34edf83e6f018
    Signed-off-by: Rinku Kothiya <rkothiya@redhat.com>
    Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
* [Test Fix]: Assertion of default quorum options (nik-redhat, 2020-12-04, 1 file, +4/-3)
    Fix: Improved the check for the default quorum options on the volume, to work with the present as well as older default values.
    Older default value: '51'
    Current default value: '51 (DEFAULT)'
    Change-Id: I200b81334e84a7956090bede3e2aa50b9d4cf8e0
    Signed-off-by: nik-redhat <nladha@redhat.com>
* [TestFix] Performing cluster options reset. (srijan-sivakumar, 2020-12-04, 1 file, +15/-1)
    Issue: The cluster options set during the TC aren't reset, causing the cluster options to affect subsequent TC runs.
    Fix: Adding volume_reset() in the tearDown of a TC to perform a cleanup of the cluster options.
    Change-Id: I00da5837d2a4260b4d414cc3c8083f83d8f6fadd
    Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
* [Test]: Add a tc to check default max bricks per process (nik-redhat, 2020-12-04, 1 file, +100/-0)
    Test steps:
    1) Create a volume and start it.
    2) Fetch the max bricks per process value
    3) Reset the volume options
    4) Fetch the max bricks per process value
    5) Compare the value fetched in the last step with the initial value
    6) Enable brick-multiplexing in the cluster
    7) Fetch the max bricks per process value
    8) Compare the value fetched in the last step with the initial value
    Change-Id: I20bdefd38271d1e12acf4699b4fe5d0da5463ab3
    Signed-off-by: nik-redhat <nladha@redhat.com>
* [TestFix] Adding cluster options reset. (srijan-sivakumar, 2020-12-04, 1 file, +7/-0)
    The cluster options, once set, aren't reset, and this would cause problems for subsequent TCs; hence resetting the options at teardown.
    Change-Id: Ifd1df2632a25ca7788a6bb4f765b3f6583ab06d6
    Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
* [Test] Add tc to verify the vol status vol/all --xml dump (“Milind, 2020-12-02, 1 file, +106/-0)
    1. Stop one of the volumes, i.e. gluster volume stop <vol-name>
    2. Get the status of the volumes with an --xml dump, i.e. gluster volume status all --xml
    The XML dump should be consistent.
    Signed-off-by: “Milind <“mwaykole@redhat.com”>
    Change-Id: I3e7af6d1bc45b73ed8302bf3277e3613a6b1100f
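A small sketch of the consistency check in step 2: fetch the XML dump and make sure it parses and reports success. Element names follow the usual cliOutput layout of the gluster CLI; treat them as an assumption:

```python
#!/usr/bin/env python3
"""Sketch: verify `gluster volume status all --xml` is well-formed."""
import subprocess
import xml.etree.ElementTree as ET

out = subprocess.run(["gluster", "volume", "status", "all", "--xml"],
                     capture_output=True, text=True, check=True).stdout

root = ET.fromstring(out)                     # raises on malformed XML
assert root.findtext("opRet") == "0", "gluster reported a non-zero opRet"

# List the volumes present in the dump for a quick sanity check.
for vol in root.iter("volume"):
    print("volume in dump:", vol.findtext("volName"))
```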
* [TestFix] Move mem_leak TC to resource_leak dir (Pranav, 2020-12-02, 1 file, +0/-128)
    Moving the gluster mem_leak test case to the resource_leak dir
    Change-Id: I8189dc9b509a09f793fe8ca2be53e8546babada7
    Signed-off-by: Pranav <prprakas@redhat.com>
* [TestFix]: Fixed the grep pattern for epoll thread count (nik-redhat, 2020-11-27, 1 file, +10/-10)
    Modified the command from 'grep epoll_wait' to 'grep -i sys_epoll_wait' to address the changes in the epoll functionality in newer versions of Linux.
    Details of the changes can be found here:
    https://github.com/torvalds/linux/commit/791eb22eef0d077df4ddcf633ee6eac038f0431e
    Change-Id: I1671a74e538d20fe5dbf951fca6f8edabe0ead7f
    Signed-off-by: nik-redhat <nladha@redhat.com>
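For context, the thread count can be approximated by grepping the per-thread kernel stacks of glusterd under /proc. This is a rough sketch, not the framework's test script; the exact symbol name (epoll_wait vs sys_epoll_wait) varies by kernel version, which is precisely what this fix works around. Needs root:

```python
#!/usr/bin/env python3
"""Sketch: count glusterd threads currently blocked in an epoll wait."""
import glob
import subprocess

pid = subprocess.run(["pidof", "glusterd"], capture_output=True,
                     text=True).stdout.split()[0]

count = 0
for stack_file in glob.glob("/proc/{}/task/*/stack".format(pid)):
    try:
        with open(stack_file) as f:
            # Case-insensitive match covers sys_epoll_wait / do_epoll_wait.
            if "epoll_wait" in f.read().lower():
                count += 1
    except OSError:
        continue      # thread exited between glob and open

print("glusterd threads waiting in epoll:", count)
```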
* [Test] XML dump of gluster volume status during rebalance (“Milind, 2020-11-27, 1 file, +185/-0)
    1. Create a trusted storage pool by peer probing the node
    2. Create a distributed-replicated volume
    3. Start the volume, fuse mount the volume and start IO
    4. Create another replicated volume, start it and stop it
    5. Start rebalance on the volume.
    6. While rebalance is in progress, stop glusterd on one of the nodes in the Trusted Storage Pool.
    7. Get the status of the volumes with --xml dump
    Change-Id: I581b7713d7f9bfdd7be00add3244578b84daf94f
    Signed-off-by: “Milind <“mwaykole@redhat.com”>
* [Test] Add test to verify memory leak with ssl enabled (Pranav, 2020-11-27, 1 file, +128/-0)
    This test is to verify BZ:1785577 (https://bugzilla.redhat.com/show_bug.cgi?id=1785577)
    To verify that there are no memory leaks when SSL is enabled
    Change-Id: I1f44de8c65b322ded76961253b8b7a7147aca76a
    Signed-off-by: Pranav <prprakas@redhat.com>
* [Test] Add tc to check vol set when glusterd is stopped on one node (nik-redhat, 2020-11-24, 1 file, +193/-0)
    Test Steps:
    1) Setup and mount a volume on client.
    2) Stop glusterd on a random server.
    3) Start IO on mount points
    4) Set an option on the volume
    5) Start glusterd on the stopped node.
    6) Verify all the bricks are online after starting glusterd.
    7) Check if the volume info is synced across the cluster.
    Change-Id: Ia2982ce4e26f0d690eb2bc7516d463d2a71cce86
    Signed-off-by: nik-redhat <nladha@redhat.com>
* [Test]: Add tc for default ping-timeout and epoll thread count (nik-redhat, 2020-11-24, 1 file, +87/-0)
    Test Steps:
    1. Start glusterd
    2. Check that the ping timeout value in glusterd.vol is 0
    3. Create a test script for the epoll thread count
    4. Source the test script
    5. Fetch the pid of glusterd
    6. Check that the epoll thread count of glusterd is 1
    Change-Id: Ie3bbcb799eb1776004c3db4922d7ee5f5993b100
    Signed-off-by: nik-redhat <nladha@redhat.com>
* [TestFix] Fixing typo glusterd (“Milind, 2020-11-10, 1 file, +1/-1)
    Change-Id: I080328dfbcde5652f9ab697f8751b87bf96e8245
    Signed-off-by: “Milind <“mwaykole@redhat.com”>
* [Test] Brick status offline when quorum not met. (srijan-sivakumar, 2020-11-09, 1 file, +125/-0)
    Steps-
    1. Create a volume and mount it.
    2. Set the quorum type to 'server'.
    3. Bring some nodes down such that quorum isn't met.
    4. Brick status in the node which is up should be offline.
    5. Restart glusterd in this node.
    6. Brick status in the restarted node should be offline.
    Change-Id: If6885133848d77ec803f059f7a056dc3aeba7eb1
    Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
* [Test] Add tc to check the default log level of CLI (nik-redhat, 2020-11-09, 1 file, +97/-0)
    Test Steps:
    1) Create and start a volume
    2) Run volume info command
    3) Run volume status command
    4) Run volume stop command
    5) Run volume start command
    6) Check the default log level of cli.log
    Change-Id: I871d83500b2a3876541afa348c49b8ce32169f23
    Signed-off-by: nik-redhat <nladha@redhat.com>
* [Test] Add tc to check updates in 'options' file on quorum changes (nik-redhat, 2020-11-05, 1 file, +94/-0)
    Test Steps:
    1. Create and start a volume
    2. Check the output of the '/var/lib/glusterd/options' file
    3. Store the value of 'global-option-version'
    4. Set server-quorum-ratio to 70%
    5. Check the output of the '/var/lib/glusterd/options' file
    6. Compare the value of 'global-option-version' and check if the value of 'server-quorum-ratio' is set to 70%
    Change-Id: I5af40a1e05eb542e914e5766667c271cbbe126e8
    Signed-off-by: nik-redhat <nladha@redhat.com>
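The options file is simple key=value text, so steps 2, 3 and 5 reduce to reading it before and after the quorum change. A hedged sketch, assuming the key names glusterd uses for cluster-wide options:

```python
#!/usr/bin/env python3
"""Sketch: read /var/lib/glusterd/options and track global-option-version."""

OPTIONS_FILE = "/var/lib/glusterd/options"


def read_options(path=OPTIONS_FILE):
    opts = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and "=" in line:
                key, _, value = line.partition("=")
                opts[key] = value
    return opts


before = read_options()
print("global-option-version:", before.get("global-option-version"))
# ... run: gluster volume set all cluster.server-quorum-ratio 70% ...
after = read_options()
# The version counter is expected to increase after the quorum change.
print("new global-option-version:", after.get("global-option-version"))
print("server-quorum-ratio:", after.get("cluster.server-quorum-ratio"))
```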
* [Test] Add tc to validate auth.allow and auth.reject options on volume (nik-redhat, 2020-11-05, 1 file, +162/-0)
    Test Steps:
    1. Create and start a volume
    2. Disable brick multiplex
    3. Set the auth.allow option on the volume for the client address on which the volume is to be mounted
    4. Mount the volume on the client and then unmount it.
    5. Reset the volume
    6. Set the auth.reject option on the volume for the client address on which the volume is to be mounted
    7. Mounting the volume should fail
    8. Reset the volume and mount it on the client.
    9. Repeat steps 3-8 with brick multiplex enabled
    Change-Id: I26d88a217c03f1b4732e4bdb9b8467a9cd608bae
    Signed-off-by: nik-redhat <nladha@redhat.com>
* [Test] Add test to verify the posix storage.reserve option (“Milind”, 2020-11-02, 1 file, +79/-0)
    1) Create a distributed-replicated volume and start it.
    2) Enable the storage.reserve option on the volume using the command below,
       gluster volume set storage.reserve; let's say, set it to a value of 50.
    3) Mount the volume on a client
    4) Check the df -h output of the mount point and backend bricks.
    Change-Id: I74f891ce5a92e1a4769ec47c64fc5469b6eb9224
    Signed-off-by: “Milind” <mwaykole@redhat.com>
* [Test]: Add tc to check profile simultaneously on 2 different nodes (nik-redhat, 2020-10-22, 1 file, +185/-0)
    Test Steps:
    1) Create a volume and start it.
    2) Mount the volume on a client and start IO.
    3) Start profile on the volume.
    4) Create another volume.
    5) Start profile on the volume.
    6) Run volume status in a loop for 100 times on one node.
    7) Run profile info for the new volume on one of the other nodes
    8) Run profile info for the new volume in a loop for 100 times on the other node
    Change-Id: I1c32a938bf434a88aca033c54618dca88623b9d1
    Signed-off-by: nik-redhat <nladha@redhat.com>
* [Test] Add TC to check glusterd config file (“Milind”, 2020-10-22, 1 file, +29/-0)
    1. Check the location of the glusterd socket file (glusterd.socket):
       ls /var/run/ | grep -i glusterd.socket
    2. systemctl is-enabled glusterd -> enabled
    Change-Id: I6557c27ffb7e91482043741eeac0294e171a0925
    Signed-off-by: “Milind” <mwaykole@redhat.com>
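Both checks are trivial to script; a minimal sketch mirroring the two steps above:

```python
#!/usr/bin/env python3
"""Sketch: verify the glusterd socket file exists and the unit is enabled."""
import os
import subprocess

# The unix socket glusterd exposes for local CLI connections.
assert os.path.exists("/var/run/glusterd.socket"), \
    "glusterd.socket not found under /var/run"

state = subprocess.run(["systemctl", "is-enabled", "glusterd"],
                       capture_output=True, text=True).stdout.strip()
assert state == "enabled", "glusterd service is not enabled ({})".format(state)
```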
* [Test] Default volume behavior and quorum options (srijan-sivakumar, 2020-10-20, 1 file, +129/-0)
    Steps-
    1. Create and start volume.
    2. Check that the quorum options aren't coming up in the vol info.
    3. Kill two glusterd processes.
    4. There shouldn't be any effect on the glusterfsd processes.
    Change-Id: I40e6ab5081e723ae41417f1e5a6ece13c65046b3
    Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
* [Test] Add test to check gluster does not release posix lock with multiple clients (“Milind”, 2020-10-19, 1 file, +91/-0)
    Steps:
    1. Create all types of volumes.
    2. Mount the volume on two client mounts
    3. Prepare the same script to do flock on the two nodes; while running this script it should not hang
    4. Wait till 300 iterations on both the nodes
    Change-Id: I53e5c8b3b924ac502e876fb41dee34e9b5a74ff7
    Signed-off-by: “Milind” <mwaykole@redhat.com>
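The flock loop in step 3 could look roughly like the sketch below, run on each client against the same file on the mount (the file path is illustrative, not from the test):

```python
#!/usr/bin/env python3
"""Sketch: repeatedly take an exclusive flock on a file on the fuse mount."""
import fcntl
import time

LOCK_FILE = "/mnt/testvol/flock_test"    # hypothetical file on the mount

for iteration in range(300):
    with open(LOCK_FILE, "w") as f:
        # Blocks until the other client's lock on the same file is dropped;
        # the test expects this to make progress rather than hang forever.
        fcntl.flock(f, fcntl.LOCK_EX)
        f.write("iteration {}\n".format(iteration))
        fcntl.flock(f, fcntl.LOCK_UN)
    time.sleep(0.1)
```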
* [TestFix] Changing the assert statement (“Milind”, 2020-10-19, 1 file, +5/-1)
    Changed from
    `self.validate_vol_option('storage.reserve', '1 (DEFAULT)')`
    to
    `self.validate_vol_option('storage.reserve', '1')`
    Change-Id: If75820b4ab3c3b04454e232ea1eccc4ee5f7be0b
    Signed-off-by: “Milind” <mwaykole@redhat.com>
* [Test] Test mountpoint ownership post volume restart. (srijan-sivakumar, 2020-10-19, 1 file, +109/-0)
    Steps-
    1. Create a volume and mount it.
    2. Set ownership permissions on the mountpoint and validate it.
    3. Restart the volume.
    4. Validate the permissions set on the mountpoint.
    Change-Id: I1bd3f0b5181bc93a7afd8e77ab5244224f2f4fed
    Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
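A compact sketch of steps 2-4; the mount path, volume name and uid/gid are assumptions for illustration, and it needs root to chown:

```python
#!/usr/bin/env python3
"""Sketch: ownership on the mountpoint should survive a volume restart."""
import os
import subprocess

MOUNT = "/mnt/testvol"      # hypothetical fuse mount
VOLNAME = "testvol"         # hypothetical volume name
UID, GID = 1000, 1000       # ownership to apply for the check

os.chown(MOUNT, UID, GID)

subprocess.run(["gluster", "--mode=script", "volume", "stop", VOLNAME],
               check=True)
subprocess.run(["gluster", "volume", "start", VOLNAME], check=True)

st = os.stat(MOUNT)
assert (st.st_uid, st.st_gid) == (UID, GID), \
    "mountpoint ownership changed after volume restart"
```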
* [Test] Add test to check glusterd crash when firewall ports not opened (Pranav, 2020-10-12, 1 file, +140/-0)
    Add test to verify whether a glusterd crash occurs while performing a peer probe with firewall services removed.
    Change-Id: If68c3da2ec90135a480a3cb1ffc85a6b46b1f3ef
    Signed-off-by: Pranav <prprakas@redhat.com>
* [Test]: Add tc to check volume status with brick removal (nik-redhat, 2020-10-12, 1 file, +69/-12)
    Steps:
    1. Create a volume and start it.
    2. Fetch the brick list
    3. Bring any one brick down by umounting the brick
    4. Force start the volume and check that not all the bricks are online
    5. Remount the removed brick and bring back the brick online
    6. Force start the volume and check if all the bricks are online
    Change-Id: I464d3fe451cb7c99e5f21835f3f44f0ea112d7d2
    Signed-off-by: nik-redhat <nladha@redhat.com>
* [TestFix]: Add tc to check volume status with bricks absent (nik-redhat, 2020-10-09, 1 file, +30/-45)
    Fix: Added more volume types to perform tests and optimized the code for a better flow.
    Change-Id: I8249763161f30109d068da401504e0a24cde4d78
    Signed-off-by: nik-redhat <nladha@redhat.com>
* [TestFix] Add check to verify glusterd Error (Pranav, 2020-10-07, 1 file, +27/-0)
    Adding a check to verify that gluster volume status doesn't cause any error msg in the glusterd logs
    Change-Id: I5666aa7fb7932a7b61a56afa7d60341ef66a978e
    Signed-off-by: Pranav <prprakas@redhat.com>
* [Test] Volume profile info without starting profile (nik-redhat, 2020-10-06, 1 file, +188/-0)
    Steps-
    1. Create a volume and start it.
    2. Mount volume on the client and start IO.
    3. Start profile on the volume
    4. Run profile info and see if all bricks are present or not
    5. Create another volume and start it.
    6. Run profile info without starting profile.
    7. Run profile info with all possible options without starting profile.
    Change-Id: I0eb2424f385197c45bc0c4e3084c053a9498ae7d
    Signed-off-by: nik-redhat <nladha@redhat.com>
* [Test] Test Quorum specific CLI commands. (srijan-sivakumar, 2020-09-29, 1 file, +97/-0)
    Steps-
    1. Create a volume and start it.
    2. Set the quorum-type to 'server' and verify it.
    3. Set the quorum-type to 'none' and verify it.
    4. Set the quorum-ratio to some value and verify it.
    Change-Id: I08715972c13fc455cee25f25bdda852b92a48e10
    Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
* [Test] Test set and reset of storage.reserve limit on glusterd (srijan-sivakumar, 2020-09-29, 1 file, +91/-0)
    Steps-
    1. Create a volume and start it.
    2. Set storage.reserve limit on the created volume and verify
    3. Reset storage.reserve limit on the created volume and verify
    Change-Id: I6592d19463696ba2c43efbb8f281024fc610d18d
    Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
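The set/verify/reset cycle maps directly onto the gluster CLI; a minimal sketch, with the volume name being a hypothetical placeholder:

```python
#!/usr/bin/env python3
"""Sketch: set, verify and reset storage.reserve via the gluster CLI."""
import subprocess

VOLNAME = "testvol"   # hypothetical volume


def run(*cmd):
    return subprocess.run(cmd, capture_output=True, text=True,
                          check=True).stdout


# Set the reserve limit to 50% and confirm it is reported back.
run("gluster", "volume", "set", VOLNAME, "storage.reserve", "50")
assert "50" in run("gluster", "volume", "get", VOLNAME, "storage.reserve")

# Reset the option and confirm the value reported afterwards.
run("gluster", "volume", "reset", VOLNAME, "storage.reserve")
out = run("gluster", "volume", "get", VOLNAME, "storage.reserve")
print("storage.reserve after reset:", out.splitlines()[-1])
```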
* [Test] Validate peer probe with hostname, ip, fqdn (Pranav, 2020-09-29, 1 file, +146/-0)
    Test to validate gluster peer probe scenarios using ip addr, hostname and fqdn by verifying each with peer status output, pool list and cmd_history.log
    Change-Id: I77512cfcf62b28e70682405c47014646be71593c
    Signed-off-by: Pranav <prprakas@redhat.com>
* [Test]: Volume status shows bricks online though brickpath is deleted (nik-redhat, 2020-09-28, 1 file, +81/-0)
    Steps-
    1) Create a volume and start it.
    2) Fetch the brick list
    3) Remove any brickpath
    4) Check that the number of bricks online is equal to the number of bricks in the volume
    Change-Id: I4c3a6692fc88561a47a7d2564901f21dfe0073d4
    Signed-off-by: nik-redhat <nladha@redhat.com>
* [Test] Add test to validate glusterd.info configuration file (“Milind”, 2020-09-28, 1 file, +67/-0)
    1. Check for the presence of the /var/lib/glusterd/glusterd.info file
    2. Get the UUID of the current node
    3. Check the value of the uuid returned by executing the command "gluster system:: uuid get"
    4. Check the uuid value shown by the other node in the cluster for the same node;
       "gluster peer status" on one node will give the UUID of the other node
    Change-Id: I61dfb227e37b87e889577b77283d65eda4b3cd29
    Signed-off-by: “Milind” <mwaykole@redhat.com>
* [Testfix] Added reboot scenario to shared_storage test (Bala Konda Reddy M, 2020-09-17, 1 file, +129/-74)
    Currently, there is no validation for shared storage, whether it is mounted or not, post reboot.
    Added the validation for the reboot scenario.
    Made the testcase modular for future updates to the test.
    Change-Id: I9d39beb3c6718e648eabe15a409c4b4985736645
    Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
* [TestFix] Adding sleep so that the brick will get port (“Milind”, 2020-09-15, 1 file, +3/-0)
    Problem: ValueError: invalid literal for int() with base 10: 'N/A'
    Solution: Wait for 5 sec so that the brick will get the port
    Change-Id: Idf518392ba5584d09e81e76fca6e29037ac43e90
    Signed-off-by: “Milind” <mwaykole@redhat.com>
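The ValueError comes from turning the brick's "TCP Port" column into an int while `gluster volume status` still shows N/A. An alternative to a fixed sleep is to poll until every brick reports a real port; a rough sketch (volume name and the column parsing are illustrative assumptions):

```python
#!/usr/bin/env python3
"""Sketch: poll volume status until brick ports are numbers, not 'N/A'."""
import subprocess
import time

VOLNAME = "testvol"   # hypothetical volume


def brick_ports(volname):
    """Return the TCP Port column for each brick line, as strings."""
    out = subprocess.run(["gluster", "volume", "status", volname],
                         capture_output=True, text=True, check=True).stdout
    return [line.split()[-4] for line in out.splitlines()
            if line.startswith("Brick ")]


for _ in range(10):                      # up to ~10 seconds of polling
    ports = brick_ports(VOLNAME)
    if ports and all(p != "N/A" for p in ports):
        print("brick ports:", [int(p) for p in ports])
        break
    time.sleep(1)
else:
    raise RuntimeError("bricks did not get a port in time")
```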
* [TestFix] Replace `translate` with `replace` (Leela Venkaiah G, 2020-07-27, 1 file, +1/-8)
    - The `replace` function is used to forgo the version check
    - `unicode` is not being recognized from builtins in py2
    - `replace` seems a better alternative than fixing unicode
    Change-Id: Ieb9b5ad283e1a31d65bd8a9715b80f9deb0c05fe
    Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
* [TestFix] Make test compatible with Python 2 (Leela Venkaiah G, 2020-07-20, 1 file, +16/-8)
    - The translate function is available on `unicode` strings in Python 2
    Change-Id: I6aa01606acc73b18d889a965f1c01f9a393c2c46
    Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
* [Testfix] Add reset-failed cmd (kshithijiyer, 2020-06-18, 1 file, +7/-3)
    Problem:
    The testcase test_volume_create_with_glusterd_restarts consists of an asynchronous loop of glusterd restarts which fails in the latest runs due to patches [1] and [2] added to glusterfs, which limit the glusterd restarts to 6.
    Fix:
    Add `systemctl reset-failed glusterd` to the asynchronous loop.
    Links:
    [1] https://review.gluster.org/#/c/glusterfs/+/23751/
    [2] https://review.gluster.org/#/c/glusterfs/+/23970/
    Change-Id: Idd52bfeb99c0c43afa45403d71852f5f7b4514fa
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
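For context, the pattern the fix introduces looks roughly like this sketch: clearing systemd's start-limit state before each restart so the unit is not rejected after a few rapid restarts (iteration count is illustrative):

```python
#!/usr/bin/env python3
"""Sketch: restart glusterd in a loop without hitting systemd's start limit."""
import subprocess
import time

for _ in range(20):
    # Without this, systemd's start-rate limiting can mark the unit as
    # failed after several rapid restarts and refuse further starts.
    subprocess.run(["systemctl", "reset-failed", "glusterd"], check=False)
    subprocess.run(["systemctl", "restart", "glusterd"], check=True)
    time.sleep(1)
```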
* [Test] Check 'storage.reserve' with wrong values (Leela Venkaiah G, 2020-05-28, 1 file, +79/-0)
    Test Steps:
    1) Create and start a distributed-replicated volume.
    2) Give different inputs to the storage.reserve volume set options
    3) Validate the command behaviour on wrong inputs
    Change-Id: I4bbad81cbea9b3b9e59a61fcf7f2b70eac19b216
    Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
* [Test] Verify 'gluster get-state' on brick unmount (Leela Venkaiah G, 2020-05-26, 1 file, +126/-0)
    Testcase steps:
    1. Form a gluster cluster by peer probing and create a volume
    2. Unmount the brick using which the volume is created
    3. Run 'gluster get-state' and validate absence of error 'Failed to get daemon state. Check glusterd log file for more details'
    4. Create another volume and start it using different bricks which are not used to create the above volume
    5. Run 'gluster get-state' and validate the absence of the above error.
    Change-Id: Ib629b53c01860355e5bfafef53dcc3233af071e3
    Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
* [Test] Add tc to check get-state when brick is killed (Pranav, 2020-04-23, 1 file, +124/-0)
    Test case verifies whether the gluster get-state shows the proper brick status in the output.
    The test case checks the brick status when the brick is up and also after killing the brick process.
    It also verifies whether the other bricks are up when a particular brick process is killed.
    Change-Id: I9801249d25be2817104194bb0a8f6a16271d662a
    Signed-off-by: Pranav <prprakas@redhat.com>