author | Kotresh HR <khiremat@redhat.com> | 2016-09-06 18:28:42 +0530 |
---|---|---|
committer | Raghavendra Bhat <raghavendra@redhat.com> | 2016-09-08 10:09:33 -0700 |
commit | b86a7de9b5ea9dcd0a630dbe09fce6d9ad0d8944 (patch) | |
tree | e9507103a00cc7ce0da30ebc8bc9fc8c8f2f2571 /tests/volume.rc | |
parent | 593b7a83f7408e59ab7b3ef7dfc4fe4096d6e3cd (diff) |
feature/bitrot: Fix recovery of corrupted hardlink
Problem:
When a file with hardlinks is corrupted in an EC volume,
the documented recovery steps did not work:
only the name and metadata were healed, not the data.
Cause:
The bad-file marker in the inode context is not removed.
Hence, when self-heal tries to open the file for data
healing, the open fails with EIO.
Background:
Bitrot deletes the inode context during forget.
Briefly, recovery involves the following steps:
1. Delete the entry marked with the bad-file xattr
from the backend, along with all of its hardlinks,
including the .glusterfs hardlink.
2. Access each hardlink of the file, including the
original, from the mount.
Step 2 sends a lookup to the brick from which the files
were deleted on the backend, which returns ENOENT. On
ENOENT, the server xlator forgets the inode if no
dentries are associated with it. But in the case of
hardlinks, forget is not called, because dentries (the
other hardlink files) are still associated with the
inode. Hence the bitrot stub never deletes its inode
context, and data self-heal fails.
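The two recovery steps above can be sketched roughly as follows. This is an illustrative outline, not an exact procedure from this patch: the brick path, mount path, and gfid link path are hypothetical placeholders, and the gfid-link location must be looked up for the actual file.

```sh
BRICK=/bricks/brick1     # assumed brick backend path
MOUNT=/mnt/glusterfs     # assumed client mount point

# Step 1: on the affected brick, remove the corrupted file and every
# hardlink, including the .glusterfs/<gfid> hardlink on the backend.
rm -f "$BRICK/dir/file" "$BRICK/dir/hardlink"
rm -f "$BRICK/.glusterfs/xx/yy/<gfid>"   # placeholder gfid-link path

# Step 2: from the mount, access each hardlink (including the original)
# so lookup reaches the brick, gets ENOENT, and self-heal is triggered.
stat "$MOUNT/dir/file" "$MOUNT/dir/hardlink"
```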
Fix:
The bitrot stub should delete the inode context when
a lookup returns ENOENT.
Change-Id: Ice6adc18625799e7afd842ab33b3517c2be264c1
BUG: 1373520
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/15408
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
Diffstat (limited to 'tests/volume.rc')
-rw-r--r-- | tests/volume.rc | 4 |
1 file changed, 4 insertions, 0 deletions
```diff
diff --git a/tests/volume.rc b/tests/volume.rc
index 1b62c026a28..aa614c50489 100644
--- a/tests/volume.rc
+++ b/tests/volume.rc
@@ -611,6 +611,10 @@ function get_scrubd_count {
         ps auxww | grep glusterfs | grep scrub.pid | grep -v grep | wc -l
 }
 
+function get_quarantine_count {
+        ls -l "$1/.glusterfs/quanrantine" | wc -l
+}
+
 function get_quotad_count {
         ps auxww | grep glusterfs | grep quotad.pid | grep -v grep | wc -l
 }
```
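The helper added by this patch can be exercised without a gluster deployment. The demo below runs it against a throwaway directory standing in for a brick; the directory name "quanrantine" is copied verbatim from the patch. Note that `ls -l` emits a leading "total" line, so the reported count is the number of directory entries plus one.

```shell
# Mirror of the get_quarantine_count helper from this patch.
get_quarantine_count () {
    ls -l "$1/.glusterfs/quanrantine" | wc -l
}

demo=$(mktemp -d)                              # stand-in for a brick path
mkdir -p "$demo/.glusterfs/quanrantine"
touch "$demo/.glusterfs/quanrantine/bad-gfid"  # one quarantined entry
get_quarantine_count "$demo"                   # prints 2 (1 entry + "total" line)
rm -rf "$demo"
```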