Field | Value | Date
---|---|---
author | Amar Tumballi <amarts@redhat.com> | 2017-06-23 13:10:56 +0530
committer | Shyamsundar Ranganathan <srangana@redhat.com> | 2017-07-31 15:34:58 +0000
commit | 61ea2a44b509cebc566fc18b2c356d88a3f1fdc8 (patch) |
tree | 21d43ed73f6a5c3057be59306649ac5fe2ffa268 /tests |
parent | d446c0defab52977cfc6460c0bde0fde0f61e314 (diff) |
posix: option to handle the shared bricks for statvfs()
Currently the 'storage/posix' xlator has an option
`export-statfs-size no`, which exports zero as the value of a few
fields in `struct statvfs`. When a backend brick is shared
between multiple brick processes, the values of these fields
should instead be `field_value / number-of-bricks-at-node`. This way,
concerns such as 'min-free-disk' at different layers are also
handled properly when the statfs() system call is made.
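
In other words, when N brick processes on a node export directories that live on the same backend filesystem, each of them should report roughly 1/N of that filesystem's size so that the aggregate seen by the client stays correct. Below is a minimal sketch of that scaling in C; the helper name and the `shared_brick_count` variable are hypothetical stand-ins for however storage/posix is told how many local bricks share the backend, and the real option name and plumbing in the xlator may differ.

```c
#include <sys/statvfs.h>

/* Hypothetical illustration only: divide the size-related fields of a
 * statvfs result by the number of brick processes sharing the same
 * backend filesystem on this node, so that aggregation at higher
 * layers (and checks such as min-free-disk) sees sensible values. */
static void
scale_statvfs_for_shared_bricks(struct statvfs *buf, int shared_brick_count)
{
        if (shared_brick_count <= 1)
                return;

        buf->f_blocks /= shared_brick_count; /* total data blocks */
        buf->f_bfree  /= shared_brick_count; /* free blocks */
        buf->f_bavail /= shared_brick_count; /* blocks available to unprivileged users */
}
```

The test added below exercises this behaviour end to end: two 100MB loop-device filesystems first back one brick each, then three bricks each, and in both cases the size reported through the client mount is expected to stay just under 200MB instead of growing with the brick count.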
Fixes #241
> Reviewed-on: https://review.gluster.org/17618
> Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
> (cherry picked from commit febf5ed4848ad705a34413353559482417c61467)
Change-Id: I2e320e1fdcc819ab9173277ef3498201432c275f
Signed-off-by: Amar Tumballi <amarts@redhat.com>
Reviewed-on: https://review.gluster.org/17903
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
Diffstat (limited to 'tests')
-rw-r--r-- | tests/basic/posix/shared-statfs.t | 53 |
1 file changed, 53 insertions, 0 deletions
diff --git a/tests/basic/posix/shared-statfs.t b/tests/basic/posix/shared-statfs.t
new file mode 100644
index 00000000000..8caa9fa2110
--- /dev/null
+++ b/tests/basic/posix/shared-statfs.t
@@ -0,0 +1,53 @@
+#!/bin/bash
+#Test that statfs is not served from posix backend FS.
+
+. $(dirname $0)/../../include.rc
+. $(dirname $0)/../../volume.rc
+
+cleanup;
+TEST glusterd
+
+#Create brick partitions
+TEST truncate -s 100M $B0/brick1
+TEST truncate -s 100M $B0/brick2
+LO1=`SETUP_LOOP $B0/brick1`
+TEST [ $? -eq 0 ]
+TEST MKFS_LOOP $LO1
+LO2=`SETUP_LOOP $B0/brick2`
+TEST [ $? -eq 0 ]
+TEST MKFS_LOOP $LO2
+TEST mkdir -p $B0/${V0}1 $B0/${V0}2
+TEST MOUNT_LOOP $LO1 $B0/${V0}1
+TEST MOUNT_LOOP $LO2 $B0/${V0}2
+
+# Create a subdir in mountpoint and use that for volume.
+TEST $CLI volume create $V0 $H0:$B0/${V0}1/1 $H0:$B0/${V0}2/1;
+TEST $CLI volume start $V0
+TEST $GFS --volfile-server=$H0 --volfile-id=$V0 $M0
+total_space=$(df -P $M0 | tail -1 | awk '{ print $2}')
+# Keeping the size less than 200M mainly because XFS will use
+# some storage in brick to keep its own metadata.
+TEST [ $total_space -gt 194000 -a $total_space -lt 200000 ]
+
+
+TEST force_umount $M0
+TEST $CLI volume stop $V0
+EXPECT 'Stopped' volinfo_field $V0 'Status';
+
+# From the same mount point, share another 2 bricks with the volume
+TEST $CLI volume add-brick $V0 $H0:$B0/${V0}1/2 $H0:$B0/${V0}2/2 $H0:$B0/${V0}1/3 $H0:$B0/${V0}2/3
+
+TEST $CLI volume start $V0
+TEST $GFS --volfile-server=$H0 --volfile-id=$V0 $M0
+total_space=$(df -P $M0 | tail -1 | awk '{ print $2}')
+TEST [ $total_space -gt 194000 -a $total_space -lt 200000 ]
+
+TEST force_umount $M0
+TEST $CLI volume stop $V0
+EXPECT 'Stopped' volinfo_field $V0 'Status';
+
+TEST $CLI volume delete $V0;
+
+UMOUNT_LOOP ${B0}/${V0}{1,2}
+rm -f ${B0}/brick{1,2}
+cleanup;