| author | Niels de Vos <ndevos@redhat.com> | 2014-03-20 18:13:49 +0100 |
| --- | --- | --- |
| committer | Vijay Bellur <vbellur@redhat.com> | 2014-04-08 10:50:52 -0700 |
| commit | 8235de189845986a535d676b1fd2c894b9c02e52 (patch) | |
| tree | 6f5ea06d6b0b9b3f8091e0b9e7b34a7158afe3d1 /tests | |
| parent | 07df69edc8165d875edd42a4080a494e09b98de5 (diff) | |
rpc: warn and truncate grouplist if RPC/AUTH can not hold everything
The GlusterFS protocol currently uses AUTH_GLUSTERFS_V2 in the RPC/AUTH
header. This header contains the uid, gid and auxiliary groups of the
user/process that accesses the Gluster Volume.
The AUTH_GLUSTERFS_V2 structure allows up to 65535 auxiliary groups to
be passed on. Unfortunately, the RPC/AUTH header is limited to 400 bytes
by the RPC specification: http://tools.ietf.org/html/rfc5531#section-8.2
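For reference, the decoded credential looks roughly like the following C structure, in the style of rpcgen output. This is a sketch reconstructed from the field list below, not copied verbatim from the GlusterFS sources:

```c
/* Sketch of the AUTH_GLUSTERFS_V2 credential as a C structure -- an
 * illustration based on the field list below, not the exact GlusterFS
 * definition. */
struct auth_glusterfs_parms_v2 {
        int pid;
        unsigned int uid;
        unsigned int gid;
        struct {
                unsigned int groups_len;   /* number of auxiliary groups */
                unsigned int *groups_val;  /* up to GF_MAX_AUX_GROUPS */
        } groups;
        struct {
                unsigned int lk_owner_len; /* length of the lock owner */
                char *lk_owner_val;        /* up to GF_MAX_LOCK_OWNER_LEN */
        } lk_owner;
};
```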
To avoid complete failures on the client side when encoding an AUTH_GLUSTERFS_V2 structure that would exceed 400 bytes, we can calculate the expected size of the individual elements:
1 | pid
1 | uid
1 | gid
1 | groups_len
XX | groups_val (GF_MAX_AUX_GROUPS=65535)
1 | lk_owner_len
YY | lk_owner_val (GF_MAX_LOCK_OWNER_LEN=1024)
----+-------------------------------------------
5 | total fixed xdr-units
One XDR-unit is defined as BYTES_PER_XDR_UNIT = 4 bytes.
MAX_AUTH_BYTES = 400 is the maximum, which equals 100 xdr-units.
XX + YY can together be at most 95 to fill the 100 xdr-units.
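A minimal C sketch of this arithmetic follows; the macro names are illustrative assumptions, not necessarily those introduced by the patch:

```c
#include <stdio.h>

/* Illustrative names -- not necessarily the macros the patch introduces. */
#define BYTES_PER_XDR_UNIT  4
#define MAX_AUTH_BYTES      400                                  /* RFC 5531 */
#define MAX_AUTH_XDR_UNITS  (MAX_AUTH_BYTES / BYTES_PER_XDR_UNIT)   /* 100 */
#define FIXED_XDR_UNITS     5   /* pid, uid, gid, groups_len, lk_owner_len */

/* XDR pads opaque data to a multiple of 4 bytes, hence the rounding up. */
#define XDR_UNITS(bytes) \
        (((bytes) + BYTES_PER_XDR_UNIT - 1) / BYTES_PER_XDR_UNIT)

/* groups (XX) that still fit once the lk_owner (YY) is accounted for */
#define MAX_GROUPS(lk_owner_bytes) \
        (MAX_AUTH_XDR_UNITS - FIXED_XDR_UNITS - XDR_UNITS(lk_owner_bytes))

int main(void)
{
        printf("empty lk_owner:  %d groups\n", MAX_GROUPS(0)); /* 95 */
        printf("8-byte lk_owner: %d groups\n", MAX_GROUPS(8)); /* 93 */
        return 0;
}
```

With a typical 8-byte lk_owner this yields the limit of 93 groups mentioned for the Gluster NFS-server below.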
Note that the on-wire protocol has tighter requirements than the internal structures: it is possible for xlators to use more groups and a bigger lk_owner than can be sent by a GlusterFS client.
This change prevents overflows when allocating the RPC/AUTH header. Two new macros are introduced to calculate the number of groups that fit in the RPC/AUTH header, taking the size of the lk_owner into account. If the list of groups exceeds that maximum, only the first groups are passed over the RPC/GlusterFS protocol to the bricks. A warning is added to the logs, so that system administrators get informed (see the sketch below).
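A hedged sketch of the described truncate-and-warn behaviour; the helper name and signature are assumptions, not the actual GlusterFS code:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical helper mirroring the described behaviour: copy at most
 * max_groups entries into the RPC/AUTH header and warn when the caller's
 * group list had to be truncated. */
static int
copy_groups_truncated(const uint32_t *groups, int ngroups,
                      uint32_t *out, int max_groups)
{
        if (ngroups > max_groups) {
                fprintf(stderr, "warning: sending only the first %d of %d "
                        "groups over RPC/GlusterFS\n", max_groups, ngroups);
                ngroups = max_groups;
        }
        memcpy(out, groups, ngroups * sizeof(*groups));
        return ngroups;
}
```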
Reducing the number of groups is not a new invention. The RPC/AUTH header (AUTH_SYS or AUTH_UNIX) that NFS uses has a limit of 16 groups, and most, if not all, NFS clients reduce any bigger number of groups to 16. (nfs.server-aux-gids can be used to work around the limit of 16 groups, but the Gluster NFS-server will be limited to a maximum of 93 groups, or fewer in case the lk_owner structure contains more items.)
Change-Id: I8410e59d0fd246d601b54b961d3ae9cb5a858c10
BUG: 1053579
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/7202
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Diffstat (limited to 'tests')
-rwxr-xr-x | tests/bugs/bug-1053579.t | 46
1 file changed, 46 insertions(+), 0 deletions(-)
diff --git a/tests/bugs/bug-1053579.t b/tests/bugs/bug-1053579.t
new file mode 100755
index 00000000000..0b6eb4331c1
--- /dev/null
+++ b/tests/bugs/bug-1053579.t
@@ -0,0 +1,46 @@
+#!/bin/bash
+
+. $(dirname $0)/../include.rc
+. $(dirname $0)/../nfs.rc
+
+cleanup
+
+# prepare the users and groups
+NEW_USER=bug1053579
+NEW_UID=1053579
+NEW_GID=1053579
+
+# create many groups, $NEW_USER will have 200 groups
+NEW_GIDS=1053580
+groupadd -o -g ${NEW_GIDS} gid${NEW_GIDS} 2> /dev/null
+for G in $(seq 1053581 1053779)
+do
+        groupadd -o -g ${G} gid${G} 2> /dev/null
+        NEW_GIDS="${NEW_GIDS},${G}"
+done
+
+# create a user that belongs to many groups
+groupadd -o -g ${NEW_GID} gid${NEW_GID}
+useradd -o -u ${NEW_UID} -g ${NEW_GID} -G ${NEW_GIDS} ${NEW_USER}
+
+# preparation done, start the tests
+
+TEST glusterd
+TEST pidof glusterd
+TEST $CLI volume create $V0 $H0:$B0/${V0}1
+TEST $CLI volume set $V0 nfs.server-aux-gids on
+TEST $CLI volume start $V0
+
+EXPECT_WITHIN 20 "1" is_nfs_export_available
+
+# Mount volume as NFS export
+TEST mount -t nfs -o vers=3,nolock $H0:/$V0 $N0
+
+# the actual test :-)
+TEST su ${NEW_USER} -c "stat $N0/. > /dev/null"
+
+TEST umount $N0
+TEST $CLI volume stop $V0
+TEST $CLI volume delete $V0
+
+cleanup