|  |  |  |
|---|---|---|
| author | Niels de Vos <ndevos@redhat.com> | 2014-05-11 22:51:15 -0300 |
| committer | Niels de Vos <ndevos@redhat.com> | 2014-05-22 06:02:21 -0700 |
| commit | 57ec16e7f6d08b9a1c07f8ece3db630b08557372 | |
| tree | 3e09ed682dc989fbe7741c6c4cc0ea40318ddc42 /xlators/nfs | |
| parent | 0ba8c6113058ae2ab2a2e38e11a2c95d75056a3b | |
rpc: warn and truncate grouplist if RPC/AUTH can not hold everything
The GlusterFS protocol currently uses AUTH_GLUSTERFS_V2 in the RPC/AUTH
header. This header contains the uid, gid and auxiliary groups of the
user/process that accesses the Gluster Volume.
The AUTH_GLUSTERFS_V2 structure allows up to 65535 auxiliary groups to
be passed on. Unfortunately, the RPC/AUTH header is limited to 400 bytes
by the RPC specification: http://tools.ietf.org/html/rfc5531#section-8.2
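For orientation, here is a minimal C sketch of the credential payload described above. The struct name and exact field types are assumptions for illustration only; the real definition is generated from the GlusterFS XDR sources and may differ.

```c
/* Sketch of the AUTH_GLUSTERFS_V2 credential payload, assuming the
 * fields listed in this message; the rpcgen-generated definition in
 * the GlusterFS XDR sources may differ in names and types. */
struct auth_glusterfs_v2_sketch {
        int           pid;                  /* 1 xdr-unit */
        unsigned int  uid;                  /* 1 xdr-unit */
        unsigned int  gid;                  /* 1 xdr-unit */

        struct {                            /* variable-length group list */
                unsigned int  groups_len;   /* 1 xdr-unit                 */
                unsigned int *groups_val;   /* XX xdr-units on the wire   */
        } groups;

        struct {                            /* variable-length lock owner */
                unsigned int  lk_owner_len; /* 1 xdr-unit                 */
                char         *lk_owner_val; /* YY xdr-units on the wire   */
        } lk_owner;
};
```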
To avoid complete failures on the client-side when trying to encode an
AUTH_GLUSTERFS_V2 structure that would result in more than 400 bytes,
we can calculate the expected size of the other elements:
    1 | pid
    1 | uid
    1 | gid
    1 | groups_len
   XX | groups_val (GF_MAX_AUX_GROUPS=65535)
    1 | lk_owner_len
   YY | lk_owner_val (GF_MAX_LOCK_OWNER_LEN=1024)
  ----+-------------------------------------------
    5 | total fixed xdr-units
  one XDR-unit is defined as BYTES_PER_XDR_UNIT = 4 bytes
  MAX_AUTH_BYTES = 400 is the maximum; this is 100 xdr-units.
  XX + YY can together be at most 95 to fill the 100 xdr-units.
  Note that the on-wire protocol has tighter requirements than the
  internal structures. It is possible for xlators to use more groups and
  a bigger lk_owner than that can be sent by a GlusterFS-client.
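A self-contained sketch of that size calculation follows; the macro and constant names below are illustrative, not the ones the patch introduces, and the rounding used in the actual GlusterFS headers may differ.

```c
/* Illustrative sketch of the size calculation described above. */
#define RPCSEC_MAX_AUTH_BYTES   400     /* RFC 5531, section 8.2          */
#define XDR_UNIT_BYTES          4       /* BYTES_PER_XDR_UNIT             */
#define FIXED_AUTH_XDR_UNITS    5       /* pid, uid, gid, groups_len,
                                           lk_owner_len                   */

/* xdr-units consumed by an opaque lk_owner of 'len' bytes,
 * rounded up to a whole unit */
#define LK_OWNER_XDR_UNITS(len) \
        (((len) + XDR_UNIT_BYTES - 1) / XDR_UNIT_BYTES)

/* groups that still fit: 100 units total, minus the fixed fields,
 * minus whatever the lk_owner takes */
#define MAX_GROUPS_FOR_LK_OWNER(len) \
        ((RPCSEC_MAX_AUTH_BYTES / XDR_UNIT_BYTES) - FIXED_AUTH_XDR_UNITS - \
         LK_OWNER_XDR_UNITS(len))
```

For example, an lk_owner of up to 8 bytes leaves 100 - 5 - 2 = 93 xdr-units for groups, which lines up with the 93-group maximum mentioned for the Gluster NFS-server below.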
This change prevents overflows when allocating the RPC/AUTH header. Two
new macros are introduced to calculate the number of groups that fit in
the RPC/AUTH header, taking the size of the lk_owner into account. If
the list of groups exceeds the maximum that fits, only the leading
groups are passed over the RPC/GlusterFS protocol to the bricks.
A warning is logged so that system administrators get informed.
Reducing the number of groups is not a new invention. The RPC/AUTH
header (AUTH_SYS or AUTH_UNIX) that NFS uses has a limit of 16 groups.
Most, if not all, NFS-clients will reduce any larger number of groups
to 16. (nfs.server-aux-gids can be used to work around the limit of 16
groups, but the Gluster NFS-server will be limited to a maximum of 93
groups, or fewer in case the lk_owner structure contains more items.)
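For comparison, RFC 5531 hard-codes that 16-group limit in the AUTH_SYS credential as `unsigned int gids<16>`; a C rendering of that body might look like the following sketch (field names approximate what rpcgen would emit, not an exact reproduction).

```c
/* Sketch of the AUTH_SYS (AUTH_UNIX) credential body from RFC 5531. */
struct authsys_parms_sketch {
        unsigned int  stamp;                 /* arbitrary client value */
        char         *machinename;           /* string<255>            */
        unsigned int  uid;
        unsigned int  gid;
        struct {
                unsigned int  gids_len;      /* at most 16 entries     */
                unsigned int *gids_val;      /* unsigned int gids<16>  */
        } gids;
};
```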
Cherry picked from commit 8235de189845986a535d676b1fd2c894b9c02e52:
> BUG: 1053579
> Signed-off-by: Niels de Vos <ndevos@redhat.com>
> Reviewed-on: http://review.gluster.org/7202
> Tested-by: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
> Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: I8410e59d0fd246d601b54b961d3ae9cb5a858c10
BUG: 1096425
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/7829
Reviewed-by: Lalatendu Mohanty <lmohanty@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Diffstat (limited to 'xlators/nfs')
| mode | file | lines |
|---|---|---|
| -rw-r--r-- | xlators/nfs/server/src/nfs-fops.c | 27 |

1 file changed, 26 insertions, 1 deletion
diff --git a/xlators/nfs/server/src/nfs-fops.c b/xlators/nfs/server/src/nfs-fops.c
index 60a5a9a843c..7b37e38de09 100644
--- a/xlators/nfs/server/src/nfs-fops.c
+++ b/xlators/nfs/server/src/nfs-fops.c
@@ -30,6 +30,8 @@
 #include <libgen.h>
 #include <semaphore.h>
 
+static int gf_auth_max_groups_nfs_log = 0;
+
 void
 nfs_fix_groups (xlator_t *this, call_stack_t *root)
 {
@@ -39,6 +41,7 @@ nfs_fix_groups (xlator_t *this, call_stack_t *root)
         gid_t            mygroups[GF_MAX_AUX_GROUPS];
         int              ngroups;
         int              i;
+        int              max_groups;
         struct nfs_state *priv = this->private;
         const gid_list_t *agl;
         gid_list_t       gl;
@@ -47,10 +50,22 @@ nfs_fix_groups (xlator_t *this, call_stack_t *root)
                 return;
         }
 
+        /* RPC enforces the GF_AUTH_GLUSTERFS_MAX_GROUPS limit */
+        max_groups = GF_AUTH_GLUSTERFS_MAX_GROUPS(root->lk_owner.len);
+
         agl = gid_cache_lookup(&priv->gid_cache, root->uid, 0, 0);
         if (agl) {
-                for (ngroups = 0; ngroups < agl->gl_count; ngroups++)
+                if (agl->gl_count > max_groups) {
+                        GF_LOG_OCCASIONALLY (gf_auth_max_groups_nfs_log,
+                                        this->name, GF_LOG_WARNING,
+                                        "too many groups, reducing %d -> %d",
+                                        agl->gl_count, max_groups);
+                }
+
+                for (ngroups = 0; ngroups < agl->gl_count
+                                && ngroups <= max_groups; ngroups++) {
                         root->groups[ngroups] = agl->gl_list[ngroups];
+                }
                 root->ngrps = ngroups;
                 gid_cache_release(&priv->gid_cache, agl);
                 return;
@@ -92,6 +107,16 @@ nfs_fix_groups (xlator_t *this, call_stack_t *root)
                         GF_FREE(gl.gl_list);
         }
 
+        /* RPC enforces the GF_AUTH_GLUSTERFS_MAX_GROUPS limit */
+        if (ngroups > max_groups) {
+                GF_LOG_OCCASIONALLY (gf_auth_max_groups_nfs_log,
+                                     this->name, GF_LOG_WARNING,
+                                     "too many groups, reducing %d -> %d",
+                                     ngroups, max_groups);
+
+                ngroups = max_groups;
+        }
+
         /* Copy data to the frame. */
         for (i = 0; i < ngroups; ++i) {
                 gf_log (this->name, GF_LOG_TRACE,
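As a stand-alone illustration of the behaviour the hunks above add, here is a rough sketch of the truncation flow. The names and the size calculation are illustrative only; the GlusterFS types and the GF_LOG_OCCASIONALLY rate-limited logging are not reproduced here.

```c
#include <stdio.h>

#define RPCSEC_MAX_AUTH_BYTES 400       /* RFC 5531, section 8.2 */
#define XDR_UNIT_BYTES        4
#define FIXED_AUTH_XDR_UNITS  5

/* same illustrative calculation as sketched in the commit message above */
static int max_groups_for_lk_owner(int lk_owner_len)
{
        int lk_units = (lk_owner_len + XDR_UNIT_BYTES - 1) / XDR_UNIT_BYTES;
        return RPCSEC_MAX_AUTH_BYTES / XDR_UNIT_BYTES
               - FIXED_AUTH_XDR_UNITS - lk_units;
}

int main(void)
{
        int lk_owner_len = 8;   /* assumed lock-owner size                */
        int ngroups      = 200; /* caller resolved 200 auxiliary groups   */
        int max_groups   = max_groups_for_lk_owner(lk_owner_len);

        if (ngroups > max_groups) {
                /* the patch rate-limits this warning via GF_LOG_OCCASIONALLY */
                fprintf(stderr, "too many groups, reducing %d -> %d\n",
                        ngroups, max_groups);
                ngroups = max_groups;
        }

        printf("sending %d groups\n", ngroups);
        return 0;
}
```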
