| author | Varun Shastry <vshastry@redhat.com> | 2013-04-19 12:34:51 +0530 | 
|---|---|---|
| committer | Vijay Bellur <vbellur@redhat.com> | 2013-07-29 18:25:24 +0530 | 
| commit | 3f9956ffb6e0faec1c4eea18916411d22a7e51d8 (patch) | |
| tree | 8176256ff517ae2cefad7408c925567b63b696b9 /xlators/cluster/dht/src/dht-common.c | |
| parent | e9c583598b8ad58bbda15759067ff57eca619e95 (diff) | |
features/quota: Improvements to quota
Old implementation
* Client-side implementation of quota
    - Not secure
    - Increased traffic in updating the ctx
New Implementation
* Quota enforcement is now done in 2 stages: soft and hard quota.
    Upon reaching the soft quota limit on a directory, an alert is logged in
    the quota daemon log (i.e. DEFAULT_LOG_DIR/quotad.log); no more writes are
    allowed after the hard quota limit is reached. After crossing the
    soft limit, the daemon alerts the user/admin repeatedly every
    'alert-time', which is configurable.
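As a rough illustration of the two-stage check above (a Python sketch under assumed names; `check_quota` and its parameters are hypothetical, not the xlator's actual interface):

```python
# Hypothetical sketch: writes are denied only at the hard limit; crossing
# the soft limit triggers an alert at most once every `alert_time` seconds.
def check_quota(usage, hard_limit, soft_pct, alert_time, last_alert, now, log):
    soft_limit = hard_limit * soft_pct / 100.0
    if usage >= hard_limit:
        return False, last_alert                # deny the write
    if usage >= soft_limit and now - last_alert >= alert_time:
        log.append("usage crossed soft limit")  # would go to quotad.log
        last_alert = now
    return True, last_alert                     # allow the write
```

For example, with a 100-byte hard limit and a 90% soft limit, a write at 95 bytes of usage is allowed but alerted on, while a write at 101 bytes is denied.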
* Quota is moved to the server side.
    There will be 2 quota xlators:
    i. Quota Server
        It takes care of enforcing the quota and maintains the context
        specific to the brick. Since this doesn't have the complete picture
        of the cluster, cluster-wide usage is updated from the quota daemon.
        This updated context is saved and used for the enforcement.
        It updates its context by searching for QUOTA_UPDATE_KEY in the dict
        passed in the setxattr call, and is updated from nowhere else.
        The quota xlator is always loaded in the server graph and is
        bypassed if the feature is not enabled.
        Options specific to quota-server:
        server-quota    - Specifies whether the feature is on/off. It is
                          used to bypass the quota if turned off.
        deem-statfs     - If set to on, quota limits are taken into
                          consideration while estimating fs size (df
                          command).
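A minimal model of how the server-side context consumes the daemon's update (illustrative Python, not the actual C xlator; the key's string value is a placeholder, not the real xattr name):

```python
# Illustrative model: the per-brick quota context is updated only when a
# setxattr dict carries QUOTA_UPDATE_KEY; any other setxattr leaves the
# quota context untouched. The key string below is a placeholder.
QUOTA_UPDATE_KEY = "quota-update-key"

def on_setxattr(ctx, path, xattr_dict):
    # Cluster-wide usage comes only from the quota daemon's update.
    if QUOTA_UPDATE_KEY in xattr_dict:
        ctx[path] = xattr_dict[QUOTA_UPDATE_KEY]
    return ctx
```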
    ii. Quota Daemon
        This is the new xlator introduced with this patch. It is the
        *gluster client* process with no mount point, started upon enabling
        quota or restarting the volume. This is a single process for all the
        volumes in the cluster. Its volfile is stored in
        GLUSTERD_DEFAULT_WORK_DIR/quotad/quotad.vol.
        It periodically queries the sizes on all the bricks, aggregates them
        and sends back the updated size. The timeout between successive
        updates is configurable; by default it is longer for usage below the
        soft quota and shorter for usage above it. The timeout is maintained
        inside the limit structure based on the usage: below soft limit or
        above soft limit.
        There is a thread per volume which iterates through the list and
        decides, based on each entry's timeout, whether its size is to be
        queried in the current iteration. The next iteration time is taken
        as the least of the timeouts in the list of entries.
        A separate inode table is maintained for each volume in quotad. In
        the first iteration it builds the table for quota-dirs (directories
        on which a limit is set) and their components.
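The per-volume scheduling described above can be sketched as follows (illustrative Python; the entry layout and the `schedule` function are assumptions, not quotad's real data structures):

```python
# Each limit entry is due after soft-timeout (below the soft limit) or
# hard-timeout (above it); due entries are queried and rescheduled, and the
# thread's next wakeup is the earliest deadline among all entries.
def schedule(entries, now, soft_timeout, hard_timeout):
    to_query, deadlines = [], []
    for e in entries:
        timeout = hard_timeout if e["above_soft"] else soft_timeout
        deadline = e["last_query"] + timeout
        if deadline <= now:
            to_query.append(e["path"])   # query this entry's size now
            deadline = now + timeout     # and reschedule it
        deadlines.append(deadline)
    return to_query, min(deadlines)      # next wakeup time
```

With a short hard-timeout and a long soft-timeout, entries above their soft limit are naturally polled more often than those below it.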
        Options specific to quotad:
        hard-timeout       - Timeout for updating the usage to the
                             quota-server when the usage has crossed the
                             soft-limit.
        soft-timeout       - Timeout for updating the usage to the
                             quota-server when the usage is below the
                             soft-limit.
        alert-time         - Frequency of logging after the usage has
                             reached the soft limit.
   Options common to both:
   default-soft-limit - Used when an individual path is not configured
                        with a soft-limit; the default value of this
                        option is 90% of the hard-limit.
   limit-set          - String containing all the limits.
   Thus in the current implementation we'll have 2 quota xlators: one in
   the server graph and one in the trusted client (quota daemon), whose
   sole purpose is to aggregate the quota size xattrs from all the bricks
   and send them to the server quota xlator.
* Changes in glusterd and CLI
   A single volfile is created for all the volumes, similar to the nfs
   volfile. All files related to the quota client (volfile, pid etc.) are
   stored in GLUSTERD_DEFAULT_WORK_DIR/quotad/.
   The quota limits are now stored in the pattern:
   limit-set = <single-dir-limit>[,<single-dir-limit>]
   single-dir-limit = <abs-path>:<hard-limit>[:<soft-limit-in-percent>]
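A sketch of parsing that pattern (illustrative Python; `parse_limit_set` is hypothetical and assumes plain integer hard limits plus the 90% default-soft-limit, while the actual encoding of <hard-limit> is not specified here):

```python
# Parse "limit-set": comma-separated <abs-path>:<hard-limit> entries with an
# optional :<soft-limit-in-percent>; when the percent is absent, the
# default-soft-limit (90% of the hard limit by default) applies.
def parse_limit_set(limit_set, default_soft_pct=90):
    limits = {}
    for entry in limit_set.split(","):
        parts = entry.split(":")
        path, hard = parts[0], int(parts[1])
        soft_pct = int(parts[2]) if len(parts) > 2 else default_soft_pct
        limits[path] = {"hard": hard, "soft": hard * soft_pct // 100}
    return limits

parse_limit_set("/docs:1000:80,/logs:500")
# {'/docs': {'hard': 1000, 'soft': 800}, '/logs': {'hard': 500, 'soft': 450}}
```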
   It also introduces new options:
   volume quota <VOLNAME> {enable|disable|list [<path> ...]|remove <path>| default-soft-limit <percent>} |
   volume quota <VOLNAME> {limit-usage <path> <size> |soft-limit <path> <percent>} |
   volume quota <VOLNAME> {alert-time|soft-timeout|hard-timeout} {<time>}
Credit:
Raghavendra Bhat        <rabhat@redhat.com>
Varun Shastry           <vshastry@redhat.com>
Shishir Gowda           <sgowda@redhat.com>
Kruthika Dhananjay      <kdhananj@redhat.com>
Brian Foster            <bfoster@redhat.com>
Krishnan Parthasarathi  <kparthas@redhat.com>
Change-Id: I16ec5be0c2faaf42b14034b9ccaf17796adef082
BUG: 969461
Signed-off-by: Varun Shastry <vshastry@redhat.com>
Diffstat (limited to 'xlators/cluster/dht/src/dht-common.c')
| -rw-r--r-- | xlators/cluster/dht/src/dht-common.c | 9 | 
1 file changed, 9 insertions, 0 deletions
```diff
diff --git a/xlators/cluster/dht/src/dht-common.c b/xlators/cluster/dht/src/dht-common.c
index 8b34d1a7..4f02d18f 100644
--- a/xlators/cluster/dht/src/dht-common.c
+++ b/xlators/cluster/dht/src/dht-common.c
@@ -2269,6 +2269,15 @@ dht_getxattr (call_frame_t *frame, xlator_t *this,
                 return 0;
         }
 
+        // Handle the quota limit list command.
+        if (key && !strcmp (GF_XATTR_QUOTA_LIMIT_LIST, key)) {
+                local->call_cnt = 1;
+                subvol = dht_first_up_subvol (this);
+                STACK_WIND (frame, dht_getxattr_cbk, subvol,
+                            subvol->fops->getxattr, loc, key, xdata);
+                return 0;
+        }
+
         if (key && *conf->vol_uuid) {
                 if ((match_uuid_local (key, conf->vol_uuid) == 0) &&
                     (GF_CLIENT_PID_GSYNCD == frame->root->pid)) {
```
