authorRavishankar N <>2016-03-16 10:35:50 +0530
committerNiels de Vos <>2016-03-16 02:53:52 -0700
commite4a6d0bdd06ff7ab8e3d82fa1a6c7eb67328aa90 (patch)
treec4c4e68615f4d4f01961882fec3c3c507dd93962 /accepted/
parenta15f64598613ca9aae0a373c4c466bf6367c37fd (diff)
Add feature document for throttling.
Change-Id: Id7538b797cb8da297f45ec58fefe4a7cac0e3340
Signed-off-by: Ravishankar N <>
Reviewed-on:
Reviewed-by: Pranith Kumar Karampuri <>
Tested-by: Pranith Kumar Karampuri <>
Reviewed-by: Niels de Vos <>
Diffstat (limited to 'accepted/')
1 files changed, 90 insertions, 0 deletions
diff --git a/accepted/ b/accepted/
new file mode 100644
index 0000000..49728b0
--- /dev/null
+++ b/accepted/
@@ -0,0 +1,90 @@
+# Server side throttling translator
+## Summary
+The throttling translator would be loaded into the brick process and would use
+the Token Bucket Filter (TBF) algorithm to regulate FOPs. The main motivation
+is to address complaints about AFR self-heal consuming too many CPU resources
+(due to the large number of FOPs issued for entry self-heal, rchecksums for
+data self-heal, etc.).
+## Owners
+Ravishankar N <>
+## Current status
+Only high level design as of now.
+See [this link]( for the discussion on gluster-devel.
+## Related Feature Requests and Bugs
+Raghavendra Bhat had attempted a [patch]( to move the Token Bucket Filter to libglusterfs.
+## Detailed Description
+Throttling is achieved using the Token Bucket Filter (TBF) algorithm. TBF is
+already used in gluster by bitrot's bitd signer (a client process) to regulate
+the CPU-intensive checksum calculation. By putting the logic on the brick side,
+multiple clients (self-heal, bitrot, rebalance, or even the mounts themselves)
+can benefit from throttling.
+In a nutshell, the TBF algorithm is as follows: there is a bucket which is
+filled with tokens at a steady (configurable) rate. Each FOP needs a fixed
+number of tokens to be processed. If the bucket holds enough tokens, the FOP
+is allowed and those tokens are removed from the bucket. If not, the FOP is
+queued until the bucket refills.
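The core admit-or-queue decision described above can be sketched as follows. This is an illustrative Python sketch with hypothetical names (`TokenBucket`, `try_consume`); the actual translator would be written in C inside the brick process.

```python
import time

class TokenBucket:
    """Minimal token bucket: tokens accrue at a steady rate up to a cap."""

    def __init__(self, rate, capacity):
        self.rate = rate            # tokens added per second (configurable)
        self.capacity = capacity    # maximum tokens the bucket can hold
        self.tokens = capacity      # start full
        self.last_fill = time.monotonic()

    def _fill(self):
        # Credit tokens for the time elapsed since the last fill.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_fill) * self.rate)
        self.last_fill = now

    def try_consume(self, cost):
        """Allow a FOP costing `cost` tokens, or report it must be queued."""
        self._fill()
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False                # caller queues the FOP until a refill
```

A bucket created with `TokenBucket(rate=10, capacity=5)` admits a 3-token FOP immediately, but a second 3-token FOP right after it is refused (only ~2 tokens remain) and would be queued.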
+The xlator will need to reside above io-threads and can have different
+buckets, one per client. There has to be a communication mechanism between
+the client and the brick (IPC?) to tell the brick which FOPs need to be
+regulated for that client, the number of tokens each needs, and so on. These
+need to be reconfigurable via appropriate mechanisms.
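One possible shape for the per-client bucket table and per-FOP token costs described above is sketched below. All names (`fop_cost`, `ClientBuckets`, `allow`) and the cost values are illustrative assumptions, not part of any actual gluster API.

```python
# Hypothetical per-FOP token costs; in the real design these would be
# reconfigurable via the communication mechanism described in the text.
fop_cost = {"lookup": 1, "readdir": 2, "rchecksum": 8}

class ClientBuckets:
    """One token pool per client, keyed by a client identifier."""

    def __init__(self):
        self.tokens = {}            # client-id -> available tokens

    def fill(self, client_id, amount):
        # Called periodically by the token-filler for this client's bucket.
        self.tokens[client_id] = self.tokens.get(client_id, 0) + amount

    def allow(self, client_id, fop):
        """Admit the FOP if the client's bucket covers its cost."""
        cost = fop_cost.get(fop, 1)
        avail = self.tokens.get(client_id, 0)
        if avail >= cost:
            self.tokens[client_id] = avail - cost
            return True
        return False                # FOP waits until the filler adds tokens
```

Keeping one bucket per client means a heavy internal client (say, glustershd) exhausting its own tokens does not throttle regular mounts.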
+Each bucket will have a token-filler thread which fills it with tokens. If
+there aren't enough tokens, the main thread will enqueue heals in a list
+inside the bucket. Once the token filler detects that some FOPs can be
+serviced, it will send a cond-broadcast to a dequeue thread, which will
+process (stack-wind) all FOPs across the buckets that have the required
+number of tokens.
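The filler/dequeue hand-off described above can be sketched with a condition variable. This is a hedged, single-bucket Python approximation (names like `ThrottledQueue` and `dequeue_loop` are invented for illustration); the real xlator would use C pthreads and stack-wind the queued FOPs rather than call them directly.

```python
import collections
import threading

class ThrottledQueue:
    """Queued FOPs wait until the filler thread credits enough tokens."""

    def __init__(self):
        self.tokens = 0.0
        self.queue = collections.deque()   # queued (cost, fop) pairs
        self.cond = threading.Condition()

    def enqueue(self, cost, fop):
        # Main thread: not enough tokens, so park the FOP in the bucket.
        with self.cond:
            self.queue.append((cost, fop))

    def fill(self, amount):
        # Token-filler thread: credit tokens and broadcast to the dequeuer.
        with self.cond:
            self.tokens += amount
            self.cond.notify_all()

    def dequeue_loop(self, stop):
        # Dequeue thread: wind (here: run) every FOP whose cost is covered.
        with self.cond:
            while not stop.is_set():
                while self.queue and self.tokens >= self.queue[0][0]:
                    cost, fop = self.queue.popleft()
                    self.tokens -= cost
                    fop()
                self.cond.wait(timeout=0.1)
```

Because the filler broadcasts on every refill, the dequeue thread wakes promptly instead of polling, and FOPs are released in arrival order as tokens become available.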
+## Benefit to GlusterFS
+Clients will not be starved during self-heal. The throttling feature can also
+be used by internal clients other than glustershd, such as bitrot, which
+currently implements this logic in bitd.
+## Scope
+### Nature of proposed change
+New server side translator and core functionality in libglusterfs.
+### Implications on manageability
+TBD. Tunables will most likely be exposed via the gluster CLI.
+### Implications on presentation layer
+### Implications on persistence layer
+### Implications on 'GlusterFS' backend
+### Modification to GlusterFS metadata
+Mostly none.
+### Implications on 'glusterd'
+TBD. Mostly changes related to the tunables.
+## How To Test
+## User Experience
+New CLI.
+## Dependencies
+## Documentation
+## Status
+High level design.
+## Comments and Discussion
+See [this link]( for the discussion on gluster-devel.