author    ShyamsundarR <srangana@redhat.com>    2018-05-09 09:36:05 -0400
committer Shyamsundar Ranganathan <srangana@redhat.com>    2018-05-09 17:33:16 +0000
commit    98d3de5ce7b517cb2dfdd58b4111c26e6cefaf00 (patch)
tree      328d89568dc2e949c1067185f96658f01a64831b
parent    60048aec76b3795b50efe1b197523c1e7ac05c20 (diff)
doc: Initial release notes for 4.1.0
Change-Id: I934d05ff32431787086d4ec4ddb45abb131d44f6
Updates: bz#1575386
Signed-off-by: ShyamsundarR <srangana@redhat.com>
-rw-r--r--  doc/release-notes/4.1.0.md  236
1 files changed, 236 insertions, 0 deletions
diff --git a/doc/release-notes/4.1.0.md b/doc/release-notes/4.1.0.md
new file mode 100644
index 00000000000..a7895297742
--- /dev/null
+++ b/doc/release-notes/4.1.0.md
@@ -0,0 +1,236 @@
+# Release notes for Gluster 4.1.0
+
+This is a major release that includes a range of features enhancing management,
+performance, and monitoring, and provides newer functionality such as thin
+arbiters, cloud archival, and time consistency. It also contains several bug fixes.
+
+A selection of the important features and changes is documented on this page.
+A full list of bugs that have been addressed is included further below.
+
+- [Announcements](#announcements)
+- [Major changes and features](#major-changes-and-features)
+- [Major issues](#major-issues)
+- [Bugs addressed in the release](#bugs-addressed)
+
+## Announcements
+
+1. As 4.0 was a short-term maintenance release, features which were included in
+that release are available with 4.1.0 as well. These features may be of
+interest to users upgrading to 4.1.0 from releases older than 4.0. The 4.0
+[release notes](http://docs.gluster.org/en/latest/release-notes/) capture the list of features that were introduced with 4.0.
+
+**NOTE:** As 4.0 was a short-term maintenance release, it will reach end of
+life (EOL) with the release of 4.1.0. ([reference](https://www.gluster.org/release-schedule/))
+
+2. Releases that receive maintenance updates after the 4.1 release are 3.12 and
+4.1 ([reference](https://www.gluster.org/release-schedule/))
+
+**NOTE:** The 3.10 long-term maintenance release will reach end of life (EOL)
+with the release of 4.1.0. ([reference](https://www.gluster.org/release-schedule/))
+
+3. Continuing with this release, the CentOS Storage SIG will not build server
+packages for CentOS6. Server packages will be available for CentOS7 only. For
+ease of migration, client packages for CentOS6 will be published and maintained.
+
+**NOTE**: This change was announced [here](http://lists.gluster.org/pipermail/gluster-users/2018-January/033212.html)
+
+## Major changes and features
+
+Features are categorized into the following sections,
+
+- [Management](#management)
+- [Monitoring](#monitoring)
+- [Performance](#performance)
+- [Standalone](#standalone)
+- [Developer related](#developer-related)
+
+### Management
+
+GlusterD2 <TBD>
+
+### Monitoring
+
+Various xlators are enhanced to provide additional metrics that help in
+determining the effectiveness of the xlator in various workloads.
+
+These metrics can be dumped and visualized as detailed [here](https://docs.gluster.org/en/latest/release-notes/4.0.0/#monitoring).
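+
+As a quick reference, a minimal sketch of triggering a dump of these metrics
+on a FUSE mount follows; it assumes the `trusted.io-stats-dump` virtual xattr
+mechanism described in the 4.0.0 monitoring notes, and the output location is
+illustrative:
+```
+# setfattr -n trusted.io-stats-dump -v /tmp/gluster-metrics <gluster-mount-point>
+```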
+
+#### 1. Additional metrics added to negative lookup cache xlator
+
+Metrics added are:
+ - negative_lookup_hit_count
+ - negative_lookup_miss_count
+ - get_real_filename_hit_count
+ - get_real_filename_miss_count
+ - nameless_lookup_count
+ - inodes_with_positive_dentry_cache
+ - inodes_with_negative_dentry_cache
+ - dentry_invalidations_recieved
+ - cache_limit
+ - consumed_cache_size
+ - inode_limit
+ - consumed_inodes
+
+#### 2. Additional metrics added to md-cache xlator
+
+Metrics added are:
+ - stat_cache_hit_count
+ - stat_cache_miss_count
+ - xattr_cache_hit_count
+ - xattr_cache_miss_count
+ - nameless_lookup_count
+ - negative_lookup_count
+ - stat_cache_invalidations_received
+ - xattr_cache_invalidations_received
+
+#### 3. Additional metrics added to quick-read xlator
+
+Metrics added are:
+ - total_files_cached
+ - total_cache_used
+ - cache-hit
+ - cache-miss
+ - cache-invalidations
+
+### Performance
+
+#### 1. Support for fuse writeback cache
+
+Gluster FUSE mounts support the FUSE extension to leverage the kernel
+"writeback cache".
+
+For usage help see `man 8 glusterfs` and `man 8 mount.glusterfs`, specifically
+the options `-kernel-writeback-cache` and `-attr-times-granularity`.
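+
+A minimal sketch of enabling this at mount time follows; the exact option
+syntax and values shown here are assumptions, so consult the man pages above
+for the authoritative form:
+```
+# mount -t glusterfs -o kernel-writeback-cache=yes,attr-times-granularity=<granularity> <server>:<volname> <mntpoint>
+```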
+
+#### 2. Extended eager-lock to metadata transactions in replicate xlator
+
+The eager lock feature in the replicate xlator is extended to support metadata
+transactions in addition to data transactions. This helps improve performance
+when the workload has frequent metadata updates. This is typically seen with
+sharded volumes by default, and with other workloads that incur a higher rate
+of metadata modifications to the same set of files.
+
+As a part of this feature, the compound FOPs feature in AFR is deprecated;
+volumes that are configured to leverage compounding will start disregarding
+the option `use-compound-fops`.
+
+**NOTE:** This is an internal change in AFR xlator and is not user controlled
+or configurable.
+
+#### 3. Support for multi-threaded fuse readers
+
+FUSE based mounts can specify the number of FUSE request processing threads to
+use during a mount. For workloads that have high concurrency on a single
+client, this helps process FUSE requests in parallel, rather than with the
+existing single-reader model.
+
+This is provided as a mount-time option named `reader-thread-count` and can be
+used as follows,
+```
+# mount -t glusterfs -o reader-thread-count=<n> <server>:<volname> <mntpoint>
+```
+
+#### 4. Configurable aggregate size for write-behind xlator
+
+The write-behind xlator provides the option `performance.aggregate-size` to
+enable configurable aggregate write sizes. This option enables the write-behind
+xlator to aggregate writes up to the specified size before the writes are sent
+to the bricks.
+
+Previously, this size was fixed at a maximum of 128KB per file. The
+configurable option provides the ability to tune this up or down based on the
+workload to improve write performance.
+
+Usage:
+```
+# gluster volume set <volname> performance.aggregate-size <size>
+```
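+
+For example, to raise the aggregate size to 256KB (an illustrative value,
+assuming the option accepts sizes with the usual KB/MB suffixes):
+```
+# gluster volume set <volname> performance.aggregate-size 256KB
+```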
+
+#### 5. Adaptive read replica selection based on queue length
+
+The AFR xlator is enhanced with a new value for the option `read-hash-mode`.
+Setting this option to a value of `3` will distribute reads across AFR
+subvolumes based on which subvolume has the least outstanding read requests.
+
+This helps better distribute reads, and hence improves read performance, in
+replicate based volumes.
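+
+A minimal sketch of setting this follows; it assumes the option is exposed in
+the usual `cluster.` namespace as `cluster.read-hash-mode`:
+```
+# gluster volume set <volname> cluster.read-hash-mode 3
+```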
+
+### Standalone
+
+#### 1. Support for cloud archival
+
+The cloud sync feature archives cold (infrequently accessed) data to a tertiary
+storage service, thus providing space savings on primary Gluster volumes.
+
+Archived data is fetched on demand from the tertiary storage service by the
+client mounts that request it.
+
+<Further notes on how to use and configure, TBD>
+
+**NOTE:** This feature is a tech preview, feedback welcome!
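+
+Pending the configuration notes above, a rough sketch of enabling the feature
+on a volume follows, assuming it is exposed through the cloudsync xlator's
+`features.cloudsync` volume option:
+```
+# gluster volume set <volname> features.cloudsync enable
+```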
+
+#### 2. Thin arbiter quorum for 2-way replication
+
+**NOTE:** This feature is available only with GlusterD2
+
+<TBD>
+ - What is it?
+ - How to configure/setup?
+ - How to convert an existing 2-way to thin-arbiter?
+ - Better would be to add all these to gluster docs and point to that here,
+ with a short introduction to the feature, and its limitations at present.
+
+#### 3. Automatically configure backup volfile servers in clients
+
+**NOTE:** This feature is available only with GlusterD2
+
+Clients connecting to and mounting a Gluster volume will automatically fetch
+and configure backup volfile servers, for future volfile updates and fetches,
+in case the initial server used to fetch the volfile and mount goes down.
+
+When using glusterd, this is achieved using the FUSE mount option
+`backup-volfile-servers`, and when using GlusterD2 this is done automatically.
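+
+With glusterd, a minimal sketch of passing backup volfile servers at mount
+time follows; the colon-separated server list shown is assumed (see
+`man 8 mount.glusterfs` for the authoritative syntax):
+```
+# mount -t glusterfs -o backup-volfile-servers=<server2>:<server3> <server1>:<volname> <mntpoint>
+```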
+
+#### 4. Python code base is now python3 compatible
+
+All Python code in Gluster that is installed by the various packages is
+python3 compatible.
+
+#### 5. (c/m)time equivalence across replicate and disperse subvolumes
+
+Enabling the utime feature enables Gluster to maintain consistent change and
+modification time stamps on files and directories across bricks.
+
+This feature is useful when applications are sensitive to time deltas between
+operations (for example, tar may report "file changed as we read it"), as it
+maintains and reports equal time stamps on a file across the subvolumes.
+
+To enable the feature use,
+```
+# gluster volume set <volname> features.utime on
+```
+
+**Limitations**:
+<TBD>
+
+### Developer related
+
+#### 1. New API for acquiring leases and acting on lease recalls
+
+A new API to acquire a lease on an open file, and to receive callbacks when
+the lease is recalled, is provided with gfapi.
+
+Refer to the [header](https://github.com/gluster/glusterfs/blob/release-4.1/api/src/glfs.h#L1112) for details on how to use this API.
+
+#### 2. Extended language bindings for gfapi to include perl
+
+See [libgfapi-perl](https://github.com/gluster/libgfapi-perl) - Libgfapi bindings for Perl using FFI.
+
+## Major issues
+
+**None**
+
+## Bugs addressed
+
+Bugs addressed since release-4.0.0 are listed below.
+
+<TBD>