author    Kaushal M <kaushal@redhat.com>        2018-02-26 11:03:31 +0530
committer ShyamsundarR <srangana@redhat.com>    2018-02-26 13:26:47 -0500
commit    8b85778185440b0f997d9c40b20a728844888886 (patch)
tree      5744a4caa485d050bdfc942e37fae7871a8ced43 /doc
parent    7c16c3e90e46cd1f22784d06bce0ec85577f0589 (diff)
doc: Update 4.0.0 release notes for GD2 (tag: v4.0.0rc1)
Change-Id: I8dce82bb3b7a1d48da2ad9a55bfa614b93e078ca
BUG: 1539842
Signed-off-by: Kaushal M <kaushal@redhat.com>
Diffstat (limited to 'doc')
-rw-r--r--  doc/release-notes/4.0.0.md | 276
1 file changed, 207 insertions, 69 deletions
diff --git a/doc/release-notes/4.0.0.md b/doc/release-notes/4.0.0.md
index caac4c25691..fb1a213c779 100644
--- a/doc/release-notes/4.0.0.md
+++ b/doc/release-notes/4.0.0.md
@@ -1,13 +1,13 @@
# Release notes for Gluster 4.0.0
The Gluster community celebrates 13 years of development with this latest
-release, Gluster 4.0. Including improved integration with containers, an
-enhanced user experience, and a next-generation management framework,
-4.0 release solidifies Gluster as the storage choice for scale out distributed
-file system and cloud-native developers.
+release, Gluster 4.0. This release enables improved integration with containers,
+an enhanced user experience, and a next-generation management framework.
+The 4.0 release helps cloud-native app developers choose Gluster as the default
+scale-out distributed file system.
-The most notable features and changes are documented on this page. A full list
-of bugs that have been addressed is included further below.
+A selection of the important features and changes is documented on this page.
+A full list of bugs that have been addressed is included further below.
- [Announcements](#announcements)
- [Major changes and features](#major-changes-and-features)
@@ -16,15 +16,15 @@ of bugs that have been addressed is included further below.
## Announcements
-1. As 3.13 is a short term maintenance release, features included
-in that release are available with 4.0.0 as well, and could be of interest to
-users upgrading to 4.0.0 from older than 3.13 releases. The 3.13 [release notes](http://docs.gluster.org/en/latest/release-notes/)
-captures the list of features that were introduced with 3.13.
+1. As 3.13 was a short term maintenance release, features which have been
+included in that release are available with 4.0.0 as well. These features may be of
+interest to users upgrading to 4.0.0 from older than 3.13 releases. The 3.13
+[release notes](http://docs.gluster.org/en/latest/release-notes/) capture the list of features that were introduced with 3.13.
**NOTE:** As 3.13 was a short term maintenance release, it will reach end of
life (EOL) with the release of 4.0.0. ([reference](https://www.gluster.org/release-schedule/))
-2. Releases that recieve maintenence updates post 4.0 release are, 3.10, 3.12,
+2. Releases that receive maintenance updates after the 4.0 release are 3.10, 3.12, and
4.0 ([reference](https://www.gluster.org/release-schedule/))
3. With this release, the CentOS storage SIG will not build server packages for
@@ -33,10 +33,6 @@ migrations, client packages on CentOS6 will be published and maintained.
**NOTE**: This change was announced [here](http://lists.gluster.org/pipermail/gluster-users/2018-January/033212.html)
-4. TBD: Release version changes
-
-5. TBD: Glusterd2 as the mainstream management chioce from the next release
-
## Major changes and features
Features are categorized into the following sections,
@@ -50,35 +46,182 @@ Features are categorized into the following sections,
### Management
-#### 1. GlusterD2
-**Notes for users:**
-TBD
-- Need GD2 team to fill in enough links and information here covering,
- - What it is
- - Install and configuration steps
- - Future plans (what happens in 4.1.0 and further)
+GlusterD2 (GD2) is the new management daemon for Gluster-4.0. It is a complete
+rewrite, with all-new internal core frameworks, making it more scalable, easier
+to integrate with, and lower in maintenance requirements.
+
+A [quick start guide](https://github.com/gluster/glusterd2/blob/master/doc/quick-start-user-guide.md) is available to get started with GD2.
+
+GD2 in Gluster-4.0 is a technical preview release. It is not recommended for
+production use. For the current release, glusterd is the preferred management
+daemon. More information is available in the [Limitations](#limitations) section.
+
+GD2 brings many new changes and improvements that affect both users and developers.
+
+#### Features
+The most significant new features brought by GD2 are described below.
+##### Native REST APIs
+GD2 exposes all of its management functionality via [ReST APIs](https://github.com/gluster/glusterd2/blob/master/doc/endpoints.md). The ReST APIs
+accept and return data encoded in JSON. This enables external projects such as
+[Heketi](https://github.com/heketi/heketi) to be better integrated with GD2.
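+
+As an illustrative sketch only (the default port 24007 and the exact endpoint
+paths below are assumptions; the endpoints document linked above is the
+authoritative reference), querying GD2 over its ReST API could look like this:
+```
+# Hypothetical examples; adjust host, port and paths to your deployment.
+curl http://localhost:24007/v1/peers    # list peers in the pool (JSON)
+curl http://localhost:24007/v1/volumes  # list volumes (JSON)
+```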
+
+##### CLI
+GD2 provides a new CLI, `glustercli`, built on top of the ReST API. The CLI
+retains much of the syntax of the old `gluster` command. In addition we have,
+- Improved CLI help messages
+- Auto completion for sub commands
+- Improved CLI error messages on failure
+- Framework to run `glustercli` from outside the cluster.
+
+In this release, the following CLI commands are available,
+- Peer management
+ - Peer Probe/Attach
+ - Peer Detach
+ - Peer Status
+- Volume Management
+ - Create/Start/Stop/Delete
+ - Expand
+ - Options Set/Get
+- Bitrot
+ - Enable/Disable
+ - Configure
+ - Status
+- Geo-replication
+ - Create/Start/Pause/Resume/Stop/Delete
+ - Configure
+ - Status
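+
+As a rough sketch of typical usage (host names, brick paths and the volume
+name below are placeholders, and exact arguments may differ; the CLI help is
+the authoritative reference):
+```
+glustercli peer probe server2.example.com
+glustercli volume create testvol server1:/bricks/testvol/brick \
+    server2:/bricks/testvol/brick server3:/bricks/testvol/brick
+glustercli volume start testvol
+```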
+
+##### Configuration store
+GD2 uses [etcd](https://github.com/coreos/etcd/) to store the Gluster pool configuration, which solves the
+configuration synchronization issues reported against the Gluster management daemon.
+
+GD2 embeds etcd, and automatically creates and manages an etcd cluster when
+forming the trusted storage pool. If required, GD2 can also connect to an
+already existing etcd cluster.
+
+##### Transaction Framework
+GD2 brings a newer, more flexible distributed framework to help it perform
+actions across the storage pool. The transaction framework provides better
+control for choosing peers for a Gluster operation, and it also provides a
+mechanism to roll back changes when something goes wrong.
+
+##### Volume Options
+GD2 intelligently fetches and builds the list of volume options by directly
+reading the xlators' `*.so` files. It performs the required validations during
+volume set without maintaining a duplicate list of options. This avoids a lot
+of issues that can happen due to a mismatch in information between GlusterD
+and the xlator shared libraries.
+
+Volume options listing is also improved to clearly distinguish configured
+options from default options. Work is still in progress to categorize these
+options and tune the list for better understanding and ease of use.
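+
+A minimal sketch of setting and inspecting options (the option shown is only
+illustrative, and the exact `glustercli` syntax may differ slightly):
+```
+glustercli volume set testvol cluster.min-free-disk 10%   # validated against the xlator's option table
+glustercli volume get testvol all                         # list configured and default options
+```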
+
+##### Volfiles generation and management
+GD2 has a newer, better structured way for developers to define volfile
+structure. The new method reduces the effort required to extend graphs or add
+new graphs.
+
+Also, volfiles are now generated on a single peer and stored in the `etcd`
+store. This is very important for scalability, since volfiles are no longer
+stored on every node.
+
+##### Security
+GD2 supports TLS for ReST and internal communication, and authentication for
+the ReST API. If enabled, use of the ReST APIs is currently limited to the CLI,
+or to users who have access to the token file present at `$GLUSTERD2_WORKDIR/auth`.
+
+##### Features integration - Self Heal
+The Self Heal feature is integrated for new volumes created using GlusterD2.
+
+##### Geo-replication
+
+With GD2 integration, Geo-replication setup becomes very easy. If the Master
+and Remote volumes are available and running, Geo-replication can be set up
+with just a single command.
+```
+glustercli geo-replication create <mastervol> <remotehost>::<remotevol>
+```
+Geo-replication status is improved. Status output now clearly distinguishes
+the details of multiple sessions.
-**Limitations:**
+The order of status rows was not predictable in earlier releases. It was very
+difficult to correlate the Geo-replication status with bricks. With this
+release, Master worker status rows will always match the brick list in
+Volume info.
-**Known Issues:**
+Status can be checked using,
+```
+glustercli geo-replication status
+glustercli geo-replication status <mastervol> <remotehost>::<remotevol>
+```
+All the other commands are available as usual.
+
+Limitations:
+
+- On Remote nodes, Geo-replication does not yet create the log directories. As
+a workaround, create the required log directories on the Remote Volume nodes.
+
+##### Events APIs
+The Events API feature is integrated with GD2. Webhooks can be registered to
+listen for GlusterFS events. Work is in progress to expose a REST API to view
+all the events that happened in the last 15 minutes.
+
+#### Limitations
+##### Backward compatibility
+GD2 is not backwards compatible with the older GlusterD. Heterogeneous clusters
+running both GD2 and GlusterD are not possible.
+
+GD2 retains compatibility with Gluster-3.x clients. Old clients will still be
+able to mount and use volumes exported using GD2.
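+
+For example, an existing Gluster-3.x client should be able to use the usual
+native mount against a GD2-managed volume (server and volume names below are
+placeholders):
+```
+mount -t glusterfs server1.example.com:/testvol /mnt/testvol
+```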
+
+##### Upgrade and migration
+In Gluster-4.0, GD2 does not support upgrades from Gluster-3.x releases.
+Gluster-4.0 will be shipping with both GD2 and the existing GlusterD. Users will
+be able to upgrade to Gluster-4.0 while continuing to use GlusterD.
+
+In Gluster-4.1, users will be able to migrate from GlusterD to GD2. Further,
+upgrades from Gluster-4.1 running GD2 to higher Gluster versions would be
+supported from release 4.1 onwards.
+
+Post Gluster-4.1, GlusterD would be maintained for a couple of releases, after
+which the only option to manage the cluster would be GD2.
+
+##### Missing and partial commands
+Not all commands from GlusterD have been implemented for GD2. Some have been
+only partially implemented. This means not all GlusterFS features are available
+in GD2. We aim to bring most of the commands back in Gluster-4.1.
+
+##### Recovery from full shutdown
+With GD2, recovering from a full cluster shutdown requires following the
+[recovery document](https://github.com/gluster/glusterd2/wiki/Recovery), as well as some expertise.
+
+#### Known Issues
+##### 2-node clusters
+GD2 does not work well in 2-node clusters. Two main issues exist in this regard.
+- Restarting GD2 fails in 2-node clusters [#352](https://github.com/gluster/glusterd2/issues/352)
+- Detach fails in 2-node clusters [#332](https://github.com/gluster/glusterd2/issues/332)
+
+It is currently recommended to run GD2 only in clusters of 3 nodes or larger.
+
+##### Other issues
+Other known issues are tracked in [GitHub issues](https://github.com/gluster/glusterd2/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aopen+). Please file any
+other issues you find there.
### Monitoring
-The lack of live monitoring support on top of GlusterFS till date was a
-limiting factor for many users (and in many cases for developers too).
-[Statedump](docs.gluster.org/en/latest/Troubleshooting/statedump/) is useful for debugging, but is heavy for
-live monitoring.
+To date, the absence of live monitoring support in GlusterFS constrained the
+experience of both users and developers. [Statedump](https://docs.gluster.org/en/latest/Troubleshooting/statedump/) is
+useful for debugging, but is too heavy for live monitoring.
Further, the existence of `debug/io-stats` translator was not known to many and
`gluster volume profile` was not recommended as it impacted performance.
-With this release, glusterfs's core infrastructure itself gets some mechanisms
-to provide internal information, that avoids the heavy weight nature of prior
-monitoring mechanisms.
+In this release, GlusterFS enables a lightweight method to access internal
+information and avoids the performance penalty and complexities of previous
+approaches.
#### 1. Metrics collection across every FOP in every xlator
**Notes for users:**
-Gluster now has in-built latency measures in the xlator abstraction, thus
+Gluster now has in-built latency measures in the xlator abstraction, thus
enabling capture of metrics and usage patterns across workloads.
These measures are currently enabled by default.
@@ -86,10 +229,6 @@ These measures are currently enabled by default.
**Limitations:**
This feature is auto-enabled and cannot be disabled.
-Providing means to disable the same in future releases also may not be made
-available, as the data generated is deemed critical to understand, tune, and
-troubleshoot gluster.
-
#### 2. Monitoring support
**Notes for users:**
Currently, the only project which consumes metrics and provides basic
@@ -100,17 +239,16 @@ Users can send SIGUSR2 signal to the process to dump the metrics, in
`/var/run/gluster/metrics/` directory.
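
For example (a sketch; which process to signal depends on the daemon you want
metrics from, e.g. a brick process or a fuse client):
```
kill -USR2 $(pidof glusterfs)   # ask the fuse client process(es) to dump metrics
ls /var/run/gluster/metrics/    # dumped metric files appear here
```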
**Limitations:**
-Currently core gluster stack and memory management systems dump metrics. For
-other translators and other core components, framework to provide more metrics
-exists, but additional metrics are not added in this release.
+Currently, the core gluster stack and memory management systems provide metrics. A
+framework to generate more metrics is present for other translators and core
+components. However, additional metrics are not added in this release.
### Performance
#### 1. EC: Make metadata [F]GETXATTR operations faster
**Notes for users:**
-Optimized getxattr and fgetxattr in disperse volumes to speed up the operation.
-Disperse translator, now forwards the request to one of the bircks that is
-deemed to have a good copy, instead of all, thus improving the efficiency of the
-operation.
+The disperse translator has improved the performance of [F]GETXATTR
+operations. Workloads involving heavy use of extended attributes on files and
+directories will gain from these improvements.
#### 2. Allow md-cache to serve nameless lookup from cache
**Notes for users:**
@@ -141,7 +279,7 @@ append to the existing list of xattr is not supported with this release.
#### 4. Cache last stripe of an EC volume while write is going on
**Notes for users:**
-Disperse translator now has the option to retain a writethrough cache of the
+Disperse translator now has the option to retain a write-through cache of the
last write stripe. This helps in improved small append sequential IO patterns
by reducing the need to read a partial stripe for appending operations.
@@ -154,8 +292,8 @@ Where, <N> is the number of stripes to cache.
#### 5. tie-breaker logic for blocking inodelks/entrylk in SHD
**Notes for users:**
Self-heal daemon locking has been enhanced to identify situations where a
-slefheal deamon is actively working on an inode. This enables other selfheal
-deamons to proceed with other entries in the queue, than waiting on a particular
+selfheal daemon is actively working on an inode. This enables other selfheal
+daemons to proceed with other entries in the queue, rather than waiting on a particular
entry, thus preventing starvation among selfheal threads.
#### 6. Independent eager-lock options for file and directory accesses
@@ -164,7 +302,7 @@ A new option named 'disperse.other-eager-lock' has been added to make it
possible to have different settings for regular file accesses and accesses
to other types of files (like directories).
-By default this option is enabled to keep the same behaviour as the previous
+By default this option is enabled to ensure the same behavior as the previous
versions. If you have multiple clients creating, renaming or removing files
from the same directory, you can disable this option to improve the performance
for these users while still keeping best performance for file accesses.
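
For example, to disable it for a volume (a minimal sketch; the volume name is a
placeholder):
```
# gluster volume set <volname> disperse.other-eager-lock off
```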
@@ -193,15 +331,17 @@ Config is enhanced with the following fields,
master-replica-count=
master-disperse_count=
```
-Note: Exising Geo-replication is not affected since this is activated only
+Note: Existing Geo-replication is not affected since this is activated only
when the option `--use-gconf-volinfo` is passed while spawning `gsyncd monitor`
#### 3. Geo-replication: Improve gverify.sh logs
**Notes for users:**
-gverify log file names and locations are changed as follows,
-1. Slave log file is changed from `<logdir>/geo-replication-slaves/slave.log`
+gverify.sh is the script that runs during geo-rep session creation to validate
+prerequisites. The logs have been improved and their locations have changed
+as follows,
+1. Slave mount log file is changed from `<logdir>/geo-replication-slaves/slave.log`
to, `<logdir>/geo-replication/gverify-slavemnt.log`
-2. Master log file is separated from the slave log file under,
+2. Master mount log file is separated from the slave log file under,
`<logdir>/geo-replication/gverify-mastermnt.log`
#### 4. Geo-rep: Cleanup stale (unusable) XSYNC changelogs.
@@ -212,11 +352,8 @@ restart from a faulty state.
#### 5. Improve gsyncd configuration and arguments handling
**Notes for users:**
+TBD (release notes)
- https://github.com/gluster/glusterfs/issues/73
-- Release notes:
- - Needs user facing documentation for newer options and such
- - There seems to be code improvement as well in the patches,
- so that may not be needed in the release notes
**Limitations:**
@@ -227,8 +364,8 @@ restart from a faulty state.
**Notes for users:**
Options have been added to the posix translator, to override default umask
values with which files and directories are created. This is particularly useful
-when sharing content by applications based on GID. As the defaule mode bits
-prevent such useful sharing, and supercede ACLs in this regard, these options
+when sharing content by applications based on GID. As the default mode bits
+prevent such useful sharing, and supersede ACLs in this regard, these options
are provided to control this behavior.
Command usage is as follows:
@@ -251,10 +388,10 @@ umask. Default value of these options is 0000.
#### 2. Replace MD5 usage to enable FIPS support
**Notes for users:**
-Previously, if gluster was run on a FIPS enabled system, it used to crash
-because MD5 is not FIPS compliant and gluster consumes MD5 checksum in
-various places like self-heal and geo-replication. This has been fixed by
-replacing MD5 with SHA256 which is FIPS compliant.
+Previously, if Gluster was run on a FIPS-enabled system, it used to crash
+because MD5 is not FIPS compliant and Gluster uses MD5 checksums in
+various places like self-heal and geo-replication. By replacing MD5 with the
+FIPS-compliant SHA256, Gluster no longer crashes on a FIPS-enabled system.
However, in order for AFR self-heal to work correctly during rolling upgrade
to 4.0, we have tied this to a volume option called `fips-mode-rchecksum`.
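
For illustration, enabling it follows the usual volume set pattern (assuming
the fully qualified option key is `storage.fips-mode-rchecksum`; check
`gluster volume set help` for the exact name):
```
# gluster volume set <volname> storage.fips-mode-rchecksum on
```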
@@ -272,10 +409,11 @@ This feature strengthens consistency of the file system, trading it for some
performance and is strongly suggested for workloads where consistency is
required.
-For use-cases that involve a large number of renames or frequent creations and
-deletions, the meta-data about the files and directories shared across the
-clients were not always consistent. They do eventually become consistent, but
-a large proportion of applications are not built to handle eventual consistency.
+In previous releases, the meta-data about files and directories shared across
+clients was not always consistent when the use-cases/workloads involved a
+large number of renames, or frequent creations and deletions. It does
+eventually become consistent, but a large proportion of applications are not
+built to handle eventual consistency.
This feature can be enabled as follows,
```
@@ -283,8 +421,8 @@ This feature can be enabled as follows,
```
**Limitations:**
-This feature is released as a preview, as performance implications are not known
-completely.
+This feature is released as a technical preview, as performance implications are
+not known completely.
#### 4. Add option to disable nftw() based deletes when purging the landfill directory
**Notes for users:**
@@ -309,7 +447,7 @@ The option to control this behavior is,
# gluster volume set <volname> storage.max-hardlinks <N>
```
Where, `<N>` is 0-0xFFFFFFFF. If the local file system that the brick is using
-has a lower limit than this setting, that would be honoured.
+has a lower limit than this setting, that would be honored.
Default is set to 100, setting this to 0 turns it off and leaves it to the
local file system defaults. Setting it to 1 turns off hard links.
@@ -321,7 +459,7 @@ healing its subdirectories. If there were a lot of subdirs, it could take a
while before all subdirs were created on the newly added bricks. This led to
some missed directory listings.
-This is changed with this relase to process children directories before the
+This is changed with this release to process children directories before the
parents, thereby changing the way rebalance acts (files within sub directories
are migrated first) and also resolving the directory listing issue.