Diffstat (limited to 'doc/release-notes')
-rw-r--r--  doc/release-notes/3.10.0.md  577
-rw-r--r--  doc/release-notes/3.10.1.md   49
-rw-r--r--  doc/release-notes/3.10.2.md   71
-rw-r--r--  doc/release-notes/3.10.3.md   38
-rw-r--r--  doc/release-notes/3.10.4.md   38
-rw-r--r--  doc/release-notes/3.10.5.md   49
-rw-r--r--  doc/release-notes/3.10.6.md   45
7 files changed, 0 insertions, 867 deletions
diff --git a/doc/release-notes/3.10.0.md b/doc/release-notes/3.10.0.md
deleted file mode 100644
index 75e6c55ba43..00000000000
--- a/doc/release-notes/3.10.0.md
+++ /dev/null
@@ -1,577 +0,0 @@
-# Release notes for Gluster 3.10.0
-
-This is a major Gluster release that includes some substantial changes. The
-features revolve around better support in container environments, scaling to a
-larger number of bricks per node, and a few usability and performance
-improvements, among other bug fixes.
-
-The most notable features and changes are documented on this page. A full list
-of bugs that have been addressed is included further below.
-
-## Major changes and features
-
-### Brick multiplexing
-*Notes for users:*
-Multiplexing reduces both port and memory usage. It does *not* improve
-performance vs. non-multiplexing except when memory is the limiting factor,
-though there are other related changes that improve performance overall (e.g.
-compared to 3.9).
-
-Multiplexing is off by default. It can be enabled with
-
-```bash
-# gluster volume set all cluster.brick-multiplex on
-```
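-
-As a quick sanity check: with multiplexing enabled, bricks on a node are
-expected to share a single brick process. A minimal way to verify this
-(volume name is a placeholder):
-
-```bash
-# gluster volume status <VOLNAME>   # brick PIDs on a node should match
-# ps -ef | grep glusterfsd          # expect one brick process per node
-```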
-
-*Limitations:*
-There are currently no tuning options for multiplexing - it's all or nothing.
-This will change in the near future.
-
-*Known Issues:*
-The only feature or combination of features known not to work with multiplexing
-is USS and SSL. Anyone using that combination should leave multiplexing off.
-
-### Support to display op-version information from clients
-*Notes for users:*
-To get information on which op-versions are supported by the clients, users can
-invoke the `gluster volume status` command for clients. Along with information
-on hostname, port, bytes read, bytes written, and the number of clients connected
-per brick, we now also get the op-version on which the respective clients
-operate. The following is an example usage:
-
-```bash
-# gluster volume status <VOLNAME|all> clients
-```
-
-*Limitations:*
-
-*Known Issues:*
-
-### Support to get maximum op-version in a heterogeneous cluster
-*Notes for users:*
-A heterogeneous cluster operates on a common op-version that can be supported
-across all the nodes in the trusted storage pool. Upon upgrade of the nodes in
-the cluster, the cluster might support a higher op-version. Users can retrieve
-the maximum op-version to which the cluster could be bumped up by invoking
-the `gluster volume get` command on the newly introduced global option,
-`cluster.max-op-version`. The usage is as follows:
-
-```bash
-# gluster volume get all cluster.max-op-version
-```
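-
-Once the maximum op-version is known, the cluster can be bumped to it with the
-corresponding set operation. A minimal sketch, assuming the command above
-returned `31000`:
-
-```bash
-# gluster volume set all cluster.op-version 31000
-```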
-
-*Limitations:*
-
-*Known Issues:*
-
-### Support for rebalance time to completion estimation
-*Notes for users:*
-Users can now see approximately how much time the rebalance
-operation will take to complete across all nodes.
-
-The estimated time left for rebalance to complete is displayed
-as part of the rebalance status. Use the command:
-
-```bash
-# gluster volume rebalance <VOLNAME> status
-```
-
-*Limitations:*
-The rebalance process calculates the time left based on the rate
-at which files are processed on the node and the total number of files
-on the brick, which is determined using statfs. The limitations of this
-are:
-
- * A single fs partition must host only one brick. Multiple bricks on
-the same fs partition will cause the statfs results to be invalid.
-
- * The estimates are dynamic and are recalculated every time the rebalance status
-command is invoked. The estimates become more accurate over time, so short-running
-rebalance operations may not benefit.
-
-*Known Issues:*
-As glusterfs does not store the number of files on the brick, we use statfs to
-guess the number. The .glusterfs directory contents can significantly skew this
-number and affect the calculated estimates.
-
-
-### Separation of tier as its own service
-*Notes for users:*
-This change moves the management of the tier daemon into the gluster
-service framework, thereby improving its stability and manageability.
-
-There is no change to any of the tier commands or user-facing interfaces and
-operations.
-
-*Limitations:*
-
-*Known Issues:*
-
-### Statedump support for gfapi based applications
-*Notes for users:*
-gfapi-based applications can now dump state information for better
-troubleshooting of issues. A statedump can be triggered in two ways:
-
-1. by executing the following on one of the Gluster servers,
- ```bash
- # gluster volume statedump <VOLNAME> client <HOST>:<PID>
- ```
-
- - `<VOLNAME>` should be replaced by the name of the volume
- - `<HOST>` should be replaced by the hostname of the system running the
- gfapi application
- - `<PID>` should be replaced by the PID of the gfapi application
-
-2. through calling `glfs_sysrq(<FS>, GLFS_SYSRQ_STATEDUMP)` within the
- application
-
- - `<FS>` should be replaced by a pointer to a `glfs_t` structure
-
-All statedumps (`*.dump.*` files) will be located at the usual location,
-on most distributions this would be `/var/run/gluster/`.
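-
-As a worked example, with a hypothetical volume `myvol` and a gfapi
-application running as PID `12345` on host `apphost`:
-
-```bash
-# gluster volume statedump myvol client apphost:12345
-# ls -l /var/run/gluster/*.dump.*   # run on apphost to inspect the dumps
-```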
-
-*Limitations:*
-It is not possible to trigger statedumps from the Gluster CLI when the
-gfapi application has lost its management connection to the GlusterD
-servers.
-
-GlusterFS 3.10 is the first release that contains support for the new
-`glfs_sysrq()` function. Applications that include features for
-debugging will need to be adapted to call this function. At the time of
-the release of 3.10, no applications are known to call `glfs_sysrq()`.
-
-*Known Issues:*
-
-### Disabled creation of trash directory by default
-*Notes for users:*
-From now onwards the trash directory, namely .trashcan, will not be created by
-default upon creation of new volumes unless the feature is turned ON,
-and its restrictions are applicable only as long as features.trash
-is set for a particular volume. A reference invocation is shown below.
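-
-For reference, the feature can be turned on per volume via the option named
-above (volume name is a placeholder):
-
-```bash
-# gluster volume set <VOLNAME> features.trash on
-```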
-
-*Limitations:*
-After an upgrade, the trash directory will still be present at the root of
-pre-existing volumes. Those who are not interested in this feature may have to
-manually delete the directory from the mount point.
-
-*Known Issues:*
-
-### Implemented parallel readdirp with distribute xlator
-*Notes for users:*
-Currently the directory listing gets slower as the number of bricks/nodes
-in a volume increases, even though the number of files/directories remains
-unchanged. With this feature, the performance of directory listing is made
-mostly independent of the number of nodes/bricks in the volume, so scaling
-out no longer drastically reduces directory listing performance. (On 2, 5,
-10, and 25 brick setups we saw ~5%, 100%, 400%, and 450% improvement,
-respectively.)
-
-To enable this feature:
-```bash
-# gluster volume set <VOLNAME> performance.readdir-ahead on
-# gluster volume set <VOLNAME> performance.parallel-readdir on
-```
-
-To disable this feature:
-```bash
-# gluster volume set <VOLNAME> performance.parallel-readdir off
-```
-
-If there are more than 50 bricks in the volume, it is good to increase the
-cache size beyond the default value of 10MB:
-```bash
-# gluster volume set <VOLNAME> performance.rda-cache-limit <CACHE SIZE>
-```
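-
-For example, to raise the limit to 20MB (an illustrative value):
-
-```bash
-# gluster volume set <VOLNAME> performance.rda-cache-limit 20MB
-```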
-
-*Limitations:*
-
-*Known Issues:*
-
-### md-cache can optionally negative-cache the security.ima xattr
-*Notes for users:*
-From kernel version 3.X or greater, creating a file results in a removexattr
-call on the security.ima xattr. This xattr is not set on the file unless the
-IMA feature is active. With this patch, the removexattr call returns ENODATA
-if the xattr is not found in the cache.
-
-The end benefit is faster create operations where IMA is not enabled.
-
-To cache this xattr, use:
-```bash
-# gluster volume set <VOLNAME> performance.cache-ima-xattrs on
-```
-
-The above option is on by default.
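-
-If IMA is in use and caching this xattr is undesirable, it can presumably be
-disabled the same way:
-
-```bash
-# gluster volume set <VOLNAME> performance.cache-ima-xattrs off
-```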
-
-*Limitations:*
-
-*Known Issues:*
-
-### Added support for CPU extensions in disperse computations
-*Notes for users:*
-To improve disperse computations, a new way of generating dynamic code
-targeting specific CPU extensions like SSE and AVX on Intel processors has
-been implemented. The available extensions are detected at run time. This can
-roughly double encoding and decoding speeds (or halve CPU usage).
-
-This change is 100% compatible with the old method. No change is needed if
-an existing volume is upgraded.
-
-You can control which extensions to use or disable them with the following
-command:
-
-```bash
-# gluster volume set <VOLNAME> disperse.cpu-extensions <type>
-```
-
-Valid `<type>` values are:
-
-* none: Completely disable dynamic code generation
-* auto: Automatically detect available extensions and use the best one
-* x64: Use dynamic code generation using standard 64 bits instructions
-* sse: Use dynamic code generation using SSE extensions (128 bits)
-* avx: Use dynamic code generation using AVX extensions (256 bits)
-
-The default value is 'auto'. If a value is specified that is not detected at
-run time, it will automatically fall back to the next available option.
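-
-For instance, to pin code generation to SSE, or to disable it entirely (both
-values from the list above):
-
-```bash
-# gluster volume set <VOLNAME> disperse.cpu-extensions sse
-# gluster volume set <VOLNAME> disperse.cpu-extensions none
-```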
-
-*Limitations:*
-
-*Known Issues:*
-To solve a conflict between the dynamic code generator and SELinux, it
-has been necessary to create a dynamic file at runtime in the directory
-/usr/libexec/glusterfs. This directory only exists if the server package
-is installed. On nodes with only the client package installed, this directory
-won't exist and the dynamic code won't be used.
-
-It also needs root privileges to create the file there, so any gfapi
-application not running as root won't be able to use dynamic code generation.
-
-In these cases, disperse volumes will continue working normally but using
-the old implementation (equivalent to setting disperse.cpu-extensions to none).
-
-More information and a discussion on how to solve this can be found here:
-
-https://bugzilla.redhat.com/1421649
-
-## Bugs addressed
-
-Bugs addressed since release-3.9 are listed below.
-
-- [#789278](https://bugzilla.redhat.com/789278): Issues reported by Coverity static analysis tool
-- [#1198849](https://bugzilla.redhat.com/1198849): Minor improvements and cleanup for the build system
-- [#1211863](https://bugzilla.redhat.com/1211863): RFE: Support in md-cache to use upcall notifications to invalidate its cache
-- [#1231224](https://bugzilla.redhat.com/1231224): Misleading error messages on brick logs while creating directory (mkdir) on fuse mount
-- [#1234054](https://bugzilla.redhat.com/1234054): `gluster volume heal <vol-name> split-brain' does not heal if data/metadata/entry self-heal options are turned off
-- [#1289922](https://bugzilla.redhat.com/1289922): Implement SIMD support on EC
-- [#1290304](https://bugzilla.redhat.com/1290304): [RFE]Reducing number of network round trips
-- [#1297182](https://bugzilla.redhat.com/1297182): Mounting with "-o noatime" or "-o noexec" causes "nosuid,nodev" to be set as well
-- [#1313838](https://bugzilla.redhat.com/1313838): Tiering as separate process and in v status moving tier task to tier process
-- [#1316873](https://bugzilla.redhat.com/1316873): EC: Set/unset dirty flag for all the update operations
-- [#1325531](https://bugzilla.redhat.com/1325531): Statedump: Add per xlator ref counting for inode
-- [#1325792](https://bugzilla.redhat.com/1325792): "gluster vol heal test statistics heal-count replica" seems doesn't work
-- [#1330604](https://bugzilla.redhat.com/1330604): out-of-tree builds generate XDR headers and source files in the original directory
-- [#1336371](https://bugzilla.redhat.com/1336371): Sequential volume start&stop is failing with SSL enabled setup.
-- [#1341948](https://bugzilla.redhat.com/1341948): DHT: Rebalance- Misleading log messages from __dht_check_free_space function
-- [#1344714](https://bugzilla.redhat.com/1344714): removal of file from nfs mount crashs ganesha server
-- [#1349385](https://bugzilla.redhat.com/1349385): [FEAT]jbr: Add rollbacking of failed fops
-- [#1355956](https://bugzilla.redhat.com/1355956): RFE : move ganesha related configuration into shared storage
-- [#1356076](https://bugzilla.redhat.com/1356076): DHT doesn't evenly balance files on FreeBSD with ZFS
-- [#1356960](https://bugzilla.redhat.com/1356960): OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
-- [#1357753](https://bugzilla.redhat.com/1357753): JSON output for all Events CLI commands
-- [#1357754](https://bugzilla.redhat.com/1357754): Delayed Events if any one Webhook is slow
-- [#1358296](https://bugzilla.redhat.com/1358296): tier: breaking down the monolith processing function tier_migrate_using_query_file()
-- [#1359612](https://bugzilla.redhat.com/1359612): [RFE] Geo-replication Logging Improvements
-- [#1360670](https://bugzilla.redhat.com/1360670): Add output option `--xml` to man page of gluster
-- [#1363595](https://bugzilla.redhat.com/1363595): Node remains in stopped state in pcs status with "/usr/lib/ocf/resource.d/heartbeat/ganesha_mon: line 137: [: too many arguments ]" messages in logs.
-- [#1363965](https://bugzilla.redhat.com/1363965): geo-replication *changes.log does not respect the log-level configured
-- [#1364420](https://bugzilla.redhat.com/1364420): [RFE] History Crawl performance improvement
-- [#1365395](https://bugzilla.redhat.com/1365395): Support for rc.d and init for Service management
-- [#1365740](https://bugzilla.redhat.com/1365740): dht: Update stbuf from servers having layout
-- [#1365791](https://bugzilla.redhat.com/1365791): Geo-rep worker Faulty with OSError: [Errno 21] Is a directory
-- [#1365822](https://bugzilla.redhat.com/1365822): [RFE] cli command to get max supported cluster.op-version
-- [#1366494](https://bugzilla.redhat.com/1366494): Rebalance is not considering the brick sizes while fixing the layout
-- [#1366495](https://bugzilla.redhat.com/1366495): 1 mkdir generates tons of log messages from dht xlator
-- [#1366648](https://bugzilla.redhat.com/1366648): [GSS] A hot tier brick becomes full, causing the entire volume to have issues and returns stale file handle and input/output error.
-- [#1366815](https://bugzilla.redhat.com/1366815): spurious heal info as pending heal entries never end on an EC volume while IOs are going on
-- [#1368012](https://bugzilla.redhat.com/1368012): gluster fails to propagate permissions on the root of a gluster export when adding bricks
-- [#1368138](https://bugzilla.redhat.com/1368138): Crash of glusterd when using long username with geo-replication
-- [#1368312](https://bugzilla.redhat.com/1368312): Value of `replica.split-brain-status' attribute of a directory in metadata split-brain in a dist-rep volume reads that it is not in split-brain
-- [#1368336](https://bugzilla.redhat.com/1368336): [RFE] Tier Events
-- [#1369077](https://bugzilla.redhat.com/1369077): The directories get renamed when data bricks are offline in 4*(2+1) volume
-- [#1369124](https://bugzilla.redhat.com/1369124): fix unused variable warnings from out-of-tree builds generate XDR headers and source files i...
-- [#1369397](https://bugzilla.redhat.com/1369397): segment fault in changelog_cleanup_dispatchers
-- [#1369403](https://bugzilla.redhat.com/1369403): [RFE]: events from protocol server
-- [#1369523](https://bugzilla.redhat.com/1369523): worm: variable reten_mode is invalid to be free by mem_put in fini()
-- [#1370410](https://bugzilla.redhat.com/1370410): [granular entry sh] - Provide a CLI to enable/disable the feature that checks that there are no heals pending before allowing the operation
-- [#1370567](https://bugzilla.redhat.com/1370567): [RFE] Provide snapshot events for the new eventing framework
-- [#1370931](https://bugzilla.redhat.com/1370931): glfs_realpath() should not return malloc()'d allocated memory
-- [#1371353](https://bugzilla.redhat.com/1371353): posix: Integrate important events with events framework
-- [#1371470](https://bugzilla.redhat.com/1371470): disperse: Integrate important events with events framework
-- [#1371485](https://bugzilla.redhat.com/1371485): [RFE]: AFR events
-- [#1371539](https://bugzilla.redhat.com/1371539): Quota version not changing in the quota.conf after upgrading to 3.7.1 from 3.6.1
-- [#1371540](https://bugzilla.redhat.com/1371540): Spurious regression in tests/basic/gfapi/bug1291259.t
-- [#1371874](https://bugzilla.redhat.com/1371874): [RFE] DHT Events
-- [#1372193](https://bugzilla.redhat.com/1372193): [geo-rep]: AttributeError: 'Popen' object has no attribute 'elines'
-- [#1372211](https://bugzilla.redhat.com/1372211): write-behind: flush stuck by former failed write
-- [#1372356](https://bugzilla.redhat.com/1372356): glusterd experiencing repeated connect/disconnect messages when shd is down
-- [#1372553](https://bugzilla.redhat.com/1372553): "gluster vol status all clients --xml" doesn't generate xml if there is a failure in between
-- [#1372584](https://bugzilla.redhat.com/1372584): Fix the test case http://review.gluster.org/#/c/15385/
-- [#1373072](https://bugzilla.redhat.com/1373072): Event pushed even if Answer is No in the Volume Stop and Delete prompt
-- [#1373373](https://bugzilla.redhat.com/1373373): Worker crashes with EINVAL errors
-- [#1373520](https://bugzilla.redhat.com/1373520): [Bitrot]: Recovery fails of a corrupted hardlink (and the corresponding parent file) in a disperse volume
-- [#1373741](https://bugzilla.redhat.com/1373741): [geo-replication]: geo-rep Status is not showing bricks from one of the nodes
-- [#1374093](https://bugzilla.redhat.com/1374093): glusterfs: create a directory with 0464 mode return EIO error
-- [#1374286](https://bugzilla.redhat.com/1374286): [geo-rep]: defunct tar process while using tar+ssh sync
-- [#1374584](https://bugzilla.redhat.com/1374584): Detach tier commit is allowed when detach tier start goes into failed state
-- [#1374587](https://bugzilla.redhat.com/1374587): gf_event python fails with ImportError
-- [#1374993](https://bugzilla.redhat.com/1374993): bug-963541.t spurious failure
-- [#1375181](https://bugzilla.redhat.com/1375181): /var/tmp/rpm-tmp.KPCugR: line 2: /bin/systemctl: No such file or directory
-- [#1375431](https://bugzilla.redhat.com/1375431): [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
-- [#1375526](https://bugzilla.redhat.com/1375526): Kill rpc.statd on Linux machines
-- [#1375532](https://bugzilla.redhat.com/1375532): Rpm installation fails with conflicts error for eventsconfig.json file
-- [#1376671](https://bugzilla.redhat.com/1376671): Rebalance fails to start if a brick is down
-- [#1376693](https://bugzilla.redhat.com/1376693): RFE: Provide a prompt when enabling gluster-NFS
-- [#1377097](https://bugzilla.redhat.com/1377097): The GlusterFS Callback RPC-calls always use RPC/XID 42
-- [#1377341](https://bugzilla.redhat.com/1377341): out-of-tree builds generate XDR headers and source files in the original directory
-- [#1377427](https://bugzilla.redhat.com/1377427): incorrect fuse dumping for WRITE
-- [#1377556](https://bugzilla.redhat.com/1377556): Files not being opened with o_direct flag during random read operation (Glusterfs 3.8.2)
-- [#1377584](https://bugzilla.redhat.com/1377584): memory leak problems are found in daemon:glusterd, server:glusterfsd and client:glusterfs
-- [#1377607](https://bugzilla.redhat.com/1377607): Volume restart couldn't re-export the volume exported via ganesha.
-- [#1377864](https://bugzilla.redhat.com/1377864): Creation of files on hot tier volume taking very long time
-- [#1378057](https://bugzilla.redhat.com/1378057): glusterd fails to start without installing glusterfs-events package
-- [#1378072](https://bugzilla.redhat.com/1378072): Modifications to AFR Events
-- [#1378305](https://bugzilla.redhat.com/1378305): DHT: remove unused structure members
-- [#1378436](https://bugzilla.redhat.com/1378436): build: python-ctypes no longer exists in Fedora Rawhide
-- [#1378492](https://bugzilla.redhat.com/1378492): warning messages seen in glusterd logs for each 'gluster volume status' command
-- [#1378684](https://bugzilla.redhat.com/1378684): Poor smallfile read performance on Arbiter volume compared to Replica 3 volume
-- [#1378778](https://bugzilla.redhat.com/1378778): Add a test script for compound fops changes in AFR
-- [#1378842](https://bugzilla.redhat.com/1378842): [RFE] 'gluster volume get' should implement the way to retrieve volume options using the volume name 'all'
-- [#1379223](https://bugzilla.redhat.com/1379223): "nfs.disable: on" is not showing in Vol info by default for the 3.7.x volumes after updating to 3.9.0
-- [#1379285](https://bugzilla.redhat.com/1379285): gfapi: Fix fd ref leaks
-- [#1379328](https://bugzilla.redhat.com/1379328): Boolean attributes are published as string
-- [#1379330](https://bugzilla.redhat.com/1379330): eventsapi/georep: Events are not available for Checkpoint and Status Change
-- [#1379511](https://bugzilla.redhat.com/1379511): Fix spurious failures in open-behind.t
-- [#1379655](https://bugzilla.redhat.com/1379655): Recording (ffmpeg) processes on FUSE get hung
-- [#1379720](https://bugzilla.redhat.com/1379720): errors appear in brick and nfs logs and getting stale files on NFS clients
-- [#1379769](https://bugzilla.redhat.com/1379769): GlusterFS fails to build on old Linux distros with linux/oom.h missing
-- [#1380249](https://bugzilla.redhat.com/1380249): Huge memory usage of FUSE client
-- [#1380275](https://bugzilla.redhat.com/1380275): client ID should logged when SSL connection fails
-- [#1381115](https://bugzilla.redhat.com/1381115): Polling failure errors getting when volume is started&stopped with SSL enabled setup.
-- [#1381421](https://bugzilla.redhat.com/1381421): afr fix shd log message error
-- [#1381830](https://bugzilla.redhat.com/1381830): Regression caused by enabling client-io-threads by default
-- [#1382236](https://bugzilla.redhat.com/1382236): glusterfind pre session hangs indefinitely
-- [#1382258](https://bugzilla.redhat.com/1382258): RFE: Support to update NFS-Ganesha export options dynamically
-- [#1382266](https://bugzilla.redhat.com/1382266): md-cache: Invalidate cache entry in case of OPEN with O_TRUNC
-- [#1384142](https://bugzilla.redhat.com/1384142): crypt: changes needed for openssl-1.1 (coming in Fedora 26)
-- [#1384297](https://bugzilla.redhat.com/1384297): glusterfs can't self heal character dev file for invalid dev_t parameters
-- [#1384906](https://bugzilla.redhat.com/1384906): arbiter volume write performance is bad with sharding
-- [#1385104](https://bugzilla.redhat.com/1385104): invalid argument warning messages seen in fuse client logs 2016-09-30 06:34:58.938667] W [dict.c:418ict_set] (-->/usr/lib64/glusterfs/3.8.4/xlator/cluster/replicate.so(+0x58722) 0-dict: !this || !value for key=link-count [Invalid argument]
-- [#1385575](https://bugzilla.redhat.com/1385575): pmap_signin event fails to update brickinfo->signed_in flag
-- [#1385593](https://bugzilla.redhat.com/1385593): Fix some spelling mistakes in comments and log messages
-- [#1385839](https://bugzilla.redhat.com/1385839): Incorrect volume type in the "glusterd_state" file generated using CLI "gluster get-state"
-- [#1386088](https://bugzilla.redhat.com/1386088): Memory Leaks in snapshot code path
-- [#1386097](https://bugzilla.redhat.com/1386097): 4 of 8 bricks (2 dht subvols) crashed on systemic setup
-- [#1386123](https://bugzilla.redhat.com/1386123): geo-replica slave node goes faulty for non-root user session due to fail to locate gluster binary
-- [#1386141](https://bugzilla.redhat.com/1386141): Error and warning message getting while removing glusterfs-events package
-- [#1386188](https://bugzilla.redhat.com/1386188): Asynchronous Unsplit-brain still causes Input/Output Error on system calls
-- [#1386200](https://bugzilla.redhat.com/1386200): Log all published events
-- [#1386247](https://bugzilla.redhat.com/1386247): [Eventing]: 'gluster volume tier <volname> start force' does not generate a TIER_START event
-- [#1386450](https://bugzilla.redhat.com/1386450): Continuous warning messages getting when one of the cluster node is down on SSL setup.
-- [#1386516](https://bugzilla.redhat.com/1386516): [Eventing]: UUID is showing zeros in the event message for the peer probe operation.
-- [#1386626](https://bugzilla.redhat.com/1386626): fuse mount point not accessible
-- [#1386766](https://bugzilla.redhat.com/1386766): trashcan max file limit cannot go beyond 1GB
-- [#1387160](https://bugzilla.redhat.com/1387160): clone creation with older names in a system fails
-- [#1387207](https://bugzilla.redhat.com/1387207): [Eventing]: Random VOLUME_SET events seen when no operation is done on the gluster cluster
-- [#1387241](https://bugzilla.redhat.com/1387241): Pass proper permission to acl_permit() in posix_acl_open()
-- [#1387652](https://bugzilla.redhat.com/1387652): [Eventing]: BRICK_DISCONNECTED events seen when a tier volume is stopped
-- [#1387864](https://bugzilla.redhat.com/1387864): [Eventing]: 'gluster vol bitrot <volname> scrub ondemand' does not produce an event
-- [#1388010](https://bugzilla.redhat.com/1388010): [Eventing]: 'VOLUME_REBALANCE' event messages have an incorrect volume name
-- [#1388062](https://bugzilla.redhat.com/1388062): throw warning to show that older tier commands are depricated and will be removed.
-- [#1388292](https://bugzilla.redhat.com/1388292): performance.read-ahead on results in processes on client stuck in IO wait
-- [#1388348](https://bugzilla.redhat.com/1388348): glusterd: Display proper error message and fail the command if S32gluster_enable_shared_storage.sh hook script is not present during gluster volume set all cluster.enable-shared-storage <enable/disable> command
-- [#1388401](https://bugzilla.redhat.com/1388401): Labelled geo-rep checkpoints hide geo-replication status
-- [#1388861](https://bugzilla.redhat.com/1388861): build: python on Debian-based dists use .../lib/python2.7/dist-packages instead of .../site-packages
-- [#1388862](https://bugzilla.redhat.com/1388862): [Eventing]: Events not seen when command is triggered from one of the peer nodes
-- [#1388877](https://bugzilla.redhat.com/1388877): Continuous errors getting in the mount log when the volume mount server glusterd is down.
-- [#1389293](https://bugzilla.redhat.com/1389293): build: incorrect Requires: for portblock resource agent
-- [#1389481](https://bugzilla.redhat.com/1389481): glusterfind fails to list files from tiered volume
-- [#1389697](https://bugzilla.redhat.com/1389697): Remove-brick status output is showing status of fix-layout instead of original remove-brick status output
-- [#1389746](https://bugzilla.redhat.com/1389746): Refresh config fails while exporting subdirectories within a volume
-- [#1390050](https://bugzilla.redhat.com/1390050): Elasticsearch get CorruptIndexException errors when running with GlusterFS persistent storage
-- [#1391086](https://bugzilla.redhat.com/1391086): gfapi clients crash while using async calls due to double fd_unref
-- [#1391387](https://bugzilla.redhat.com/1391387): The FUSE client log is filling up with posix_acl_default and posix_acl_access messages
-- [#1392167](https://bugzilla.redhat.com/1392167): SMB[md-cache Private Build]:Error messages in brick logs related to upcall_cache_invalidate gf_uuid_is_null
-- [#1392445](https://bugzilla.redhat.com/1392445): Hosted Engine VM paused post replace-brick operation
-- [#1392713](https://bugzilla.redhat.com/1392713): inconsistent file permissions b/w write permission and sticky bits(---------T ) displayed when IOs are going on with md-cache enabled (and within the invalidation cycle)
-- [#1392772](https://bugzilla.redhat.com/1392772): [setxattr_cbk] "Permission denied" warning messages are seen in logs while running pjd-fstest suite
-- [#1392865](https://bugzilla.redhat.com/1392865): Better logging when reporting failures of the kind "<file-path> Failing MKNOD as quorum is not met"
-- [#1393259](https://bugzilla.redhat.com/1393259): stat of file is hung with possible deadlock
-- [#1393678](https://bugzilla.redhat.com/1393678): Worker restarts on log-rsync-performance config update
-- [#1394131](https://bugzilla.redhat.com/1394131): [md-cache]: All bricks crashed while performing symlink and rename from client at the same time
-- [#1394224](https://bugzilla.redhat.com/1394224): "nfs-grace-monitor" timed out messages observed
-- [#1394548](https://bugzilla.redhat.com/1394548): Make debugging EACCES errors easier to debug
-- [#1394719](https://bugzilla.redhat.com/1394719): libgfapi core dumps
-- [#1394881](https://bugzilla.redhat.com/1394881): Failed to enable nfs-ganesha after disabling nfs-ganesha cluster
-- [#1395261](https://bugzilla.redhat.com/1395261): Seeing error messages [snapview-client.c:283:gf_svc_lookup_cbk] and [dht-helper.c:1666ht_inode_ctx_time_update] (-->/usr/lib64/glusterfs/3.8.4/xlator/cluster/replicate.so(+0x5d75c)
-- [#1395648](https://bugzilla.redhat.com/1395648): ganesha-ha.conf --status should validate if the VIPs are assigned to right nodes
-- [#1395660](https://bugzilla.redhat.com/1395660): Checkpoint completed event missing master node detail
-- [#1395687](https://bugzilla.redhat.com/1395687): Client side IObuff leaks at a high pace consumes complete client memory and hence making gluster volume inaccessible
-- [#1395993](https://bugzilla.redhat.com/1395993): heal info --xml when bricks are down in a systemic environment is not displaying anything even after more than 30minutes
-- [#1396038](https://bugzilla.redhat.com/1396038): refresh-config fails and crashes ganesha when mdcache is enabled on the volume.
-- [#1396048](https://bugzilla.redhat.com/1396048): A hard link is lost during rebalance+lookup
-- [#1396062](https://bugzilla.redhat.com/1396062): [geo-rep]: Worker crashes seen while renaming directories in loop
-- [#1396081](https://bugzilla.redhat.com/1396081): Wrong value in Last Synced column during Hybrid Crawl
-- [#1396364](https://bugzilla.redhat.com/1396364): Scheduler : Scheduler should not depend on glusterfs-events package
-- [#1396793](https://bugzilla.redhat.com/1396793): [Ganesha] : Ganesha crashes intermittently during nfs-ganesha restarts.
-- [#1396807](https://bugzilla.redhat.com/1396807): capture volume tunables in get-state dump
-- [#1396952](https://bugzilla.redhat.com/1396952): I/O errors on FUSE mount point when reading and writing from 2 clients
-- [#1397052](https://bugzilla.redhat.com/1397052): OOM kill of nfs-ganesha on one node while fs-sanity test suite is executed.
-- [#1397177](https://bugzilla.redhat.com/1397177): memory leak when using libgfapi
-- [#1397419](https://bugzilla.redhat.com/1397419): glusterfs_ctx_defaults_init is re-initializing ctx->locks
-- [#1397424](https://bugzilla.redhat.com/1397424): PEER_REJECT, EVENT_BRICKPATH_RESOLVE_FAILED, EVENT_COMPARE_FRIEND_VOLUME_FAILED are not seen
-- [#1397754](https://bugzilla.redhat.com/1397754): [SAMBA-CIFS] : IO hungs in cifs mount while graph switch on & off
-- [#1397795](https://bugzilla.redhat.com/1397795): NFS-Ganesha:Volume reset for any option causes reset of ganesha enable option and bring down the ganesha services
-- [#1398076](https://bugzilla.redhat.com/1398076): SEEK_HOLE/ SEEK_DATA doesn't return the correct offset
-- [#1398226](https://bugzilla.redhat.com/1398226): With compound fops on, client process crashes when a replica is brought down while IO is in progress
-- [#1398566](https://bugzilla.redhat.com/1398566): self-heal info command hangs after triggering self-heal
-- [#1399031](https://bugzilla.redhat.com/1399031): build: add systemd dependency to glusterfs sub-package
-- [#1399072](https://bugzilla.redhat.com/1399072): [Disperse] healing should not start if only data bricks are UP
-- [#1399134](https://bugzilla.redhat.com/1399134): GlusterFS client crashes during remove-brick operation
-- [#1399154](https://bugzilla.redhat.com/1399154): After ganesha node reboot/shutdown, portblock process goes to FAILED state
-- [#1399186](https://bugzilla.redhat.com/1399186): [GANESHA] Export ID changed during volume start and stop with message "lookup_export failed with Export id not found" in ganesha.log
-- [#1399578](https://bugzilla.redhat.com/1399578): [compound FOPs]: Memory leak while doing FOPs with brick down
-- [#1399592](https://bugzilla.redhat.com/1399592): Memory leak when self healing daemon queue is full
-- [#1399780](https://bugzilla.redhat.com/1399780): Use standard refcounting for structures where possible
-- [#1399995](https://bugzilla.redhat.com/1399995): Dump volume specific options in get-state output in a more parseable manner
-- [#1400013](https://bugzilla.redhat.com/1400013): [USS,SSL] .snaps directory is not reachable when I/O encryption (SSL) is enabled
-- [#1400026](https://bugzilla.redhat.com/1400026): Duplicate value assigned to GD_MSG_DAEMON_STATE_REQ_RCVD and GD_MSG_BRICK_CLEANUP_SUCCESS messages
-- [#1400237](https://bugzilla.redhat.com/1400237): Ganesha services are not stopped when pacemaker quorum is lost
-- [#1400613](https://bugzilla.redhat.com/1400613): [GANESHA] failed to create directory of hostname of new node in var/lib/nfs/ganesha/ in already existing cluster nodes
-- [#1400818](https://bugzilla.redhat.com/1400818): possible memory leak on client when writing to a file while another client issues a truncate
-- [#1401095](https://bugzilla.redhat.com/1401095): log the error when locking the brick directory fails
-- [#1401218](https://bugzilla.redhat.com/1401218): Fix compound fops memory leaks
-- [#1401404](https://bugzilla.redhat.com/1401404): [Arbiter] IO's Halted and heal info command hung
-- [#1401777](https://bugzilla.redhat.com/1401777): atime becomes zero when truncating file via ganesha (or gluster-NFS)
-- [#1401801](https://bugzilla.redhat.com/1401801): [RFE] Use Host UUID to find local nodes to spawn workers
-- [#1401812](https://bugzilla.redhat.com/1401812): RFE: Make readdirp parallel in dht
-- [#1401822](https://bugzilla.redhat.com/1401822): [GANESHA]Unable to export the ganesha volume after doing volume start and stop
-- [#1401836](https://bugzilla.redhat.com/1401836): update documentation to readthedocs.io
-- [#1401921](https://bugzilla.redhat.com/1401921): glusterfsd crashed while taking snapshot using scheduler
-- [#1402237](https://bugzilla.redhat.com/1402237): Bad spacing in error message in cli
-- [#1402261](https://bugzilla.redhat.com/1402261): cli: compile warnings (unused var) if building without bd xlator
-- [#1402369](https://bugzilla.redhat.com/1402369): Getting the warning message while erasing the gluster "glusterfs-server" package.
-- [#1402710](https://bugzilla.redhat.com/1402710): ls and move hung on disperse volume
-- [#1402730](https://bugzilla.redhat.com/1402730): self-heal not happening, as self-heal info lists the same pending shards to be healed
-- [#1402828](https://bugzilla.redhat.com/1402828): Snapshot: Snapshot create command fails when gluster-shared-storage volume is stopped
-- [#1402841](https://bugzilla.redhat.com/1402841): Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
-- [#1403130](https://bugzilla.redhat.com/1403130): [GANESHA] Adding a node to cluster failed to allocate resource-agents to new node.
-- [#1403780](https://bugzilla.redhat.com/1403780): Incorrect incrementation of volinfo refcnt during volume start
-- [#1404118](https://bugzilla.redhat.com/1404118): Snapshot: After snapshot restore failure , snapshot goes into inconsistent state
-- [#1404168](https://bugzilla.redhat.com/1404168): Upcall: Possible use after free when log level set to TRACE
-- [#1404181](https://bugzilla.redhat.com/1404181): [Ganesha+SSL] : Ganesha crashes on all nodes on volume restarts
-- [#1404410](https://bugzilla.redhat.com/1404410): [Perf] : pcs cluster resources went into stopped state during Multithreaded perf tests on RHGS layered over RHEL 6
-- [#1404573](https://bugzilla.redhat.com/1404573): tests/bugs/snapshot/bug-1316437.t test is causing spurious failure
-- [#1404678](https://bugzilla.redhat.com/1404678): [geo-rep]: Config commands fail when the status is 'Created'
-- [#1404905](https://bugzilla.redhat.com/1404905): DHT : file rename operation is successful but log has error 'key:trusted.glusterfs.dht.linkto error:File exists' , 'setting xattrs on <old_filename> failed (File exists)'
-- [#1405165](https://bugzilla.redhat.com/1405165): Allow user to disable mem-pool
-- [#1405301](https://bugzilla.redhat.com/1405301): Fix the failure in tests/basic/gfapi/bug1291259.t
-- [#1405478](https://bugzilla.redhat.com/1405478): Keepalive should be set for IPv6 & IPv4
-- [#1405554](https://bugzilla.redhat.com/1405554): Fix spurious failure in bug-1402841.t-mt-dir-scan-race.t
-- [#1405775](https://bugzilla.redhat.com/1405775): GlusterFS process crashed after add-brick
-- [#1405902](https://bugzilla.redhat.com/1405902): Fix spurious failure in tests/bugs/replicate/bug-1402730.t
-- [#1406224](https://bugzilla.redhat.com/1406224): VM pauses due to storage I/O error, when one of the data brick is down with arbiter/replica volume
-- [#1406249](https://bugzilla.redhat.com/1406249): [GANESHA] Deleting a node from ganesha cluster deletes the volume entry from /etc/ganesha/ganesha.conf file
-- [#1406252](https://bugzilla.redhat.com/1406252): Free xdr-allocated compound request and response arrays
-- [#1406348](https://bugzilla.redhat.com/1406348): [Eventing]: POSIX_SAME_GFID event seen for .trashcan folder and .trashcan/internal_op
-- [#1406410](https://bugzilla.redhat.com/1406410): [GANESHA] Adding node to ganesha cluster is not assigning the correct VIP to the new node
-- [#1406411](https://bugzilla.redhat.com/1406411): Fail add-brick command if replica count changes
-- [#1406878](https://bugzilla.redhat.com/1406878): ec prove tests fail in FB build environment.
-- [#1408115](https://bugzilla.redhat.com/1408115): Remove-brick rebalance failed while rm -rf is in progress
-- [#1408131](https://bugzilla.redhat.com/1408131): Remove tests/distaf
-- [#1408395](https://bugzilla.redhat.com/1408395): [Arbiter] After Killing a brick writes drastically slow down
-- [#1408712](https://bugzilla.redhat.com/1408712): with granular-entry-self-heal enabled i see that there is a gfid mismatch and vm goes to paused state after migrating to another host
-- [#1408755](https://bugzilla.redhat.com/1408755): Remove tests/basic/rpm.t
-- [#1408757](https://bugzilla.redhat.com/1408757): Fix failure of split-brain-favorite-child-policy.t in CentOS7
-- [#1408758](https://bugzilla.redhat.com/1408758): tests/bugs/glusterd/bug-913555.t fails spuriously
-- [#1409078](https://bugzilla.redhat.com/1409078): RFE: Need a command to check op-version compatibility of clients
-- [#1409186](https://bugzilla.redhat.com/1409186): Dict_t leak in dht_migration_complete_check_task and dht_rebalance_inprogress_task
-- [#1409202](https://bugzilla.redhat.com/1409202): Warning messages throwing when EC volume offline brick comes up are difficult to understand for end user.
-- [#1409206](https://bugzilla.redhat.com/1409206): Extra lookup/fstats are sent over the network when a brick is down.
-- [#1409727](https://bugzilla.redhat.com/1409727): [ganesha + EC]posix compliance rename tests failed on EC volume with nfs-ganesha mount.
-- [#1409730](https://bugzilla.redhat.com/1409730): [ganesha+ec]: Contents of original file are not seen when hardlink is created
-- [#1410071](https://bugzilla.redhat.com/1410071): [Geo-rep] Geo replication status detail without master and slave volume args
-- [#1410313](https://bugzilla.redhat.com/1410313): brick crashed on systemic setup
-- [#1410355](https://bugzilla.redhat.com/1410355): Remove-brick rebalance failed while rm -rf is in progress
-- [#1410375](https://bugzilla.redhat.com/1410375): [Mdcache] clients being served wrong information about a file, can lead to file inconsistency
-- [#1410777](https://bugzilla.redhat.com/1410777): ganesha service crashed on all nodes of ganesha cluster on disperse volume when doing lookup while copying files remotely using scp
-- [#1410853](https://bugzilla.redhat.com/1410853): glusterfs-server should depend on firewalld-filesystem
-- [#1411607](https://bugzilla.redhat.com/1411607): [Geo-rep] If for some reason MKDIR failed to sync, it should not proceed further.
-- [#1411625](https://bugzilla.redhat.com/1411625): Spurious split-brain error messages are seen in rebalance logs
-- [#1411999](https://bugzilla.redhat.com/1411999): URL to Fedora distgit no longer uptodate
-- [#1412002](https://bugzilla.redhat.com/1412002): Examples/getvolfile.py is not pep8 compliant
-- [#1412069](https://bugzilla.redhat.com/1412069): No rollback of renames on succeeded subvols during failure
-- [#1412174](https://bugzilla.redhat.com/1412174): Memory leak on mount/fuse when setxattr fails
-- [#1412467](https://bugzilla.redhat.com/1412467): Remove tests/bugs/distribute/bug-1063230.t
-- [#1412489](https://bugzilla.redhat.com/1412489): Upcall: Possible memleak if inode_ctx_set fails
-- [#1412689](https://bugzilla.redhat.com/1412689): [Geo-rep] Slave mount log file is cluttered by logs of multiple active mounts
-- [#1412917](https://bugzilla.redhat.com/1412917): OOM kill of glusterfsd during continuous add-bricks
-- [#1412918](https://bugzilla.redhat.com/1412918): fuse: Resource leak in fuse-helper under GF_SOLARIS_HOST_OS
-- [#1413967](https://bugzilla.redhat.com/1413967): geo-rep session faulty with ChangelogException "No such file or directory"
-- [#1415226](https://bugzilla.redhat.com/1415226): packaging: python/python2(/python3) cleanup
-- [#1415245](https://bugzilla.redhat.com/1415245): core: max op version
-- [#1415279](https://bugzilla.redhat.com/1415279): libgfapi: remove/revert glfs_ipc() changes targeted for 4.0
-- [#1415581](https://bugzilla.redhat.com/1415581): RFE : Create trash directory only when its is enabled
-- [#1415915](https://bugzilla.redhat.com/1415915): RFE: An administrator friendly way to determine rebalance completion time
-- [#1415918](https://bugzilla.redhat.com/1415918): Cache security.ima xattrs as well
-- [#1416285](https://bugzilla.redhat.com/1416285): EXPECT_WITHIN is taking too much time even if the result matches with expected value
-- [#1416416](https://bugzilla.redhat.com/1416416): Improve output of "gluster volume status detail"
-- [#1417027](https://bugzilla.redhat.com/1417027): option performance.parallel-readdir should honor cluster.readdir-optimize
-- [#1417028](https://bugzilla.redhat.com/1417028): option performance.parallel-readdir can cause OOM in large volumes
-- [#1417042](https://bugzilla.redhat.com/1417042): glusterd restart is starting the offline shd daemon on other node in the cluster
-- [#1417135](https://bugzilla.redhat.com/1417135): [Stress] : SHD Logs flooded with "Heal Failed" messages,filling up "/" quickly
-- [#1417521](https://bugzilla.redhat.com/1417521): [SNAPSHOT] With all USS plugin enable .snaps directory is not visible in cifs mount as well as windows mount
-- [#1417527](https://bugzilla.redhat.com/1417527): glusterfind: After glusterfind pre command execution all temporary files and directories /usr/var/lib/misc/glusterfsd/glusterfind/<session>/<volume>/ should be removed
-- [#1417804](https://bugzilla.redhat.com/1417804): debug/trace: Print iatts of individual entries in readdirp callback for better debugging experience
-- [#1418091](https://bugzilla.redhat.com/1418091): [RFE] Support multiple bricks in one process (multiplexing)
-- [#1418536](https://bugzilla.redhat.com/1418536): Portmap allocates way too much memory (256KB) on stack
-- [#1418541](https://bugzilla.redhat.com/1418541): [Ganesha+SSL] : Bonnie++ hangs during rewrites.
-- [#1418623](https://bugzilla.redhat.com/1418623): client process crashed due to write behind translator
-- [#1418650](https://bugzilla.redhat.com/1418650): Samba crash when mounting a distributed dispersed volume over CIFS
-- [#1418981](https://bugzilla.redhat.com/1418981): Unable to take Statedump for gfapi applications
-- [#1419305](https://bugzilla.redhat.com/1419305): disable client.io-threads on replica volume creation
-- [#1419306](https://bugzilla.redhat.com/1419306): [RFE] Need to have group cli option to set all md-cache options using a single command
-- [#1419503](https://bugzilla.redhat.com/1419503): [SAMBA-SSL] Volume Share hungs when multiple mount & unmount is performed over a windows client on a SSL enabled cluster
-- [#1419696](https://bugzilla.redhat.com/1419696): Fix spurious failure of ec-background-heal.t and tests/bitrot/bug-1373520.t
-- [#1419824](https://bugzilla.redhat.com/1419824): repeated operation failed warnings in gluster mount logs with disperse volume
-- [#1419825](https://bugzilla.redhat.com/1419825): Sequential and Random Writes are off target by 12% and 22% respectively on EC backed volumes over FUSE
-- [#1419846](https://bugzilla.redhat.com/1419846): removing warning related to enum, to let the build take place without errors for 3.10
-- [#1419855](https://bugzilla.redhat.com/1419855): [Remove-brick] Hardlink migration fails with "lookup failed (No such file or directory)" error messages in rebalance logs
-- [#1419868](https://bugzilla.redhat.com/1419868): removing old tier commands under the rebalance commands
-- [#1420606](https://bugzilla.redhat.com/1420606): glusterd is crashed at the time of stop volume
-- [#1420808](https://bugzilla.redhat.com/1420808): Trash feature improperly disabled
-- [#1420810](https://bugzilla.redhat.com/1420810): Massive xlator_t leak in graph-switch code
-- [#1420982](https://bugzilla.redhat.com/1420982): Automatic split brain resolution must check for all the bricks to be up to avoiding serving of inconsistent data(visible on x3 or more)
-- [#1420987](https://bugzilla.redhat.com/1420987): warning messages seen in glusterd logs while setting the volume option
-- [#1420989](https://bugzilla.redhat.com/1420989): when server-quorum is enabled, volume get returns 0 value for server-quorum-ratio
-- [#1420991](https://bugzilla.redhat.com/1420991): Modified volume options not synced once offline nodes comes up.
-- [#1421017](https://bugzilla.redhat.com/1421017): CLI option "--timeout" is accepting non numeric and negative values.
-- [#1421956](https://bugzilla.redhat.com/1421956): Disperse: Fallback to pre-compiled code execution when dynamic code generation fails
-- [#1422350](https://bugzilla.redhat.com/1422350): glustershd process crashed on systemic setup
-- [#1422363](https://bugzilla.redhat.com/1422363): [Replicate] "RPC call decoding failed" leading to IO hang & mount inaccessible
-- [#1422391](https://bugzilla.redhat.com/1422391): Gluster NFS server crashing in __mnt3svc_umountall
-- [#1422766](https://bugzilla.redhat.com/1422766): Entry heal messages in glustershd.log while no entries shown in heal info
-- [#1422777](https://bugzilla.redhat.com/1422777): DHT doesn't evenly balance files on FreeBSD with ZFS
-- [#1422819](https://bugzilla.redhat.com/1422819): [Geo-rep] Recreating geo-rep session with same slave after deleting with reset-sync-time fails to sync
-- [#1422942](https://bugzilla.redhat.com/1422942): Prevent reverse heal from happening
-- [#1423063](https://bugzilla.redhat.com/1423063): glusterfs-fuse RPM now depends on gfapi
-- [#1423070](https://bugzilla.redhat.com/1423070): Bricks not coming up when ran with address sanitizer
-- [#1423385](https://bugzilla.redhat.com/1423385): Crash in index xlator because of race in inode_ctx_set and inode_ref
-- [#1423406](https://bugzilla.redhat.com/1423406): Need to improve remove-brick failure message when the brick process is down.
-- [#1423412](https://bugzilla.redhat.com/1423412): Mount of older client fails
-- [#1423429](https://bugzilla.redhat.com/1423429): unnecessary logging in rda_opendir
-- [#1424921](https://bugzilla.redhat.com/1424921): dht_setxattr returns EINVAL when a file is deleted during the FOP
-- [#1424931](https://bugzilla.redhat.com/1424931): [RFE] Include few more options in virt file
-- [#1424937](https://bugzilla.redhat.com/1424937): multiple glusterfsd process crashed making the complete subvolume unavailable
-- [#1424973](https://bugzilla.redhat.com/1424973): remove-brick status shows 0 rebalanced files
-- [#1425556](https://bugzilla.redhat.com/1425556): glusterd log is flooded with stale disconnect rpc messages
diff --git a/doc/release-notes/3.10.1.md b/doc/release-notes/3.10.1.md
deleted file mode 100644
index 96fcdd3afe5..00000000000
--- a/doc/release-notes/3.10.1.md
+++ /dev/null
@@ -1,49 +0,0 @@
-# Release notes for Gluster 3.10.1
-
-This is a bugfix release. The release notes for [3.10.0](3.10.0.md)
-contain a listing of all the new features that were added and
-bugs fixed in the GlusterFS 3.10 stable release.
-
-## Major changes, features and limitations addressed in this release
-
-1. The auth-allow setting was broken with the 3.10 release and is now fixed ([#1429117](https://bugzilla.redhat.com/1429117))
-
-## Major issues
-
-1. Expanding a gluster volume that is sharded may cause file corruption
-    - Sharded volumes are typically used for VM images; if such volumes are
-      expanded or possibly contracted (i.e. add/remove bricks and rebalance),
-      there are reports of VM images getting corrupted.
-    - If you are using sharded volumes, DO NOT rebalance them till this is
-      fixed.
-    - Status of this bug can be tracked here: [#1426508](https://bugzilla.redhat.com/1426508)
-
-## Bugs addressed
-
-A total of 26 patches have been merged, addressing 23 bugs:
-- [#1419824](https://bugzilla.redhat.com/1419824): repeated operation failed warnings in gluster mount logs with disperse volume
-- [#1422769](https://bugzilla.redhat.com/1422769): brick process crashes when glusterd is restarted
-- [#1422781](https://bugzilla.redhat.com/1422781): Transport endpoint not connected error seen on client when glusterd is restarted
-- [#1426222](https://bugzilla.redhat.com/1426222): build: fixes to build 3.9.0rc2 on Debian (jessie)
-- [#1426323](https://bugzilla.redhat.com/1426323): common-ha: no need to remove nodes one-by-one in teardown
-- [#1426329](https://bugzilla.redhat.com/1426329): [Ganesha] : Add comment to Ganesha HA config file ,about cluster name's length limitation
-- [#1427387](https://bugzilla.redhat.com/1427387): systemic testing: seeing lot of ping time outs which would lead to splitbrains
-- [#1427399](https://bugzilla.redhat.com/1427399): [RFE] capture portmap details in glusterd's statedump
-- [#1427461](https://bugzilla.redhat.com/1427461): Bricks take up new ports upon volume restart after add-brick op with brick mux enabled
-- [#1428670](https://bugzilla.redhat.com/1428670): Disconnects in nfs mount leads to IO hang and mount inaccessible
-- [#1428739](https://bugzilla.redhat.com/1428739): Fix crash in dht resulting from tests/features/nuke.t
-- [#1429117](https://bugzilla.redhat.com/1429117): auth failure after upgrade to GlusterFS 3.10
-- [#1429402](https://bugzilla.redhat.com/1429402): Restore atime/mtime for symlinks and other non-regular files.
-- [#1429773](https://bugzilla.redhat.com/1429773): disallow increasing replica count for arbiter volumes
-- [#1430512](https://bugzilla.redhat.com/1430512): /libgfxdr.so.0.0.1: undefined symbol: __gf_free
-- [#1430844](https://bugzilla.redhat.com/1430844): build/packaging: Debian and Ubuntu don't have /usr/libexec/; results in bad packages
-- [#1431175](https://bugzilla.redhat.com/1431175): volume start command hangs
-- [#1431176](https://bugzilla.redhat.com/1431176): USS is broken when multiplexing is on
-- [#1431591](https://bugzilla.redhat.com/1431591): memory leak in features/locks xlator
-- [#1434296](https://bugzilla.redhat.com/1434296): [Disperse] Metadata version is not healing when a brick is down
-- [#1434303](https://bugzilla.redhat.com/1434303): Move spit-brain msg in read txn to debug
-- [#1434399](https://bugzilla.redhat.com/1434399): glusterd crashes when peering an IP where the address is more than acceptable range (>255) OR with random hostnames
-- [#1435946](https://bugzilla.redhat.com/1435946): When parallel readdir is enabled and there are simultaneous readdir and disconnects, then it results in crash
-- [#1436203](https://bugzilla.redhat.com/1436203): Undo pending xattrs only on the up bricks
-- [#1436411](https://bugzilla.redhat.com/1436411): Unrecognized filesystems (i.e. btrfs, zfs) log many errors about "getinode size"
-- [#1437326](https://bugzilla.redhat.com/1437326): Sharding: Fix a performance bug
diff --git a/doc/release-notes/3.10.2.md b/doc/release-notes/3.10.2.md
deleted file mode 100644
index c31532fe8c5..00000000000
--- a/doc/release-notes/3.10.2.md
+++ /dev/null
@@ -1,71 +0,0 @@
-# Release notes for Gluster 3.10.2
-
-This is a bugfix release. The release notes for [3.10.0](3.10.0.md) and
-[3.10.1](3.10.1.md)
-contain a listing of all the new features that were added and
-bugs fixed in the GlusterFS 3.10 stable release.
-
-## Major changes, features and limitations addressed in this release
-1. Many brick multiplexing and nfs-ganesha+HA bugs have been addressed.
-2. Rebalance and remove brick operations have been disabled for sharded volumes
- to prevent data corruption.
-
-
-## Major issues
-1. Expanding a gluster volume that is sharded may cause file corruption
-- Sharded volumes are typically used for VM images; if such volumes are
-expanded or possibly contracted (i.e. add/remove bricks and rebalance),
-there are reports of VM images getting corrupted.
-- Status of this bug can be tracked here: [#1426508](https://bugzilla.redhat.com/1426508)
-
-
-## Bugs addressed
-
-A total of 63 patches have been merged, addressing 46 bugs:
-- [#1437854](https://bugzilla.redhat.com/1437854): Spellcheck issues reported during Debian build
-- [#1425726](https://bugzilla.redhat.com/1425726): Stale export entries in ganesha.conf after executing "gluster nfs-ganesha disable"
-- [#1427079](https://bugzilla.redhat.com/1427079): [Ganesha] : unexport fails if export configuration file is not present
-- [#1440148](https://bugzilla.redhat.com/1440148): common-ha (debian/ubuntu): ganesha-ha.sh has a hard-coded /usr/libexec/ganesha...
-- [#1443478](https://bugzilla.redhat.com/1443478): RFE: Support to update NFS-Ganesha export options dynamically
-- [#1443490](https://bugzilla.redhat.com/1443490): [Nfs-ganesha] Refresh config fails when ganesha cluster is in failover mode.
-- [#1441474](https://bugzilla.redhat.com/1441474): synclocks don't work correctly under contention
-- [#1449002](https://bugzilla.redhat.com/1449002): [Brick Multiplexing] : Bricks for multiple volumes going down after glusterd restart and not coming back up after volume start force
-- [#1438813](https://bugzilla.redhat.com/1438813): Segmentation fault when creating a qcow2 with qemu-img
-- [#1438423](https://bugzilla.redhat.com/1438423): [Ganesha + EC] : Input/Output Error while creating LOTS of smallfiles
-- [#1444540](https://bugzilla.redhat.com/1444540): rm -rf \<dir\> returns ENOTEMPTY even though ls on the mount point returns no files
-- [#1446227](https://bugzilla.redhat.com/1446227): Incorrect and redundant logs in the DHT rmdir code path
-- [#1447608](https://bugzilla.redhat.com/1447608): Don't allow rebalance/fix-layout operation on sharding enabled volumes till dht+sharding bugs are fixed
-- [#1448864](https://bugzilla.redhat.com/1448864): Seeing error "Failed to get the total number of files. Unable to estimate time to complete rebalance" in rebalance logs
-- [#1443349](https://bugzilla.redhat.com/1443349): [Eventing]: Unrelated error message displayed when path specified during a 'webhook-test/add' is missing a schema
-- [#1441576](https://bugzilla.redhat.com/1441576): [geo-rep]: rsync should not try to sync internal xattrs
-- [#1441927](https://bugzilla.redhat.com/1441927): [geo-rep]: Worker crashes with [Errno 16] Device or resource busy: '.gfid/00000000-0000-0000-0000-000000000001/dir.166 while renaming directories
-- [#1401877](https://bugzilla.redhat.com/1401877): [GANESHA] Symlinks from /etc/ganesha/ganesha.conf to shared\_storage are created on the non-ganesha nodes in 8 node gluster having 4 node ganesha cluster
-- [#1425723](https://bugzilla.redhat.com/1425723): nfs-ganesha volume export file remains stale in shared\_storage\_volume when volume is deleted
-- [#1427759](https://bugzilla.redhat.com/1427759): nfs-ganesha: Incorrect error message returned when disable fails
-- [#1438325](https://bugzilla.redhat.com/1438325): Need to improve remove-brick failure message when the brick process is down.
-- [#1438338](https://bugzilla.redhat.com/1438338): glusterd is setting replicate volume property over disperse volume or vice versa
-- [#1438340](https://bugzilla.redhat.com/1438340): glusterd is not validating for allowed values while setting "cluster.brick-multiplex" property
-- [#1441476](https://bugzilla.redhat.com/1441476): Glusterd crashes when restarted with many volumes
-- [#1444128](https://bugzilla.redhat.com/1444128): [BrickMultiplex] gluster command not responding and .snaps directory is not visible after executing snapshot related command
-- [#1445260](https://bugzilla.redhat.com/1445260): [GANESHA] Volume start and stop having ganesha enable on it,turns off cache-invalidation on volume
-- [#1445408](https://bugzilla.redhat.com/1445408): gluster volume stop hangs
-- [#1449934](https://bugzilla.redhat.com/1449934): Brick Multiplexing :- resetting a brick bring down other bricks with same PID
-- [#1435779](https://bugzilla.redhat.com/1435779): Inode ref leak on anonymous reads and writes
-- [#1440278](https://bugzilla.redhat.com/1440278): [GSS] NFS Sub-directory mount not working on solaris10 client
-- [#1450378](https://bugzilla.redhat.com/1450378): GNFS crashed while taking lock on a file from 2 different clients having same volume mounted from 2 different servers
-- [#1449779](https://bugzilla.redhat.com/1449779): quota: limit-usage command failed with error " Failed to start aux mount"
-- [#1450564](https://bugzilla.redhat.com/1450564): glfsheal: crashed(segfault) with disperse volume in RDMA
-- [#1443501](https://bugzilla.redhat.com/1443501): Don't wind post-op on a brick where the fop phase failed.
-- [#1444892](https://bugzilla.redhat.com/1444892): When either killing or restarting a brick with performance.stat-prefetch on, stat sometimes returns a bad st\_size value.
-- [#1449169](https://bugzilla.redhat.com/1449169): Multiple bricks WILL crash after TCP port probing
-- [#1440805](https://bugzilla.redhat.com/1440805): Update rfc.sh to check Change-Id consistency for backports
-- [#1443010](https://bugzilla.redhat.com/1443010): snapshot: snapshots appear to be failing with respect to secure geo-rep slave
-- [#1445209](https://bugzilla.redhat.com/1445209): snapshot: Unable to take snapshot on a geo-replicated volume, even after stopping the session
-- [#1444773](https://bugzilla.redhat.com/1444773): explicitly specify executor to be bash for tests
-- [#1445407](https://bugzilla.redhat.com/1445407): remove bug-1421590-brick-mux-reuse-ports.t
-- [#1440742](https://bugzilla.redhat.com/1440742): Test files clean up for tier during 3.10
-- [#1448790](https://bugzilla.redhat.com/1448790): [Tiering]: High and low watermark values when set to the same level, is allowed
-- [#1435942](https://bugzilla.redhat.com/1435942): Enabling parallel-readdir causes dht linkto files to be visible on the mount
-- [#1437763](https://bugzilla.redhat.com/1437763): File-level WORM allows ftruncate() on read-only files
-- [#1439148](https://bugzilla.redhat.com/1439148): Parallel readdir on Gluster NFS displays less number of dentries
-
diff --git a/doc/release-notes/3.10.3.md b/doc/release-notes/3.10.3.md
deleted file mode 100644
index f09abc4b4aa..00000000000
--- a/doc/release-notes/3.10.3.md
+++ /dev/null
@@ -1,38 +0,0 @@
-# Release notes for Gluster 3.10.3
-
-This is a bugfix release. The release notes for [3.10.0](3.10.0.md),
-[3.10.1](3.10.1.md) and [3.10.2](3.10.2.md)
-contain a listing of all the new features that were added and
-bugs fixed in the GlusterFS 3.10 stable release.
-
-## Major changes, features and limitations addressed in this release
-1. No Major changes
-
-
-## Major issues
-1. Expanding a gluster volume that is sharded may cause file corruption
-- Sharded volumes are typically used for VM images; if such volumes are
-expanded or possibly contracted (i.e., add/remove bricks and rebalance),
-there are reports of VM images getting corrupted.
-- The status of this bug can be tracked here: [#1426508](https://bugzilla.redhat.com/1426508)
-2. Brick multiplexing is being tested and fixed aggressively, but a few
-  crashes and memory leaks remain to be fixed (a sketch for checking the
-  multiplexing state follows below).
-
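-Deployments affected by the multiplexing issues can check whether the feature
-is active, and turn it off if required. A minimal sketch; the option is
-global, so no volume name is needed:
-
-```bash
-# gluster volume get all cluster.brick-multiplex
-# gluster volume set all cluster.brick-multiplex off
-```
-
-Note that changing this option only affects bricks started afterwards; brick
-processes that are already running keep their current behavior until they are
-restarted.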
-
-## Bugs addressed
-
-A total of 18 patches have been merged, addressing 13 bugs:
-- [#1450053](https://bugzilla.redhat.com/1450053): [GANESHA] Adding a node to existing cluster failed to start pacemaker service on new node
-- [#1450773](https://bugzilla.redhat.com/1450773): Quota: After upgrade from 3.7 to a higher version, gluster quota list command shows "No quota configured on volume repvol"
-- [#1450934](https://bugzilla.redhat.com/1450934): [New] - Replacing an arbiter brick while I/O happens causes vm pause
-- [#1450947](https://bugzilla.redhat.com/1450947): Autoconf leaves unexpanded variables in path names of non-shell-script text files
-- [#1451371](https://bugzilla.redhat.com/1451371): crash in dht\_rmdir\_do
-- [#1451561](https://bugzilla.redhat.com/1451561): AFR returns the node uuid of the same node for every file in the replica
-- [#1451587](https://bugzilla.redhat.com/1451587): cli xml status of detach tier broken
-- [#1451977](https://bugzilla.redhat.com/1451977): Add logs to identify whether disconnects are voluntary or due to network problems
-- [#1451995](https://bugzilla.redhat.com/1451995): Log message shows error code as success even when rpc fails to connect
-- [#1453056](https://bugzilla.redhat.com/1453056): [DHt] : segfault in dht\_selfheal\_dir\_setattr while running regressions
-- [#1453087](https://bugzilla.redhat.com/1453087): Brick Multiplexing: On reboot of a node Brick multiplexing feature lost on that node as multiple brick processes get spawned
-- [#1456682](https://bugzilla.redhat.com/1456682): tierd listens to a port.
-- [#1457054](https://bugzilla.redhat.com/1457054): glusterfs client crash on io-cache.so(\_\_ioc\_page\_wakeup+0x44)
-
diff --git a/doc/release-notes/3.10.4.md b/doc/release-notes/3.10.4.md
deleted file mode 100644
index 0af61479e4e..00000000000
--- a/doc/release-notes/3.10.4.md
+++ /dev/null
@@ -1,38 +0,0 @@
-# Release notes for Gluster 3.10.4
-
-This is a bugfix release. The release notes for [3.10.0](3.10.0.md),
-[3.10.1](3.10.1.md), [3.10.2](3.10.2.md) and [3.10.3](3.10.3.md)
-contain a listing of all the new features that were added and
-bugs fixed in the GlusterFS 3.10 stable release.
-
-## Major changes, features and limitations addressed in this release
-1. No Major changes
-
-
-## Major issues
-1. Expanding a gluster volume that is sharded may cause file corruption
-- Sharded volumes are typically used for VM images; if such volumes are
-expanded or possibly contracted (i.e., add/remove bricks and rebalance),
-there are reports of VM images getting corrupted.
-- The status of this bug can be tracked here: [#1426508](https://bugzilla.redhat.com/1426508)
-2. Brick multiplexing is being tested and fixed aggressively, but a few
-  crashes and memory leaks remain to be fixed.
-3. Another rebalance-related bug is being worked on: [#1467010](https://bugzilla.redhat.com/1467010)
-
-
-## Bugs addressed
-
-A total of 18 patches have been merged, addressing 13 bugs:
-- [#1457732](https://bugzilla.redhat.com/1457732): "split-brain observed [Input/output error]" error messages in samba logs during parallel rm -rf
-- [#1459760](https://bugzilla.redhat.com/1459760): Glusterd segmentation fault in '_Unwind_Backtrace' while running peer probe
-- [#1460649](https://bugzilla.redhat.com/1460649): posix-acl: Whitelist virtual ACL xattrs
-- [#1460914](https://bugzilla.redhat.com/1460914): Rebalance estimate time sometimes shows negative values
-- [#1460993](https://bugzilla.redhat.com/1460993): Revert CLI restrictions on running rebalance in VM store use case
-- [#1461019](https://bugzilla.redhat.com/1461019): [Ganesha] : Grace period is not being adhered to on RHEL 7.4; Clients continue running IO even during grace.
-- [#1462080](https://bugzilla.redhat.com/1462080): [Bitrot]: Inconsistency seen with 'scrub ondemand' - fails to trigger scrub
-- [#1463623](https://bugzilla.redhat.com/1463623): [Ganesha]Bricks got crashed while running posix compliance test suit on V4 mount
-- [#1463641](https://bugzilla.redhat.com/1463641): [Ganesha] Ganesha service failed to start on new node added in existing ganesha cluster
-- [#1464078](https://bugzilla.redhat.com/1464078): with AFR now making both nodes to return UUID for a file will result in georep consuming more resources
-- [#1466852](https://bugzilla.redhat.com/1466852): assorted typos and spelling mistakes from Debian lintian
-- [#1466863](https://bugzilla.redhat.com/1466863): dht_rename_lock_cbk crashes in upstream regression test
-- [#1467269](https://bugzilla.redhat.com/1467269): Heal info shows incorrect status
diff --git a/doc/release-notes/3.10.5.md b/doc/release-notes/3.10.5.md
deleted file mode 100644
index b91bf5b3640..00000000000
--- a/doc/release-notes/3.10.5.md
+++ /dev/null
@@ -1,49 +0,0 @@
-# Release notes for Gluster 3.10.5
-
-This is a bugfix release. The release notes for [3.10.0](3.10.0.md),
-[3.10.1](3.10.1.md), [3.10.2](3.10.2.md), [3.10.3](3.10.3.md) and [3.10.4](3.10.4.md)
-contain a listing of all the new features that were added and
-bugs fixed in the GlusterFS 3.10 stable release.
-
-## Major changes, features and limitations addressed in this release
-**No Major changes**
-
-## Major issues
-1. Expanding a gluster volume that is sharded may cause file corruption
-- Sharded volumes are typically used for VM images; if such volumes are
-expanded or possibly contracted (i.e., add/remove bricks and rebalance),
-there are reports of VM images getting corrupted.
-- The last known cause of corruption, [#1467010](https://bugzilla.redhat.com/show_bug.cgi?id=1467010),
-has a fix in this release. As further testing is still in progress, the issue
-is retained as a major issue; a sketch for verifying the installed version
-follows below.
-2. Brick multiplexing is being tested and fixed aggressively, but a few
-  crashes and memory leaks remain to be fixed.
-
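-Since the fix for the issue above ships with this release, one way to reduce
-risk before resuming expansion is to confirm that every node is actually
-running 3.10.5. A minimal sketch, run on each peer (assuming the build
-supports retrieving `cluster.op-version` via `volume get`):
-
-```bash
-# glusterfs --version
-# gluster volume get all cluster.op-version
-```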
-
-## Bugs addressed
-
-Bugs addressed since release-3.10.4 are listed below.
-
-- [#1467010](https://bugzilla.redhat.com/1467010): Fd based fops fail with EBADF on file migration
-- [#1468126](https://bugzilla.redhat.com/1468126): disperse seek does not correctly handle the end of file
-- [#1468198](https://bugzilla.redhat.com/1468198): [Geo-rep]: entry failed to sync to slave with ENOENT error
-- [#1470040](https://bugzilla.redhat.com/1470040): packaging: Upgrade glusterfs-ganesha sometimes fails to semanage ganesha_use_fusefs
-- [#1470488](https://bugzilla.redhat.com/1470488): gluster volume status --xml fails when there are 100 volumes
-- [#1471028](https://bugzilla.redhat.com/1471028): glusterfs process leaking memory when error occurs
-- [#1471612](https://bugzilla.redhat.com/1471612): metadata heal not happening despite having an active sink
-- [#1471870](https://bugzilla.redhat.com/1471870): cthon04 can cause segfault in gNFS/NLM
-- [#1471917](https://bugzilla.redhat.com/1471917): [GANESHA] Ganesha setup creation fails due to selinux blocking some services required for setup creation
-- [#1472446](https://bugzilla.redhat.com/1472446): packaging: save ganesha config files in (/var)/run/gluster/shared_storage/nfs-ganesha
-- [#1473129](https://bugzilla.redhat.com/1473129): dht/rebalance: Improve rebalance crawl performance
-- [#1473132](https://bugzilla.redhat.com/1473132): dht/cluster: rebalance/remove-brick should honor min-free-disk
-- [#1473133](https://bugzilla.redhat.com/1473133): dht/cluster: rebalance/remove-brick should honor min-free-disk
-- [#1473134](https://bugzilla.redhat.com/1473134): The rebal-throttle setting does not work as expected
-- [#1473136](https://bugzilla.redhat.com/1473136): rebalance: Allow admin to change thread count for rebalance
-- [#1473137](https://bugzilla.redhat.com/1473137): dht: Make throttle option "normal" value uniform across dht_init and dht_reconfigure
-- [#1473140](https://bugzilla.redhat.com/1473140): Fix on demand file migration from client
-- [#1473141](https://bugzilla.redhat.com/1473141): cluster/dht: Fix hardlink migration failures
-- [#1475638](https://bugzilla.redhat.com/1475638): [Scale] : Client logs flooded with "inode context is NULL" error messages
-- [#1476212](https://bugzilla.redhat.com/1476212): [geo-rep]: few of the self healed hardlinks on master did not sync to slave
-- [#1478498](https://bugzilla.redhat.com/1478498): scripts: invalid test in S32gluster_enable_shared_storage.sh
-- [#1478499](https://bugzilla.redhat.com/1478499): packaging: /var/lib/glusterd/options should be %config(noreplace)
-- [#1480594](https://bugzilla.redhat.com/1480594): nfs process crashed in "nfs3_getattr"
\ No newline at end of file
diff --git a/doc/release-notes/3.10.6.md b/doc/release-notes/3.10.6.md
deleted file mode 100644
index eb911bb1414..00000000000
--- a/doc/release-notes/3.10.6.md
+++ /dev/null
@@ -1,45 +0,0 @@
-# Release notes for Gluster 3.10.6
-
-This is a bugfix release. The release notes for [3.10.0](3.10.0.md),
-[3.10.1](3.10.1.md), [3.10.2](3.10.2.md), [3.10.3](3.10.3.md), [3.10.4](3.10.4.md) and [3.10.5](3.10.5.md)
-contain a listing of all the new features that were added and
-bugs fixed in the GlusterFS 3.10 stable release.
-
-## Major changes, features and limitations addressed in this release
-**No Major changes**
-
-## Major issues
-1. Expanding a gluster volume that is sharded may cause file corruption
-- Sharded volumes are typically used for VM images; if such volumes are
-expanded or possibly contracted (i.e., add/remove bricks and rebalance),
-there are reports of VM images getting corrupted.
-- A fix for the last known cause of corruption, [#1498081](https://bugzilla.redhat.com/show_bug.cgi?id=1498081),
-is still pending and is not yet part of this release.
-2. Brick multiplexing is being tested and fixed aggressively, but a few
-  crashes and memory leaks remain to be fixed.
-
-
-## Bugs addressed
-
-Bugs addressed since release-3.10.5 are listed below.
-
-- [#1467010](https://bugzilla.redhat.com/1467010): Fd based fops fail with EBADF on file migration
-- [#1481394](https://bugzilla.redhat.com/1481394): libgfapi: memory leak in glfs_h_acl_get
-- [#1482857](https://bugzilla.redhat.com/1482857): glusterd fails to start
-- [#1483997](https://bugzilla.redhat.com/1483997): packaging: use rdma-core(-devel) instead of ibverbs, rdmacm; disable rdma on armv7hl
-- [#1484443](https://bugzilla.redhat.com/1484443): packaging: /run and /var/run; prefer /run
-- [#1486542](https://bugzilla.redhat.com/1486542): "ganesha.so cannot open" warning message in glusterd log in non-ganesha setup.
-- [#1487042](https://bugzilla.redhat.com/1487042): AFR returns the node uuid of the same node for every file in the replica
-- [#1487647](https://bugzilla.redhat.com/1487647): with AFR now making both nodes to return UUID for a file will result in georep consuming more resources
-- [#1488391](https://bugzilla.redhat.com/1488391): gluster-blockd process crashed and core generated
-- [#1488719](https://bugzilla.redhat.com/1488719): [RHHI] cannot boot vms created from template when disk format = qcow2
-- [#1490909](https://bugzilla.redhat.com/1490909): [Ganesha] : Unable to bring up a Ganesha HA cluster on SELinux disabled machines on latest gluster bits.
-- [#1491166](https://bugzilla.redhat.com/1491166): GlusterD returns a bad memory pointer in glusterd_get_args_from_dict()
-- [#1491691](https://bugzilla.redhat.com/1491691): rpc: TLSv1_2_method() is deprecated in OpenSSL-1.1
-- [#1491966](https://bugzilla.redhat.com/1491966): AFR entry self heal removes a directory's .glusterfs symlink.
-- [#1491985](https://bugzilla.redhat.com/1491985): Add NULL gfid checks before creating file
-- [#1491995](https://bugzilla.redhat.com/1491995): afr: check op_ret value in __afr_selfheal_name_impunge
-- [#1492010](https://bugzilla.redhat.com/1492010): Launch metadata heal in discover code path.
-- [#1495430](https://bugzilla.redhat.com/1495430): Make event-history feature configurable and have it disabled by default
-- [#1496321](https://bugzilla.redhat.com/1496321): [afr] split-brain observed on T files post hardlink and rename in x3 volume
-- [#1497122](https://bugzilla.redhat.com/1497122): Crash in dht_check_and_open_fd_on_subvol_task()