Diffstat (limited to 'doc/release-notes')
-rw-r--r--  doc/release-notes/3.7.0.md                          167
-rw-r--r--  doc/release-notes/3.7.1.md                           99
-rw-r--r--  doc/release-notes/3.7.10.md                          63
-rw-r--r--  doc/release-notes/3.7.2.md                          135
-rw-r--r--  doc/release-notes/3.7.3.md                          178
-rw-r--r--  doc/release-notes/3.7.4.md                          121
-rw-r--r--  doc/release-notes/3.7.5.md                           77
-rw-r--r--  doc/release-notes/3.7.6.md                           76
-rw-r--r--  doc/release-notes/3.7.7.md                          171
-rw-r--r--  doc/release-notes/3.7.8.md                           24
-rw-r--r--  doc/release-notes/3.7.9.md                          134
-rw-r--r--  doc/release-notes/geo-rep-in-3.7                    211
-rw-r--r--  doc/release-notes/upgrading-from-3.7.2-or-older.md   37
13 files changed, 0 insertions, 1493 deletions
diff --git a/doc/release-notes/3.7.0.md b/doc/release-notes/3.7.0.md
deleted file mode 100644
index bf542b7233b..00000000000
--- a/doc/release-notes/3.7.0.md
+++ /dev/null
@@ -1,167 +0,0 @@
-## Release Notes for GlusterFS 3.7.0
-
-## Major Changes and Features
-
-Documentation about major changes and features is included in the [`doc/features/` directory](https://github.com/gluster/glusterfs/tree/release-3.7/doc/features) of GlusterFS repository.
-
-### Bitrot Detection
-
-Bitrot detection is a technique used to identify an “insidious” type of disk error where data is silently corrupted with no indication from the disk to the storage software layer that an error has occurred. When bitrot detection is enabled on a volume, Gluster signs all files/objects in the volume and periodically scrubs the data to verify the signatures. All anomalies observed are noted in the log files.
-
-For more information, see [here](http://www.gluster.org/community/documentation/index.php/Features/BitRot).
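-
-For example, bitrot detection can be enabled per volume and its scrubber tuned from the CLI. A minimal sketch (option values are illustrative; check the 3.7 CLI help for the exact syntax):
-
-~~~
-# gluster volume bitrot <volname> enable
-# gluster volume bitrot <volname> scrub-frequency daily
-# gluster volume bitrot <volname> scrub-throttle lazy
-~~~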
-
-### Multi threaded epoll for performance improvements
-
-Gluster 3.7 introduces multiple threads to dequeue and process more requests from epoll queues. This improves performance by processing more I/O requests. Workloads that involve read/write operations on a lot of small files can benefit from this enhancement.
-
-For more information, see [here](http://www.gluster.org/community/documentation/index.php/Features/Feature_Smallfile_Perf).
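-
-The thread count is tunable per volume; a minimal sketch (values are illustrative; the option names were introduced along with this feature):
-
-~~~
-# gluster volume set <volname> client.event-threads 4
-# gluster volume set <volname> server.event-threads 4
-~~~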
-
-### Volume Tiering [Experimental]
-
-Volume tiering provides policy-based placement of files across tiers. This feature will serve as a foundational piece for building support for data classification.
-
-For more information, see [here](http://www.gluster.org/community/documentation/index.php/Features/data-classification).
-
-Volume Tiering is marked as an experimental feature for this release. It is expected to be fully supported in a 3.7.x minor release.
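-
-A hot tier is attached to and detached from an existing volume through the CLI; a rough sketch (host names and brick paths are placeholders):
-
-~~~
-# gluster volume attach-tier <volname> replica 2 <host1>:/ssd/brick1 <host2>:/ssd/brick2
-# gluster volume detach-tier <volname> start
-# gluster volume detach-tier <volname> commit
-~~~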
-
-### Trashcan
-
-This feature will enable administrators to temporarily store deleted files from Gluster volumes for a specified time period.
-
-For more information, see [here](http://www.gluster.org/community/documentation/index.php/Features/Trash).
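-
-The trash translator is enabled per volume; a minimal sketch (option names as described on the Trash feature page, values illustrative):
-
-~~~
-# gluster volume set <volname> features.trash on
-# gluster volume set <volname> features.trash-max-filesize 1GB
-~~~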
-
-### Efficient Object Count and Inode Quota Support
-
-This improvement provides an easy mechanism to retrieve the number of objects per directory or volume. The count of objects/files within a directory hierarchy is stored as an extended attribute of the directory, which can be queried to retrieve the count.
-
-For more information, see [here](http://www.gluster.org/community/documentation/index.php/Features/Object_Count).
-
-This feature has been utilized to add support for inode quotas.
-
-For more details about inode quotas, refer [here](https://github.com/gluster/glusterfs/blob/master/doc/features/quota/quota-object-count.md).
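-
-For example, inode/object limits are managed through the quota CLI (the directory path and limit value are placeholders):
-
-~~~
-# gluster volume quota <volname> enable
-# gluster volume quota <volname> limit-objects /projects 10000
-# gluster volume quota <volname> list-objects /projects
-~~~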
-
-### Pro-active Self healing for Erasure Coding
-
-Gluster 3.7 adds pro-active self healing support for erasure coded volumes.
-
-### Exports and Netgroups Authentication for NFS
-
-This feature adds Linux-style exports & netgroups authentication to the native NFS server. This enables administrators to restrict access to specific clients & netgroups for volume/sub-directory NFSv3 exports.
-
-For more information, see [here](http://www.gluster.org/community/documentation/index.php/Features/Exports_Netgroups_Authentication).
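-
-A rough sketch of the exports-style configuration (the file location, line format and option name are taken from the feature page and are assumptions here, not part of these notes):
-
-~~~
-# cat /var/lib/glusterd/nfs/exports
-/<volname> client1.example.com(rw) @trusted-hosts(ro)
-# gluster volume set <volname> nfs.exports-auth-enable on
-~~~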
-
-### GlusterFind
-
-GlusterFind is a new tool that provides a mechanism to monitor data events within a volume. Detection of events like modified files is made easier without having to traverse the entire volume.
-
-For more information, see [here](https://github.com/gluster/glusterfs/blob/release-3.7/doc/tools/glusterfind.md).
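-
-Typical usage is to create a session and then query the changes since the last run; a minimal sketch (the session name and output file are placeholders):
-
-~~~
-# glusterfind create mysession <volname>
-# glusterfind pre mysession <volname> /tmp/changes.txt
-# glusterfind post mysession <volname>
-~~~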
-
-### Rebalance Performance Improvements
-
-Rebalance and remove-brick operations in Gluster get a performance boost from faster identification of files that need to be moved and a multi-threaded mechanism that moves all such files.
-
-For more information, see [here](http://www.gluster.org/community/documentation/index.php/Features/improve_rebalance_performance).
-
-### NFSv4 and pNFS support
-
-Gluster 3.7 supports export of volumes through NFSv4, NFSv4.1 and pNFS. This support is enabled via NFS Ganesha. Infrastructure changes done in Gluster 3.7 to support this feature include:
-
-- Addition of upcall infrastructure for cache invalidation.
-- Support for lease locks and delegations.
-- Support for enabling Ganesha through Gluster CLI.
-- Corosync and pacemaker based implementation providing resource monitoring and failover to accomplish NFS HA.
-
-For more information, see the links below:
-
-- [NFS Ganesha Integration](https://github.com/gluster/glusterfs/blob/release-3.7/doc/features/glusterfs_nfs-ganesha_integration.md)
-- [Upcall Infrastructure](http://www.gluster.org/community/documentation/index.php/Features/Upcall-infrastructure)
-- [Gluster CLI for NFS Ganesha](http://www.gluster.org/community/documentation/index.php/Features/Gluster_CLI_for_ganesha)
-- [High Availability for NFS Ganesha](http://www.gluster.org/community/documentation/index.php/Features/HA_for_ganesha)
-- [pNFS support for Gluster](https://github.com/gluster/glusterfs/blob/release-3.7/doc/features/mount_gluster_volume_using_pnfs.md)
-
-pNFS support for Gluster volumes and NFSv4 delegations are in beta for this release. Infrastructure changes to support Lease locks and NFSv4 delegations are targeted for a 3.7.x minor release.
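-
-As a brief illustration of the CLI integration mentioned above (command and option names as described on the NFS Ganesha feature pages; shown here only as a sketch):
-
-~~~
-# gluster volume set all cluster.enable-shared-storage enable
-# gluster nfs-ganesha enable
-# gluster volume set <volname> ganesha.enable on
-~~~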
-
-### Snapshot Scheduling
-
-With this enhancement, administrators can schedule volume snapshots.
-
-For more information, see [here](http://www.gluster.org/community/documentation/index.php/Features/Scheduling_of_Snapshot).
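-
-Scheduling is driven by the `snap_scheduler.py` helper; a minimal sketch (the job name and cron expression are placeholders):
-
-~~~
-# snap_scheduler.py init
-# snap_scheduler.py add "daily-snap" "0 2 * * *" <volname>
-# snap_scheduler.py list
-~~~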
-
-### Snapshot Cloning
-
-Volume snapshots can now be cloned to create a new writeable volume.
-
-For more information, see [here](http://www.gluster.org/community/documentation/index.php/Features/Clone_of_Snapshot).
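-
-For example, assuming an existing snapshot whose name is taken from `gluster snapshot list` (the clone name is a placeholder):
-
-~~~
-# gluster snapshot clone clone-vol <snapname>
-# gluster volume start clone-vol
-~~~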
-
-### Sharding [Experimental]
-
-Sharding addresses the problem of fragmentation of space within a volume. This feature adds support for files that are larger than the size of an individual brick. Sharding works by chunking files into blobs of a configurable size.
-
-For more information, see [here](http://www.gluster.org/community/documentation/index.php/Features/sharding-xlator).
-
-Sharding is an experimental feature for this release. It is expected to be fully supported in a 3.7.x minor release.
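-
-Sharding is enabled per volume; a minimal sketch (the block size value is illustrative):
-
-~~~
-# gluster volume set <volname> features.shard on
-# gluster volume set <volname> features.shard-block-size 64MB
-~~~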
-
-### RCU in glusterd
-
-Thread synchronization and critical section access have been improved by introducing userspace RCU (Read-Copy-Update) in glusterd.
-
-### Arbiter Volumes
-
-Arbiter volumes are 3-way replicated volumes where the 3rd brick of the replica is automatically configured as an arbiter. The 3rd brick contains only metadata, which provides network partition tolerance and prevents split-brains.
-
-For more information, see [here](https://github.com/gluster/glusterfs/blob/release-3.7/doc/features/afr-arbiter-volumes.md).
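-
-For example, an arbiter volume is created by specifying the arbiter count at volume creation time (host names and brick paths are placeholders):
-
-~~~
-# gluster volume create <volname> replica 3 arbiter 1 \
-      server1:/bricks/b1 server2:/bricks/b2 server3:/bricks/arbiter
-~~~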
-
-### Better split-brain resolution
-
-Split-brain resolution can now also be driven by users, without administrative intervention.
-
-For more information, see the 'Resolution of split-brain from the mount point' section [here](https://github.com/gluster/glusterfs/blob/release-3.7/doc/features/heal-info-and-split-brain-resolution.md).
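-
-A rough sketch of the mount-based workflow (the virtual xattr names follow the document linked above and are assumptions here; the file path and client subvolume name are placeholders):
-
-~~~
-# getfattr -n replica.split-brain-status <path-to-file>
-# setfattr -n replica.split-brain-choice -v <volname>-client-0 <path-to-file>
-# setfattr -n replica.split-brain-heal-finalize -v <volname>-client-0 <path-to-file>
-~~~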
-
-### Geo-replication improvements
-
-There have been several improvements in geo-replication for stability and performance. For more details, see [here](https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/geo-rep-in-3.7).
-
-### Minor Improvements
-
-* Message ID based logging has been added for several translators.
-* Quorum support for reads.
-* Snapshot names contain timestamps by default. Subsequent access to the snapshots should be done using the name listed in `gluster snapshot list`.
-* Support for `gluster volume get <volname>` has been added (see the example below).
-* libgfapi has added handle based functions to get/set POSIX ACLs based on common libacl structures.
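-
-For example, the new `volume get` interface can show a single option or all options for a volume:
-
-~~~
-# gluster volume get <volname> cluster.server-quorum-type
-# gluster volume get <volname> all
-~~~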
-
-### Known Issues
-
-* Enabling Bitrot on volumes with more than 2 bricks on a node is known to cause problems.
-* Addition of bricks dynamically to cold or hot tiers in a tiered volume is not supported.
-* The following configuration changes are necessary for qemu and samba integration with libgfapi to work seamlessly:
-
- ~~~
- # gluster volume set <volname> server.allow-insecure on
- ~~~
-
- Edit `/etc/glusterfs/glusterd.vol` to contain this line: `option rpc-auth-allow-insecure on`
-
- After the first change (the volume set command), restarting the volume is necessary:
-
- ~~~
- # gluster volume stop <volname>
- # gluster volume start <volname>
- ~~~
-
- After the second change (the glusterd.vol edit), restarting glusterd is necessary:
-
- ~~~
- # service glusterd restart
- ~~~
-
- or
-
- ~~~
- # systemctl restart glusterd
- ~~~
-
-### Upgrading to 3.7.0
-
-Instructions for upgrading from previous versions of GlusterFS are maintained on [this wiki page](http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.7).
diff --git a/doc/release-notes/3.7.1.md b/doc/release-notes/3.7.1.md
deleted file mode 100644
index a0763a05a67..00000000000
--- a/doc/release-notes/3.7.1.md
+++ /dev/null
@@ -1,99 +0,0 @@
-## Release Notes for GlusterFS 3.7.1
-
-This is a bugfix release. The [Release Notes for 3.7.0](3.7.0.md) contain a
-listing of all the new features that were added.
-
-**Note**: Enabling Bitrot on volumes with more than 2 bricks on a node works with this release.
-
-### Bugs Fixed
-
-- [1212676](http://bugzilla.redhat.com/1212676): NetBSD port
-- [1218863](http://bugzilla.redhat.com/1218863): `ls' on a directory which has files with mismatching gfid's does not list anything
-- [1219782](http://bugzilla.redhat.com/1219782): Regression failures in tests/bugs/snapshot/bug-1112559.t
-- [1221000](http://bugzilla.redhat.com/1221000): detach-tier status emulates like detach-tier stop
-- [1221470](http://bugzilla.redhat.com/1221470): dHT rebalance: Dict_copy log messages when running rebalance on a dist-rep volume
-- [1221476](http://bugzilla.redhat.com/1221476): Data Tiering:rebalance fails on a tiered volume
-- [1221477](http://bugzilla.redhat.com/1221477): The tiering feature requires counters.
-- [1221503](http://bugzilla.redhat.com/1221503): DHT Rebalance : Misleading log messages for linkfiles
-- [1221507](http://bugzilla.redhat.com/1221507): NFS-Ganesha: ACL should not be enabled by default
-- [1221534](http://bugzilla.redhat.com/1221534): rebalance failed after attaching the tier to the volume.
-- [1221967](http://bugzilla.redhat.com/1221967): Do not allow detach-tier commands on a non tiered volume
-- [1221969](http://bugzilla.redhat.com/1221969): tiering: use sperate log/socket/pid file for tiering
-- [1222198](http://bugzilla.redhat.com/1222198): Fix nfs/mount3.c build warnings reported in Koji
-- [1222750](http://bugzilla.redhat.com/1222750): non-root geo-replication session goes to faulty state, when the session is started
-- [1222869](http://bugzilla.redhat.com/1222869): [SELinux] [BVT]: Selinux throws AVC errors while running DHT automation on Rhel6.6
-- [1223215](http://bugzilla.redhat.com/1223215): gluster volume status fails with locking failed error message
-- [1223286](http://bugzilla.redhat.com/1223286): [geo-rep]: worker died with "ESTALE" when performed rm -rf on a directory from mount of master volume
-- [1223644](http://bugzilla.redhat.com/1223644): [geo-rep]: With tarssh the file is created at slave but it doesnt get sync
-- [1224100](http://bugzilla.redhat.com/1224100): [geo-rep]: Even after successful sync, the DATA counter did not reset to 0
-- [1224241](http://bugzilla.redhat.com/1224241): gfapi: zero size issue in glfs_h_acl_set()
-- [1224292](http://bugzilla.redhat.com/1224292): peers connected in the middle of a transaction are participating in the transaction
-- [1224647](http://bugzilla.redhat.com/1224647): [RFE] Provide hourly scrubbing option
-- [1224650](http://bugzilla.redhat.com/1224650): SIGNING FAILURE Error messages are poping up in the bitd log
-- [1224894](http://bugzilla.redhat.com/1224894): Quota: spurious failures with quota testcases
-- [1225077](http://bugzilla.redhat.com/1225077): Fix regression test spurious failures
-- [1225279](http://bugzilla.redhat.com/1225279): Different client can not execute "for((i=0;i<1000;i++));do ls -al;done" in a same directory at the sametime
-- [1225318](http://bugzilla.redhat.com/1225318): glusterd could crash in remove-brick-status when local remove-brick process has just completed
-- [1225320](http://bugzilla.redhat.com/1225320): ls command failed with features.read-only on while mounting ec volume.
-- [1225331](http://bugzilla.redhat.com/1225331): [geo-rep] stop-all-gluster-processes.sh fails to stop all gluster processes
-- [1225543](http://bugzilla.redhat.com/1225543): [geo-rep]: snapshot creation timesout even if geo-replication is in pause/stop/delete state
-- [1225552](http://bugzilla.redhat.com/1225552): [Backup]: Unable to create a glusterfind session
-- [1225709](http://bugzilla.redhat.com/1225709): [RFE] Move signing trigger mechanism to [f]setxattr()
-- [1225743](http://bugzilla.redhat.com/1225743): [AFR-V2] - afr_final_errno() should treat op_ret > 0 also as success
-- [1225796](http://bugzilla.redhat.com/1225796): Spurious failure in tests/bugs/disperse/bug-1161621.t
-- [1225919](http://bugzilla.redhat.com/1225919): Log EEXIST errors in DEBUG level in fops MKNOD and MKDIR
-- [1225922](http://bugzilla.redhat.com/1225922): Sharding - Skip update of block count and size for directories in readdirp callback
-- [1226024](http://bugzilla.redhat.com/1226024): cli/tiering:typo errors in tiering
-- [1226029](http://bugzilla.redhat.com/1226029): I/O's hanging on tiered volumes (NFS)
-- [1226032](http://bugzilla.redhat.com/1226032): glusterd crashed on the node when tried to detach a tier after restoring data from the snapshot.
-- [1226117](http://bugzilla.redhat.com/1226117): [RFE] Return proper error codes in case of snapshot failure
-- [1226120](http://bugzilla.redhat.com/1226120): [Snapshot] Do not run scheduler if ovirt scheduler is running
-- [1226139](http://bugzilla.redhat.com/1226139): Implement MKNOD fop in bit-rot.
-- [1226146](http://bugzilla.redhat.com/1226146): BitRot :- bitd is not signing Objects if more than 3 bricks are present on same node
-- [1226153](http://bugzilla.redhat.com/1226153): Quota: Do not allow set/unset of quota limit in heterogeneous cluster
-- [1226629](http://bugzilla.redhat.com/1226629): bug-973073.t fails spuriously
-- [1226853](http://bugzilla.redhat.com/1226853): Volume start fails when glusterfs is source compiled with GCC v5.1.1
-
-### Known Issues
-
-- [1227677](http://bugzilla.redhat.com/1227677): Glusterd crashes and cannot start after rebalance
-- [1227656](http://bugzilla.redhat.com/1227656): Glusted dies when adding new brick to a distributed volume and converting to replicated volume
-- [1210256](http://bugzilla.redhat.com/1210256): gluster volume info --xml gives back incorrect typrStr in xml
-- [1212842](http://bugzilla.redhat.com/1212842): tar on a glusterfs mount displays "file changed as we read it" even though the file was not changed
-- [1220347](http://bugzilla.redhat.com/1220347): Read operation on a file which is in split-brain condition is successful
-- [1213352](http://bugzilla.redhat.com/1213352): nfs-ganesha: HA issue, the iozone process is not moving ahead, once the nfs-ganesha is killed
-- [1220270](http://bugzilla.redhat.com/1220270): nfs-ganesha: Rename fails while exectuing Cthon general category test
-- [1214169](http://bugzilla.redhat.com/1214169): glusterfsd crashed while rebalance and self-heal were in progress
-- [1221941](http://bugzilla.redhat.com/1221941): glusterfsd: bricks crash while executing ls on nfs-ganesha vers=3
-- [1225809](http://bugzilla.redhat.com/1225809): [DHT-REBALANCE]-DataLoss: The data appended to a file during its migration will be lost once the migration is done
-- [1225940](http://bugzilla.redhat.com/1225940): DHT: lookup-unhashed feature breaks runtime compatibility with older client versions
-
-
-- Addition of bricks dynamically to cold or hot tiers in a tiered volume is not supported.
-- The following configuration changes are necessary for qemu and samba integration with libgfapi to work seamlessly:
-
- ~~~
- # gluster volume set <volname> server.allow-insecure on
- ~~~
-
- Edit `/etc/glusterfs/glusterd.vol` to contain this line: `option rpc-auth-allow-insecure on`
-
- After the first change (the volume set command), restarting the volume is necessary:
-
- ~~~
- # gluster volume stop <volname>
- # gluster volume start <volname>
- ~~~
-
- After the second change (the glusterd.vol edit), restarting glusterd is necessary:
-
- ~~~
- # service glusterd restart
- ~~~
-
- or
-
- ~~~
- # systemctl restart glusterd
- ~~~
-
diff --git a/doc/release-notes/3.7.10.md b/doc/release-notes/3.7.10.md
deleted file mode 100644
index 85df72ebc79..00000000000
--- a/doc/release-notes/3.7.10.md
+++ /dev/null
@@ -1,63 +0,0 @@
-# Release notes for GlusterFS-v3.7.10
-
-GlusterFS-v3.7.10 is back on the regular release schedule after the long-drawn-out 3.7.9 release.
-
-## Bugs fixed
-
-The following bugs have been fixed in 3.7.10:
-
-- [1299712](https://bugzilla.redhat.com/1299712) - [HC] Implement fallocate, discard and zerofill with sharding
-- [1304963](https://bugzilla.redhat.com/1304963) - [GlusterD]: After log rotate of cmd_history.log file, the next executed gluster commands are not present in the cmd_history.log file.
-- [1310445](https://bugzilla.redhat.com/1310445) - Gluster not resolving hosts with IPv6 only lookups
-- [1311441](https://bugzilla.redhat.com/1311441) - Fix mem leaks related to gfapi applications
-- [1311578](https://bugzilla.redhat.com/1311578) - SMB: SMB crashes with AIO enabled on reads + vers=3.0
-- [1312721](https://bugzilla.redhat.com/1312721) - tar complains: <fileName>: file changed as we read it
-- [1313312](https://bugzilla.redhat.com/1313312) - Client self-heals block the FOP that triggered the heals
-- [1313623](https://bugzilla.redhat.com/1313623) - [georep+disperse]: Geo-Rep session went to faulty with errors "[Errno 5] Input/output error"
-- [1314366](https://bugzilla.redhat.com/1314366) - Peer information is not propagated to all the nodes in the cluster, when the peer is probed with its second interface FQDN/IP
-- [1315141](https://bugzilla.redhat.com/1315141) - RFE: "heal" commands output should have a fixed fields
-- [1315147](https://bugzilla.redhat.com/1315147) - Peer probe from a reinstalled node should fail
-- [1315626](https://bugzilla.redhat.com/1315626) - glusterd crashed when probing a node with firewall enabled on only one node
-- [1315628](https://bugzilla.redhat.com/1315628) - After resetting diagnostics.client-log-level, still Debug messages are logging in scrubber log
-- [1316099](https://bugzilla.redhat.com/1316099) - AFR+SNAPSHOT: File with hard link have different inode number in USS
-- [1316391](https://bugzilla.redhat.com/1316391) - Brick ports get changed after GlusterD restart
-- [1316806](https://bugzilla.redhat.com/1316806) - snapd doesn't come up automatically after node reboot.
-- [1316808](https://bugzilla.redhat.com/1316808) - Data Tiering:tier volume status shows as in-progress on all nodes of a cluster even if the node is not part of volume
-- [1317363](https://bugzilla.redhat.com/1317363) - Errors seen in cli.log, while executing the command 'gluster snapshot info --xml'
-- [1317366](https://bugzilla.redhat.com/1317366) - Tier: Actual files are not demoted and keep on trying to demoted deleted files
-- [1317425](https://bugzilla.redhat.com/1317425) - "gluster_shared_storage"
-- [1317482](https://bugzilla.redhat.com/1317482) - Different epoch values for each of NFS-Ganesha heads
-- [1317788](https://bugzilla.redhat.com/1317788) - Cache swift xattrs
-- [1317861](https://bugzilla.redhat.com/1317861) - Probing a new node, which is part of another cluster, should throw proper error message in logs and CLI
-- [1317863](https://bugzilla.redhat.com/1317863) - glfs_dup() functionality is broken
-- [1318498](https://bugzilla.redhat.com/1318498) - [Tier]: Following volume restart, tierd shows failure at status on some nodes
-- [1318505](https://bugzilla.redhat.com/1318505) - gluster volume status xml output of tiered volume has all the common services tagged under <coldBricks>
-- [1318750](https://bugzilla.redhat.com/1318750) - bash tab completion fails with "grep: Invalid range end"
-- [1318965](https://bugzilla.redhat.com/1318965) - disperse: Provide an option to enable/disable eager lock
-- [1319645](https://bugzilla.redhat.com/1319645) - setting enable-shared-storage without mentioning the domain, doesn't enables shared storage
-- [1319649](https://bugzilla.redhat.com/1319649) - libglusterfs : glusterd was not restarting after setting key=value length beyond PATH_MAX (4096) character
-- [1319989](https://bugzilla.redhat.com/1319989) - smbd crashes while accessing multiple volume shares via same client
-- [1320020](https://bugzilla.redhat.com/1320020) - add-brick on a replicate volume could lead to data-loss
-- [1320024](https://bugzilla.redhat.com/1320024) - Client's App is having issues retrieving files from share 1002976973
-- [1320367](https://bugzilla.redhat.com/1320367) - Add a script that converts the gfid-string of a directory into absolute path name w.r.t the brick path.
-- [1320374](https://bugzilla.redhat.com/1320374) - Glusterd crashed just after a peer probe command failed.
-- [1320377](https://bugzilla.redhat.com/1320377) - Setting of any option using volume set fails when the clients are in older version.
-- [1320821](https://bugzilla.redhat.com/1320821) - volume set on user.* domain trims all white spaces in the value
-- [1320892](https://bugzilla.redhat.com/1320892) - Over some time Files which were accessible become inaccessible(music files)
-- [1321514](https://bugzilla.redhat.com/1321514) - [GSS]-gluster v heal volname info does not work with enabled ssl/tls
-- [1322242](https://bugzilla.redhat.com/1322242) - Installing glusterfs-ganesha-3.7.9-1.el6rhs.x86_64 fails with dependency on /usr/bin/dbus-send
-- [1322431](https://bugzilla.redhat.com/1322431) - pre failed: Traceback ...
-- [1322516](https://bugzilla.redhat.com/1322516) - RFE: Need type of gfid in index_readdir
-- [1322521](https://bugzilla.redhat.com/1322521) - Choose self-heal source as local subvolume if possible
-- [1322552](https://bugzilla.redhat.com/1322552) - Self-heal and manual heal not healing some file
-
-### Known Issues
-
-[1322772](https://bugzilla.redhat.com/1322772): glusterd: glusterd didn't come up after node reboot error" realpath () failed for brick /run/gluster/snaps/130949baac8843cda443cf8a6441157f/brick3/b3. The underlying file system may be in bad state [No such file or directory]"
-* Problem: If snapshots are activated and the cluster has some snapshots, and a node is then rebooted, the glusterd instance does not come up and the error "The underlying file system may be in bad state [No such file or directory]" is seen in the glusterd log file.
-* Workaround: run [this script](https://gist.github.com/atinmu/a3682ba6782e1d79cf4362d040a89bd1#file-bz1322772-work-around-sh) and then restart the glusterd service on all the nodes.
-
-[1323287](https://bugzilla.redhat.com/1323287): TIER : Attach tier fails
-* Problem: This is not a tiering issue; it lies in glusterd. On a multi-node cluster, if one node/glusterd instance is down while volume operations are performed, then when the faulty node or glusterd instance comes back up, the real_path information does not get repopulated for all the existing bricks, causing subsequent volume create/attach-tier/add-brick commands to fail.
-* Workaround: restart the glusterd instance once again.
-
diff --git a/doc/release-notes/3.7.2.md b/doc/release-notes/3.7.2.md
deleted file mode 100644
index acb51245fdf..00000000000
--- a/doc/release-notes/3.7.2.md
+++ /dev/null
@@ -1,135 +0,0 @@
-## Release Notes for GlusterFS 3.7.2
-
-This is a bugfix release. The Release Notes for [3.7.0](3.7.0.md) and [3.7.1](3.7.1.md) contain a listing of all the new
-features that were added and bugs fixed in the GlusterFS 3.7 stable releases.
-
-### Bugs Fixed
-
-- [1218570](https://bugzilla.redhat.com/1218570): `gluster volume heal <vol-name> split-brain' tries to heal even with insufficient arguments
-- [1219953](https://bugzilla.redhat.com/1219953): The python-gluster RPM should be 'noarch'
-- [1221473](https://bugzilla.redhat.com/1221473): BVT: Posix crash while running BVT on 3.7beta2 build on rhel6.6
-- [1221656](https://bugzilla.redhat.com/1221656): rebalance failing on one of the node
-- [1221941](https://bugzilla.redhat.com/1221941): glusterfsd: bricks crash while executing ls on nfs-ganesha vers=3
-- [1222065](https://bugzilla.redhat.com/1222065): GlusterD fills the logs when the NFS-server is disabled
-- [1223390](https://bugzilla.redhat.com/1223390): packaging: .pc files included in -api-devel should be in -devel
-- [1223890](https://bugzilla.redhat.com/1223890): readdirp return 64bits inodes even if enable-ino32 is set
-- [1225320](https://bugzilla.redhat.com/1225320): ls command failed with features.read-only on while mounting ec volume.
-- [1225548](https://bugzilla.redhat.com/1225548): [Backup]: Misleading error message when glusterfind delete is given with non-existent volume
-- [1225551](https://bugzilla.redhat.com/1225551): [Backup]: Glusterfind session entry persists even after volume is deleted
-- [1225565](https://bugzilla.redhat.com/1225565): [Backup]: RFE - Glusterfind CLI commands need to respond based on volume's start/stop state
-- [1225574](https://bugzilla.redhat.com/1225574): [geo-rep]: client-rpc-fops.c:172:client3_3_symlink_cbk can be handled better/or ignore these messages in the slave cluster log
-- [1225796](https://bugzilla.redhat.com/1225796): Spurious failure in tests/bugs/disperse/bug-1161621.t
-- [1225809](https://bugzilla.redhat.com/1225809): [DHT-REBALANCE]-DataLoss: The data appended to a file during its migration will be lost once the migration is done
-- [1225839](https://bugzilla.redhat.com/1225839): [DHT:REBALANCE]: xattrs set on the file during rebalance migration will be lost after migration is over
-- [1225842](https://bugzilla.redhat.com/1225842): Minor improvements and cleanup for the build system
-- [1225859](https://bugzilla.redhat.com/1225859): Glusterfs client crash during fd migration after graph switch
-- [1225940](https://bugzilla.redhat.com/1225940): DHT: lookup-unhashed feature breaks runtime compatibility with older client versions
-- [1225999](https://bugzilla.redhat.com/1225999): Update gluster op version to 30701
-- [1226117](https://bugzilla.redhat.com/1226117): [RFE] Return proper error codes in case of snapshot failure
-- [1226213](https://bugzilla.redhat.com/1226213): snap_scheduler script must be usable as python module.
-- [1226224](https://bugzilla.redhat.com/1226224): [RFE] Quota: Make "quota-deem-statfs" option ON, by default, when quota is enabled.
-- [1226272](https://bugzilla.redhat.com/1226272): Volume heal info not reporting files in split brain and core dumping, after upgrading to 3.7.0
-- [1226789](https://bugzilla.redhat.com/1226789): quota: ENOTCONN parodically seen in logs when setting hard/soft timeout during I/O.
-- [1226792](https://bugzilla.redhat.com/1226792): Statfs is hung because of frame loss in quota
-- [1226880](https://bugzilla.redhat.com/1226880): Fix infinite looping in shard_readdir(p) on '/'
-- [1226962](https://bugzilla.redhat.com/1226962): nfs-ganesha: Getting issues for nfs-ganesha on new nodes of glusterfs,error is /etc/ganesha/ganesha-ha.conf: line 11: VIP_<hostname with fqdn>=<ip>: command not found
-- [1227028](https://bugzilla.redhat.com/1227028): nfs-ganesha: Discrepancies with lock states recovery during migration
-- [1227167](https://bugzilla.redhat.com/1227167): NFS: IOZone tests hang, disconnects and hung tasks seen in logs.
-- [1227235](https://bugzilla.redhat.com/1227235): glusterfsd crashed on a quota enabled volume where snapshots were scheduled
-- [1227572](https://bugzilla.redhat.com/1227572): Sharding - Fix posix compliance test failures.
-- [1227576](https://bugzilla.redhat.com/1227576): libglusterfs: Copy _all_ members of gf_dirent_t in entry_copy()
-- [1227611](https://bugzilla.redhat.com/1227611): Fix deadlock in timer-wheel del_timer() API
-- [1227615](https://bugzilla.redhat.com/1227615): "Snap_scheduler disable" should have different return codes for different failures.
-- [1227674](https://bugzilla.redhat.com/1227674): Honour afr self-heal volume set options from clients
-- [1227677](https://bugzilla.redhat.com/1227677): Glusterd crashes and cannot start after rebalance
-- [1227887](https://bugzilla.redhat.com/1227887): Update gluster op version to 30702
-- [1227916](https://bugzilla.redhat.com/1227916): auth_cache_entry structure barely gets cached
-- [1228045](https://bugzilla.redhat.com/1228045): Scrubber should be disabled once bitrot is reset
-- [1228065](https://bugzilla.redhat.com/1228065): Though brick demon is not running, gluster vol status command shows the pid
-- [1228100](https://bugzilla.redhat.com/1228100): Disperse volume: brick logs are getting filled with "anonymous fd creation failed" messages
-- [1228160](https://bugzilla.redhat.com/1228160): linux untar hanged after the bricks are up in a 8+4 config
-- [1228181](https://bugzilla.redhat.com/1228181): Simplify creation and set-up of meta-volume (shared storage)
-- [1228510](https://bugzilla.redhat.com/1228510): Building packages on RHEL-5 based distributions fails
-- [1228592](https://bugzilla.redhat.com/1228592): Glusterd fails to start after volume restore, tier attach and node reboot
-- [1228601](https://bugzilla.redhat.com/1228601): [Virt-RHGS] Creating a image on gluster volume using qemu-img + gfapi throws error messages related to rpc_transport
-- [1228729](https://bugzilla.redhat.com/1228729): nfs-ganesha: rmdir logs "remote operation failed: Stale file handle" even though the operation is successful
-- [1229100](https://bugzilla.redhat.com/1229100): Do not invoke glfs_fini for glfs-heal processes.
-- [1229282](https://bugzilla.redhat.com/1229282): Disperse volume: Huge memory leak of glusterfsd process
-- [1229331](https://bugzilla.redhat.com/1229331): Disperse volume : glusterfs crashed
-- [1229550](https://bugzilla.redhat.com/1229550): [AFR-V2] - Fix shd coredump from tests/bugs/glusterd/bug-948686.t
-- [1230018](https://bugzilla.redhat.com/1230018): [SNAPSHOT]: Initializing snap_scheduler from all nodes at the same time should give proper error message
-- [1230026](https://bugzilla.redhat.com/1230026): BVT: glusterd crashed and dumped during upgrade (on rhel7.1 server)
-- [1230167](https://bugzilla.redhat.com/1230167): [Snapshot] Python crashes with trace back notification when shared storage is unmount from Storage Node
-- [1230350](https://bugzilla.redhat.com/1230350): Client hung up on listing the files on a perticular directory
-- [1230560](https://bugzilla.redhat.com/1230560): data tiering: do not allow tiering related volume set options on a regular volume
-- [1230563](https://bugzilla.redhat.com/1230563): tiering:glusterd crashed when trying to detach-tier commit force on a non-tiered volume.
-- [1230653](https://bugzilla.redhat.com/1230653): Disperse volume : client crashed while running IO
-- [1230687](https://bugzilla.redhat.com/1230687): [Backup]: 'New' as well as 'Modify' entry getting recorded for a newly created hardlink
-- [1230691](https://bugzilla.redhat.com/1230691): [geo-rep]: use_meta_volume config option should be validated for its values
-- [1230693](https://bugzilla.redhat.com/1230693): [geo-rep]: RENAME are not synced to slave when quota is enabled.
-- [1230694](https://bugzilla.redhat.com/1230694): [Backup]: Glusterfind pre fails with htime xattr updation error resulting in historical changelogs not available
-- [1230712](https://bugzilla.redhat.com/1230712): [Backup]: Chown/chgrp for a directory does not get recorded as a MODIFY entry in the outfile
-- [1230715](https://bugzilla.redhat.com/1230715): [Backup]: Glusterfind delete does not delete the session related information present in $GLUSTERD_WORKDIR
-- [1230783](https://bugzilla.redhat.com/1230783): [Backup]: Crash observed when glusterfind pre is run after deleting a directory containing files
-- [1230791](https://bugzilla.redhat.com/1230791): [Backup]: 'Glusterfind list' should display an appropriate output when there are no active sessions
-- [1231213](https://bugzilla.redhat.com/1231213): [geo-rep]: rsync should be made dependent package for geo-replication
-- [1231366](https://bugzilla.redhat.com/1231366): NFS Authentication Performance Issue
-- [1231516](https://bugzilla.redhat.com/1231516): glusterfsd process on 100% cpu, upcall busy loop in reaper thread
-- [1231646](https://bugzilla.redhat.com/1231646): [glusterd] glusterd crashed while trying to remove a bricks - one selected from each replica set - after shrinking nX3 to nX2 to nX1
-- [1231832](https://bugzilla.redhat.com/1231832): bitrot: (rfe) object signing wait time value should be tunable.
-- [1232002](https://bugzilla.redhat.com/1232002): nfs-ganesha: 8 node pcs cluster setup fails
-- [1232135](https://bugzilla.redhat.com/1232135): Quota: " E [quota.c:1197:quota_check_limit] 0-ecvol-quota: Failed to check quota size limit" in brick logs
-- [1232143](https://bugzilla.redhat.com/1232143): nfs-ganesha: trying to bring up nfs-ganesha on three node shows error although pcs status and ganesha process on all three nodes
-- [1232155](https://bugzilla.redhat.com/1232155): Not able to export volume using nfs-ganesha
-- [1232335](https://bugzilla.redhat.com/1232335): nfs-ganesha: volume is not in list of exports in case of volume stop followed by volume start
-- [1232589](https://bugzilla.redhat.com/1232589): [Bitrot] Gluster v set <volname> bitrot enable command succeeds , which is not supported to enable bitrot
-- [1233042](https://bugzilla.redhat.com/1233042): use after free bug in dht
-- [1233117](https://bugzilla.redhat.com/1233117): quota: quota list displays double the size of previous value, post heal completion
-- [1233484](https://bugzilla.redhat.com/1233484): Possible double execution of the state machine for fops that start other subfops
-- [1233056](https://bugzilla.redhat.com/1233056): Not able to create snapshots for geo-replicated volumes when session is created with root user
-- [1233044](https://bugzilla.redhat.com/1233044): Segmentation faults are observed on all the master nodes
-- [1232179](https://bugzilla.redhat.com/1232179): Objects are not signed upon truncate()
-
-### Known Issues
-
-- [1212842](https://bugzilla.redhat.com/1212842): tar on a glusterfs mount displays "file changed as we read it" even though the file was not changed
-- [1229226](https://bugzilla.redhat.com/1229226): Gluster split-brain not logged and data integrity not enforced
-- [1213352](https://bugzilla.redhat.com/1213352): nfs-ganesha: HA issue, the iozone process is not moving ahead, once the nfs-ganesha is killed
-- [1220270](https://bugzilla.redhat.com/1220270): nfs-ganesha: Rename fails while exectuing Cthon general category test
-- [1214169](https://bugzilla.redhat.com/1214169): glusterfsd crashed while rebalance and self-heal were in progress
-- [1225567](https://bugzilla.redhat.com/1225567): Traceback "ValueError:filedescriptor out of range in select()" observed while creating huge set of data on master
-- [1226233](https://bugzilla.redhat.com/1226233): Mount broker user add command removes existing volume for a mountbroker user when second volume is attached to same user
-- [1231539](https://bugzilla.redhat.com/1231539): Detect and send ENOTSUP if upcall feature is not enabled
-- [1232333](https://bugzilla.redhat.com/1232333): Ganesha-ha.sh cluster setup not working with RHEL7 and derivatives
-- [1218961](https://bugzilla.redhat.com/1218961): snapshot: Can not activate the name provided while creating snaps to do any further access
-- [1219399](https://bugzilla.redhat.com/1219399): NFS interoperability problem: Gluster Striped-Replicated can't read on vmware esxi 5.x NFS client
-
-
-- Addition of bricks dynamically to cold or hot tiers in a tiered volume is not supported.
-- The following configuration changes are necessary for qemu and samba integration with libgfapi to work seamlessly:
-
- ~~~
- # gluster volume set <volname> server.allow-insecure on
- ~~~
-
- Edit `/etc/glusterfs/glusterd.vol` to contain this line: `option rpc-auth-allow-insecure on`
-
- After the first change (the volume set command), restarting the volume is necessary:
-
- ~~~
- # gluster volume stop <volname>
- # gluster volume start <volname>
- ~~~
-
- After the second change (the glusterd.vol edit), restarting glusterd is necessary:
-
- ~~~
- # service glusterd restart
- ~~~
-
- or
-
- ~~~
- # systemctl restart glusterd
- ~~~
-
diff --git a/doc/release-notes/3.7.3.md b/doc/release-notes/3.7.3.md
deleted file mode 100644
index 605554623e8..00000000000
--- a/doc/release-notes/3.7.3.md
+++ /dev/null
@@ -1,178 +0,0 @@
-## Release Notes for GlusterFS 3.7.3
-
-This is a bugfix release. The Release Notes for [3.7.0](3.7.0.md), [3.7.1](3.7.1.md) and [3.7.2](3.7.2.md) contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.7 stable releases.
-
-### Bugs Fixed
-
-- [1212842](https://bugzilla.redhat.com/1212842): tar on a glusterfs mount displays "file changed as we read it" even though the file was not changed
-- [1214169](https://bugzilla.redhat.com/1214169): glusterfsd crashed while rebalance and self-heal were in progress
-- [1217722](https://bugzilla.redhat.com/1217722): Tracker bug for Logging framework expansion.
-- [1219358](https://bugzilla.redhat.com/1219358): Disperse volume: client crashed while running iozone
-- [1223318](https://bugzilla.redhat.com/1223318): brick-op failure for glusterd command should log error message in cmd_history.log
-- [1226666](https://bugzilla.redhat.com/1226666): BitRot :- Handle brick re-connection sanely in bitd/scrub process
-- [1226830](https://bugzilla.redhat.com/1226830): Scrubber crash upon pause
-- [1227572](https://bugzilla.redhat.com/1227572): Sharding - Fix posix compliance test failures.
-- [1227808](https://bugzilla.redhat.com/1227808): Issues reported by Cppcheck static analysis tool
-- [1228535](https://bugzilla.redhat.com/1228535): Memory leak in marker xlator
-- [1228640](https://bugzilla.redhat.com/1228640): afr: unrecognized option in re-balance volfile
-- [1229282](https://bugzilla.redhat.com/1229282): Disperse volume: Huge memory leak of glusterfsd process
-- [1229563](https://bugzilla.redhat.com/1229563): Disperse volume: Failed to update version and size (error 2) seen during delete operations
-- [1230327](https://bugzilla.redhat.com/1230327): context of access control translator should be updated properly for GF_POSIX_ACL_*_KEY xattrs
-- [1230399](https://bugzilla.redhat.com/1230399): [Snapshot] Scheduled job is not processed when one of the node of shared storage volume is down
-- [1230523](https://bugzilla.redhat.com/1230523): glusterd: glusterd crashing if you run re-balance and vol status command parallely.
-- [1230857](https://bugzilla.redhat.com/1230857): Files migrated should stay on a tier for a full cycle
-- [1231024](https://bugzilla.redhat.com/1231024): scrub frequecny and throttle change information need to be present in Scrubber log
-- [1231608](https://bugzilla.redhat.com/1231608): Add regression test for cluster lock in a heterogeneous cluster
-- [1231767](https://bugzilla.redhat.com/1231767): tiering:compiler warning with gcc v5.1.1
-- [1232173](https://bugzilla.redhat.com/1232173): Incomplete self-heal and split-brain on directories found when self-healing files/dirs on a replaced disk
-- [1232185](https://bugzilla.redhat.com/1232185): cli correction: if tried to create multiple bricks on same server shows replicate volume instead of disperse volume
-- [1232199](https://bugzilla.redhat.com/1232199): Skip zero byte files when triggering signing
-- [1232333](https://bugzilla.redhat.com/1232333): Ganesha-ha.sh cluster setup not working with RHEL7 and derivatives
-- [1232335](https://bugzilla.redhat.com/1232335): nfs-ganesha: volume is not in list of exports in case of volume stop followed by volume start
-- [1232602](https://bugzilla.redhat.com/1232602): bug-857330/xml.t fails spuriously
-- [1232612](https://bugzilla.redhat.com/1232612): Disperse volume: misleading unsuccessful message with heal and heal full
-- [1232883](https://bugzilla.redhat.com/1232883): Snapshot daemon failed to run on newly created dist-rep volume with uss enabled
-- [1232885](https://bugzilla.redhat.com/1232885): [SNAPSHOT]: "man gluster" needs modification for few snapshot commands
-- [1232886](https://bugzilla.redhat.com/1232886): [SNAPSHOT]: Output message when a snapshot create is issued when multiple bricks are down needs to be improved
-- [1232887](https://bugzilla.redhat.com/1232887): [SNAPSHOT] : Snapshot delete fails with error - Snap might not be in an usable state
-- [1232889](https://bugzilla.redhat.com/1232889): Snapshot: When Cluster.enable-shared-storage is enable, shared storage should get mount after Node reboot
-- [1233041](https://bugzilla.redhat.com/1233041): glusterd crashed when testing heal full on replaced disks
-- [1233158](https://bugzilla.redhat.com/1233158): Null pointer dreference in dht_migrate_complete_check_task
-- [1233518](https://bugzilla.redhat.com/1233518): [Backup]: Glusterfind session(s) created before starting the volume results in 'changelog not available' error, eventually
-- [1233555](https://bugzilla.redhat.com/1233555): gluster v set help needs to be updated for cluster.enable-shared-storage option
-- [1233559](https://bugzilla.redhat.com/1233559): libglusterfs: avoid crash due to ctx being NULL
-- [1233611](https://bugzilla.redhat.com/1233611): Incomplete conservative merge for split-brained directories
-- [1233632](https://bugzilla.redhat.com/1233632): Disperse volume: client crashed while running iozone
-- [1233651](https://bugzilla.redhat.com/1233651): pthread cond and mutex variables of fs struct has to be destroyed conditionally.
-- [1234216](https://bugzilla.redhat.com/1234216): nfs-ganesha: add node fails to add a new node to the cluster
-- [1234225](https://bugzilla.redhat.com/1234225): Data Tiering: add tiering set options to volume set help (cluster.tier-demote-frequency and cluster.tier-promote-frequency)
-- [1234297](https://bugzilla.redhat.com/1234297): Quota: Porting logging messages to new logging framework
-- [1234408](https://bugzilla.redhat.com/1234408): STACK_RESET may crash with concurrent statedump requests to a glusterfs process
-- [1234584](https://bugzilla.redhat.com/1234584): nfs-ganesha:delete node throws error and pcs status also notifies about failures, in fact I/O also doesn't resume post grace period
-- [1234679](https://bugzilla.redhat.com/1234679): Disperse volume : 'ls -ltrh' doesn't list correct size of the files every time
-- [1234695](https://bugzilla.redhat.com/1234695): [geo-rep]: Setting meta volume config to false when meta volume is stopped/deleted leads geo-rep to faulty
-- [1234843](https://bugzilla.redhat.com/1234843): GlusterD does not store updated peerinfo objects.
-- [1234898](https://bugzilla.redhat.com/1234898): [geo-rep]: Feature fan-out fails with the use of meta volume config
-- [1235203](https://bugzilla.redhat.com/1235203): tiering: tier status shows as " progressing " but there is no rebalance daemon running
-- [1235208](https://bugzilla.redhat.com/1235208): glusterd: glusterd crashes while importing a USS enabled volume which is already started
-- [1235242](https://bugzilla.redhat.com/1235242): changelog: directory renames not getting recorded
-- [1235258](https://bugzilla.redhat.com/1235258): nfs-ganesha: ganesha-ha.sh --refresh-config not working
-- [1235297](https://bugzilla.redhat.com/1235297): [geo-rep]: set_geo_rep_pem_keys.sh needs modification in gluster path to support mount broker functionality
-- [1235360](https://bugzilla.redhat.com/1235360): [geo-rep]: Mountbroker setup goes to Faulty with ssh 'Permission Denied' Errors
-- [1235428](https://bugzilla.redhat.com/1235428): Mount broker user add command removes existing volume for a mountbroker user when second volume is attached to same user
-- [1235512](https://bugzilla.redhat.com/1235512): quorum calculation might go for toss for a concurrent peer probe command
-- [1235629](https://bugzilla.redhat.com/1235629): Missing trusted.ec.config xattr for files after heal process
-- [1235904](https://bugzilla.redhat.com/1235904): fgetxattr() crashes when key name is NULL
-- [1235923](https://bugzilla.redhat.com/1235923): POSIX: brick logs filled with _gf_log_callingfn due to this==NULL in dict_get
-- [1235928](https://bugzilla.redhat.com/1235928): memory corruption in the way we maintain migration information in inodes.
-- [1235934](https://bugzilla.redhat.com/1235934): Allow only lookup and delete operation on file that is in split-brain
-- [1235939](https://bugzilla.redhat.com/1235939): Provide and use a common way to do reference counting of (internal) structures
-- [1235966](https://bugzilla.redhat.com/1235966): [RHEV-RHGS] After self-heal operation, VM Image file loses the sparseness property
-- [1235990](https://bugzilla.redhat.com/1235990): quota: marker accounting miscalculated when renaming a file on with write is in progress
-- [1236019](https://bugzilla.redhat.com/1236019): peer probe results in Peer Rejected(Connected)
-- [1236093](https://bugzilla.redhat.com/1236093): [geo-rep]: worker died with "ESTALE" when performed rm -rf on a directory from mount of master volume
-- [1236260](https://bugzilla.redhat.com/1236260): [Quota] The root of the volume on which the quota is set shows the volume size more than actual volume size, when checked with "df" command.
-- [1236269](https://bugzilla.redhat.com/1236269): FSAL_GLUSTER : symlinks are not working properly if acl is enabled
-- [1236271](https://bugzilla.redhat.com/1236271): Introduce an ATOMIC_WRITE flag in posix writev
-- [1236274](https://bugzilla.redhat.com/1236274): Upcall: Directory or file creation should send cache invalidation requests to parent directories
-- [1236282](https://bugzilla.redhat.com/1236282): [Backup]: File movement across directories does not get captured in the output file in a X3 volume
-- [1236288](https://bugzilla.redhat.com/1236288): Data Tiering: Files not getting promoted once demoted
-- [1236933](https://bugzilla.redhat.com/1236933): Ganesha volume export failed
-- [1238052](https://bugzilla.redhat.com/1238052): Quota list is not working on tiered volume.
-- [1238057](https://bugzilla.redhat.com/1238057): Incorrect state created in '/var/lib/nfs/statd'
-- [1238073](https://bugzilla.redhat.com/1238073): protocol/server doesn't reconfigure auth.ssl-allow options
-- [1238476](https://bugzilla.redhat.com/1238476): Throttle background heals in disperse volumes
-- [1238752](https://bugzilla.redhat.com/1238752): Consecutive volume start/stop operations when ganesha.enable is on, leads to errors
-- [1239270](https://bugzilla.redhat.com/1239270): [Scheduler]: Unable to create Snapshots on RHEL-7.1 using Scheduler
-- [1240183](https://bugzilla.redhat.com/1240183): Renamed Files are missing after self-heal
-- [1240190](https://bugzilla.redhat.com/1240190): do an explicit lookup on the inodes linked in readdirp
-- [1240603](https://bugzilla.redhat.com/1240603): glusterfsd crashed after volume start force
-- [1240607](https://bugzilla.redhat.com/1240607): [geo-rep]: UnboundLocalError: local variable 'fd' referenced before assignment
-- [1240616](https://bugzilla.redhat.com/1240616): Unable to pause georep session if one of the nodes in cluster is not part of master volume.
-- [1240906](https://bugzilla.redhat.com/1240906): quota+afr: quotad crash "afr_local_init (local=0x0, priv=0x7fddd0372220, op_errno=0x7fddce1434dc) at afr-common.c:4112"
-- [1240955](https://bugzilla.redhat.com/1240955): [USS]: snapd process is not killed once the glusterd comes back
-- [1241134](https://bugzilla.redhat.com/1241134): nfs-ganesha: execution of script ganesha-ha.sh throws a error for a file
-- [1241487](https://bugzilla.redhat.com/1241487): quota/marker: lk_owner is null while acquiring inodelk in rename operation
-- [1241529](https://bugzilla.redhat.com/1241529): BitRot :- Files marked as 'Bad' should not be accessible from mount
-- [1241666](https://bugzilla.redhat.com/1241666): glfs_loc_link: Update loc.inode with the existing inode incase if already exits
-- [1241776](https://bugzilla.redhat.com/1241776): [Data Tiering]: HOT Files get demoted from hot tier
-- [1241784](https://bugzilla.redhat.com/1241784): Gluster commands timeout on SSL enabled system, after adding new node to trusted storage pool
-- [1241831](https://bugzilla.redhat.com/1241831): quota: marker accounting can get miscalculated after upgrade to 3.7
-- [1241841](https://bugzilla.redhat.com/1241841): gf_msg_callingfn does not log the callers of the function in which it is called
-- [1241885](https://bugzilla.redhat.com/1241885): ganesha volume export fails in rhel7.1
-- [1241963](https://bugzilla.redhat.com/1241963): Peer not recognized after IP address change
-- [1242031](https://bugzilla.redhat.com/1242031): nfs-ganesha: bricks crash while executing acl related operation for named group/user
-- [1242044](https://bugzilla.redhat.com/1242044): nfs-ganesha : Multiple setting of nfs4_acl on a same file will cause brick crash
-- [1242192](https://bugzilla.redhat.com/1242192): nfs-ganesha: add-node logic does not copy the "/etc/ganesha/exports" directory to the correct path on the newly added node
-- [1242274](https://bugzilla.redhat.com/1242274): Migration does not work when EC is used as a tiered volume.
-- [1242329](https://bugzilla.redhat.com/1242329): [Quota] : Inode quota spurious failure
-- [1242515](https://bugzilla.redhat.com/1242515): racy condition in nfs/auth-cache feature
-- [1242718](https://bugzilla.redhat.com/1242718): [RFE] Improve I/O latency during signing
-- [1242728](https://bugzilla.redhat.com/1242728): replacing a offline brick fails with "replace-brick" command
-- [1242734](https://bugzilla.redhat.com/1242734): GlusterD crashes when management encryption is enabled
-- [1242882](https://bugzilla.redhat.com/1242882): Quota: Quota Daemon doesn't start after node reboot
-- [1242898](https://bugzilla.redhat.com/1242898): Crash in Quota enforcer
-- [1243408](https://bugzilla.redhat.com/1243408): syncop:Include iatt to 'syncop_link' args
-- [1243642](https://bugzilla.redhat.com/1243642): GF_CONTENT_KEY should not be handled unless we are sure no other operations are in progress
-- [1243644](https://bugzilla.redhat.com/1243644): Metadata self-heal is not handling failures while heal properly
-- [1243647](https://bugzilla.redhat.com/1243647): Disperse volume : data corruption with appending writes in 8+4 config
-- [1243648](https://bugzilla.redhat.com/1243648): Disperse volume: NFS crashed
-- [1243654](https://bugzilla.redhat.com/1243654): fops fail with EIO on nfs mount after add-brick and rebalance
-- [1243655](https://bugzilla.redhat.com/1243655): Sharding - Use (f)xattrop (as opposed to (f)setxattr) to update shard size and block count
-- [1243898](https://bugzilla.redhat.com/1243898): huge mem leak in posix xattrop
-- [1244100](https://bugzilla.redhat.com/1244100): using fop's dict for resolving causes problems
-- [1244103](https://bugzilla.redhat.com/1244103): Gluster cli logs invalid argument error on every gluster command execution
-- [1244114](https://bugzilla.redhat.com/1244114): unix domain sockets on Gluster/NFS are created as fifo/pipe
-- [1244116](https://bugzilla.redhat.com/1244116): quota: brick crashes when create and remove performed in parallel
-- [1245908](https://bugzilla.redhat.com/1245908): snap-view:mount crash if debug mode is enabled
-- [1245934](https://bugzilla.redhat.com/1245934): [RHEV-RHGS] App VMs paused due to IO error caused by split-brain, after initiating remove-brick operation
-- [1246121](https://bugzilla.redhat.com/1246121): Disperse volume : client glusterfs crashed while running IO
-- [1246481](https://bugzilla.redhat.com/1246481): rpc: fix binding brick issue while bind-insecure is enabled
-- [1246728](https://bugzilla.redhat.com/1246728): client3_3_removexattr_cbk floods the logs with "No data available" messages
-- [1246809](https://bugzilla.redhat.com/1246809): glusterd crashed when a client which doesn't support SSL tries to mount a SSL enabled gluster volume
-- [1246987](https://bugzilla.redhat.com/1246987): Deceiving log messages like "Failing STAT on gfid : split-brain observed. [Input/output error]" reported
-- [1246988](https://bugzilla.redhat.com/1246988): sharding - Populate the aggregated ia_size and ia_blocks before unwinding (f)setattr to upper layers
-- [1247012](https://bugzilla.redhat.com/1247012): Initialize daemons on demand
-
-### Known Issues
-
-- [1219399](https://bugzilla.redhat.com/1219399): NFS interoperability problem: Gluster Striped-Replicated can't read on vmware esxi 5.x NFS client
-- [1225077](https://bugzilla.redhat.com/1225077): Fix regression test spurious failures
-- [1207023](https://bugzilla.redhat.com/1207023): [RFE] Snapshot scheduler enhancements (both GUI Console & CLI)
-- [1218990](https://bugzilla.redhat.com/1218990): failing installation of glusterfs-server-3.7.0beta1-0.14.git09bbd5c.el7.centos.x86_64
-- [1221957](https://bugzilla.redhat.com/1221957): Fully support data-tiering in 3.7.x, remove out of 'experimental' status
-- [1225567](https://bugzilla.redhat.com/1225567): [geo-rep]: Traceback ValueError: filedescriptor out of range in select() observed while creating huge set of data on master
-- [1227656](https://bugzilla.redhat.com/1227656): Unable to mount a replicated volume without all bricks online.
-- [1235964](https://bugzilla.redhat.com/1235964): Disperse volume: FUSE I/O error after self healing the failed disk files
-- [1231539](https://bugzilla.redhat.com/1231539): Detect and send ENOTSUP if upcall feature is not enabled
-- [1240920](https://bugzilla.redhat.com/1240920): libgfapi: Segfault seen when glfs_*() methods are invoked with invalid glfd
-
-
-- Addition of bricks dynamically to cold or hot tiers in a tiered volume is not supported.
-- The following configuration changes are necessary for qemu and samba integration with libgfapi to work seamlessly:
-
- ~~~
- # gluster volume set <volname> server.allow-insecure on
- ~~~
-
- Edit `/etc/glusterfs/glusterd.vol` to contain this line: `option rpc-auth-allow-insecure on`
-
- After the first change (the volume set command), restarting the volume is necessary:
-
- ~~~
- # gluster volume stop <volname>
- # gluster volume start <volname>
- ~~~
-
- After the second change (the glusterd.vol edit), restarting glusterd is necessary:
-
- ~~~
- # service glusterd restart
- ~~~
-
- or
-
- ~~~
- # systemctl restart glusterd
- ~~~
-
diff --git a/doc/release-notes/3.7.4.md b/doc/release-notes/3.7.4.md
deleted file mode 100644
index c3dff6e42f9..00000000000
--- a/doc/release-notes/3.7.4.md
+++ /dev/null
@@ -1,121 +0,0 @@
-## Release Notes for GlusterFS 3.7.4
-
-This is a bugfix release. The Release Notes for [3.7.0](3.7.0.md), [3.7.1](3.7.1.md), [3.7.2](3.7.2.md) and [3.7.3](3.7.3.md) contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.7 stable releases.
-
-### Bugs Fixed
-
-Release 3.7.4 contains 93 bug fixes.
-
-- [1223945](https://bugzilla.redhat.com/1223945): Scripts/Binaries are not installed with +x bit
-- [1228216](https://bugzilla.redhat.com/1228216): Disperse volume: gluster volume status doesn't show shd status
-- [1228521](https://bugzilla.redhat.com/1228521): USS: Take ref on root inode
-- [1231678](https://bugzilla.redhat.com/1231678): geo-rep: gverify.sh throws error if slave_host entry is not added to know_hosts file
-- [1235202](https://bugzilla.redhat.com/1235202): tiering: tier daemon not restarting during volume/glusterd restart
-- [1235964](https://bugzilla.redhat.com/1235964): Disperse volume: FUSE I/O error after self healing the failed disk files
-- [1236050](https://bugzilla.redhat.com/1236050): Disperse volume: fuse mount hung after self healing
-- [1238706](https://bugzilla.redhat.com/1238706): snapd/quota/nfs daemon's runs on the node, even after that node was detached from trusted storage pool
-- [1240920](https://bugzilla.redhat.com/1240920): libgfapi: Segfault seen when glfs_*() methods are invoked with invalid glfd
-- [1242536](https://bugzilla.redhat.com/1242536): Data Tiering: Rename of file is not heating up the file
-- [1243384](https://bugzilla.redhat.com/1243384): EC volume: Replace bricks is not healing version of root directory
-- [1244721](https://bugzilla.redhat.com/1244721): glusterd: Porting left out log messages to new logging API
-- [1244724](https://bugzilla.redhat.com/1244724): quota: allowed to set soft-limit %age beyond 100%
-- [1245922](https://bugzilla.redhat.com/1245922): [SNAPSHOT] : Correction required in output message after initialising snap_scheduler
-- [1245923](https://bugzilla.redhat.com/1245923): [Snapshot] Scheduler should check vol-name exists or not before adding scheduled jobs
-- [1247014](https://bugzilla.redhat.com/1247014): sharding - Fix unlink of sparse files
-- [1247153](https://bugzilla.redhat.com/1247153): SSL improvements: ECDH, DH, CRL, and accessible options
-- [1247551](https://bugzilla.redhat.com/1247551): forgotten inodes are not being signed
-- [1247615](https://bugzilla.redhat.com/1247615): tests/bugs/replicate/bug-1238508-self-heal.t fails in 3.7 branch
-- [1247833](https://bugzilla.redhat.com/1247833): sharding - OS installation on vm image hangs on a sharded volume
-- [1247850](https://bugzilla.redhat.com/1247850): Glusterfsd crashes because of thread-unsafe code in gf_authenticate
-- [1247882](https://bugzilla.redhat.com/1247882): [geo-rep]: killing brick from replica pair makes geo-rep session faulty with Traceback "ChangelogException"
-- [1247910](https://bugzilla.redhat.com/1247910): Gluster peer probe with negative num
-- [1247917](https://bugzilla.redhat.com/1247917): ./tests/basic/volume-snapshot.t spurious fail causing glusterd crash.
-- [1248325](https://bugzilla.redhat.com/1248325): quota: In enforcer, caching parents in ctx during build ancestry is not working
-- [1248337](https://bugzilla.redhat.com/1248337): Data Tiering: Change the error message when a detach-tier status is issued on a non-tier volume
-- [1248450](https://bugzilla.redhat.com/1248450): rpc: check for unprivileged port should start at 1024 and not beyond 1024
-- [1248962](https://bugzilla.redhat.com/1248962): quota/marker: errors in log file 'Failed to get metadata for'
-- [1249461](https://bugzilla.redhat.com/1249461): 'unable to get transaction op-info' error seen in glusterd log while executing gluster volume status command
-- [1249547](https://bugzilla.redhat.com/1249547): [geo-rep]: rename followed by deletes causes ESTALE
-- [1249921](https://bugzilla.redhat.com/1249921): [upgrade] After upgrade from 3.5 to 3.6 onwards version, bumping up op-version failed
-- [1249925](https://bugzilla.redhat.com/1249925): DHT-rebalance: Rebalance hangs on distribute volume when glusterd is stopped on peer node
-- [1249983](https://bugzilla.redhat.com/1249983): Rebalance is failing in test cluster framework.
-- [1250601](https://bugzilla.redhat.com/1250601): nfs-ganesha: remove the entry of the deleted node
-- [1250628](https://bugzilla.redhat.com/1250628): nfs-ganesha: ganesha-ha.sh --status is actually same as "pcs status"
-- [1250809](https://bugzilla.redhat.com/1250809): Enable multi-threaded epoll for glusterd process
-- [1250810](https://bugzilla.redhat.com/1250810): Make ping-timeout option configurable at a volume-level
-- [1250834](https://bugzilla.redhat.com/1250834): Sharding - Excessive logging of messages of the kind 'Failed to get trusted.glusterfs.shard.file-size for bf292f5b-6dd6-45a8-b03c-aaf5bb973c50'
-- [1250864](https://bugzilla.redhat.com/1250864): ec returns EIO error in cases where a more specific error could be returned
-- [1251106](https://bugzilla.redhat.com/1251106): sharding - Renames on non-sharded files failing with ENOMEM
-- [1251380](https://bugzilla.redhat.com/1251380): statfs giving incorrect values for AFR arbiter volumes
-- [1252272](https://bugzilla.redhat.com/1252272): rdma : pending - porting log messages to a new framework
-- [1252297](https://bugzilla.redhat.com/1252297): Quota: volume-reset shouldn't remove quota-deem-statfs, unless explicitly specified, when quota is enabled.
-- [1252348](https://bugzilla.redhat.com/1252348): using fop's dict for resolving causes problems
-- [1252680](https://bugzilla.redhat.com/1252680): probing and detaching a peer generated a CRITICAL error - "Could not find peer" in glusterd logs
-- [1252727](https://bugzilla.redhat.com/1252727): tiering: Tier daemon stopped prior to graph switch.
-- [1252873](https://bugzilla.redhat.com/1252873): gluster vol quota dist-vol list is not displaying quota information.
-- [1252903](https://bugzilla.redhat.com/1252903): Fix invalid logic in tier.t
-- [1252907](https://bugzilla.redhat.com/1252907): Unable to demote files in tiered volumes when cold tier is EC.
-- [1253148](https://bugzilla.redhat.com/1253148): gf_store_save_value fails to check for errors, leading to emptying files in /var/lib/glusterd/
-- [1253151](https://bugzilla.redhat.com/1253151): Sharding - Individual shards' ownership differs from that of the original file
-- [1253160](https://bugzilla.redhat.com/1253160): while re-configuring the scrubber frequency, scheduling is not happening based on current time
-- [1253165](https://bugzilla.redhat.com/1253165): glusterd services are not handled properly when re configuring services
-- [1253212](https://bugzilla.redhat.com/1253212): snapd crashed due to stack overflow
-- [1253260](https://bugzilla.redhat.com/1253260): posix_make_ancestryfromgfid doesn't set op_errno
-- [1253542](https://bugzilla.redhat.com/1253542): rebalance stuck at 0 byte when auth.allow is set
-- [1253607](https://bugzilla.redhat.com/1253607): gluster snapshot status --xml gives back unexpected non xml output
-- [1254419](https://bugzilla.redhat.com/1254419): nfs-ganesha: new volume creation tries to bring up glusterfs-nfs even when nfs-ganesha is already on
-- [1254436](https://bugzilla.redhat.com/1254436): logging: Revert usage of global xlator for log buffer
-- [1254437](https://bugzilla.redhat.com/1254437): tiering: rename fails with "Device or resource busy" error message
-- [1254438](https://bugzilla.redhat.com/1254438): Tiering: segfault when trying to rename a file
-- [1254439](https://bugzilla.redhat.com/1254439): Quota list is not working on tiered volume.
-- [1254442](https://bugzilla.redhat.com/1254442): tiering/snapshot: Tier daemon failed to start during volume start after restoring into a tiered volume from a non-tiered volume.
-- [1254468](https://bugzilla.redhat.com/1254468): Data Tiering : Some tier xlator_fops translate to the default fops
-- [1254494](https://bugzilla.redhat.com/1254494): nfs-ganesha: refresh-config stdout output does not make sense
-- [1254503](https://bugzilla.redhat.com/1254503): fuse: check return value of setuid
-- [1254607](https://bugzilla.redhat.com/1254607): rpc: Address issues with transport object reference and leak
-- [1254865](https://bugzilla.redhat.com/1254865): non-default symver macros are incorrect
-- [1255244](https://bugzilla.redhat.com/1255244): Quota: After rename operation , gluster v quota <volname> list-objects command give incorrect no. of files in output
-- [1255311](https://bugzilla.redhat.com/1255311): Snapshot: When soft limit is reached, auto-delete is enable, create snapshot doesn't logs anything in log files
-- [1255351](https://bugzilla.redhat.com/1255351): fail the fops if inode context get fails
-- [1255604](https://bugzilla.redhat.com/1255604): Not able to recover the corrupted file on Replica volume
-- [1255605](https://bugzilla.redhat.com/1255605): Scrubber log should mark file corrupted message as Alert not as information
-- [1255636](https://bugzilla.redhat.com/1255636): Remove unwanted tests from volume-snapshot.t
-- [1255644](https://bugzilla.redhat.com/1255644): quota : display the size equivalent to the soft limit percentage in gluster v quota <volname> list* command
-- [1255690](https://bugzilla.redhat.com/1255690): AFR: gluster v restart force or brick process restart doesn't heal the files
-- [1255698](https://bugzilla.redhat.com/1255698): Write performance from a Windows client on 3-way replicated volume decreases substantially when one brick in the replica set is brought down
-- [1256265](https://bugzilla.redhat.com/1256265): Data Loss:Remove brick commit passing when remove-brick process has not even started(due to killing glusterd)
-- [1256283](https://bugzilla.redhat.com/1256283): [remove-brick]: Creation of file from NFS writes to the decommissioned subvolume and subsequent lookup from fuse creates a link
-- [1256307](https://bugzilla.redhat.com/1256307): [Backup]: Glusterfind session entry persists even after volume is deleted
-- [1256485](https://bugzilla.redhat.com/1256485): [Snapshot]/[NFS-Ganesha] mount point hangs upon snapshot create-activate and 'cd' into .snaps directory
-- [1256605](https://bugzilla.redhat.com/1256605): `gluster volume heal <vol-name> split-brain' changes required for entry-split-brain
-- [1256616](https://bugzilla.redhat.com/1256616): libgfapi : adding follow flag to glfs_h_lookupat()
-- [1256669](https://bugzilla.redhat.com/1256669): Though scrubber settings changed on one volume log shows all volumes scrubber information
-- [1256702](https://bugzilla.redhat.com/1256702): remove-brick: avoid mknod op falling on decommissioned brick even after fix-layout has happened on parent directory
-- [1256909](https://bugzilla.redhat.com/1256909): Unable to examine file in metadata split-brain after setting `replica.split-brain-choice' attribute to a particular replica
-- [1257193](https://bugzilla.redhat.com/1257193): protocol server : Pending - porting log messages to a new framework
-- [1257204](https://bugzilla.redhat.com/1257204): sharding - VM image size as seen from the mount keeps growing beyond configured size on a sharded volume
-- [1257441](https://bugzilla.redhat.com/1257441): marker: set loc.parent if NULL
-- [1257881](https://bugzilla.redhat.com/1257881): Quota list on a volume hangs after glusterd restart on a node.
-- [1258306](https://bugzilla.redhat.com/1258306): bug-1238706-daemons-stop-on-peer-cleanup.t fails occasionally
-- [1258344](https://bugzilla.redhat.com/1258344): tests: rebasing bad tests from mainline branch to release-3.7 branch
-
-### Upgrade notes
-
-#### Insecure ports by default
-
-GlusterFS uses insecure ports by default from release v3.7.3. This causes problems when upgrading from release 3.7.2 and below to 3.7.3 and above. Performing the following steps before upgrading helps avoid problems (a quick verification sketch follows the steps).
-
-- Enable insecure ports for all volumes.
-
- ```
- gluster volume set <VOLNAME> server.allow-insecure on
- gluster volume set <VOLNAME> client.bind-insecure on
- ```
-
-- Enable insecure ports for GlusterD. Set the following line in `/etc/glusterfs/glusterd.vol`
-
- ```
- option rpc-auth-allow-insecure on
- ```
-
- This needs to be done on all the members in the cluster.
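-
-After setting the options, it may help to confirm that they took effect before upgrading; a hedged sketch (volume name is a placeholder, output abridged and approximate):
-
-```
-gluster volume info <VOLNAME>
-...
-Options Reconfigured:
-server.allow-insecure: on
-client.bind-insecure: on
-```
-
-The `rpc-auth-allow-insecure` line in `/etc/glusterfs/glusterd.vol` is not a volume option, so it has to be checked manually on every node.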
diff --git a/doc/release-notes/3.7.5.md b/doc/release-notes/3.7.5.md
deleted file mode 100644
index 47ac742a541..00000000000
--- a/doc/release-notes/3.7.5.md
+++ /dev/null
@@ -1,77 +0,0 @@
-## Bugs fixed
-The following bugs were fixed in this release.
-
-- [1246397](https://bugzilla.redhat.com/1246397) - POSIX ACLs as used by a FUSE mount can not use more than 32 groups
-- [1248890](https://bugzilla.redhat.com/1248890) - AFR: Make [f]xattrop metadata transaction
-- [1248941](https://bugzilla.redhat.com/1248941) - Logging : unnecessary log message "REMOVEXATTR No data available " when files are written to glusterfs mount
-- [1250388](https://bugzilla.redhat.com/1250388) - [RFE] changes needed in snapshot info command's xml output.
-- [1251821](https://bugzilla.redhat.com/1251821) - /usr/lib/glusterfs/ganesha/ganesha_ha.sh is distro specific
-- [1255110](https://bugzilla.redhat.com/1255110) - client is sending io to arbiter with replica 2
-- [1255384](https://bugzilla.redhat.com/1255384) - Detached node list stale snaps
-- [1257394](https://bugzilla.redhat.com/1257394) - Provide more meaningful errors on peer probe and peer detach
-- [1258113](https://bugzilla.redhat.com/1258113) - snapshot delete all command fails with --xml option.
-- [1258244](https://bugzilla.redhat.com/1258244) - Data Tiering: Change error message as detach-tier error message throws as "remove-brick"
-- [1258313](https://bugzilla.redhat.com/1258313) - Start self-heal and display correct heal info after replace brick
-- [1258338](https://bugzilla.redhat.com/1258338) - Data Tiering: Tiering related information is not displayed in gluster volume info xml output
-- [1258340](https://bugzilla.redhat.com/1258340) - Data Tiering:Volume task status showing as remove brick when detach tier is trigger
-- [1258347](https://bugzilla.redhat.com/1258347) - Data Tiering: Tiering related information is not displayed in gluster volume status xml output
-- [1258377](https://bugzilla.redhat.com/1258377) - ACL created on a dht.linkto file on a files that skipped rebalance
-- [1258406](https://bugzilla.redhat.com/1258406) - porting log messages to a new framework
-- [1258411](https://bugzilla.redhat.com/1258411) - trace xlator: Print write size also in trace_writev logs
-- [1258717](https://bugzilla.redhat.com/1258717) - gluster-nfs : contents of export file is not updated correctly in its context
-- [1258727](https://bugzilla.redhat.com/1258727) - porting logging messages to new logging framework
-- [1258736](https://bugzilla.redhat.com/1258736) - porting log messages to a new framework
-- [1258769](https://bugzilla.redhat.com/1258769) - Porting log messages to new framework
-- [1258798](https://bugzilla.redhat.com/1258798) - bug-948686.t fails spuriously
-- [1258845](https://bugzilla.redhat.com/1258845) - Change order of marking AFR post op
-- [1258976](https://bugzilla.redhat.com/1258976) - packaging: gluster-server install failure due to %ghost of hooks/.../delete
-- [1259078](https://bugzilla.redhat.com/1259078) - should not spawn another migration daemon on graph switch
-- [1259079](https://bugzilla.redhat.com/1259079) - Data Tiering:3.7.0:data loss:detach-tier not flushing data to cold-tier
-- [1259081](https://bugzilla.redhat.com/1259081) - I/O failure on attaching tier on fuse client
-- [1259225](https://bugzilla.redhat.com/1259225) - Add node of nfs-ganesha not working on rhel7.1
-- [1259360](https://bugzilla.redhat.com/1259360) - garbage files created in /var/run/gluster
-- [1259652](https://bugzilla.redhat.com/1259652) - quota test 'quota-nfs.t' fails spuriously
-- [1259659](https://bugzilla.redhat.com/1259659) - Fix bug in arbiter-statfs.t
-- [1259694](https://bugzilla.redhat.com/1259694) - Data Tiering:Regression: Commit of detach tier passes directly without even issuing a detach tier start
-- [1259697](https://bugzilla.redhat.com/1259697) - Disperse volume: Huge memory leak of glusterfsd process
-- [1259726](https://bugzilla.redhat.com/1259726) - Fix reads on zero-byte shards representing holes in the file
-- [1260511](https://bugzilla.redhat.com/1260511) - fuse client crashed during i/o
-- [1260593](https://bugzilla.redhat.com/1260593) - man or info page of gluster needs to be updated with self-heal commands.
-- [1260856](https://bugzilla.redhat.com/1260856) - xml output for volume status on tiered volume
-- [1260858](https://bugzilla.redhat.com/1260858) - glusterd: volume status backward compatibility
-- [1260859](https://bugzilla.redhat.com/1260859) - snapshot: from nfs-ganesha mount no content seen in .snaps/<snapshot-name> directory
-- [1260919](https://bugzilla.redhat.com/1260919) - Quota+Rebalance : While rebalance is in progress , quota list shows 'Used Space' more than the Hard Limit set
-- [1261008](https://bugzilla.redhat.com/1261008) - Do not expose internal sharding xattrs to the application.
-- [1261234](https://bugzilla.redhat.com/1261234) - Possible memory leak during rebalance with large quantity of files
-- [1261444](https://bugzilla.redhat.com/1261444) - cli : volume start will create/overwrite ganesha export file
-- [1261664](https://bugzilla.redhat.com/1261664) - Tiering status command is very cumbersome.
-- [1261715](https://bugzilla.redhat.com/1261715) - [HC] Fuse mount crashes, when client-quorum is not met
-- [1261716](https://bugzilla.redhat.com/1261716) - read/write performance improvements for VM workload
-- [1261742](https://bugzilla.redhat.com/1261742) - Tier: glusterd crash when trying to detach , when hot tier is having exactly one brick and cold tier is of replica type
-- [1262197](https://bugzilla.redhat.com/1262197) - DHT: Few files are missing after remove-brick operation
-- [1262335](https://bugzilla.redhat.com/1262335) - Fix invalid logic in tier.t
-- [1262341](https://bugzilla.redhat.com/1262341) - Database locking due to write contention between CTR sql connection and tier migrator sql connection
-- [1262344](https://bugzilla.redhat.com/1262344) - quota: numbers of warning messages in nfs.log a single file itself
-- [1262408](https://bugzilla.redhat.com/1262408) - Data Tiering: Detach tier status shows number of failures even when all files are migrated successfully
-- [1262547](https://bugzilla.redhat.com/1262547) - `getfattr -n replica.split-brain-status <file>' command hung on the mount
-- [1262700](https://bugzilla.redhat.com/1262700) - DHT + rebalance :- file permission got changed (sticky bit and setgid is set) after file migration failure
-- [1262881](https://bugzilla.redhat.com/1262881) - nfs-ganesha: refresh-config stdout output includes dbus messages "method return sender=:1.61 -> dest=:1.65 reply_serial=2"
-- [1263191](https://bugzilla.redhat.com/1263191) - Error not propagated correctly if selfheal layout lock fails
-- [1263746](https://bugzilla.redhat.com/1263746) - Data Tiering:Setting only promote frequency and no demote frequency causes crash
-- [1264738](https://bugzilla.redhat.com/1264738) - 'gluster v tier/attach-tier/detach-tier help' command shows the usage, and then throws 'Tier command failed' error message
-- [1265633](https://bugzilla.redhat.com/1265633) - AFR : "gluster volume heal <volume_name> info" doesn't report the fqdn of storage nodes.
-- [1265890](https://bugzilla.redhat.com/1265890) - rm command fails with "Transport end point not connected" during add brick
-- [1265892](https://bugzilla.redhat.com/1265892) - Data Tiering : Writes to a file being promoted/demoted are missing once the file migration is complete
-- [1266822](https://bugzilla.redhat.com/1266822) - Add more logs in failure code paths + port existing messages to the msg-id framework
-- [1266872](https://bugzilla.redhat.com/1266872) - FOP handling during file migration is broken in the release-3.7 branch.
-- [1266882](https://bugzilla.redhat.com/1266882) - RFE: posix: xattrop 'GF_XATTROP_ADD_DEF_ARRAY' implementation
-- [1267149](https://bugzilla.redhat.com/1267149) - Perf: Getting bad performance while doing ls
-- [1267532](https://bugzilla.redhat.com/1267532) - Data Tiering:CLI crashes with segmentation fault when user tries "gluster v tier" command
-- [1267817](https://bugzilla.redhat.com/1267817) - No quota API to get real hard-limit value.
-- [1267822](https://bugzilla.redhat.com/1267822) - Have a way to disable readdirp on dht from glusterd volume set command
-- [1267823](https://bugzilla.redhat.com/1267823) - Perf: Getting bad performance while doing ls
-- [1268804](https://bugzilla.redhat.com/1268804) - Test tests/bugs/shard/bug-1245547.t failing consistently when run with patch http://review.gluster.org/#/c/11938/
-
-## Upgrade notes
-
-If upgrading from v3.7.2 or older, please follow instructions in [upgrading-from-3.7.2-or-older](./upgrading-from-3.7.2-or-older.md).
diff --git a/doc/release-notes/3.7.6.md b/doc/release-notes/3.7.6.md
deleted file mode 100644
index a1cebedf658..00000000000
--- a/doc/release-notes/3.7.6.md
+++ /dev/null
@@ -1,76 +0,0 @@
-## Release Notes for GlusterFS 3.7.6
-
-This is a bugfix release. The [Release Notes for 3.7.0](3.7.0.md),
-[3.7.1](3.7.1.md), [3.7.2](3.7.2.md), [3.7.3](3.7.3.md), [3.7.4](3.7.4.md) and
-[3.7.5](3.7.5.md) contain a listing of all the new features that were added and
-bugs fixed in the GlusterFS 3.7 stable releases.
-
-### Bugs Fixed:
-
-- [1057295](https://bugzilla.redhat.com/1057295): glusterfs doesn't include firewalld rules
-- [1219399](https://bugzilla.redhat.com/1219399): NFS interoperability problem: Gluster Striped-Replicated can't read on vmware esxi 5.x NFS client
-- [1221957](https://bugzilla.redhat.com/1221957): Fully support data-tiering in 3.7.x, remove out of 'experimental' status
-- [1258197](https://bugzilla.redhat.com/1258197): gNFSd: NFS mount fails with "Remote I/O error"
-- [1258242](https://bugzilla.redhat.com/1258242): Data Tiering: detach-tier start force command not available on a tier volume (unlike remove-brick, where force is possible)
-- [1258833](https://bugzilla.redhat.com/1258833): Data Tiering: Disallow attach tier on a volume where any rebalance process is in progress, to avoid deadlock (like remove-brick commit pending, etc.)
-- [1259167](https://bugzilla.redhat.com/1259167): GF_LOG_NONE logs always
-- [1261146](https://bugzilla.redhat.com/1261146): Legacy files pre-existing tier attach must be promoted
-- [1261732](https://bugzilla.redhat.com/1261732): Disperse volume: df -h on a nfs mount throws Invalid argument error
-- [1261744](https://bugzilla.redhat.com/1261744): Tier/shd: Tracker bug for tier and shd compatibility
-- [1261758](https://bugzilla.redhat.com/1261758): Tiering/glusted: volume status failed after detach tier start
-- [1262860](https://bugzilla.redhat.com/1262860): Data Tiering: Tiering daemon is seeing each part of a file in a Disperse cold volume as a different file
-- [1265623](https://bugzilla.redhat.com/1265623): Data Tiering:Promotions and demotions fail after quota hard limits are hit for a tier volume
-- [1266836](https://bugzilla.redhat.com/1266836): AFR : fuse,nfs mount hangs when directories with same names are created and deleted continuously
-- [1266880](https://bugzilla.redhat.com/1266880): Tiering: unlink failed with error "Invalid argument"
-- [1267816](https://bugzilla.redhat.com/1267816): quota/marker: marker code cleanup
-- [1269035](https://bugzilla.redhat.com/1269035): Data Tiering:Throw a warning when user issues a detach-tier commit command
-- [1269125](https://bugzilla.redhat.com/1269125): Data Tiering:Regression: automation blocker:vol status for tier volumes using xml format is not working
-- [1269344](https://bugzilla.redhat.com/1269344): tier/cli: number of bricks remains the same in v info --xml
-- [1269501](https://bugzilla.redhat.com/1269501): Self-heal daemon crashes when bricks godown at the time of data heal
-- [1269530](https://bugzilla.redhat.com/1269530): Core:Blocker:Segmentation fault when using fallocate command on a gluster volume
-- [1269730](https://bugzilla.redhat.com/1269730): Sharding - Send inode forgets on _all_ shards if/when the protocol layer (FUSE/Gfapi) at the top sends a forget on the actual file
-- [1270123](https://bugzilla.redhat.com/1270123): Data Tiering: Database locks observed on tiered volumes on continous writes to a file
-- [1270527](https://bugzilla.redhat.com/1270527): add policy mechanism for promotion and demotion
-- [1270769](https://bugzilla.redhat.com/1270769): quota/marker: dir count in inode quota is not atomic
-- [1271204](https://bugzilla.redhat.com/1271204): Introduce priv dump in shard xlator for better debugging
-- [1271249](https://bugzilla.redhat.com/1271249): tiering:compiler warning with gcc v5.1.1
-- [1271490](https://bugzilla.redhat.com/1271490): rm -rf on /run/gluster/vol/<directory name>/ is not showing quota output header for other quota limit applied directories
-- [1271540](https://bugzilla.redhat.com/1271540): RHEL7/systemd : can't have server in debug mode anymore
-- [1271627](https://bugzilla.redhat.com/1271627): Creating an already deleted snapshot-clone deletes the corresponding snapshot.
-- [1271967](https://bugzilla.redhat.com/1271967): ECVOL: glustershd log grows quickly and fills up the root volume
-- [1272036](https://bugzilla.redhat.com/1272036): Data Tiering:getting failed to fsync on germany-hot-dht (Structure needs cleaning) warning
-- [1272331](https://bugzilla.redhat.com/1272331): Tier: Do not promote/demote files on which POSIX locks are held
-- [1272334](https://bugzilla.redhat.com/1272334): Data Tiering:Promotions fail when brick of EC (disperse) cold layer are down
-- [1272398](https://bugzilla.redhat.com/1272398): Data Tiering:Lot of Promotions/Demotions failed error messages
-- [1273246](https://bugzilla.redhat.com/1273246): Tier xattr name is misleading (trusted.tier-gfid)
-- [1273334](https://bugzilla.redhat.com/1273334): Fix in afr transaction code
-- [1274101](https://bugzilla.redhat.com/1274101): need a way to pause/stop tiering to take snapshot
-- [1274600](https://bugzilla.redhat.com/1274600): [sharding+geo-rep]: On existing slave mount, reading files fails to show sharded file content
-- [1275157](https://bugzilla.redhat.com/1275157): Reduce 'CTR disabled' brick log message from ERROR to INFO/DEBUG
-- [1275483](https://bugzilla.redhat.com/1275483): Data Tiering:heat counters not getting reset and also internal ops seem to be heating the files
-- [1275502](https://bugzilla.redhat.com/1275502): [Tier]: Typo in the output while setting the wrong value of low/hi watermark
-- [1275921](https://bugzilla.redhat.com/1275921): Disk usage mismatching after self-heal
-- [1276029](https://bugzilla.redhat.com/1276029): Upgrading a subset of cluster to 3.7.5 leads to issues with glusterd commands
-- [1276060](https://bugzilla.redhat.com/1276060): dist-geo-rep: geo-rep status shows Active/Passive even when all the gsync processes in a node are killed
-- [1276208](https://bugzilla.redhat.com/1276208): [RFE] 'gluster volume help' output could be sorted alphabetically
-- [1276244](https://bugzilla.redhat.com/1276244): gluster-nfs : Server crashed due to an invalid reference
-- [1276550](https://bugzilla.redhat.com/1276550): FUSE clients in a container environment hang and do not recover post losing connections to all bricks
-- [1277080](https://bugzilla.redhat.com/1277080): quota: set quota version for files/directories
-- [1277394](https://bugzilla.redhat.com/1277394): Wrong value of snap-max-hard-limit observed in 'gluster volume info'.
-- [1277587](https://bugzilla.redhat.com/1277587): Data Tiering: tiering daemon crashes when trying to heat the file
-- [1277590](https://bugzilla.redhat.com/1277590): Tier : Move common functions into tier.rc
-- [1277800](https://bugzilla.redhat.com/1277800): [New] - Message displayed after attach tier is misleading
-- [1277984](https://bugzilla.redhat.com/1277984): Upgrading to 3.7.-5-5 has changed volume to distributed disperse
-- [1278578](https://bugzilla.redhat.com/1278578): move mount-nfs-auth.t to failed tests lists
-- [1278603](https://bugzilla.redhat.com/1278603): fix lookup-unhashed for tiered volumes.
-- [1278640](https://bugzilla.redhat.com/1278640): [New] - Files in a tiered volume gets promoted when bitd signs them
-- [1278744](https://bugzilla.redhat.com/1278744): ec-readdir.t is failing consistently
-- [1278850](https://bugzilla.redhat.com/1278850): Tests/tiering: Correct typo in bug-1214222-directories_miising_after_attach_tier.t in bad_tests
-
-### Known Issues:
-
-- Volume commands fail with a "staging failed" message when some nodes in the trusted storage pool have 3.7.6 installed and other nodes have 3.7.5 installed. Upgrade all nodes to recover from this error (one way to check versions across nodes is sketched below). This issue is not seen when upgrading to 3.7.6 from 3.7.4 or earlier.
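-
-A hedged sketch of one way to confirm that every node in the pool runs the same GlusterFS version before issuing volume commands (host names are placeholders):
-
-```
-# host names below are placeholders for the nodes in the trusted storage pool
-for host in node1 node2 node3; do
-    ssh "$host" gluster --version | head -n1
-done
-```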
-
-### Upgrade notes
-
-If upgrading from v3.7.2 or older, please follow instructions in [upgrading-from-3.7.2-or-older](./upgrading-from-3.7.2-or-older.md).
diff --git a/doc/release-notes/3.7.7.md b/doc/release-notes/3.7.7.md
deleted file mode 100644
index cfbc1bd37a7..00000000000
--- a/doc/release-notes/3.7.7.md
+++ /dev/null
@@ -1,171 +0,0 @@
-## Bugs fixed
-The following bugs were fixed in this release.
-
-- [1212676](https://bugzilla.redhat.com/1212676) - NetBSD port
-- [1225567](https://bugzilla.redhat.com/1225567) - [geo-rep]: Traceback "ValueError: filedescriptor out of range in select()" observed while creating huge set of data on master
-- [1250410](https://bugzilla.redhat.com/1250410) - [Backup]: Password of the peer nodes prompted whenever a glusterfind session is deleted.
-- [1251467](https://bugzilla.redhat.com/1251467) - ec sequentializes all reads, limiting read throughput
-- [1257141](https://bugzilla.redhat.com/1257141) - [Backup]: Glusterfind pre attribute '--output-prefix' not working as expected in case of DELETEs
-- [1257546](https://bugzilla.redhat.com/1257546) - [Backup]: Glusterfind list shows the session as corrupted on the peer node
-- [1257710](https://bugzilla.redhat.com/1257710) - Copy NFS-Ganesha export files as part of volume snapshot creation
-- [1258594](https://bugzilla.redhat.com/1258594) - build: compile error on RHEL5
-- [1262860](https://bugzilla.redhat.com/1262860) - Data Tiering: Tiering daemon is seeing each part of a file in a Disperse cold volume as a different file
-- [1264441](https://bugzilla.redhat.com/1264441) - Data Tiering:Regression:Detach tier commit is passing when detach tier is in progress
-- [1266880](https://bugzilla.redhat.com/1266880) - Tiering: unlink failed with error "Invalid argument"
-- [1269702](https://bugzilla.redhat.com/1269702) - Glusterfsd crashes on pmap signin failure
-- [1272007](https://bugzilla.redhat.com/1272007) - tools/glusterfind: add query command to list files without session
-- [1272926](https://bugzilla.redhat.com/1272926) - libgfapi: brick process crashes if attr KEY length > 255 for glfs_lgetxattr(...)
-- [1274100](https://bugzilla.redhat.com/1274100) - need a way to pause/stop tiering to take snapshot
-- [1275173](https://bugzilla.redhat.com/1275173) - geo-replication: [RFE] Geo-replication + Tiering
-- [1276907](https://bugzilla.redhat.com/1276907) - Arbiter volume becomes replica volume in some cases
-- [1277390](https://bugzilla.redhat.com/1277390) - snap-max-hard-limit for snapshots always shows as 256 in info file.
-- [1278640](https://bugzilla.redhat.com/1278640) - Files in a tiered volume gets promoted when bitd signs them
-- [1278744](https://bugzilla.redhat.com/1278744) - ec-readdir.t is failing consistently
-- [1279059](https://bugzilla.redhat.com/1279059) - [Tier]: restarting volume reports "insert/update failure" in cold brick logs
-- [1279095](https://bugzilla.redhat.com/1279095) - I/O failure on attaching tier on nfs client
-- [1279306](https://bugzilla.redhat.com/1279306) - Dist-geo-rep : checkpoint doesn't reach even though all the files have been synced through hybrid crawl.
-- [1279309](https://bugzilla.redhat.com/1279309) - Message shown in gluster vol tier <volname> status output is incorrect.
-- [1279331](https://bugzilla.redhat.com/1279331) - quota: removexattr on /d/backends/patchy/.glusterfs/79/99/799929ec-f546-4bbf-8549-801b79623262 (for trusted.glusterfs.quota.add7e3f8-833b-48ec-8a03-f7cd09925468.contri) [No such file or directory]
-- [1279345](https://bugzilla.redhat.com/1279345) - Fails to build twice in a row
-- [1279351](https://bugzilla.redhat.com/1279351) - [GlusterD]: Volume start fails post add-brick on a volume which is not started
-- [1279362](https://bugzilla.redhat.com/1279362) - Monitor should restart the worker process when Changelog agent dies
-- [1279644](https://bugzilla.redhat.com/1279644) - Starting geo-rep session
-- [1279776](https://bugzilla.redhat.com/1279776) - stop-all-gluster-processes.sh doesn't return correct return status
-- [1280715](https://bugzilla.redhat.com/1280715) - fops-during-migration-pause.t spurious failure
-- [1281226](https://bugzilla.redhat.com/1281226) - Remove selinux mount option from "man mount.glusterfs"
-- [1281893](https://bugzilla.redhat.com/1281893) - packaging: gfind_missing_files are not in geo-rep %if ... %endif conditional
-- [1282315](https://bugzilla.redhat.com/1282315) - Data Tiering:Metadata changes to a file should not heat/promote the file
-- [1282465](https://bugzilla.redhat.com/1282465) - [Backup]: Crash observed when keyboard interrupt is encountered in the middle of any glusterfind command
-- [1282675](https://bugzilla.redhat.com/1282675) - ./tests/basic/tier/record-metadata-heat.t is failing upstream
-- [1283036](https://bugzilla.redhat.com/1283036) - Index entries are not being purged in case of file does not exist
-- [1283038](https://bugzilla.redhat.com/1283038) - libgfapi to support set_volfile-server-transport type "unix"
-- [1283060](https://bugzilla.redhat.com/1283060) - [RFE] Geo-replication support for Volumes running in docker containers
-- [1283107](https://bugzilla.redhat.com/1283107) - Setting security.* xattrs fails
-- [1283138](https://bugzilla.redhat.com/1283138) - core dump in protocol/client:client_submit_request
-- [1283142](https://bugzilla.redhat.com/1283142) - glusterfs does not register with rpcbind on restart
-- [1283187](https://bugzilla.redhat.com/1283187) - [GlusterD]: Incorrect peer status showing if volume restart done before entire cluster update.
-- [1283288](https://bugzilla.redhat.com/1283288) - cache mode must be the default mode for tiered volumes
-- [1283302](https://bugzilla.redhat.com/1283302) - volume start command is failing when glusterfs compiled with debug enabled
-- [1283473](https://bugzilla.redhat.com/1283473) - Dist-geo-rep: Too many "remote operation failed: No such file or directory" warning messages in auxiliary mount log on slave while executing "rm -rf"
-- [1283478](https://bugzilla.redhat.com/1283478) - While file is self healing append to the file hangs
-- [1283480](https://bugzilla.redhat.com/1283480) - Data Tiering:Rename of cold file to a hot file causing split brain and showing two copies of files in mount point
-- [1283568](https://bugzilla.redhat.com/1283568) - quota/marker: backward compatibility with quota xattr versioning
-- [1283570](https://bugzilla.redhat.com/1283570) - Better indication of arbiter brick presence in a volume.
-- [1283679](https://bugzilla.redhat.com/1283679) - remove mount-nfs-auth.t from bad tests lists
-- [1283756](https://bugzilla.redhat.com/1283756) - self-heal won't work in disperse volumes when they are attached as tiers
-- [1283757](https://bugzilla.redhat.com/1283757) - EC: File healing promotes it to hot tier
-- [1283833](https://bugzilla.redhat.com/1283833) - Warning messages seen in glusterd logs in executing gluster volume set help
-- [1283856](https://bugzilla.redhat.com/1283856) - [Tier]: Space is missed b/w the words in the detach tier stop error message
-- [1283881](https://bugzilla.redhat.com/1283881) - BitRot :- Data scrubbing status is not available
-- [1283923](https://bugzilla.redhat.com/1283923) - Data Tiering: "ls" count taking link files and promote/demote files into consideration both on fuse and nfs mount
-- [1283956](https://bugzilla.redhat.com/1283956) - Self-heal triggered every couple of seconds in a 3-node 1-arbiter setup
-- [1284453](https://bugzilla.redhat.com/1284453) - Dist-geo-rep: Support geo-replication to work with sharding
-- [1284737](https://bugzilla.redhat.com/1284737) - Geo-replication is logging in Localtime
-- [1284746](https://bugzilla.redhat.com/1284746) - tests/geo-rep: Existing geo-rep regression test suite is time consuming.
-- [1284850](https://bugzilla.redhat.com/1284850) - Resource leak in marker
-- [1284863](https://bugzilla.redhat.com/1284863) - Full heal of volume fails on some nodes "Commit failed on X", and glustershd logs "Couldn't get xlator xl-0"
-- [1285139](https://bugzilla.redhat.com/1285139) - Extending writes filling incorrect final size in postbuf
-- [1285168](https://bugzilla.redhat.com/1285168) - vol heal info fails when transport.socket.bind-address is set in glusterd
-- [1285174](https://bugzilla.redhat.com/1285174) - Create doesn't remember flags it is opened with
-- [1285335](https://bugzilla.redhat.com/1285335) - [Tier]: Stopping and Starting tier volume triggers fixing layout which fails on local host
-- [1285629](https://bugzilla.redhat.com/1285629) - Snapshot creation after attach-tier causes glusterd crash
-- [1285688](https://bugzilla.redhat.com/1285688) - sometimes files are not getting demoted from hot tier to cold tier
-- [1285758](https://bugzilla.redhat.com/1285758) - Brick crashes because of race in bit-rot init
-- [1285762](https://bugzilla.redhat.com/1285762) - reads fail on sharded volume while running iozone
-- [1285793](https://bugzilla.redhat.com/1285793) - Data Tiering: Change the error message when a detach-tier status is issued on a non-tier volume
-- [1285961](https://bugzilla.redhat.com/1285961) - glusterfsd to support volfile-server-transport type "unix"
-- [1285978](https://bugzilla.redhat.com/1285978) - AFR self-heal-daemon option is still set on volume though tier is detached
-- [1286169](https://bugzilla.redhat.com/1286169) - We need to skip data self-heal for arbiter bricks
-- [1286517](https://bugzilla.redhat.com/1286517) - cli/geo-rep : remove unused code
-- [1286601](https://bugzilla.redhat.com/1286601) - vol quota enable fails when transport.socket.bind-address is set in glusterd
-- [1286985](https://bugzilla.redhat.com/1286985) - Tier: ec xattrs are set on a newly created file present in the non-ec hot tier
-- [1287079](https://bugzilla.redhat.com/1287079) - nfs-ganesha: Upcall sent on null gfid
-- [1287456](https://bugzilla.redhat.com/1287456) - [geo-rep]: Recommended Shared volume use on geo-replication is broken
-- [1287531](https://bugzilla.redhat.com/1287531) - Perf: Metadata operation(ls -l) performance regression.
-- [1287538](https://bugzilla.redhat.com/1287538) - [Snapshot]: Clone creation fails on tiered volume with pre-validation failed message
-- [1287560](https://bugzilla.redhat.com/1287560) - Data Tiering:Don't allow or reset the frequency threshold values to zero when record counter features.record-counter is turned off
-- [1287583](https://bugzilla.redhat.com/1287583) - Data Tiering:Read heat not getting calculated and read operations not heating the file with counter enabled
-- [1287597](https://bugzilla.redhat.com/1287597) - [upgrade] Error messages seen in glusterd logs, while upgrading from RHGS 2.1.6 to RHGS 3.1
-- [1287877](https://bugzilla.redhat.com/1287877) - glusterfs does not allow passing standard SElinux mount options to fuse
-- [1287960](https://bugzilla.redhat.com/1287960) - Geo-Replication fails on uppercase hostnames
-- [1288027](https://bugzilla.redhat.com/1288027) - [geo-rep+tiering]: symlinks are not getting synced to slave on tiered master setup
-- [1288030](https://bugzilla.redhat.com/1288030) - Clone creation should not be successful when the node participating in volume goes down.
-- [1288052](https://bugzilla.redhat.com/1288052) - [Quota]: Peer status is in "Rejected" state with Quota enabled volume
-- [1288056](https://bugzilla.redhat.com/1288056) - glusterd: all the daemon's of existing volume stopping upon peer detach
-- [1288060](https://bugzilla.redhat.com/1288060) - glusterd: disable ping timer b/w glusterd and make epoll thread count default 1
-- [1288352](https://bugzilla.redhat.com/1288352) - Few snapshot creation fails with pre-validation failed message on tiered volume.
-- [1288484](https://bugzilla.redhat.com/1288484) - tiering: quota list command is not working after attach or detach
-- [1288716](https://bugzilla.redhat.com/1288716) - add bug-924726.t to ignore list in regression
-- [1288922](https://bugzilla.redhat.com/1288922) - Use after free bug in notify_kernel_loop in fuse-bridge code
-- [1288963](https://bugzilla.redhat.com/1288963) - [GlusterD]Probing a node having standalone volume, should not happen
-- [1288992](https://bugzilla.redhat.com/1288992) - Possible memory leak in the tiered daemon
-- [1289063](https://bugzilla.redhat.com/1289063) - quota cli: enhance quota list command to list usage even if the limit is not set
-- [1289414](https://bugzilla.redhat.com/1289414) - [tiering]: Tier daemon crashed on two of eight nodes and lot of "demotion failed" seen in the system
-- [1289570](https://bugzilla.redhat.com/1289570) - Iozone on sharded volume fails on NFS
-- [1289602](https://bugzilla.redhat.com/1289602) - After detach-tier start writes still go to hot tier
-- [1289898](https://bugzilla.redhat.com/1289898) - Without detach tier commit, status changes back to tier migration
-- [1290048](https://bugzilla.redhat.com/1290048) - [Tier]: Failed to open "demotequeryfile-master-tier-dht" errors logged on the node having only cold bricks
-- [1290295](https://bugzilla.redhat.com/1290295) - tiering: Seeing error messages E "/usr/lib64/glusterfs/3.7.5/xlator/features/changetimerecorder.so(ctr_lookup+0x54f) [0x7f6c435c116f] ) 0-ctr: invalid argument: loc->name [Invalid argument] after attach tier
-- [1290363](https://bugzilla.redhat.com/1290363) - Data Tiering:File create terminates with "Input/output error" as split brain is observed
-- [1290532](https://bugzilla.redhat.com/1290532) - Several intermittent regression failures
-- [1290534](https://bugzilla.redhat.com/1290534) - Minor improvements and cleanup for the build system
-- [1290655](https://bugzilla.redhat.com/1290655) - Sharding: Remove dependency on performance.strict-write-ordering
-- [1290658](https://bugzilla.redhat.com/1290658) - tests/basic/afr/arbiter-statfs.t fails most of the times on NetBSD
-- [1290719](https://bugzilla.redhat.com/1290719) - Geo-replication doesn't deal properly with sparse files
-- [1291002](https://bugzilla.redhat.com/1291002) - File is not demoted after self heal (split-brain)
-- [1291046](https://bugzilla.redhat.com/1291046) - spurious failure of bug-1279376-rename-demoted-file.t
-- [1291208](https://bugzilla.redhat.com/1291208) - Regular files are listed as 'T' files on nfs mount
-- [1291546](https://bugzilla.redhat.com/1291546) - bitrot: bitrot scrub status command should display the correct value of total number of scrubbed, unsigned file
-- [1291557](https://bugzilla.redhat.com/1291557) - Data Tiering:File create terminates with "Input/output error" as split brain is observed
-- [1291970](https://bugzilla.redhat.com/1291970) - Data Tiering: new set of gluster v tier commands not working as expected
-- [1291985](https://bugzilla.redhat.com/1291985) - store afr pending xattrs as a volume option
-- [1292046](https://bugzilla.redhat.com/1292046) - Renames/deletes failed with "No such file or directory" when few of the bricks from the hot tier went offline
-- [1292254](https://bugzilla.redhat.com/1292254) - hook script for CTDB should not change Samba config
-- [1292359](https://bugzilla.redhat.com/1292359) - [tiering]: read/write freq-threshold allows negative values
-- [1292697](https://bugzilla.redhat.com/1292697) - Symlinks Rename fails in Symlink not exists in Slave
-- [1292755](https://bugzilla.redhat.com/1292755) - S30Samba scripts do not work on systemd systems
-- [1292945](https://bugzilla.redhat.com/1292945) - [tiering]: cluster.tier-max-files option in tiering is not honored
-- [1293224](https://bugzilla.redhat.com/1293224) - Disperse: Disperse volume (cold vol) crashes while writing files on tier volume
-- [1293265](https://bugzilla.redhat.com/1293265) - md5sum of files mismatch after the self-heal is complete on the file
-- [1293300](https://bugzilla.redhat.com/1293300) - Detach tier fails to migrate the files when there are corrupted objects in hot tier.
-- [1293309](https://bugzilla.redhat.com/1293309) - [georep+tiering]: Geo-replication sync is broken if cold tier is EC
-- [1293342](https://bugzilla.redhat.com/1293342) - Data Tiering:Watermark:File continuously trying to demote itself but failing " [dht-rebalance.c:608:__dht_rebalance_create_dst_file] 0-wmrk-tier-dht: chown failed for //AP.BH.avi on wmrk-cold-dht (No such file or directory)"
-- [1293348](https://bugzilla.redhat.com/1293348) - first file created after hot tier full fails to create, but gets database entry and later ends up as a stale erroneous file (file with ???????????)
-- [1293536](https://bugzilla.redhat.com/1293536) - afr: warn if pending xattrs missing during init()
-- [1293584](https://bugzilla.redhat.com/1293584) - Corrupted objects list does not get cleared even after all the files in the volume are deleted and count increases as old + new count
-- [1293595](https://bugzilla.redhat.com/1293595) - [geo-rep]: ChangelogException: [Errno 22] Invalid argument observed upon rebooting the ACTIVE master node
-- [1293659](https://bugzilla.redhat.com/1293659) - Creation of files on hot tier volume taking very long time
-- [1293698](https://bugzilla.redhat.com/1293698) - [Tier]: start tier daemon using rebal tier start doesn't start tierd if it has failed on any single node
-- [1293827](https://bugzilla.redhat.com/1293827) - fops-during-migration.t fails if hot and cold tiers are dist-rep
-- [1294410](https://bugzilla.redhat.com/1294410) - Friend update floods can render the cluster incapable of handling other commands
-- [1294608](https://bugzilla.redhat.com/1294608) - quota: limit xattr not healed for a sub-directory on a newly added bricks
-- [1294609](https://bugzilla.redhat.com/1294609) - quota: handle quota xattr removal when quota is enabled again
-- [1294797](https://bugzilla.redhat.com/1294797) - "Transport endpoint not connected" in heal info though hot tier bricks are up
-- [1294942](https://bugzilla.redhat.com/1294942) - [tiering]: Incorrect display of 'gluster v tier help'
-- [1294954](https://bugzilla.redhat.com/1294954) - tier-snapshot.t runs too slowly on RHEL6
-- [1294969](https://bugzilla.redhat.com/1294969) - Large system file distribution is broken
-- [1296024](https://bugzilla.redhat.com/1296024) - Unable to modify quota hard limit on tier volume after disk limit got exceeded
-- [1296108](https://bugzilla.redhat.com/1296108) - xattrs on directories are unavailable on distributed replicated volume after adding new bricks
-- [1296795](https://bugzilla.redhat.com/1296795) - Good files does not promoted in a tiered volume when bitrot is enabled
-- [1296996](https://bugzilla.redhat.com/1296996) - Stricter dependencies for glusterfs-server
-- [1297213](https://bugzilla.redhat.com/1297213) - Stale stat information for corrupted objects (replicated volume)
-- [1297305](https://bugzilla.redhat.com/1297305) - [GlusterD]: Peer detach happening with a node which is hosting volume bricks
-- [1297309](https://bugzilla.redhat.com/1297309) - Rebalance crashed after detach tier.
-- [1297862](https://bugzilla.redhat.com/1297862) - Ganesha hook script executes showmount and causes a hang
-- [1299314](https://bugzilla.redhat.com/1299314) - glusterfs crash during load testing
-- [1299712](https://bugzilla.redhat.com/1299712) - [HC] Implement fallocate, discard and zerofill with sharding
-- [1299822](https://bugzilla.redhat.com/1299822) - Snapshot creation fails on a tiered volume
-- [1300174](https://bugzilla.redhat.com/1300174) - volume info xml does not show arbiter details
-- [1300210](https://bugzilla.redhat.com/1300210) - Fix sparse-file-self-heal.t and remove from bad tests
-- [1300243](https://bugzilla.redhat.com/1300243) - Quota Aux mount crashed
-- [1300600](https://bugzilla.redhat.com/1300600) - tests/bugs/quota/bug-1049323.t fails in fedora
-- [1300924](https://bugzilla.redhat.com/1300924) - Fix mem leaks related to gfapi applications
-- [1300978](https://bugzilla.redhat.com/1300978) - I/O failure during a graph change followed by an option change.
-- [1302012](https://bugzilla.redhat.com/1302012) - [Tiering]: Values of watermarks, min free disk etc will be miscalculated with quota set on root directory of gluster volume
-- [1302199](https://bugzilla.redhat.com/1302199) - Scrubber crash (list corruption)
-- [1302521](https://bugzilla.redhat.com/1302521) - Improve error message for unsupported clients
-- [1302943](https://bugzilla.redhat.com/1302943) - Lot of Inode not found messages in glfsheal log file
-
-## Upgrade notes
-
-If upgrading from v3.7.2 or older, please follow instructions in [upgrading-from-3.7.2-or-older](./upgrading-from-3.7.2-or-older.md).
diff --git a/doc/release-notes/3.7.8.md b/doc/release-notes/3.7.8.md
deleted file mode 100644
index f4d969575d0..00000000000
--- a/doc/release-notes/3.7.8.md
+++ /dev/null
@@ -1,24 +0,0 @@
-# Release notes for GlusterFS-v3.7.8
-GlusterFS-v3.7.8 is a quick bugfix release that solves a bug in 3.7.7 which prevented rolling updates from completing successfully.
-
-Release 3.7.7 included two changes to the AFR xlator which broke rolling updates from pre-3.7.7 releases. The two offending patches have been reverted in 3.7.8 until a proper fix is found. The revert commits are:
-
-- de6e920 Revert "glusterd/afr: store afr pending xattrs as a volume option"
-- d35e386 Revert "afr: warn if pending xattrs missing during init()"
-
-
-## Bugs fixed
-The following bugs have been fixed in addition to the above two reverts:
-
-- [1304889](https://bugzilla.redhat.com/1304889) - Memory leak in dht
-- [1303899](https://bugzilla.redhat.com/1303899) - heal info reporting slow when IO is in progress on the volume
-- [1302955](https://bugzilla.redhat.com/1302955) - Hook scripts are not installed after make install
-- [1279331](https://bugzilla.redhat.com/1279331) - quota: removexattr on /d/backends/patchy/.glusterfs/79/99/799929ec-f546-4bbf-8549-801b79623262 (for trusted.glusterfs.quota.add7e3f8-833b-48ec-8a03-f7cd09925468.contri) [No such file or directory]
-- [1288857](https://bugzilla.redhat.com/1288857) - Use after free bug in notify_kernel_loop in fuse-bridge code
-- [1288922](https://bugzilla.redhat.com/1288922) - Use after free bug in notify_kernel_loop in fuse-bridge code
-- [1296400](https://bugzilla.redhat.com/1296400) - Fix spurious failure in bug-1221481-allow-fops-on-dir-split-brain.t
-
-
-## Upgrade notes
-
-If upgrading from v3.7.2 or older, please follow instructions in [upgrading-from-3.7.2-or-older](./upgrading-from-3.7.2-or-older.md).
diff --git a/doc/release-notes/3.7.9.md b/doc/release-notes/3.7.9.md
deleted file mode 100644
index 1d2c00111e0..00000000000
--- a/doc/release-notes/3.7.9.md
+++ /dev/null
@@ -1,134 +0,0 @@
-# Release notes for GlusterFS v3.7.9
-GlusterFS v3.7.9 is a bugfix release. It contains several bug fixes for better stability and usability.
-
-The Data Tiering feature has received several bug fixes in 3.7.9. The following is the state of tiering as of 3.7.9:
-- Performance tests indicate decent performance when the workload fits in the hot tier. The exception is small-file tests with an erasure coded volume as the cold tier, where performance is not as good as expected (root-cause analysis is in progress).
-- When attaching a tier, tiering does not start until fix-layout completes, which can take some time. Patch 13491, which fixes this, is under review and is slated for a subsequent release.
-- Counters currently show only the number of promotions and demotions (see the sample status output below); other, more useful statistics still need to be devised.
-
-More testing feedback on tiering would be welcome.
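-
-The promotion/demotion counters mentioned above can be inspected with the tier status command; a hedged sketch (volume and node names are placeholders, and the exact columns and formatting vary by version):
-
-```
-gluster volume tier <VOLNAME> status
-Node                 Promoted files       Demoted files        Status
----------            ---------            ---------            ---------
-localhost            12                   3                    in progress
-node2                0                    5                    in progress
-```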
-
-## Bugs fixed
-The following bugs have been fixed in 3.7.9:
-- [1317959](https://bugzilla.redhat.com/1317959) - inode ref leaks with perf-test.sh
-- [1318203](https://bugzilla.redhat.com/1318203) - Tiering should break out of iterating query file once cycle time completes.
-- [1314680](https://bugzilla.redhat.com/1314680) - Speed up regression tests
-- [1301030](https://bugzilla.redhat.com/1301030) - [Snapshot]: Snapshot restore stucks in post validation.
-- [1311377](https://bugzilla.redhat.com/1311377) - Memory leak in glusterd
-- [1309462](https://bugzilla.redhat.com/1309462) - Upgrade from 3.7.6 to 3.7.8 causes massive drop in write performance. Fresh install of 3.7.8 also has low write performance
-- [1315935](https://bugzilla.redhat.com/1315935) - glusterfs-libs postun scriptlet fail /sbin/ldconfig: relative path `1' used to build cache
-- [1315552](https://bugzilla.redhat.com/1315552) - glusterfs brick process crashed
-- [1299712](https://bugzilla.redhat.com/1299712) - [HC] Implement fallocate, discard and zerofill with sharding
-- [1315557](https://bugzilla.redhat.com/1315557) - SEEK_HOLE and SEEK_DATA should return EINVAL when protocol support is missing
-- [1315639](https://bugzilla.redhat.com/1315639) - Glusterfind hook script failing if /var/lib/glusterd/glusterfind dir was absent
-- [1312762](https://bugzilla.redhat.com/1312762) - [geo-rep]: Session goes to faulty with Errno 13: Permission denied
-- [1314641](https://bugzilla.redhat.com/1314641) - Encrypted rpc clients do not reconnect sometimes
-- [1296175](https://bugzilla.redhat.com/1296175) - geo-rep: hard-link rename issue on changelog replay
-- [1315939](https://bugzilla.redhat.com/1315939) - 'gluster volume get' returns 0 value for server-quorum-ratio
-- [1315562](https://bugzilla.redhat.com/1315562) - setting lower op-version should throw failure message
-- [1297209](https://bugzilla.redhat.com/1297209) - no-mtab (-n) mount option ignore next mount option
-- [1296208](https://bugzilla.redhat.com/1296208) - Geo-Replication Session goes "FAULTY" when application logs rolled on master
-- [1315582](https://bugzilla.redhat.com/1315582) - Geo-replication CPU usage is 100%
-- [1314617](https://bugzilla.redhat.com/1314617) - Data Tiering:Don't allow a detach-tier commit if detach-tier start has failed to complete
-- [1305749](https://bugzilla.redhat.com/1305749) - Vim commands from a non-root user fails to execute on fuse mount with trash feature enabled
-- [1309191](https://bugzilla.redhat.com/1309191) - [RFE] Schedule Geo-replication
-- [1313309](https://bugzilla.redhat.com/1313309) - Handle Rsync/Tar errors effectively
-- [1313310](https://bugzilla.redhat.com/1313310) - [RFE]Add --no-encode option to the `glusterfind pre` command
-- [1313311](https://bugzilla.redhat.com/1313311) - Dist-geo-rep : geo-rep worker crashed while init with [Errno 34] Numerical result out of range.
-- [1302979](https://bugzilla.redhat.com/1302979) - [georep+tiering]: Hardlink sync is broken if master volume is tiered
-- [1311865](https://bugzilla.redhat.com/1311865) - Data Tiering:Lot of Promotions/Demotions failed error messages
-- [1257012](https://bugzilla.redhat.com/1257012) - Fix the tests infra
-- [1313131](https://bugzilla.redhat.com/1313131) - quarantine folder becomes empty and bitrot status does not list any files which are corrupted
-- [1313623](https://bugzilla.redhat.com/1313623) - [georep+disperse]: Geo-Rep session went to faulty with errors "[Errno 5] Input/output error"
-- [1313921](https://bugzilla.redhat.com/1313921) - Incorrect file size on mount if stat is served from the arbiter brick.
-- [1315140](https://bugzilla.redhat.com/1315140) - Statedump crashes in open-behind because of NULL dereference
-- [1315142](https://bugzilla.redhat.com/1315142) - Move away from gf_log completely to gf_msg
-- [1315008](https://bugzilla.redhat.com/1315008) - glusterfs-server %post script is not quiet, prints "success" during installation
-- [1302202](https://bugzilla.redhat.com/1302202) - Unable to get the client statedump, as /var/run/gluster directory is not available by default
-- [1313233](https://bugzilla.redhat.com/1313233) - Wrong permissions set on previous copy of truncated files inside trash directory
-- [1313313](https://bugzilla.redhat.com/1313313) - Gluster manpage doesn't show georeplication options
-- [1315009](https://bugzilla.redhat.com/1315009) - glusterd: coverity warning in glusterd-snapshot-utils.c copy_nfs_ganesha_file()
-- [1313315](https://bugzilla.redhat.com/1313315) - [HC] glusterfs mount crashed
-- [1283757](https://bugzilla.redhat.com/1283757) - EC: File healing promotes it to hot tier
-- [1314571](https://bugzilla.redhat.com/1314571) - rsyslog can't be completely removed due to dependency in libglusterfs
-- [1313302](https://bugzilla.redhat.com/1313302) - quota: reduce latency for testcase ./tests/bugs/quota/bug-1293601.t
-- [1312954](https://bugzilla.redhat.com/1312954) - quota: xattr trusted.glusterfs.quota.limit-objects not healed on a root of newly added brick
-- [1314164](https://bugzilla.redhat.com/1314164) - glusterd: does not start
-- [1313339](https://bugzilla.redhat.com/1313339) - features.sharding is not available in 'gluster volume set help'
-- [1314548](https://bugzilla.redhat.com/1314548) - tier: GCC throws Unused variable warning for conf in tier_link_cbk function
-- [1314204](https://bugzilla.redhat.com/1314204) - nfs-ganesha setup fails on fedora
-- [1312878](https://bugzilla.redhat.com/1312878) - Glusterd: Creation of volume is failing if one of the brick is down on the server
-- [1313776](https://bugzilla.redhat.com/1313776) - ec-read-policy.t can report a false-failure
-- [1293224](https://bugzilla.redhat.com/1293224) - Disperse: Disperse volume (cold vol) crashes while writing files on tier volume
-- [1313448](https://bugzilla.redhat.com/1313448) - Readdirp op_ret is modified by client xlator in case of xdata_rsp presence
-- [1311822](https://bugzilla.redhat.com/1311822) - rebalance : output of rebalance status should show ' run time ' in proper format (day,hour:min:sec)
-- [1312721](https://bugzilla.redhat.com/1312721) - tar complains: <fileName>: file changed as we read it
-- [1311445](https://bugzilla.redhat.com/1311445) - Implement inode_forget_cbk() similar fops in gfapi
-- [1312623](https://bugzilla.redhat.com/1312623) - gluster vol get volname user.metadata-text" Command fails with "volume get option: failed: Did you mean cluster.metadata-self-heal?"
-- [1311572](https://bugzilla.redhat.com/1311572) - tests : remove brick command execution displays success even after, one of the bricks down.
-- [1311451](https://bugzilla.redhat.com/1311451) - Add missing release-notes for release-3.7
-- [1311441](https://bugzilla.redhat.com/1311441) - Fix mem leaks related to gfapi applications
-- [1309238](https://bugzilla.redhat.com/1309238) - Issues with refresh-config when the ".export_added" has different values on different nodes
-- [1312200](https://bugzilla.redhat.com/1312200) - Handle negative fcntl flock->l_len values
-- [1306131](https://bugzilla.redhat.com/1306131) - Attach tier : Creates fail with invalid argument errors
-- [1311836](https://bugzilla.redhat.com/1311836) - [Tier]: Endup in multiple entries of same file on client after rename which had a hardlinks
-- [1306129](https://bugzilla.redhat.com/1306129) - promotions not happening when space is created on previously full hot tier
-- [1290865](https://bugzilla.redhat.com/1290865) - nfs-ganesha server do not enter grace period during failover/failback
-- [1304889](https://bugzilla.redhat.com/1304889) - Memory leak in dht
-- [1311411](https://bugzilla.redhat.com/1311411) - gfapi : listxattr is broken for handle ops.
-- [1311041](https://bugzilla.redhat.com/1311041) - Tiering status and rebalance status stops getting updated
-- [1309233](https://bugzilla.redhat.com/1309233) - cd to .snaps fails with "transport endpoint not connected" after force start of the volume.
-- [1311043](https://bugzilla.redhat.com/1311043) - [RFE] While creating a snapshot the timestamp has to be appended to the snapshot name.
-- [1310999](https://bugzilla.redhat.com/1310999) - " Failed to aggregate response from node/brick"
-- [1310969](https://bugzilla.redhat.com/1310969) - glusterd logs are filled with "readv on /var/run/a30ad20ae7386a2fe58445b1a2b1359c.socket failed (Invalid argument)"
-- [1310972](https://bugzilla.redhat.com/1310972) - After GlusterD restart, Remove-brick commit happening even though data migration not completed.
-- [1310632](https://bugzilla.redhat.com/1310632) - Newly created volume start, starting the bricks when server quorum not met
-- [1296007](https://bugzilla.redhat.com/1296007) - libgfapi: Errno incorrectly set to EINVAL even on success
-- [1308415](https://bugzilla.redhat.com/1308415) - DHT : for many operation directory/file path is '(null)' in brick log
-- [1308410](https://bugzilla.redhat.com/1308410) - for each Directory which was self healed
-- [1310544](https://bugzilla.redhat.com/1310544) - DHT: Take blocking locks while renaming files
-- [1295359](https://bugzilla.redhat.com/1295359) - tiering: T files getting created , even after disk quota exceeds
-- [1295360](https://bugzilla.redhat.com/1295360) - [Tier]: can not delete symlinks from client using rm
-- [1295347](https://bugzilla.redhat.com/1295347) - [Tier]: "Bad file descriptor" on removal of symlink only on tiered volume
-- [1282388](https://bugzilla.redhat.com/1282388) - Data Tiering:delete command rm -rf not deleting files the linkto file(hashed) which are under migration and possible spit-brain observed and possible disk wastage
-- [1295365](https://bugzilla.redhat.com/1295365) - [Tier]: Killing glusterfs tier process doesn't reflect as failed/faulty in tier status
-- [1289031](https://bugzilla.redhat.com/1289031) - RFE:nfs-ganesha:prompt the nfs-ganesha disable cli to let user provide "yes or no" option
-- [1302528](https://bugzilla.redhat.com/1302528) - Remove brick command execution displays success even after, one of the bricks down.
-- [1254430](https://bugzilla.redhat.com/1254430) - glusterfs dead when user creates a rdma volume
-- [1308414](https://bugzilla.redhat.com/1308414) - AFR: cluster options like data-self-heal, metadata-self-heal and entry-self-heal should not be allowed to set, if volume is not distribute-replicate volume
-- [1285829](https://bugzilla.redhat.com/1285829) - AFR: 3-way-replication: Transport point not connected error messaged not displayed when one of the replica pair is down
-- [1308400](https://bugzilla.redhat.com/1308400) - dht: NULL layouts referenced while the I/O is going on tiered volume
-- [1304963](https://bugzilla.redhat.com/1304963) - [GlusterD]: After log rotate of cmd_history.log file, the next executed gluster commands are not present in the cmd_history.log file.
-- [1308800](https://bugzilla.redhat.com/1308800) - access-control : spurious error log message on every setxattr call
-- [1288352](https://bugzilla.redhat.com/1288352) - Few snapshot creation fails with pre-validation failed message on tiered volume.
-- [1304692](https://bugzilla.redhat.com/1304692) - hardcoded gsyncd path causes geo-replication to fail on non-redhat systems
-- [1306922](https://bugzilla.redhat.com/1306922) - Self heal command gives error "Launching heal operation to perform index self heal on volume vol0 has been unsuccessful"
-- [1305256](https://bugzilla.redhat.com/1305256) - GlusterD restart, starting the bricks when server quorum not met
-- [1246121](https://bugzilla.redhat.com/1246121) - Disperse volume : client glusterfs crashed while running IO
-- [1293534](https://bugzilla.redhat.com/1293534) - guest paused due to IO error from gluster based storage doesn't resume automatically or manually
-- [1302962](https://bugzilla.redhat.com/1302962) - Rebalance process crashed during cleanup_and_exit
-- [1306163](https://bugzilla.redhat.com/1306163) - [USS]: If .snaps already exists, ls -la lists it even after enabling USS
-- [1305868](https://bugzilla.redhat.com/1305868) - [USS]: Need defined rules for snapshot-directory, setting to a/b works but in linux a/b is b is subdirectory of a
-- [1305742](https://bugzilla.redhat.com/1305742) - Lot of assertion failures are seen in nfs logs with disperse volume
-- [1306514](https://bugzilla.redhat.com/1306514) - promotions not balanced across hot tier sub-volumes
-- [1306302](https://bugzilla.redhat.com/1306302) - Data Tiering:Change the default tiering values to optimize tiering settings
-- [1306738](https://bugzilla.redhat.com/1306738) - gluster volume heal info takes extra 2 seconds
-- [1305029](https://bugzilla.redhat.com/1305029) - [quota]: Incorrect disk usage shown on a tiered volume
-- [1306136](https://bugzilla.redhat.com/1306136) - Able to create files when quota limit is set to 0
-- [1299712](https://bugzilla.redhat.com/1299712) - [HC] Implement fallocate, discard and zerofill with sharding
-- [1306138](https://bugzilla.redhat.com/1306138) - [tiering]: Quota object limits not adhered to, in a tiered volume
-- [1296040](https://bugzilla.redhat.com/1296040) - [tiering]: Tiering isn't started after attaching hot tier and hence no promotion/demotion
-- [1305755](https://bugzilla.redhat.com/1305755) - Start self-heal and display correct heal info after replace brick
-- [1303033](https://bugzilla.redhat.com/1303033) - tests : Modifying tests for crypt xlator
-- [1305428](https://bugzilla.redhat.com/1305428) - [Fuse: ] crash while --attribute-timeout and -entry-timeout are set to 0
diff --git a/doc/release-notes/geo-rep-in-3.7 b/doc/release-notes/geo-rep-in-3.7
deleted file mode 100644
index a83b6c31931..00000000000
--- a/doc/release-notes/geo-rep-in-3.7
+++ /dev/null
@@ -1,211 +0,0 @@
-### Improved node fail-over handling using the Gluster Meta Volume
-
-In replica pairs, one Geo-rep worker should be active and all the
-other replica workers should be passive. When the active worker goes
-down, a passive worker becomes active. In previous releases this
-election was based on node-uuid, but it is now based on a lock file in
-the Meta Volume. Active/passive selection is therefore more accurate,
-and scenarios with multiple active workers are minimized.
-
-Geo-rep also works without a Meta Volume, so this feature is backward
-compatible. The config option `use_meta_volume` is False by default and
-can be turned on with the geo-rep config command (see the sketch
-below). Without this feature, Geo-rep works as it did in previous
-releases.
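-
-A minimal sketch of enabling it, assuming an existing session between
-a master volume `mastervol` and a slave `slavehost::slavevol` (these
-names are placeholders) and that the shared meta volume is already
-mounted on the master nodes:
-
-```sh
-# Switch active/passive election to the meta-volume lock file (sketch)
-gluster volume geo-replication mastervol slavehost::slavevol \
-    config use_meta_volume true
-```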
-
-Issues if meta_volume is turned off:
-
-1. Multiple workers can become active and participate in syncing,
-leading to duplicated effort and all the issues associated with
-concurrent execution.
-
-2. Failover works only at the node level; if a brick process goes down
-while the node stays up, failover does not happen and syncing is
-delayed.
-
-3. Brick placement for replica 3 requires difficult, carefully
-documented steps, for example that the first brick of each replica set
-should not be placed on the same node.
-
-4. Changelogs from a previously failed node are consumed when it comes
-back, which can lead to issues such as delayed syncing and data
-inconsistencies in the case of renames.
-
-**Fixes**: [1196632](https://bugzilla.redhat.com/show_bug.cgi?id=1196632),
-[1217939](https://bugzilla.redhat.com/show_bug.cgi?id=1217939)
-
-
-### Improved Historical Changelogs consumption
-
-Support for consuming Historical Changelogs was introduced in previous
-releases; with this release it is more stable and improved. Use of the
-filesystem crawl is minimized and limited to the initial sync. In
-previous releases, a node reboot or a brick process going down was
-treated as a Changelog breakage and Geo-rep fell back to XSync for that
-duration. With this release, the Changelog session is considered broken
-only if Changelog is turned off; all other scenarios are treated as
-safe.
-
-This feature is also required by glusterfind.
-
-**Fixes**: [1217944](https://bugzilla.redhat.com/show_bug.cgi?id=1217944)
-
-
-### Improved Status and Checkpoint
-
-Status received many improvements: it now shows accurate details of the
-session info, user info, the slave node to which each master node is
-connected, the last synced time, etc. The initializing time is reduced;
-the status changes as soon as the geo-rep workers are ready (in
-previous releases the Initializing state lasted 60 seconds).
-
-**Fixes**: [1212410](https://bugzilla.redhat.com/show_bug.cgi?id=1212410)
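-
-A quick way to inspect these fields is the detailed status command (a
-sketch; substitute your own volume names):
-
-```sh
-# Per-worker session, user, connected slave node and last-synced details
-gluster volume geo-replication mastervol slavehost::slavevol \
-    status detail
-```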
-
-### Worker Restart improvements
-
-Workers going down and coming back up is very common in geo-rep, for
-reasons such as network failure or the slave node going down. When a
-worker comes back up it has to reprocess changelogs, because it died
-before updating the last synced time. The batch size is now optimized
-so that the amount of reprocessing is minimized.
-
-**Fixes**: [1210965](https://bugzilla.redhat.com/show_bug.cgi?id=1210965)
-
-
-### Improved RENAME handling
-
-When a renamed file's name hashes to a different brick, that brick's
-changelog records the RENAME, while the rest of the fops such as CREATE
-and DATA are recorded on the first brick. Each per-brick Geo-rep worker
-syncs data to the Slave Volume independently, so these operations can
-be applied out of order and the Master and Slave Volumes become
-inconsistent. With the help of the DHT team, RENAMEs are now recorded
-on the same brick where CREATE and DATA are recorded.
-
-**Fixes**: [1141379](https://bugzilla.redhat.com/show_bug.cgi?id=1141379)
-
-
-### Syncing xattrs and acls
-
-Syncing both xattrs and acls to the Slave cluster is now supported.
-Either can be disabled by setting the config option sync-xattrs or
-sync-acls to false, as sketched below.
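-
-For example, a minimal sketch of disabling xattr syncing for a session
-(volume names are placeholders; acls can be disabled the same way):
-
-```sh
-# Skip syncing extended attributes for this geo-rep session
-gluster volume geo-replication mastervol slavehost::slavevol \
-    config sync-xattrs false
-```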
-
-**Fixes**: [1187021](https://bugzilla.redhat.com/show_bug.cgi?id=1187021),
-[1196690](https://bugzilla.redhat.com/show_bug.cgi?id=1196690)
-
-
-### Identifying Entry failures
-
-Logging improvements make it possible to identify the exact reason for
-entry failures, GFID conflicts, I/O errors, etc. Safe errors are not
-logged in the mount logs on the Slave; they are post-processed, and
-only genuine errors are logged in the Master logs.
-
-**Fixes**: [1207115](https://bugzilla.redhat.com/show_bug.cgi?id=1207115),
-[1210562](https://bugzilla.redhat.com/show_bug.cgi?id=1210562)
-
-
-### Improved handling of rm -rf issues
-
-Successive deletes and creates had issues; the handling of these issues
-has been improved. (Not completely fixed, since it depends on open
-issues in DHT.)
-
-**Fixes**: [1211037](https://bugzilla.redhat.com/show_bug.cgi?id=1211037)
-
-
-### Non root Geo-replication simplified
-
-Manual editing of the Glusterd vol file is simplified by the
-introduction of the `gluster system:: mountbroker` command (see the
-sketch below).
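-
-A rough sketch of a mountbroker setup for a non-root user, following
-the upstream geo-rep documentation; the user, group, path and volume
-names are placeholders, and the exact sub-command syntax may vary
-between versions:
-
-```sh
-# Register the mountbroker root directory, the unprivileged user and log group
-gluster system:: execute mountbroker opt mountbroker-root /var/mountbroker-root
-gluster system:: execute mountbroker user geoaccount slavevol
-gluster system:: execute mountbroker opt geo-replication-log-group geogroup
-```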
-
-**Fixes**: [1136312](https://bugzilla.redhat.com/show_bug.cgi?id=1136312)
-
-### Logging Rsync performance on request basis
-
-Rsync performance can be evaluated by enabling a config option. Geo-rep
-then records rsync performance in the log file, which can be
-post-processed to obtain meaningful metrics.
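-
-A sketch of enabling it; the option name `log-rsync-performance` is an
-assumption based on the geo-rep config naming convention, and the
-volume names are placeholders:
-
-```sh
-# Record per-request rsync throughput figures in the geo-rep log (sketch)
-gluster volume geo-replication mastervol slavehost::slavevol \
-    config log-rsync-performance true
-```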
-
-**Fixes**: [764827](https://bugzilla.redhat.com/show_bug.cgi?id=764827)
-
-### Initial sync issues due to upper limit comparison during Filesystem Crawl
-
-Bug fix: the wrong logic in XSync change detection has been corrected.
-An upper time limit was applied during the XSync crawl, so Geo-rep
-XSync missed many files on the assumption that the Changelog would take
-care of them. But the Changelog does not have complete details of files
-created before Geo-replication was enabled.
-
-When rsync/tarssh fails, Geo-rep can now identify safe errors and
-continue syncing by ignoring them. For example, rsync may fail to sync
-a file that was deleted on the master during the sync; this can be
-ignored, since the file is unlinked and there is no need to keep trying
-to sync it.
-
-**Fixes**: [1200733](https://bugzilla.redhat.com/show_bug.cgi?id=1200733)
-
-
-### Changelog failures and Brick failures handling
-
-When a brick process went down, or on any Changelog exception, the
-Geo-rep worker used to fall back to the XSync crawl. This was bad,
-since XSync fails to identify deletes and renames. This is now
-prevented: the worker goes Faulty and waits for the brick process to
-come back.
-
-
-**Fixes**: [1202649](https://bugzilla.redhat.com/show_bug.cgi?id=1202649)
-
-
-### Archive Changelogs in working directory after processing
-
-Changelogs are archived after processing, and empty changelogs are no
-longer generated when no data is available. This is a great improvement
-in terms of reducing inode consumption on the brick.
-
-**Fixes**: [1169331](https://bugzilla.redhat.com/show_bug.cgi?id=1169331)
-
-
-### Virtual xattr to trigger sync
-
-Historical Changelogs are used when a Geo-rep worker restarts, and
-touching a file records only a `SETATTR`. In previous versions,
-re-triggering the sync of a file meant stopping Geo-rep, touching the
-files, and starting Geo-replication again; now a touch does not help,
-since it records only `SETATTR`. A virtual xattr has been introduced to
-re-trigger the sync, with no Geo-rep restart required.
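-
-A minimal sketch of re-triggering sync for a single file; the xattr
-name `glusterfs.geo-rep.trigger-sync` follows the upstream
-geo-replication troubleshooting documentation, and the mount path is a
-placeholder:
-
-```sh
-# Ask geo-rep to sync this file again on the next changelog processing
-setfattr -n glusterfs.geo-rep.trigger-sync -v "1" /mnt/mastervol/path/to/file
-```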
-
-**Fixes**: [1176934](https://bugzilla.redhat.com/show_bug.cgi?id=1176934)
-
-
-### SSH Keys overwrite issues during Geo-rep create
-
-Parallel create commands, or the creation of multiple Geo-rep sessions,
-used to overwrite the pem keys written by the first one, leading to
-connectivity issues when Geo-rep was started.
-
-**Fixes**: [1183229](https://bugzilla.redhat.com/show_bug.cgi?id=1183229)
-
-
-### Ownership sync improvements
-
-Geo-rep was failing to sync ownership information from the master
-cluster to the slave cluster.
-
-**Fixes**: [1104954](https://bugzilla.redhat.com/show_bug.cgi?id=1104954)
-
-
-### Slave node failover handling improvements
-
-When a slave node goes down, the master worker connected to that node
-goes Faulty. It now tries to connect to another slave node instead of
-waiting for that slave node to come back.
-
-**Fixes**: [1151412](https://bugzilla.redhat.com/show_bug.cgi?id=1151412)
-
-
-### Support of ssh keys custom location
-
-If the ssh `authorized_keys` file is configured in a non-standard
-location instead of the default `$HOME/.ssh/authorized_keys`, Geo-rep
-create used to fail. This is now supported.
-
-**Fixes**: [1181117](https://bugzilla.redhat.com/show_bug.cgi?id=1181117)
diff --git a/doc/release-notes/upgrading-from-3.7.2-or-older.md b/doc/release-notes/upgrading-from-3.7.2-or-older.md
deleted file mode 100644
index f4f41568455..00000000000
--- a/doc/release-notes/upgrading-from-3.7.2-or-older.md
+++ /dev/null
@@ -1,37 +0,0 @@
-A new feature in 3.7.3 causes trouble during upgrades from previous
-versions of GlusterFS to 3.7.3. Details of the feature, the issue, and
-the workaround are below.
-
-## Feature
-In GlusterFS-3.7.3, insecure ports have been enabled by default. This
-means that, by default, servers accept connections from insecure ports
-and clients use insecure ports to connect to servers. This change
-particularly benefits libgfapi, for example when it is used by qemu run
-as a normal user.
-
-## Issue
-This has caused trouble during rolling upgrades from previous versions
-to 3.7.3 and when attempting to use 3.7.3 clients with older servers.
-3.7.3 clients establish connections using insecure ports by default,
-but older servers still expect connections to come from secure ports
-(if this setting has not been changed). This causes the servers to
-reject connections from 3.7.3 clients, leading to broken clusters
-during upgrade and to rejected clients.
-
-## Workaround
-There are two possible workarounds; apply one of them before upgrading
-(example commands are sketched after the list).
-
-1. Set 'client.bind-insecure off' on all volumes.
-This forces 3.7.3 clients to use secure ports to connect to the servers.
-This does not affect older clients as this setting is the default for them.
-
-2. Set 'server.allow-insecure on' on all volumes.
-This enables servers to accept connections from insecure ports.
-The new clients can successfully connect to the servers with this set.
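-
-A minimal sketch of applying either workaround, assuming the options
-are set with `gluster volume set` as described above; `VOLNAME` is a
-placeholder, and the command should be repeated for every volume:
-
-```sh
-# Workaround 1: make new clients keep using secure ports
-gluster volume set VOLNAME client.bind-insecure off
-
-# Workaround 2: make servers accept connections from insecure ports
-gluster volume set VOLNAME server.allow-insecure on
-```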
-
-
-If anyone faces any problems with these workarounds, please let us know via email[1][1] or on IRC[2][2].
-
-
-[1]: gluster-devel at gluster dot org / gluster-users at gluster dot org
-[2]: #gluster / #gluster-dev @ freenode