Diffstat (limited to 'doc/release-notes')
 doc/release-notes/3.8.6.md | 60 ++++++++
 doc/release-notes/3.8.7.md | 76 ++++++++++
 2 files changed, 136 insertions(+), 0 deletions(-)
diff --git a/doc/release-notes/3.8.6.md b/doc/release-notes/3.8.6.md
new file mode 100644
index 00000000000..1ad77f3dbf8
--- /dev/null
+++ b/doc/release-notes/3.8.6.md
@@ -0,0 +1,60 @@
+# Release notes for Gluster 3.8.6
+
+This is a bugfix release. The [Release Notes for 3.8.0](3.8.0.md),
+[3.8.1](3.8.1.md), [3.8.2](3.8.2.md), [3.8.3](3.8.3.md), [3.8.4](3.8.4.md) and
+[3.8.5](3.8.5.md) contain a listing of all the new features that were added and
+bugs fixed in the GlusterFS 3.8 stable release.
+
+
+## Change in port allocation, may affect deployments with strict firewalls
+
+**Problem description**: GlusterD used to assume that the port previously
+allocated to a brick would still be available, and would reuse it for the
+brick without registering it with the port map server. The port map server,
+unaware that the brick was reusing the port, could allocate it to another
+process, which would then fail to connect to the port.
+
+**Fix and port usage changes**: With the fix, GlusterD is forced to
+unregister the port previously used by the brick, register a new port with
+the port map server, and then use that. As a result, processes no longer
+compete over the same port, which fixes the issue. A consequence of this
+change is that a brick process is no longer guaranteed to get back the port
+it was connected to before a restart.
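+
+Deployments with strict firewalls should therefore not pin individual brick
+ports, but allow the whole range from which GlusterD allocates them. A
+minimal sketch, assuming firewalld and the default `base-port` of 49152 from
+`glusterd.vol` (the size of the range here is a hypothetical choice, to be
+matched to the number of bricks per node):
+
+    # open a block of 100 brick ports, both immediately and across reboots
+    firewall-cmd --zone=public --add-port=49152-49251/tcp
+    firewall-cmd --zone=public --add-port=49152-49251/tcp --permanent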
+
+
+## Bugs addressed
+
+A total of 34 patches have been merged, addressing 31 bugs:
+
+- [#1336376](https://bugzilla.redhat.com/1336376): Sequential volume start&stop is failing with SSL enabled setup.
+- [#1347717](https://bugzilla.redhat.com/1347717): removal of file from nfs mount crashes ganesha server
+- [#1369766](https://bugzilla.redhat.com/1369766): glusterd: add brick command should re-use the port for listening which is freed by remove-brick.
+- [#1371397](https://bugzilla.redhat.com/1371397): [Disperse] dd + rm + ls lead to IO hang
+- [#1375125](https://bugzilla.redhat.com/1375125): arbiter volume write performance is bad.
+- [#1377448](https://bugzilla.redhat.com/1377448): glusterd: Display proper error message and fail the command if S32gluster_enable_shared_storage.sh hook script is not present during gluster volume set all cluster.enable-shared-storage <enable/disable> command
+- [#1384345](https://bugzilla.redhat.com/1384345): usage text is wrong for use-readdirp mount default
+- [#1384356](https://bugzilla.redhat.com/1384356): Polling failure errors getting when volume is started&stopped with SSL enabled setup.
+- [#1385442](https://bugzilla.redhat.com/1385442): invalid argument warning messages seen in fuse client logs 2016-09-30 06:34:58.938667] W [dict.c:418:dict_set] (-->/usr/lib64/glusterfs/3.8.4/xlator/cluster/replicate.so(+0x58722) 0-dict: !this || !value for key=link-count [Invalid argument]
+- [#1385620](https://bugzilla.redhat.com/1385620): Recording (ffmpeg) processes on FUSE get hung
+- [#1386071](https://bugzilla.redhat.com/1386071): Spurious permission denied problems observed
+- [#1387976](https://bugzilla.redhat.com/1387976): Continuous warning messages getting when one of the cluster node is down on SSL setup.
+- [#1388354](https://bugzilla.redhat.com/1388354): Memory Leaks in snapshot code path
+- [#1388580](https://bugzilla.redhat.com/1388580): crypt: changes needed for openssl-1.1 (coming in Fedora 26)
+- [#1388948](https://bugzilla.redhat.com/1388948): glusterfs can't self heal character dev file for invalid dev_t parameters
+- [#1390838](https://bugzilla.redhat.com/1390838): write-behind: flush stuck by former failed write
+- [#1390870](https://bugzilla.redhat.com/1390870): DHT: Rebalance- Misleading log messages from __dht_check_free_space function
+- [#1391450](https://bugzilla.redhat.com/1391450): md-cache: Invalidate cache entry in case of OPEN with O_TRUNC
+- [#1392288](https://bugzilla.redhat.com/1392288): gfapi clients crash while using async calls due to double fd_unref
+- [#1392364](https://bugzilla.redhat.com/1392364): trashcan max file limit cannot go beyond 1GB
+- [#1392716](https://bugzilla.redhat.com/1392716): Quota version not changing in the quota.conf after upgrading to 3.7.1 from 3.6.1
+- [#1392846](https://bugzilla.redhat.com/1392846): Hosted Engine VM paused post replace-brick operation
+- [#1392868](https://bugzilla.redhat.com/1392868): The FUSE client log is filling up with posix_acl_default and posix_acl_access messages
+- [#1393630](https://bugzilla.redhat.com/1393630): Better logging when reporting failures of the kind "<file-path> Failing MKNOD as quorum is not met"
+- [#1393682](https://bugzilla.redhat.com/1393682): stat of file is hung with possible deadlock
+- [#1394108](https://bugzilla.redhat.com/1394108): Continuous errors getting in the mount log when the volume mount server glusterd is down.
+- [#1394187](https://bugzilla.redhat.com/1394187): SMB[md-cache Private Build]:Error messages in brick logs related to upcall_cache_invalidate gf_uuid_is_null
+- [#1394226](https://bugzilla.redhat.com/1394226): "nfs-grace-monitor" timed out messages observed
+- [#1394883](https://bugzilla.redhat.com/1394883): Failed to enable nfs-ganesha after disabling nfs-ganesha cluster
+- [#1395627](https://bugzilla.redhat.com/1395627): Labelled geo-rep checkpoints hide geo-replication status
+- [#1396418](https://bugzilla.redhat.com/1396418): [md-cache]: All bricks crashed while performing symlink and rename from client at the same time
diff --git a/doc/release-notes/3.8.7.md b/doc/release-notes/3.8.7.md
new file mode 100644
index 00000000000..5a2fc980297
--- /dev/null
+++ b/doc/release-notes/3.8.7.md
@@ -0,0 +1,76 @@
+# Release notes for Gluster 3.8.7
+
+This is a bugfix release. The [Release Notes for 3.8.0](3.8.0.md),
+[3.8.1](3.8.1.md), [3.8.2](3.8.2.md), [3.8.3](3.8.3.md), [3.8.4](3.8.4.md),
+[3.8.5](3.8.5.md) and [3.8.6](3.8.6.md) contain a listing of all the new
+features that were added and bugs fixed in the GlusterFS 3.8 stable release.
+
+
+## New CLI option for enabling/disabling granular entry heal
+
+If the `granular-entry-heal` option is toggled from `off` to `on` while there
+are existing non-granular indices that are yet to be healed, AFR self-heal,
+whenever it kicks in, will look for granular indices in `entry-changes`.
+Because those name indices are absent, the granular entry healing logic will
+fail to heal these directories and, worse yet, unset the pending extended
+attributes on the assumption that there are no entries needing heal.
+
+To get around this, a new CLI command is introduced which invokes the
+glfsheal program to check whether, at the time an attempt is made to enable
+granular entry heal, there are pending heals on the volume or one or more
+bricks are down. If either is true, the command fails with an appropriate
+error.
+
+ # gluster volume heal <VOL> granular-entry-heal {enable,disable}
+
+With this change, the user does not need to worry about when to enable/disable
+the option - the CLI command itself performs the necessary checks before
+allowing the "enable" command to proceed.
+
+What are those checks?
+* Whether heal is already needed on the volume
+* Whether any of the replicas is down
+
+In either case the command fails, because AFR will be switching from creating
+heal indices (markers for files that need heal) under
+`.glusterfs/indices/xattrop` to creating them under
+`.glusterfs/indices/entry-changes`.
+The moment this switch happens, the self-heal daemon ceases to crawl an
+entire directory that needs heal, and instead looks under
+`.glusterfs/indices/entry-changes` for the exact names within that directory
+that need heal. This might cause self-heal to miss healing some entries
+(because directories that already needed heal before the switch won't have
+any indices under `.glusterfs/indices/entry-changes`) and mistakenly unset
+the pending heal xattrs even though the individual replicas are not in sync.
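+
+The same two conditions the CLI checks can also be inspected by hand before
+toggling the option; a minimal sketch using existing commands (`myvol` is a
+placeholder volume name):
+
+    # any pending heals? the per-brick entry counts should all be zero
+    gluster volume heal myvol info
+    # are all bricks up? the Online column should read Y for every brick
+    gluster volume status myvol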
+
+When should users enable this option?
+* When they want to use the feature ;)
+* It is useful for faster self-healing in use cases with a large number of
+  files under a single directory.
+  For example, it helps in VM use cases with smaller shard sizes, since all
+  shards are created under the single directory `.shard`. When a shard is
+  created while a replica is down, then once that replica is back up,
+  self-heal, thanks to the granular indices it maintains, knows exactly which
+  shards to recreate, as opposed to crawling the entire `.shard` directory to
+  find out the same information (see the sketch below).
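+
+For illustration, the granular name indices referred to above live on each
+brick as entries named after the files needing heal, grouped under the GFID
+of the parent directory (a sketch only; the brick path is hypothetical and
+the on-disk layout may vary across versions):
+
+    # list which entries under .shard still need heal on this replica
+    ls /bricks/b1/.glusterfs/indices/entry-changes/<gfid-of-.shard>/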
+
+
+## Bugs addressed
+
+A total of 16 patches have been merged, addressing 15 bugs:
+
+- [#1395652](https://bugzilla.redhat.com/1395652): ganesha-ha.conf --status should validate if the VIPs are assigned to right nodes
+- [#1397663](https://bugzilla.redhat.com/1397663): libgfapi core dumps
+- [#1398501](https://bugzilla.redhat.com/1398501): [granular entry sh] - Provide a CLI to enable/disable the feature that checks that there are no heals pending before allowing the operation
+- [#1399018](https://bugzilla.redhat.com/1399018): performance.read-ahead on results in processes on client stuck in IO wait
+- [#1399088](https://bugzilla.redhat.com/1399088): geo-replica slave node goes faulty for non-root user session due to fail to locate gluster binary
+- [#1399090](https://bugzilla.redhat.com/1399090): [geo-rep]: Worker crashes seen while renaming directories in loop
+- [#1399130](https://bugzilla.redhat.com/1399130): SEEK_HOLE/ SEEK_DATA doesn't return the correct offset
+- [#1399635](https://bugzilla.redhat.com/1399635): Refresh config fails while exporting subdirectories within a volume
+- [#1400459](https://bugzilla.redhat.com/1400459): [USS,SSL] .snaps directory is not reachable when I/O encryption (SSL) is enabled
+- [#1400573](https://bugzilla.redhat.com/1400573): Ganesha services are not stopped when pacemaker quorum is lost
+- [#1400802](https://bugzilla.redhat.com/1400802): glusterfs_ctx_defaults_init is re-initializing ctx->locks
+- [#1400927](https://bugzilla.redhat.com/1400927): Memory leak when self healing daemon queue is full
+- [#1402672](https://bugzilla.redhat.com/1402672): Getting the warning message while erasing the gluster "glusterfs-server" package.
+- [#1403192](https://bugzilla.redhat.com/1403192): Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
+- [#1403646](https://bugzilla.redhat.com/1403646): self-heal not happening, as self-heal info lists the same pending shards to be healed