author    Jiffin Tony Thottan <jthottan@redhat.com>  2017-10-13 00:58:39 +0530
committer Jiffin Tony Thottan <jthottan@redhat.com>  2017-10-13 01:50:46 +0530
commit    3c0b53018acfe0fec721c5e2f50141fc8b061165 (patch)
tree      bae92ec254fb50afccefccb8458c620af00006e4
parent    b79588a90bb2fcb2f02b65b98e2ae265a926e507 (diff)
doc : release-notes for GlusterFS-3.12.2
Change-Id: I92a227548867a46bebdf0150605bdd6072f06bd9
BUG: 1491574
Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
-rw-r--r--  doc/release-notes/3.12.2.md  64
1 file changed, 64 insertions(+), 0 deletions(-)
diff --git a/doc/release-notes/3.12.2.md b/doc/release-notes/3.12.2.md
new file mode 100644
index 00000000000..3f0ab9abc17
--- /dev/null
+++ b/doc/release-notes/3.12.2.md
@@ -0,0 +1,64 @@
+# Release notes for Gluster 3.12.2
+
+This is a bugfix release. The release notes for [3.12.0](3.12.0.md), [3.12.1](3.12.1.md)
+and [3.12.2](3.12.2.md) contain a listing of all the new features that were added and
+bugs fixed in the GlusterFS 3.12 stable release.
+
+## Major changes, features and limitations addressed in this release
+
+1. In a pure distribute volume there is no source from which to heal a replaced
+   brick, so a replace brick operation would lose the data that was present on
+   the replaced brick. The CLI has been enhanced to prevent a user from
+   inadvertently using replace brick on a pure distribute volume. It is advised
+   to use add/remove brick to migrate data from an existing brick in a pure
+   distribute volume.
+
+## Major issues
+
+1. Expanding a gluster volume that is sharded may cause file corruption.
+   - Sharded volumes are typically used for VM images; if such volumes are
+     expanded or possibly contracted (i.e. add/remove bricks and rebalance),
+     there are reports of VM images getting corrupted.
+   - The fix for the last known cause of corruption, #1465123, is still
+     pending and is not yet part of this release.
+
+2. Gluster volume restarts fail if the sub-directory export feature is in use.
+   The status of this issue can be tracked here: #1501315
+
+3. Mounting a gluster snapshot will fail when attempting a FUSE based mount of
+   the snapshot. Current users are therefore advised to access snapshots only
+   via the ".snaps" directory on a mounted gluster volume.
+   The status of this issue can be tracked here: #1501378
+
+## Bugs addressed
+
+A total of 31 patches have been merged, addressing 28 bugs:
+
+- [#1490493](https://bugzilla.redhat.com/1490493): Sub-directory mount details are incorrect in /proc/mounts
+- [#1491178](https://bugzilla.redhat.com/1491178): GlusterD returns a bad memory pointer in glusterd_get_args_from_dict()
+- [#1491292](https://bugzilla.redhat.com/1491292): Provide brick list as part of VOLUME_CREATE event.
+- [#1491690](https://bugzilla.redhat.com/1491690): rpc: TLSv1_2_method() is deprecated in OpenSSL-1.1
+- [#1492026](https://bugzilla.redhat.com/1492026): set the shard-block-size to 64MB in virt profile
+- [#1492061](https://bugzilla.redhat.com/1492061): CLIENT_CONNECT event not being raised
+- [#1492066](https://bugzilla.redhat.com/1492066): AFR_SUBVOL_UP and AFR_SUBVOLS_DOWN events not working
+- [#1493975](https://bugzilla.redhat.com/1493975): disallow replace brick operation on plain distribute volume
+- [#1494523](https://bugzilla.redhat.com/1494523): Spelling errors in 3.12.1
+- [#1495162](https://bugzilla.redhat.com/1495162): glusterd ends up with multiple uuids for the same node
+- [#1495397](https://bugzilla.redhat.com/1495397): Make event-history feature configurable and have it disabled by default
+- [#1495858](https://bugzilla.redhat.com/1495858): gluster volume create asks for confirmation for replica-2 volume even with force
+- [#1496238](https://bugzilla.redhat.com/1496238): [geo-rep]: Scheduler help needs correction for description of --no-color
+- [#1496317](https://bugzilla.redhat.com/1496317): [afr] split-brain observed on T files post hardlink and rename in x3 volume
+- [#1496326](https://bugzilla.redhat.com/1496326): [GNFS+EC] lock is being granted to 2 different client for the same data range at a time after performing lock acquire/release from the clients1
+- [#1497084](https://bugzilla.redhat.com/1497084): glusterfs process consume huge memory on both server and client node
+- [#1499123](https://bugzilla.redhat.com/1499123): Readdirp is considerably slower than readdir on acl clients
+- [#1499150](https://bugzilla.redhat.com/1499150): Improve performance with xattrop update.
+- [#1499158](https://bugzilla.redhat.com/1499158): client-io-threads option not working for replicated volumes
+- [#1499202](https://bugzilla.redhat.com/1499202): self-heal daemon stuck
+- [#1499392](https://bugzilla.redhat.com/1499392): [geo-rep]: Improve the output message to reflect the real failure with schedule_georep script
+- [#1500396](https://bugzilla.redhat.com/1500396): [geo-rep]: Observed "Operation not supported" error with traceback on slave log
+- [#1500472](https://bugzilla.redhat.com/1500472): Use a bitmap to store local node info instead of conf->local_nodeuuids[i].uuids
+- [#1500662](https://bugzilla.redhat.com/1500662): gluster volume heal info "healed" and "heal-failed" showing wrong information
+- [#1500835](https://bugzilla.redhat.com/1500835): [geo-rep]: Status shows ACTIVE for most workers in EC before it becomes the PASSIVE
+- [#1500841](https://bugzilla.redhat.com/1500841): [geo-rep]: Worker crashes with OSError: [Errno 61] No data available
+- [#1500845](https://bugzilla.redhat.com/1500845): [geo-rep] master worker crash with interrupted system call
+- [#1500853](https://bugzilla.redhat.com/1500853): [geo-rep]: Incorrect last sync "0" during hystory crawl after upgrade/stop-start
+- [#1501022](https://bugzilla.redhat.com/1501022): Make choose-local configurable through `volume-set` command
+- [#1501154](https://bugzilla.redhat.com/1501154): Brick Multiplexing: Gluster volume start force complains with command "Error : Request timed out" when there are multiple volumes
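
The add/remove brick migration the notes recommend for pure distribute volumes can be sketched with the standard gluster CLI. This is a minimal illustration only, not part of the commit; the volume name `distvol` and the brick paths are hypothetical, and the commands must be run against a live trusted storage pool:

```shell
# Add the replacement brick to the pure distribute volume
# (hypothetical volume/brick names).
gluster volume add-brick distvol server2:/bricks/newbrick

# Start draining data off the brick being retired; DHT rebalances
# its files onto the remaining bricks.
gluster volume remove-brick distvol server1:/bricks/oldbrick start

# Poll until the migration shows "completed", then commit the removal.
gluster volume remove-brick distvol server1:/bricks/oldbrick status
gluster volume remove-brick distvol server1:/bricks/oldbrick commit
```

Committing before the status shows "completed" can discard unmigrated files, which is why the start/status/commit sequence matters on a distribute-only volume where no replica holds a second copy.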