From d403b416c1694e28f8e15f66823c1a6ffd23f34b Mon Sep 17 00:00:00 2001
From: Atin Mukherjee
Date: Fri, 1 Apr 2016 17:12:02 +0530
Subject: update 3.7.10 release notes

Add 1322772 & 1323287 in known issues section

Change-Id: I1269e91ca0062162ac92f65f4f746beeb100db54
Signed-off-by: Atin Mukherjee
Reviewed-on: http://review.gluster.org/13886
NetBSD-regression: NetBSD Build System
CentOS-regression: Gluster Build System
Reviewed-by: Niels de Vos
Smoke: Gluster Build System
Reviewed-by: Vijay Bellur
---
 doc/release-notes/3.7.10.md | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/doc/release-notes/3.7.10.md b/doc/release-notes/3.7.10.md
index 8d4c685af5e..85df72ebc79 100644
--- a/doc/release-notes/3.7.10.md
+++ b/doc/release-notes/3.7.10.md
@@ -50,3 +50,14 @@ The following bugs have been fixed in 3.7.10,
 - [1322516](https://bugzilla.redhat.com/1322516) - RFE: Need type of gfid in index_readdir
 - [1322521](https://bugzilla.redhat.com/1322521) - Choose self-heal source as local subvolume if possible
 - [1322552](https://bugzilla.redhat.com/1322552) - Self-heal and manual heal not healing some file
+
+### Known Issues
+
+[1322772](https://bugzilla.redhat.com/1322772): glusterd: glusterd didn't come up after node reboot, with the error "realpath () failed for brick /run/gluster/snaps/130949baac8843cda443cf8a6441157f/brick3/b3. The underlying file system may be in bad state [No such file or directory]"
+* Problem: If the cluster has snapshots and any of them are activated, glusterd fails to come up after a node reboot, and the error "The underlying file system may be in bad state [No such file or directory]" is seen in the glusterd log file.
+* Workaround: run [this script](https://gist.github.com/atinmu/a3682ba6782e1d79cf4362d040a89bd1#file-bz1322772-work-around-sh) and then restart the glusterd service on all the nodes.
+
+[1323287](https://bugzilla.redhat.com/1323287): TIER: Attach tier fails
+* Problem: This is not a tiering-related issue; it lies in glusterd. If, on a multi-node cluster, one node or glusterd instance is down while volume operations are performed, then once the faulty node or glusterd instance comes back up, the real_path info is not repopulated for the existing bricks, causing subsequent volume create/attach tier/add-brick commands to fail.
+* Workaround: restart the glusterd instance once again.
+
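For reference, the restart step shared by both workarounds above can be scripted. A minimal sketch, assuming systemd-managed nodes reachable over SSH as root; the `NODES` hostname list is a placeholder, not something defined by the release notes, and must be replaced with the cluster's actual peers:

```sh
#!/bin/sh
# Restart glusterd on every node; for BZ 1322772 the fix-up script from the
# gist must already have been run on each node first.
# Hypothetical placeholder list -- substitute the real peer hostnames.
NODES="node1 node2 node3"

for node in $NODES; do
    # Assumes systemd; on sysvinit-based systems use "service glusterd restart".
    ssh "root@${node}" "systemctl restart glusterd"
done

# Confirm all peers rejoined the cluster.
gluster peer status
```

Once `gluster peer status` reports every peer as connected, the previously failing volume create/attach tier/add-brick commands can be retried.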