path: root/xlators/mgmt
Commit message | Author | Age | Files | Lines
* glusterd: statedump support | Atin Mukherjee | 2014-10-15 | 3 | -4/+261
| | | | | | | | | | | | Although glusterd currently has statedump support, it doesn't dump its context information. Implementing a glusterd_dump_priv function to export per-node glusterd information is useful for debugging: once implemented, sos-report can be enhanced to fetch this information, which would potentially reduce our time to root cause, and the data needed for debuggability can be dumped gradually. The main items of the dump list targeted in this patch are: * supported max/min op-version and current op-version * information about the peer list * information about the peers involved in an ongoing transaction (xaction_peers) * the option dictionary in glusterd_conf_t * mgmt_v3_lock in glusterd_conf_t * list of connected clients * uuid of glusterd * a section of rpc-related information such as live connections and their statistics. A couple of issues were found during the implementation and testing phase: - xaction_peers of glusterd_conf_t was not initialized in init, because of which traversing this list head crashed when there was no active transaction - gf_free was not setting the typestr to NULL when the alloc count of a previously allocated mem-type dropped to 0. Change-Id: Ic9bce2d57682fc1771cd2bc6af0b7316ecbc761f BUG: 1139682 Signed-off-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-on: http://review.gluster.org/8665 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
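For orientation, a per-xlator statedump normally hangs off the xlator's dumpops table; the sketch below shows the general shape of such a priv-dump callback using the statedump helpers from libglusterfs. The keys written and the glusterd_conf_t fields shown are illustrative, not the exact set added by this patch.

    /* Hedged sketch of a priv-dump callback; dumped fields are illustrative. */
    #include "xlator.h"
    #include "statedump.h"
    #include "glusterd.h"

    static int
    glusterd_dump_priv_sketch (xlator_t *this)
    {
            glusterd_conf_t *priv = NULL;

            if (!this || !this->private)
                    return -1;
            priv = this->private;

            gf_proc_dump_add_section ("xlator.glusterd.priv");
            gf_proc_dump_write ("max-op-version", "%d", GD_OP_VERSION_MAX);
            gf_proc_dump_write ("min-op-version", "%d", GD_OP_VERSION_MIN);
            gf_proc_dump_write ("current-op-version", "%d", priv->op_version);

            return 0;
    }

    /* wired in via the xlator's dumpops table */
    struct xlator_dumpops dumpops = {
            .priv = glusterd_dump_priv_sketch,
    };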
* glusterd/geo-rep: Fix race in updating status file | Kotresh HR | 2014-10-12 | 1 | -18/+26
| | | | | | | | | | | | When geo-rep is in the paused state and a node in the cluster is rebooted, the geo-rep status goes to "faulty (Paused)" and no worker processes are started on that node yet. In this state, when geo-rep is resumed, there is a race between glusterd and gsyncd in updating the status file, because geo-rep is resumed first and the status is updated afterwards. glusterd tries to update it to the previous state, while gsyncd on restart tries to update it to "Initializing...(Paused)" since it was paused previously. If gsyncd wins, the state stays paused even though the process is not actually paused. The solution is for glusterd to update the status file first and then resume. Change-Id: I348761a6e8c3ad2630c79833bc86587d062a8f92 BUG: 1149982 Signed-off-by: Kotresh HR <khiremat@redhat.com> Reviewed-on: http://review.gluster.org/8911 Reviewed-by: Aravinda VK <avishwan@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Venky Shankar <vshankar@redhat.com> Tested-by: Venky Shankar <vshankar@redhat.com>
* glusterd: print the peer name instead of a null UUID in an rpc failure message | Atin Mukherjee | 2014-10-09 | 3 | -111/+125
| | | | | | | | | | | | This patch improves the failure message by printing the correct peer name instead of a blank uuid when the rpc connection is lost/broken. Change-Id: Ia232792051f23896883b239982cb48130e3ce60e BUG: 1146902 Signed-off-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-on: http://review.gluster.org/8597 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: make bricks respect 'transport.socket.bind-address' | Niels de Vos | 2014-10-08 | 1 | -0/+8
| | | | | | | | | | | | | | | When GlusterD starts the brick processes, these will listen on all interfaces. When the 'transport.socket.bind-address' option is set in glusterd.vol, the brick processes should only listen on the specified hostname or IP-address. Change-Id: I8e7d1f294904081137c23f3446261329d0d13bba BUG: 1149863 Signed-off-by: Niels de Vos <ndevos@redhat.com> Reviewed-on: http://review.gluster.org/8910 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
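For context, the option in question lives in glusterd.vol; a minimal example in the native volfile syntax looks like the snippet below (the address is a placeholder). With this patch the brick processes bind to the same address instead of listening on all interfaces.

    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option transport-type socket
        option transport.socket.bind-address 192.0.2.10
    end-volume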
* glusterd: pass the bind-address to starting services | Niels de Vos | 2014-10-07 | 1 | -3/+15
| | | | | | | | | | | | | | | | | | | | | | | | When the transport.socket.bind-address option is set to a hostname or ip-address, the services started by GlusterD fail to connect to the management daemon. GlusterD always forces the services to connect to the "localhost" hostname, even if it is not listening on that address. GlusterD should take the transport.socket.bind-address option into consideration, and pass that to the glusterfs-clients with the -s or --volfile commandline parameter. Note that this is not a change that removes all hard-coded dependencies on "localhost". This change merely makes it possible to start required services when the transport.socket.bind-address option is set. Change-Id: I36a0ed6c69342e6327adc258fea023929055d7f2 BUG: 1149863 Signed-off-by: Niels de Vos <ndevos@redhat.com> Reviewed-on: http://review.gluster.org/8908 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Jeff Darcy <jdarcy@redhat.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* Do not hardcode umount(8) path, emulate lazy umount | Emmanuel Dreyfus | 2014-10-03 | 5 | -74/+26
| | | | | | | | | | | | 1) Use a system-dependent macro for the umount(8) location instead of relying on $PATH to find it, for security's and portability's sake. 2) Introduce gf_umount_lazy() to replace umount -l (-l for lazy) invocations, which are only supported on Linux. On Linux the behavior is unchanged. On other systems, we fork an external process (umountd) that takes care of periodically attempting to unmount, and optionally rmdir. BUG: 1129939 Change-Id: Ia91167c0652f8ddab85136324b08f87c5ac1e51d Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org> Reviewed-on: http://review.gluster.org/8649 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Csaba Henk <csaba@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
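A minimal usage sketch of the new helper, assuming the prototype implied by the description above (caller name, mount path, and an rmdir flag); the call site is illustrative, not upstream code:

    /* gf_umount_lazy(): unmount lazily; on Linux this maps to "umount -l",
     * elsewhere the umountd helper keeps retrying in the background.
     * Assumed prototype, per the description above. */
    int gf_umount_lazy (char *xlname, char *path, int rmdir_flag);

    static void
    cleanup_mountpoint (xlator_t *this, char *mountpoint)
    {
            /* last argument: also rmdir the mountpoint once unmounted */
            gf_umount_lazy (this->name, mountpoint, 1);
    }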
* glusterd/quota: Heal pgfid xattr on existing data when the quota is enabled | vmallika | 2014-09-30 | 1 | -3/+3
| | | | | | | | | | | | The pgfid extended attributes are used to construct the ancestry path (from the file to the volume root) for nameless lookups on files. As NFS relies on nameless lookups heavily, quota enforcement through NFS would be inconsistent if quota were to be enabled on a volume with existing data. The solution is to heal the pgfid extended attributes as part of the lookup performed by the quota-crawl process: in a posix lookup, check for the pgfid xattr and, if it is missing, set it. Change-Id: I5912ea96787625c496bde56d43ac9162596032e9 BUG: 1147378 Signed-off-by: vmallika <vmallika@redhat.com> Reviewed-on: http://review.gluster.org/8878 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Perform brick order check in originator node. | GauravKumarGarg | 2014-09-29 | 1 | -16/+19
| | | | | | | | | | | | Currently, in a multi-node cluster, the brick-order check for replicate volumes is done on every node. It is a waste of time to perform the brick order check on every node. This change performs the brick order check only on the originator node. Change-Id: I8687fd28e587de8a280a9003b015ccd5729c9740 BUG: 1091935 Signed-off-by: ggarg <ggarg@redhat.com> Reviewed-on: http://review.gluster.org/8881 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaushal M <kaushal@redhat.com> Tested-by: Kaushal M <kaushal@redhat.com>
* glusterd: file-snapshot and features-encryption options should be validated correctly | Gaurav Kumar Garg | 2014-09-25 | 1 | -2/+3
| | | | | | | | | | | | When a non-boolean value was given to the volume set command for the features.file-snapshot or features.encryption option, the command failed, and after that every subsequent volume set request failed even with a valid value for an existing volume set option. Previously, when the user supplied a non-boolean value for features.file-snapshot or features.encryption, validation of that value was done against volinfo->dict, while the actual value of the option was stored in the input dictionary. With this change the correct dictionary is used for validating the supplied value. Change-Id: I4a93d8be848cd33fdf4b4eb9b1a8d15ec9d1e66a BUG: 1140162 Reviewed-on: http://review.gluster.org/8688 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
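The validation pattern the fix implies looks roughly like the sketch below: read the value from the incoming (input) dict rather than the already-applied volinfo dict, then check that it parses as a boolean. The helper name is hypothetical; gf_string2boolean() is the usual libglusterfs parser.

    #include "common-utils.h"   /* gf_string2boolean */

    /* returns 0 if the supplied value for 'key' is a valid boolean */
    static int
    validate_boolean_option (dict_t *input_dict, char *key)
    {
            char         *value = NULL;
            gf_boolean_t  b     = _gf_false;

            if (dict_get_str (input_dict, key, &value))
                    return -1;          /* option not supplied at all */

            if (gf_string2boolean (value, &b))
                    return -1;          /* not a boolean value: reject */

            return 0;
    }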
* OSX/FreeBSD: Regression fix | Harshavardhana | 2014-09-24 | 1 | -4/+16
| | | | | | | | | | | Introduced in "1f6e992f1aaa676be5bd47d17e58f1171825cf43" Change-Id: I655cf613ca93a749ab5403cb3ec038e739993e2e BUG: 1146279 Signed-off-by: Harshavardhana <harsha@harshavardhana.net> Reviewed-on: http://review.gluster.org/8841 Reviewed-by: Justin Clift <justin@gluster.org> Tested-by: Justin Clift <justin@gluster.org>
* glusterd: Move brick order check from cli to glusterd. | ggarg | 2014-09-24 | 2 | -1/+247
| | | | | | | | | | | | | | | | | | | | | | | | | | | Previously the brick order check for replicate volumes on volume create and add-brick was done by the cli. This check would fail when a hostname wasn't resolvable and would question the user if it was ok to continue. If the user continued, glusterd would fail the command again as the hostname wouldn't be resolvable. This was unnecessary. This change, moves the check from cli into glusterd. The check is now performed during staging of volume create after the bricks have been resolved. This prevents the above condition from occurring. As a result of this change, the user will no longer be questioned and given an option to continue the operation when a bad brick order is given or the brick order check fails. In such a case, the user can use 'force' to bypass the check and allow the command to succeed. Change-Id: I009861efaf3fb7f553a9b00116a992f031f652cb BUG: 1091935 Signed-off-by: ggarg <ggarg@redhat.com> Reviewed-on: http://review.gluster.org/7589 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
* Do not hardcode setfattr(1) path | Emmanuel Dreyfus | 2014-09-23 | 1 | -1/+13
| | | | | | | | | | | | Turn the setfattr(1) absolute path into an OS-dependent macro. Let a compiler option override it to fit custom installations if needed. BUG: 1129939 Change-Id: I8f469c5741a85b6e8d8f6299a9540b3d64611d2f Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org> Reviewed-on: http://review.gluster.org/8786 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Jeff Darcy <jdarcy@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Authenticate management handshake requests | Kaushal M | 2014-09-23 | 3 | -0/+66
| | | | | | | | | | | | | | | | | | | Management handshake requests, which are used to validate op-version supported by the peers, are now only allowed if, - the glusterd doesn't have any other peer, or - the request was sent by another peer. This prevents the op-version of a peer being changed because of a connection attempt by an invalid peer. Change-Id: I248c386ed5ec4f8360e7b5e7f9ab74b7e8a7fc65 BUG: 1109741 Signed-off-by: Kaushal M <kaushal@redhat.com> Reviewed-on: http://review.gluster.org/8126 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
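The acceptance rule boils down to a small check along these lines; a hedged sketch in which the helper and field names are illustrative rather than the exact upstream code:

    /* Allow the op-version handshake only from a lonely glusterd's first
     * contact or from an already-known peer. */
    static gf_boolean_t
    mgmt_hndsk_request_allowed (glusterd_conf_t *conf, uuid_t remote_uuid)
    {
            if (list_empty (&conf->peers))
                    return _gf_true;    /* no peers yet: accept first contact */

            if (glusterd_peerinfo_find (remote_uuid, NULL))
                    return _gf_true;    /* request came from a known peer */

            return _gf_false;           /* unknown peer: refuse */
    }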
* glusterd: Add last successful glusterd lock backtrace | Krishnan Parthasarathi | 2014-09-22 | 1 | -0/+14
| | | | | | | | | | | | Also, moved the backtrace fetching logic to a separate function, and modified it to be able to work under memory pressure conditions. Change-Id: Ie38bea425a085770f41831314aeda95595177ece BUG: 1138503 Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-on: http://review.gluster.org/8584 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Jeff Darcy <jdarcy@redhat.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
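One way to fetch a backtrace that keeps working under memory pressure is to avoid the allocating backtrace_symbols() call entirely and write frames straight to a file descriptor. This standalone sketch illustrates the idea; it is not the exact helper added by the patch:

    #include <execinfo.h>
    #include <unistd.h>

    #define BT_DEPTH 16

    /* Dump the current call stack to log_fd without calling malloc(). */
    static void
    log_backtrace_nomalloc (int log_fd)
    {
            void *frames[BT_DEPTH];
            int   count = backtrace (frames, BT_DEPTH);

            if (count > 0)
                    backtrace_symbols_fd (frames, count, log_fd);
    }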
* snapview-server: register a callback with glusterd to get notifications | Raghavendra Bhat | 2014-09-08 | 3 | -0/+44
| | | | | | | | | | | | | | | * As of now snapview-server is polling (sending rpc requests to glusterd) to get the latest list of snapshots at some regular time intervals (non configurable). Instead of that register a callback with glusterd so that glusterd sends notifications to snapd whenever a snapshot is created/deleted and snapview-server can configure itself. Change-Id: I17a274fd2ab487d030678f0077feb2b0f35e5896 BUG: 1119628 Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com> Reviewed-on: http://review.gluster.org/8150 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* Always check for ENODATA with ENOATTR | Emmanuel Dreyfus | 2014-09-08 | 1 | -0/+3
| | | | | | | | | | | | Linux defines ENODATA and ENOATTR with the same value, which means that code checking only one of the two does not break there. FreeBSD does not have ENODATA, and GlusterFS defines it as ENOATTR, just like Linux does. On NetBSD, ENODATA != ENOATTR, hence we need to check for both values to get portable behavior. BUG: 764655 Change-Id: I003a3af055fdad285d235f2a0c192c9cce56fab8 Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org> Reviewed-on: http://review.gluster.org/8447 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Niels de Vos <ndevos@redhat.com>
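The portable check amounts to testing both constants wherever a missing xattr is detected; a minimal sketch (the fallback define mirrors the kind of compatibility definition described above):

    #include <errno.h>

    #ifndef ENOATTR
    #define ENOATTR ENODATA   /* e.g. plain Linux errno.h has no ENOATTR */
    #endif

    /* true if op_errno means "extended attribute not present" */
    static int
    is_missing_xattr (int op_errno)
    {
    #if defined(ENODATA) && (ENODATA != ENOATTR)
            return (op_errno == ENOATTR || op_errno == ENODATA);  /* NetBSD */
    #else
            return (op_errno == ENOATTR);       /* Linux, FreeBSD */
    #endif
    }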
* barrier: features.barrier should be a NO_DOC | Atin Mukherjee | 2014-09-05 | 1 | -0/+1
| | | | | | | | | | | | features.barrier is turned on/off internally by snapshot feature and hence it should not be exposed to the customer and this option should not be documented. Change-Id: Id9a421f94e291f1dc77044904bb18dd730c2d43e BUG: 1135691 Signed-off-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-on: http://review.gluster.org/8572 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: Improve debugging experience for glusterd locks | Krishnan Parthasarathi | 2014-09-03 | 1 | -8/+10
| | | | | | | | | | | | | | | | | | | | | | | | | | Today, when glusterd's internal locking mechanism fails with invalid type or when another competing lock is being held, the log message doesn't provide enough information directly as to which command saw this (first). Following is a snippet of how a failure would look in the log file. This would greatly assist in debugging. [2014-09-03 04:57:58.549418] E [glusterd-locks.c:520:glusterd_mgmt_v3_lock] (-->/usr/local/lib/glusterfs/3.7dev/xlator/mgmt/glusterd.so(__glusterd_handle_create_volume+0x801) [0x7f30b071e651] (-->/usr/local/lib/glusterfs/3.7dev/xlator/mgmt/glusterd.so(glusterd_op_begin_synctask+0x2c) [0x7f30b072e19c] (-->/usr/local/lib/glusterfs/3.7dev/xlator/mgmt/glusterd.so(gd_sync_task_begin+0x55d) [0x7f30b072de6d]))) 0-management: Invalid entity. Cannot perform locking operation on vol types Change-Id: I0595f49d60e620e8b065f3506bdb147ccee383a7 Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-on: http://review.gluster.org/8580 Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: Prevent rebalance starting with old clients | Kaushal M | 2014-09-03 | 5 | -38/+81
| | | | | | | | | | | | | | | | | Glusterd will prevent rebalance from starting when clients older than glusterfs-v3.6.0 are connected to a volume. This is needed as running rebalance with old clients connected could lead to data loss in some cases. The DHT xlator on newer clients (>= 3.6.0) has been fixed to prevent the data loss issues. Change-Id: If58640236382a2fc13f73f6b43777f01713859f7 BUG: 1136201 Signed-off-by: Kaushal M <kaushal@redhat.com> Reviewed-on: http://review.gluster.org/8583 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd: Set the rlimit for Open FDs to higher value | Vijaikumar M | 2014-09-03 | 1 | -0/+18
| | | | | | | | | | | | The default 'open FD' limit is 1024. As the number of volumes/bricks increases, the number of brick-to-glusterd socket FDs in glusterd also increases and runs into this limit. The solution is to set the 'open FD' limit to a higher value in glusterd. Change-Id: Iaa60b2155df2fa5a0759e054bdebffbc09f63ec1 BUG: 1136352 Signed-off-by: Vijaikumar M <vmallika@redhat.com> Reviewed-on: http://review.gluster.org/8578 Reviewed-by: Rajesh Joseph <rjoseph@redhat.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
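Raising the per-process FD limit is a plain setrlimit(2) call at daemon start-up; the sketch below shows the pattern (the target value passed in is illustrative, not necessarily the value the patch picks):

    #include <sys/resource.h>

    /* Bump RLIMIT_NOFILE if the current soft limit is too small. */
    static void
    raise_nofile_limit (rlim_t wanted)
    {
            struct rlimit lim;

            if (getrlimit (RLIMIT_NOFILE, &lim) != 0)
                    return;

            if (lim.rlim_cur < wanted) {
                    lim.rlim_cur = wanted;
                    if (lim.rlim_max < wanted)
                            lim.rlim_max = wanted;   /* needs privilege */
                    (void) setrlimit (RLIMIT_NOFILE, &lim);
            }
    }

    /* e.g. raise_nofile_limit (65536); early in glusterd start-up */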
* glusterd: Add xml specific functions in HAVE_LIB_XML block | Harshavardhana | 2014-08-26 | 1 | -3/+5
| | | | | | | | | | | | Build failure on OSX and also on Linux with '--disable-xml-output' introduced in following commit "c080403393987f807b9ca81be140618fa5e994f1" Change-Id: I0db92c9f5e319dc1932bed9ecc1acc98adb57de3 BUG: 983317 Signed-off-by: Harshavardhana <harsha@harshavardhana.net> Reviewed-on: http://review.gluster.org/8546 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
* cli/glusterd: Support of volume get for a specific volume option | Atin Mukherjee | 2014-08-26 | 5 | -154/+601
| | | | | | | | | | | | | | This patch introduces a cli command to display a specific volume option/all volume options of a specific volume with the following usage: Usage: volume get <VOLNAME> <key|all> Change-Id: Ic88edb33c5509d7a37cd5ade6341e45e3cdbf59d BUG: 983317 Signed-off-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-on: http://review.gluster.org/8305 Reviewed-by: Kaushal M <kaushal@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com>
* Fix glustershd detection on volume restart | Emmanuel Dreyfus | 2014-08-25 | 1 | -0/+14
| | | | | | | | | | | | On NetBSD and FreeBSD, doing a 'gluster volume start $volume force' causes the NFS server, quotad, snapd and glustershd to go undetected by glusterd once the volume has restarted. 'gluster volume status' shows these processes as 'N' in the online column, even though they have been launched successfully. This happens because glusterd attempts to connect to its child process right between the moment the child unlinks the socket in __socket_server_bind() and the moment it calls bind() and listen(). A different scheduling policy may explain why the problem does not happen on Linux, but it may pop up some day since we make no guaranteed assumptions here. This patch works around the problem by introducing a boolean transport.socket.ignore-enoent option, set by nfs and glustershd, which prevents ENOENT from being fatal and lets glusterd retry and succeed later. The behavior of other clients is unaffected. BUG: 1129939 Change-Id: Ifdc4d45b2513743ed42ee235a5c61a086321644c Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org> Reviewed-on: http://review.gluster.org/8403 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Niels de Vos <ndevos@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd/snapshot : Fail the snapshot create operation if geo-rep is running. | Sachin Pandit | 2014-08-21 | 1 | -1/+11
| | | | | | | | | | | | As one of the recommendations for taking a snapshot is not to have an active geo-replication session, it is better to display an error saying the session is active when the snapshot create command is issued. Change-Id: I94593dbd2659610e033ca316176dda1ac8dc5ce6 BUG: 1129038 Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/8461 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Rajesh Joseph <rjoseph@redhat.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* geo-rep/glusterd: API to check active geo-rep session for the volume | Kotresh H R | 2014-08-21 | 4 | -67/+183
| | | | | | | | | | | | Requirement: Snapshot needs an API to fail the CLI if any geo-rep session is active for that volume. Solution: A function "gd_vol_is_geo_rep_active" is provided to check whether any geo-rep session is active for the volume. An in-memory dict called 'gsync_running_slaves' is maintained in the 'volinfo' structure to keep track of active geo-rep sessions for the volume. The key 'slavenode::slavevol' with value 'running' is added to the dict whenever geo-rep is started/resumed, and removed when it is stopped/paused. The 'count' in the dict is therefore used to decide whether geo-rep is active for that volume. Also added "this->name" to the gf_log calls in the routines this patch touches. Change-Id: I2b5de7dd686541c6b89c0fd0f7a4dbc92eecfac5 BUG: 1129008 Signed-off-by: Kotresh HR <khiremat@redhat.com> Reviewed-on: http://review.gluster.org/8459 Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
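Given the bookkeeping described above, the check itself is just a count test on the per-volume dict; a hedged sketch in which the member names follow the commit message and the exact upstream implementation may differ:

    /* Active if the volume's gsync_running_slaves dict has any entries. */
    static gf_boolean_t
    gd_vol_is_geo_rep_active_sketch (glusterd_volinfo_t *volinfo)
    {
            if (volinfo && volinfo->gsync_running_slaves &&
                volinfo->gsync_running_slaves->count > 0)
                    return _gf_true;

            return _gf_false;
    }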
* NetBSD /dev/fuse detection | Emmanuel Dreyfus | 2014-08-13 | 1 | -0/+4
| | | | | | | | | | | | NetBSD's FUSE being a pure userland implementation, there is no /dev/fuse to open. Test /dev/puffs (the kernel fs-in-userland subsystem supporting FUSE) instead. BUG: 764655 Change-Id: Ia65e95c246dc31ea2839cf64d7c851430828542e Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org> Reviewed-on: http://review.gluster.org/8478 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
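A minimal sketch of the platform-dependent probe (the function names are illustrative):

    #include <unistd.h>

    static const char *
    fuse_device_path (void)
    {
    #ifdef __NetBSD__
            return "/dev/puffs";   /* FUSE rides on the puffs subsystem */
    #else
            return "/dev/fuse";
    #endif
    }

    static int
    fuse_device_available (void)
    {
            return access (fuse_device_path (), F_OK) == 0;
    }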
* glusterd: Coverity fix for going out of scope leaks of directory pointer | ggarg | 2014-08-13 | 1 | -0/+2
| | | | | | | | | | | | In the function volgen_apply_filters(), the directory stream associated with "filterdir" should be closed after opening the directory stream corresponding to the directory name. closedir() also closes the underlying file descriptor associated with "filterdir". Coverity CID: 1124723 Change-Id: I78ed04047ded98bf95d201afed01c727aa506882 BUG: 789278 Reviewed-on: http://review.gluster.org/8088 Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
* build: make GLUSTERD_WORKDIR rely on localstatedir | Harshavardhana | 2014-08-07 | 4 | -8/+7
| | | | | | | | | | | | | | | | | | | | | | - Break-way from '/var/lib/glusterd' hard-coded previously, instead rely on 'configure' value from 'localstatedir' - Provide 's/lib/db' as default working directory for gluster management daemon for BSD and Darwin based installations - loff_t is really off_t on Darwin - fix-off the warnings generated by clang on FreeBSD/Darwin - Now 'tests/*' use GLUSTERD_WORKDIR a common variable for all platforms. - Define proper environment for running tests, define correct PATH and LD_LIBRARY_PATH when running tests, so that the desired version of glusterfs is used, regardless where it is installed. (Thanks to manu@netbsd.org for this additional work) Change-Id: I2339a0d9275de5939ccad3e52b535598064a35e7 BUG: 1111774 Signed-off-by: Harshavardhana <harsha@harshavardhana.net> Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org> Reviewed-on: http://review.gluster.org/8246 Tested-by: Gluster Build System <jenkins@build.gluster.com>
* feature/geo-rep: Keep marker.tstamp's mtime unchangeable during snapshot. | Kotresh H R | 2014-08-04 | 2 | -11/+123
| | | | | | | | | | | | Problem: Geo-replication does a full xsync crawl after snapshot restoration of slave and master; it does not do a history crawl. Analysis: Marker creates the 'marker.tstamp' file when geo-rep is started for the first time. The virtual extended attribute 'trusted.glusterfs.volume-mark' is maintained and, whenever it is queried on the gluster mount point, marker fills it on the fly and returns a combination of the uuid, the ctime of marker.tstamp and other fields. So the ctime of marker.tstamp, in other words the 'volume-mark', marks the geo-rep start time when the session is freshly created. From the above, after the first filesystem crawl (xsync) done during the first geo-rep start, stime should always be greater than the 'volume-mark'; whenever stime is less than the volume-mark, a full filesystem crawl (xsync) is done. Root Cause: When a snapshot is restored, the marker.tstamp file is freshly created, losing the timestamps it was originally created with. Solution: 1. Depend on mtime instead of ctime. 2. Restore the mtime and atime of marker.tstamp when a snapshot is created and restored. Change-Id: I4891b112f4aedc50cfae402832c50c5145807d7a BUG: 1125918 Signed-off-by: Kotresh H R <khiremat@redhat.com> Reviewed-on: http://review.gluster.org/8401 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
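Restoring the tstamp file's timestamps is a small stat/utimes dance around the copy; a standalone sketch of the idea (not the exact snapshot code):

    #include <sys/stat.h>
    #include <sys/time.h>

    /* Put the original atime/mtime back on marker.tstamp after it has been
     * recreated (orig holds the stat of the pre-snapshot file). */
    static int
    restore_tstamp_times (const char *tstamp_path, const struct stat *orig)
    {
            struct timeval tv[2];

            tv[0].tv_sec  = orig->st_atime;   /* atime */
            tv[0].tv_usec = 0;
            tv[1].tv_sec  = orig->st_mtime;   /* mtime */
            tv[1].tv_usec = 0;

            return utimes (tstamp_path, tv);
    }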
* glusterd/snapshot: Inherit the mount options of an original brick when creating snapshots | Vijaikumar M | 2014-08-03 | 7 | -122/+250
| | | | | | | | | | | | When creating a snapshot, an LVM volume is created at the backend and is mounted under /var/run/gluster/snaps/... However, this mount does not inherit the mount options of the original brick acting as the parent for the snap. If the snap is restored, this could lead to performance degradations, functional limitations, or in extreme scenarios even potential data loss. Change-Id: I67d70fd83430d83dacc5380c6c928e27fb9c9e1b BUG: 1125180 Signed-off-by: Vijaikumar M <vmallika@redhat.com> Reviewed-on: http://review.gluster.org/8394 Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd: Error msg for snapshot status for no existing snaps should be aligned with error messages of info and list | Vijaikumar M | 2014-07-30 | 1 | -35/+42
| | | | | | | | | | | | When a snapshot operation like status, info or list is performed on a non-existent snapshot, the error message for status is displayed as 'Snap not found', while for list and info it is displayed as 'Snapshot does not exist'. Use a consistent error message in all places. Change-Id: I7b241217dba62fda844481731a6858e4ecb12897 BUG: 1119641 Signed-off-by: Vijaikumar M <vmallika@redhat.com> Reviewed-on: http://review.gluster.org/8309 Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd/snapshot: Proper err msg for snapshot create command | Rajesh Joseph | 2014-07-30 | 1 | -0/+77
| | | | | | | | | | | | | | | problem: Snapshot command fails if one or more bricks are not thinly provisioned. But the error message is a generic error message which is confusing to the user. fix: Provide correct error message in case of failure. Change-Id: Iad247f966423a8f73ef6da57cab7ed6cddc05861 BUG: 1123646 Signed-off-by: Rajesh Joseph <rjoseph@redhat.com> Reviewed-on: http://review.gluster.org/8377 Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* rpc,glusterd: Set ret to 0 after call to rpc_clnt_submit() | Krutika Dhananjay | 2014-07-27 | 1 | -3/+10
| | | | | | | | | | | | This is to guard against a double STACK_DESTROY in the event that rpc_clnt_submit() fails and returns -1 after sending the request over the wire, in which case the stack could be destroyed twice: once in the callback function of the request and once in the error codepath of some of the callers of glusterd_submit_request(). This bug was introduced in http://review.gluster.org/#/c/8257/ Change-Id: I18a345db6eafc62ba312743201ced7d3f8697622 BUG: 1116243 Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com> Reviewed-on: http://review.gluster.org/8301 Reviewed-by: Harshavardhana <harsha@harshavardhana.net> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
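A hedged sketch of the resulting pattern: once the request has been handed to rpc_clnt_submit(), the reply/error callback owns the frame, so the caller must not treat a -1 return as its own error. The rpc_clnt_submit() argument list below is as recalled from rpc-clnt.h and may differ slightly between releases; the wrapper name is hypothetical.

    #include "rpc-clnt.h"

    static int
    submit_request_sketch (struct rpc_clnt *rpc, rpc_clnt_prog_t *prog,
                           int procnum, fop_cbk_fn_t cbkfn,
                           struct iovec *iov, int count,
                           struct iobref *iobref, call_frame_t *frame)
    {
            int ret = -1;

            ret = rpc_clnt_submit (rpc, prog, procnum, cbkfn,
                                   iov, count,        /* program header    */
                                   NULL, 0,           /* no extra payload  */
                                   iobref, frame,
                                   NULL, 0, NULL, 0,  /* response iovecs   */
                                   NULL);             /* response iobref   */

            /* From here the callback owns 'frame', even if rpc_clnt_submit()
             * returned -1, so do not propagate the error and risk a second
             * STACK_DESTROY in our caller. */
            ret = 0;

            return ret;
    }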
* feature/snapshot : Interface to delete all snapshots belonging to a system as well as to a particular volume. | Sachin Pandit | 2014-07-25 | 1 | -25/+186
| | | | | | | | | | | | Problem : With the current design we can only delete a single snapshot, and the deletion of a volume which contains snapshots is not allowed. Because of that, the user might be forced to delete all the snapshots manually before being allowed to delete a volume. Solution: The following interface lets the user delete all the snapshots of a system, or all those belonging to a particular volume.
Syntax : gluster snapshot delete all (deletes all the snapshots present in the system)
Syntax : gluster snapshot delete volume <volname> (deletes all the snapshots present in the specified volume)
========================================================================
Sample Output:
Case 1 : Deleting a single snapshot.
[root@snapshot-24 glusterfs]# gluster snapshot delete snap1
Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
snapshot delete: snap1: snap removed successfully
-----------------------------------------------------------------
Case 2 : Deleting all the snapshots in a volume.
[root@snapshot-24 glusterfs]# gluster snapshot delete volume vol1
Volume (vol1) contains 9 snapshot(s). Do you still want to continue and delete them? (y/n) y
snapshot delete: snap2: snap removed successfully
snapshot delete: snap3: snap removed successfully
snapshot delete: snap4: snap removed successfully
snapshot delete: snap5: snap removed successfully
. . .
-----------------------------------------------------------------
Case 3 : Deleting all the snapshots in the system.
[root@snapshot-24 glusterfs]# gluster snapshot delete all
System contains 4 snapshot(s). Do you still want to continue and delete them? (y/n) y
snapshot delete: snap7: snap removed successfully
snapshot delete: snap8: snap removed successfully
snapshot delete: snap9: snap removed successfully
snapshot delete: snap10: snap removed successfully
========================================================================
Change-Id: Ifec8e128ab2011cbbba208376b9c92cfbe7d8d71 BUG: 1112613 Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/8162 Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Avra Sengupta <asengupt@redhat.com> Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: update volinfo->subvol_count during remove-brick operation. | Ravishankar N | 2014-07-24 | 1 | -2/+2
| | | | | | | | | | | | | | | | | | Problem: In glusterd_op_remove_brick(), volinfo->subvol_count was getting updated only if the replica count was reduced due to which subvol_matcher_verify() gave false errors under certain scenarios (see bug description). Fix: updated subvol_count for every remove-brick operation. Change-Id: Id72691e2bda1c624cd7d8cae92f6bf32c101a6d3 BUG: 1120647 Signed-off-by: Ravishankar N <ravishankar@redhat.com> Reviewed-on: http://review.gluster.org/8326 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: Avoid spurious WARNING 'op_ctx modification failed' | Harshavardhana | 2014-07-24 | 1 | -2/+35
| | | | | | | | | | | | | | This patch fixes by wrapping this whole scenario and skips op_ctx modification for necessary commands. Change-Id: I3ecec19caefdc699d9a2dabfb456a89758ae4aa4 BUG: 1066529 Signed-off-by: Harshavardhana <harsha@harshavardhana.net> Reviewed-on: http://review.gluster.org/8337 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* cli: Xml output for geo-replication status command. | ndarshan | 2014-07-24 | 1 | -0/+29
| | | | | | | | | | | | This patch adds xml output for the geo-replication status and status detail commands. Sample:
--------------------------------------------------------------
<geoRep>
  <volume>
    <name>master</name>
    <sessions>
      <session>
        <session_slave>:2a301d66-b9d2-44b4-b827-d680d67123eb:ssh://XXXXXXXXXX::slave</session_slave>
        <pair>
          <master_node>localhost.localdomain</master_node>
          <master_node_uuid>2a301d66-b9d2-44b4-b827-d680d67123eb</master_node_uuid>
          <master_brick>/root/master_b1</master_brick>
          <slave>ssh://XXXXXXXXXXX::slave</slave>
          <status>faulty</status>
          <checkpoint_status>N/A</checkpoint_status>
          <crawl_status>N/A</crawl_status>
        </pair>
      </session>
    </sessions>
  </volume>
</geoRep>
-------------------------------------------------------------
Change-Id: Ia19dbe751c3ab1ec7cb8923cdd6c8b99c374072f BUG: 1121518 Signed-off-by: ndarshan <dnarayan@redhat.com> Reviewed-on: http://review.gluster.org/8089 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd/snapshot: Print correct error message on cli for snapshot operations performed on a cluster with op-version less than 30600 | Vijaikumar M | 2014-07-24 | 1 | -0/+10
| | | | | | | | | | | | Currently the cli reports 'Another transaction is in progress Please try again after sometime' when a snapshot operation is performed on a cluster with op-version less than 30600. We need to print the correct error message in this case. Change-Id: I5f144428d928393c3796bde96ce6e3a40fca8141 BUG: 1122816 Signed-off-by: Vijaikumar M <vmallika@redhat.com> Reviewed-on: http://review.gluster.org/8371 Reviewed-by: Avra Sengupta <asengupt@redhat.com> Reviewed-by: Sachin Pandit <spandit@redhat.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
* cli/snapshot : Don't display the snapshot hard-limit, soft-limit and auto-delete values in gluster volume info | Sachin Pandit | 2014-07-24 | 4 | -257/+222
| | | | | | | | | | | | Problem : Even though the snap-max-hard-limit, snap-max-soft-limit and auto-delete values were not set explicitly, they were shown in the output of gluster volume info. Solution : Check whether the value is already present in the dictionary (that means it is set); if the value is not present, then consider the default value. NOTE : This patch doesn't solve the problem where values which are set globally are displayed in gluster volume info. Change-Id: I61445b3d2a12eb68c38a19bea53b9051ad028050 BUG: 1113476 Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/8191 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-by: Avra Sengupta <asengupt@redhat.com> Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
* gluster: Fix the recursive goto outs in the source code. | Avra Sengupta | 2014-07-21 | 3 | -14/+18
| | | | | | | | | | | | | | | | | | | | | | | | | | | | Added a script check_goto.pl, that when run from the source code root, will scan all .c files to match the following pattern: label: if (condition) goto label; On finding such a pattern the script will print the file name and the line number. There are certain cases where the above recursive pattern is intended. Hence adding those labels to ignore-labels. Thanks Vijaikumar Mallikarjuna for the perl script. Also fixed all such existing errors Change-Id: I1b821d0a8c296f16e40faff20bd029bdc880c2e9 BUG: 1119256 Signed-off-by: Vijaikumar Mallikarjuna <vmallika@redhat.com> Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/8307 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Jeff Darcy <jdarcy@redhat.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* build: add libraries to LIBADD instead of LDFLAGS | Tiziano Müller | 2014-07-19 | 1 | -3/+3
| | | | | | | | | | | | | | | For a number of linker flags the order of the object files and the libs to link against matter (for example -Wl,--as-needed). Make sure that libraries are added via the LIBADD variable instead of LDFLAGS and therefore always come after the object files. Change-Id: I59d114752a0c7664b8678a72082ba5e445497fe5 Signed-off-by: Tiziano Müller <tiziano.mueller@stepping-stone.ch> Reviewed-on: http://review.gluster.org/8331 Reviewed-by: Harshavardhana <harsha@harshavardhana.net> Reviewed-by: Prashanth Pai <ppai@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Correctly reset volinfo->caps during volume create | Kaushal M | 2014-07-17 | 1 | -14/+12
| | | | | | | | | | Change-Id: I012899be08a06d39ea5c9fb98a66acf833d7213f BUG: 1120589 Signed-off-by: Kaushal M <kaushal@redhat.com> Reviewed-on: http://review.gluster.org/8323 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* mgmt/glusterd: do not check for snapd handle in restore if uss is disabled | Raghavendra Bhat | 2014-07-15 | 1 | -0/+19
| | | | | | | | | Change-Id: I01afe64685a5794cce9265580c6c5de57a045201 BUG: 1119582 Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com> Reviewed-on: http://review.gluster.org/8310 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: Improvements to peer identification | Kaushal M | 2014-07-15 | 15 | -721/+1339
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This patch improves the peer identification mechanism in glusterd and lays down the framework for further improvements, including better multi network support in glusterd. This patch mainly does two things, 1. Extend the peerinfo object to store a list of addresses instead of a single hostname as it does now. This also includes changes to make the peer update behaviour of 'peer probe' to add to the list. 2. Improve glusterd_friend_find_by_hostname() to perform better matching of hostnames. glusterd_friend_find_by_hostname() now does and initial quick string compare against all the peer addresses known to glusterd, after which it tries a more thorough search using address resolution and matching the struc sockaddr's. The above two changes together improve the peer identification situation in glusterd a lot. More information regarding the problem this patch attempts to resolve and the approach chosen can be found at http://www.gluster.org/community/documentation/index.php/Features/Better_peer_identification This commit is a squashed commit of the following changes, the development branch of which can be viewed at, https://github.com/kshlm/glusterfs/tree/better-peer-identification or, https://forge.gluster.org/~kshlm/glusterfs-core/kshlms-glusterfs/commits/better-peer-identification commit 198f86e60fab74faf082eaa02657a4d8f60b92f0 Author: Kaushal M <kaushal@redhat.com> Date: Tue Jul 15 14:34:06 2014 +0530 Update gluster.8 commit 35d597f3a6b3248373e727f7b7e889c92554d56c Author: Kaushal M <kaushal@redhat.com> Date: Tue Jul 15 09:01:01 2014 +0530 Address review comments https://review.gluster.org/#/c/8238/3 commit 47b5331e17304477322bd2daed5bbed503c34ca1 Merge: c71b12c 78128af Author: Kaushal M <kaushal@redhat.com> Date: Tue Jul 15 08:41:39 2014 +0530 Merge branch 'master' into better-peer-identification commit c71b12c164330e8d19d1df4734ab34ef9a8caad2 Merge: 57bc9de 0f5719a Author: Kaushal M <kaushal@redhat.com> Date: Thu Jul 10 19:50:19 2014 +0530 Merge branch 'master' into better-peer-identification commit 57bc9de9e4f49ff2b1620df9906cda50a3527a25 Author: Kaushal M <kaushal@redhat.com> Date: Thu Jul 10 19:49:08 2014 +0530 More fixes to review comments commit 5482cc363a687a9e246a0780ec88acd53e218501 Author: Kaushal M <kaushal@redhat.com> Date: Thu Jul 10 18:36:40 2014 +0530 Code refactoring in peer-utils based on review comments https://review.gluster.org/#/c/8238/2/xlators/mgmt/glusterd/src/glusterd-peer-utils.c commit 89b22c34757178f64d5fbaffa31e6302f841c060 Author: Kaushal M <kaushal@redhat.com> Date: Thu Jul 10 12:30:00 2014 +0530 Hostnames in peer status commit 63ebf9485cf50d736cf640238a1ab241671fcaf1 Merge: c8c8fdd f5f9721 Author: Kaushal M <kaushal@redhat.com> Date: Thu Jul 10 12:06:33 2014 +0530 Merge remote-tracking branch 
'origin/master' into better-peer-identification commit c8c8fdd2104b5b6b8a1af739b1dd952b74e6dd66 Author: Kaushal M <kaushal@redhat.com> Date: Wed Jul 9 18:35:27 2014 +0530 Hostnames in xml output commit 732a92a0167ad7b1d70edbc35ebd8307c2766ae1 Author: Kaushal M <kaushal@redhat.com> Date: Wed Jul 9 15:12:10 2014 +0530 Add hostnames to cli rsp dict during list-friends commit fcf43e3e317508f0c225024738a988a4af8e9205 Merge: c0e2624 72d96e2 Author: Kaushal M <kaushal@redhat.com> Date: Wed Jul 9 12:53:03 2014 +0530 Merge branch 'master' into better-peer-identification commit c0e262416728a3c536a8347a216e471eb2251535 Author: Kaushal M <kaushal@redhat.com> Date: Mon Jul 7 16:11:19 2014 +0530 Use list_for_each_entry_safe when cleaning peer hostnames commit 6132e60224eb592f3657e535a12a3e72c772da42 Author: Kaushal M <kaushal@redhat.com> Date: Mon Jul 7 15:52:19 2014 +0530 Fix crash in gd_add_friend_to_dict commit 88ffa9a508fd5aac0b2a76e6e76487ce0cab786a Author: Kaushal M <kaushal@redhat.com> Date: Mon Jul 7 13:19:44 2014 +0530 gd_peerinfo_destroy -> glusterd_peerinfo_destroy commit 4b36930a715b1e13cd1a77d136ef1cf78a06d574 Author: Kaushal M <kaushal@redhat.com> Date: Mon Jul 7 12:50:12 2014 +0530 More refactoring commit ee559b081d608c6501c10ae22166f26eeb65690e Author: Kaushal M <kaushal@redhat.com> Date: Mon Jul 7 12:14:40 2014 +0530 Major refactoring of code based on review comments at https://review.gluster.org/#/c/8238/1/xlators/mgmt/glusterd/src/glusterd-peer-utils.h commit e96dbc7bbb05fad2a9c424de41a394b8023fe48d Merge: 2613d1d 83c09b7 Author: Kaushal M <kaushal@redhat.com> Date: Mon Jul 7 09:47:05 2014 +0530 Merge remote-tracking branch 'origin/master' into better-peer-identification commit 2613d1daebff0c56812de821c06ed4c16bb9d447 Merge: b242cf6 9a50211 Author: Kaushal M <kaushal@redhat.com> Date: Fri Jul 4 15:28:57 2014 +0530 Merge remote-tracking branch 'origin/master' into better-peer-identification commit b242cf66d95dd3dd5e3975aa430baa6bd74b8a29 Author: Kaushal M <kaushal@redhat.com> Date: Fri Jul 4 15:08:18 2014 +0530 Fix a silly mistake, if (ctx->req) => if (ctx->req == NULL) commit c835ed26433830ceed57289143f596cf60421558 Author: Kaushal M <kaushal@redhat.com> Date: Fri Jul 4 14:58:23 2014 +0530 Fix reverse probe. commit 9ede17f9329b854b02e8ad159f173244789fd08c Author: Kaushal M <kaushal@redhat.com> Date: Fri Jul 4 13:31:32 2014 +0530 Fix friend import for existing peers commit 891bf74c7350064dfb008d1b7294bcec28d680fd Author: Kaushal M <kaushal@redhat.com> Date: Fri Jul 4 13:08:36 2014 +0530 Set first hostname in peerinfo->hostnames to peerinfo->hostname commit 9421d6a217381a7427a7d84f369280883ca4297a Author: Kaushal M <kaushal@redhat.com> Date: Fri Jul 4 12:21:40 2014 +0530 Fix gf_asprintf return val check in glusterd_store_peer_write commit defac978c1d94011ce8195e311839b9ffce057e7 Author: Kaushal M <kaushal@redhat.com> Date: Fri Jul 4 11:16:13 2014 +0530 Fix store_retrieve_peers to correctly cleanup. commit 00a799f5de1121b0cb7421da8285f9407063e1bd Author: Kaushal M <kaushal@redhat.com> Date: Fri Jul 4 10:52:11 2014 +0530 Update address list in glusterd_probe_cbk only when needed. 
commit 7a628e8a9c562d85709c69cfa13fb1774c521b75 Merge: d191985 dc46d5e Author: Kaushal M <kaushal@redhat.com> Date: Fri Jul 4 09:24:12 2014 +0530 Merge remote-tracking branch 'origin/master' into better-peer-identification commit d1919858e6639d2b54d716a61f662d9752ec5ff1 Author: Kaushal M <kaushal@redhat.com> Date: Tue Jul 1 18:59:49 2014 +0530 gf_compare_addrinfo -> gf_compare_sockaddr commit 31d8ef730d408f8d9ba8f504fa648f7dcd59da87 Merge: 93bbede 86ee233 Author: Kaushal M <kaushal@redhat.com> Date: Tue Jul 1 18:16:13 2014 +0530 Merge remote-tracking branch 'origin/master' into better-peer-identification commit 93bbedeac5181e29f59b2acd08f638146812ec41 Author: Kaushal M <kaushal@redhat.com> Date: Tue Jul 1 18:15:16 2014 +0530 Improve glusterd_friend_find_by_hostname glusterd_friend_find_by_hostname will now do an initial quick search for the peerinfo performing string comparisions on the given host string. It follows it with a more thorough match, by resolving the addresses and comparing addrinfos instead of strings. commit 2542cdbc45aa9cfcaf1f174686158d5565cdd07b Author: Kaushal M <kaushal@redhat.com> Date: Tue Jul 1 17:21:10 2014 +0530 New utility gf_compare_addrinfo commit 338676e8389a44bd91136eebd110197429c2566c Author: Kaushal M <kaushal@redhat.com> Date: Tue Jul 1 14:55:56 2014 +0530 Use gd_peer_has_address instead of strcmp commit 28d45be51f594328741c44455bd80ac9d64ca501 Merge: 728266e 991dd5e Author: Kaushal M <kaushal@redhat.com> Date: Tue Jul 1 14:54:40 2014 +0530 Merge branch 'master' into better-peer-identification commit 728266eb16d5f5a4bf36266044425ae164337f99 Merge: 7d9b87b 2417de9 Author: Kaushal M <kaushal@redhat.com> Date: Tue Jul 1 09:55:13 2014 +0530 Merge remote-tracking branch 'origin/master' into better-peer-identification commit 7d9b87b84955ec17daeaf88a3e7462914039430f Merge: b890625 e02275c Author: Kaushal M <kshlmster@gmail.com> Date: Tue Jul 1 08:41:40 2014 +0530 Merge pull request #4 from vpshastry/better-peer-identification Better peer identification commit e02275c52fb83c72ad082c098fd3e432c2b9c526 Merge: 75ee90d b890625 Author: Varun Shastry <vshastry@redhat.com> Date: Mon Jun 30 16:44:29 2014 +0530 Merge branch 'better-peer-identification' of https://github.com/kshlm/glusterfs into better-peer-identification-kaushal-github commit 75ee90d2f272e49b94d24c9ca4571e89a83055ff Author: Varun Shastry <vshastry@redhat.com> Date: Mon Jun 30 15:36:10 2014 +0530 glusterd: add to the list if the probed uuid pre-exists Signed-off-by: Varun Shastry <vshastry@redhat.com> commit b890625d8164c660695daef3285c67979eef723e Merge: 04c5d60 187a7a9 Author: Kaushal M <kaushal@redhat.com> Date: Mon Jun 30 11:44:13 2014 +0530 Merge remote-tracking branch 'origin/master' into better-peer-identification commit 04c5d60cb938c8d94b214689580b40abb1b0ffcd Merge: 3a5bfa1 e01edb6 Author: Kaushal M <kshlmster@gmail.com> Date: Sat Jun 28 19:23:33 2014 +0530 Merge pull request #3 from vpshastry/better-peer-identification glusterd: search through the list of hostnames in the peerinfo commit 0c64f3346a977f9165ac55a84a1e03c40a7573a7 Merge: e01edb6 3a5bfa1 Author: Varun Shastry <vshastry@redhat.com> Date: Sat Jun 28 10:43:29 2014 +0530 Merge branch 'better-peer-identification' of https://github.com/kshlm/glusterfs into better-peer-identification-kaushal-github commit e01edb63153a1008db70b8fa76ae5b535e099326 Author: Varun Shastry <vshastry@redhat.com> Date: Fri Jun 27 12:29:36 2014 +0530 glusterd: search through the list of hostnames in the peerinfo Signed-off-by: Varun Shastry <vshastry@redhat.com> 
commit 3a5bfa15855e660db2bfde644727371dd2d618cc Merge: cda6d31 371ea35 Author: Kaushal M <kshlmster@gmail.com> Date: Fri Jun 27 11:31:17 2014 +0530 Merge pull request #1 from vpshastry/better-peer-identification glusterd: Add hostname to list instead of replaceing upon update commit 371ea354f198b4182382d5403c5960c0b2add6b6 Author: Varun Shastry <vshastry@redhat.com> Date: Fri Jun 27 11:24:54 2014 +0530 glusterd: Add hostname to list instead of replaceing upon update Signed-off-by: Varun Shastry <vshastry@redhat.com> commit cda6d3152886623ecbf46baf0048ebe0119b30b6 Author: Kaushal M <kaushal@redhat.com> Date: Thu Jun 26 19:52:52 2014 +0530 Import address lists commit 6649b54aa0440130c08e827e0a1d1bbfb840eca9 Author: Kaushal M <kaushal@redhat.com> Date: Thu Jun 26 19:15:37 2014 +0530 Implement export address list commit 55990034eead92bc9b936240029e460a4bf152d5 Author: Kaushal M <kaushal@redhat.com> Date: Thu Jun 26 18:11:59 2014 +0530 Use first address in list to when setting up the peer RPC. commit a35fde8d19b9988eb04c652fb3a5e4f84d90ad00 Author: Kaushal M <kaushal@redhat.com> Date: Thu Jun 26 18:03:04 2014 +0530 Properly free addresses on glusterd_peer_destroy commit 1988081db09ac9205f3dc7268cef8be267f3ce8b Author: Kaushal M <kaushal@redhat.com> Date: Thu Jun 26 17:52:35 2014 +0530 Restore peerinfo with address list implemented. commit 66f524d5749a12f4910dd6b06c9d91f37e1d831e Author: Kaushal M <kaushal@redhat.com> Date: Mon Jun 23 13:02:23 2014 +0530 Move out all peer related utilities from glusterd-utils to glusterd-peer-utils commit 14a2a326a4dff11b55490dca2a14f39320931340 Author: Kaushal M <kaushal@redhat.com> Date: Tue May 27 12:16:41 2014 +0530 Compilation fix commit c59cd351d0a102d0d5f3ea9001fd33c4edcb262f Author: Kaushal M <kaushal@redhat.com> Date: Mon May 5 12:51:11 2014 +0530 Add store support for hostname list commit b70325f0beb884ad12645ef40185f0bf6cedd741 Author: Kaushal M <kaushal@redhat.com> Date: Fri May 2 15:58:07 2014 +0530 Add a hostnames list to glusterd_peerinfo_t glusterd_peerinfo_new will now init this list and add the given hostname as the lists first member. Signed-off-by: Kaushal M <kaushal@redhat.com> Signed-off-by: Varun Shastry <vshastry@redhat.com> Change-Id: Ief3c5d6d6f16571ee2fab0a45e638b9d6506a06e BUG: 1119547 Reviewed-on: http://review.gluster.org/8238 Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
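The thorough-match step described above (resolve the hostnames, then compare the resulting struct sockaddr contents) is, at its core, an address-equality helper along the lines of the gf_compare_sockaddr utility mentioned in the squashed history. This standalone sketch shows the idea for IPv4/IPv6; it is illustrative, not the upstream function:

    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    /* Two resolved addresses match if the family and address bytes match
     * (ports are irrelevant for peer identification). */
    static int
    sockaddr_equal (const struct sockaddr *a, const struct sockaddr *b)
    {
            if (a->sa_family != b->sa_family)
                    return 0;

            if (a->sa_family == AF_INET) {
                    const struct sockaddr_in *a4 = (const void *) a;
                    const struct sockaddr_in *b4 = (const void *) b;
                    return a4->sin_addr.s_addr == b4->sin_addr.s_addr;
            }

            if (a->sa_family == AF_INET6) {
                    const struct sockaddr_in6 *a6 = (const void *) a;
                    const struct sockaddr_in6 *b6 = (const void *) b;
                    return memcmp (&a6->sin6_addr, &b6->sin6_addr,
                                   sizeof (a6->sin6_addr)) == 0;
            }

            return 0;
    }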
* glusterd: Fix for resource leak coverity bug 1223045. | Raghavendra Talur | 2014-07-14 | 1 | -0/+1
| | | | | | | | | | Change-Id: I96261e7f5cd7b5550d3100750c80190dd932a8ab BUG: 789278 Signed-off-by: Raghavendra Talur <rtalur@redhat.com> Reviewed-on: http://review.gluster.org/8252 Reviewed-by: Avra Sengupta <asengupt@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
* glusterd/snapshot: Update fstype for local bricks only | Avra Sengupta | 2014-07-14 | 1 | -11/+18
| | | | | | | | | | | | | | | | While creating snapshot, update fstype for local bricks only and not for bricks hosted on other nodes Also returning ret as 0, in case no cleanup is required in post-validation, so that a post-validation failure is not logged, every time a pre-validation failure happens. Change-Id: I6364e33cfd9528e0a988ee48f3443239ee884336 BUG: 1111060 Signed-off-by: Avra Sengupta <asengupt@redhat.com> Reviewed-on: http://review.gluster.org/8272 Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* cli/snapshot: provide --xml support for all snapshot command | Rajesh Joseph | 2014-07-12 | 2 | -5/+94
| | | | | | | | | | | | Now the --xml option can be used with all snapshot commands. It produces the cli output in xml form. Change-Id: Ifc0ac31d2a9f91e136e87f3b51a629df7dba94e8 BUG: 1096610 Signed-off-by: Rajesh Joseph <rjoseph@redhat.com> Reviewed-on: http://review.gluster.org/7663 Reviewed-by: Sachin Pandit <spandit@redhat.com> Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* dht: support heterogeneous brick sizes | Jeff Darcy | 2014-07-12 | 1 | -0/+4
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Calculation of layouts now considers the size of each brick, so that smaller bricks don't get an "unfair" share of allocations and start returning ENOSPC while the larger bricks still have plenty of space. The observation has been made that some clients might get ENOTCONN when trying to fetch disk-size information, and end up calculating layouts differently. The following meta-observations can be made. (1) This scenario is extremely unlikely in configurations with AFR. (2) The most likely consequence of this scenario is that some files will be placed sub-optimally by the client with the obsolete (non-weighted) layout. They'll still be found anyway, so this isn't a show stopper. (3) Without this patch it's *guaranteed* that some files will be placed sub-optimally, because any layout that fails to account for brick sizes is sub-optimal. (4) We shouldn't be doing fix-layout from two nodes simultaneously anyway. That's inefficient at best. Any instances of such behavior are separate bugs, which should be fixed separately. (5) In the most extreme edge case, two nodes doing weighted and non-weighted layout fixes could race and end up creating an internally inconsistent layout. This condition is still transient; it will be detected and repaired automatically the next time anyone fetches the layout. (If it's not that's also a preexisting bug that can show up in other contexts.) In conclusion, it's not the purpose of this patch to fix bugs elsewhere in DHT. Its purpose is to make life incrementally better for users who add new hardware with larger disks etc. than the older equipment. It's only one part of an ongoing process to improve layout management and repair, all the way up to support for multiple hash rings or tiering. Change-Id: I05eb6f9eface9cdaf8622e0260c8c7f29020447f BUG: 1114680 Signed-off-by: Jeff Darcy <jdarcy@redhat.com> Reviewed-on: http://review.gluster.org/8093 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
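To make the weighting concrete, here is a small standalone sketch of size-proportional layout ranges over the 32-bit hash space (illustrative arithmetic only, not DHT's actual layout code):

    #include <stdint.h>

    /* Give each brick a contiguous hash range proportional to its size;
     * the last brick absorbs any rounding remainder. */
    static void
    assign_weighted_ranges (const uint64_t *brick_size, int nbricks,
                            uint32_t *start, uint32_t *stop)
    {
            uint64_t total  = 0;
            uint64_t cursor = 0;
            int      i;

            for (i = 0; i < nbricks; i++)
                    total += brick_size[i];
            if (nbricks <= 0 || total == 0)
                    return;

            for (i = 0; i < nbricks; i++) {
                    /* double avoids 64-bit overflow for very large bricks */
                    uint64_t chunk = (uint64_t) ((double) UINT32_MAX *
                                                 brick_size[i] / total);

                    start[i] = (uint32_t) cursor;
                    cursor  += chunk;
                    stop[i]  = (i == nbricks - 1) ? UINT32_MAX
                                                  : (uint32_t) (cursor - 1);
            }
    }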
* cli/glusterd: Added support for dispersed volumes | Xavier Hernandez | 2014-07-11 | 8 | -9/+168
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Two new options have been added to the 'create' command of the cli interface: disperse [<count>] redundancy <count> Both are optional. A dispersed volume is created by specifying, at least, one of them. If 'disperse' is missing or it's present but '<count>' does not, the number of bricks enumerated in the command line is taken as the disperse count. If 'redundancy' is missing, the lowest optimal value is assumed. A configuration is considered optimal (for most workloads) when the disperse count - redundancy count is a power of 2. If the resulting redundancy is 1, the volume is created normally, but if it's greater than 1, a warning is shown to the user and he/she must answer yes/no to continue volume creation. If there isn't any optimal value for the given number of bricks, a warning is also shown and, if the user accepts, a redundancy of 1 is used. If 'redundancy' is specified and the resulting volume is not optimal, another warning is shown to the user. A distributed-disperse volume can be created using a number of bricks multiple of the disperse count. Change-Id: Iab93efbe78e905cdb91f54f3741599f7ea6645e4 BUG: 1118629 Signed-off-by: Xavier Hernandez <xhernandez@datalab.es> Reviewed-on: http://review.gluster.org/7782 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Jeff Darcy <jdarcy@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
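One reading of the default-redundancy rule above can be sketched as: pick the smallest redundancy >= 1 that makes (disperse - redundancy) a power of two, and fall back to 1 (with a warning) if no such value exists below half the disperse count. This is an illustrative interpretation, not the exact CLI code:

    static int
    is_power_of_two (int n)
    {
            return (n > 0) && ((n & (n - 1)) == 0);
    }

    /* Lowest "optimal" redundancy for a given disperse count. */
    static int
    default_redundancy (int disperse_count)
    {
            int r;

            for (r = 1; 2 * r < disperse_count; r++) {
                    if (is_power_of_two (disperse_count - r))
                            return r;
            }
            return 1;   /* no optimal value below disperse/2: warn, use 1 */
    }

    /* e.g. default_redundancy(6) == 2 (6 = 4 + 2); default_redundancy(5) == 1 */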
* socket/glusterd/client: enable SSL for management | Jeff Darcy | 2014-07-10 | 1 | -5/+32
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The feature is controlled by presence of the following file: /var/lib/glusterd/secure-access See the comment near the definition of SECURE_ACCESS_FILE in glusterfs.h for the rationale. With this enabled, the following rules apply to connections: UNIX-domain sockets never have SSL. Management-port sockets (both connecting and accepting, in daemons and CLI) have SSL based on presence of the file. Other IP sockets have SSL based on the existing client.ssl and server.ssl volume options. Transport multi-threading is explicitly turned off in glusterd (it would otherwise be turned on when SSL is) due to multi-threading issues. Tests have been elided to avoid risk of leaving a file which will cause all subsequent tests to run with management SSL still enabled. IMPLEMENTATION NOTE The implementation is a bit messy, and consists of two stages. First we decide whether to set the relevant fields in our context structure, based on presence of the sentinel file OR a command-line override. Later we decide whether a particular connection should actually use SSL, based on the context flags plus what kind of connection we're making[1] and what kind of daemon we're in[2]. [1] inbound, outbound to glusterd port, other outbound [2] glusterd, glusterfsd, other TESTING NOTE Instead of just running one special test for this feature, the ideal would be to run all tests with management SSL enabled. However, it would be inappropriate or premature to set up an optional feature in the patch itself. Therefore, the method of choice is to submit a separate patch on top, which modifies "cleanup" in include.rc to recreate the secure-access file and associated SSL certificate/key files before each test. Change-Id: I0e04d6d08163893e24ec8c031748c5c447d7f780 BUG: 1114604 Signed-off-by: Jeff Darcy <jdarcy@redhat.com> Reviewed-on: http://review.gluster.org/8094 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
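The decision point in stage one reduces to a file-existence test; a hedged sketch (the macro value shown matches the path given above, while the authoritative definition lives in glusterfs.h, and the function name is illustrative):

    #include <unistd.h>

    #ifndef SECURE_ACCESS_FILE
    #define SECURE_ACCESS_FILE "/var/lib/glusterd/secure-access"
    #endif

    /* Management connections use SSL when the sentinel file exists
     * (or when an explicit command-line override is given). */
    static int
    use_mgmt_ssl (void)
    {
            return access (SECURE_ACCESS_FILE, R_OK) == 0;
    }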