path: root/xlators/mgmt
Commit message | Author | Age | Files | Lines
* glusterd: check_volume_exists should query in-memory representation | Krishnan Parthasarathi | 2014-12-28 | 1 | -19/+2
  ... instead of consulting the on-disk data directory. There is no reason why the on-disk representation would be more accurate than the in-memory one. In fact, it is the other way around when a node is reconciling volume/cluster configuration with the rest of the cluster.
  Change-Id: I786823efdf1d0f6b9e6fcdb72d51e5227c399ce1
  BUG: 1176770
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/9292
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: coverity fix for overrun in glusterd_stop_uds_listener | Atin Mukherjee | 2014-12-28 | 1 | -5/+5
  CID: 1260432
  Change-Id: I6845bc4c231b53428419a5a2ad0c78ea9da31058
  BUG: 1093692
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/9338
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* mgmt/glusterd: Add option to enable lock trace | Pranith Kumar K | 2014-12-28 | 1 | -0/+6
  Change-Id: I24ed0f866d53e91a8323c043a38f73207cbfd7d2
  BUG: 1168189
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/9351
  Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd: fix unix domain notify fn | Krishnan Parthasarathi | 2014-12-23 | 1 | -2/+11
  ... and unlink the 'right' socket file
  Change-Id: Id12ee8c622914555b7933104e13b43b3b31b5d19
  BUG: 1176770
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/9315
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Tested-by: Kaushal M <kaushal@redhat.com>
* glusterd: Maintain per transaction xaction_peers list in syncop & mgmt_v3 | Atin Mukherjee | 2014-12-22 | 5 | -139/+222
  In the current implementation the xaction_peers list is maintained in a global variable (glusterd_priv_t) for syncop/mgmt_v3. This means consistency and atomicity of the peerinfo list across transactions is not guaranteed when multiple syncop/mgmt_v3 transactions are going through.
  We had run into a problem in mgmt_v3-locks.t, which was failing spuriously. The reason was that two volume set operations (on two different volumes) were going through simultaneously, and both transactions were manipulating the same xaction_peers structure, which led to a corrupted list. Because of this, in some cases the unlock request to a peer was never triggered and we ended up with stale locks.
  The solution is to maintain a per-transaction local xaction_peers list for every syncop.
  Please note I've identified this problem in the op-sm area as well and a separate patch will be attempted to fix it.
  Finally, thanks to Krishnan Parthasarathi and Kaushal M for your constant help in getting to the root cause.
  Change-Id: Ib1eaac9e5c8fc319f4e7f8d2ad965bc1357a7c63
  BUG: 1173414
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/9269
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
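  A minimal standalone sketch of the per-transaction list idea, with illustrative names (struct peer, snapshot_peers) rather than glusterd's real types: each transaction copies the shared peer list under a lock and then works only on its private copy, immune to concurrent peer commands.

    #include <pthread.h>
    #include <stdlib.h>
    #include <string.h>

    struct peer { char host[64]; struct peer *next; };

    static struct peer *global_peers;              /* shared, mutable list */
    static pthread_mutex_t peers_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Copy the current peer list under the lock; the transaction then
     * operates only on its private copy. */
    static struct peer *snapshot_peers(void)
    {
        struct peer *copy = NULL, **tail = &copy;

        pthread_mutex_lock(&peers_lock);
        for (struct peer *p = global_peers; p; p = p->next) {
            struct peer *c = malloc(sizeof *c);
            if (!c)
                break;                             /* sketch: stop on OOM */
            memcpy(c->host, p->host, sizeof c->host);
            c->next = NULL;
            *tail = c;
            tail = &c->next;
        }
        pthread_mutex_unlock(&peers_lock);
        return copy;          /* caller frees it after the unlock phase */
    }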
* glusterd: Returning success from mgmt_v3 handler functions | Avra Sengupta | 2014-12-19 | 1 | -8/+42
  The mgmt_v3 handler functions already send the ret code as part of the *send_resp calls, and further propagating the ret code to the calling functions will lead to double deletion of the req object. Hence returning success from the mgmt_v3 handler functions.
  Change-Id: I1090e49c54a786daae5fd97b5c1fbcb5d819acba
  BUG: 1138577
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/8620
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd: Copy brick port no. if brick is running | Avra Sengupta | 2014-12-19 | 1 | -5/+18
  Instead of relying on brickinfo->status, check if the brick process is running before copying the brick port number.
  Change-Id: I246465fa4cf4911da63a1c26bbb51cc4ed4630ac
  BUG: 1175700
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/9297
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* mgmt/glusterd: do not restart nfs server when snapshot is deactivated | Raghavendra Bhat | 2014-12-18 | 1 | -0/+3
  Change-Id: Ie5eaa2beb4446640b22873f91e17da90d1cd8fad
  BUG: 1174625
  Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
  Reviewed-on: http://review.gluster.org/9280
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* features/snapview-client: handle readdir requests differently for samba | Raghavendra Bhat | 2014-12-09 | 1 | -0/+9
  For samba export, the entry point is also added to the readdir response.
  Change-Id: I825c017e0f16db1f1890bb56e086f36e6558a1c2
  BUG: 1168875
  Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
  Reviewed-on: http://review.gluster.org/9218
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Coverity fix for string_overflow overrun | Gaurav Kumar Garg | 2014-12-08 | 2 | -3/+3
  In the function glusterd_dump_peer(), "input_key" was copied into the "key" buffer without checking the length, which might cause a string_overflow overrun. Other Coverity issues had a similar problem. With this fix, "input_key" is copied into the "key" buffer bounded by the buffer's maximum length.
  Coverity CID: 1256171
  Coverity CID: 1256172
  Coverity CID: 1256174
  Change-Id: I4e092309d9503bd79ff82cf83ed5e8d758743453
  BUG: 1093692
  Signed-off-by: Gaurav Kumar Garg ggarg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/9208
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: use synclock_t for synchronizing concurrent 'op_sm' invocations | Krishnan Parthasarathi | 2014-12-01 | 1 | -5/+7
  In glusterd_op_sm(), we lock and unlock the gd_op_sm_lock mutex. Unfortunately, locking and unlocking can happen in different threads (a task swap can occur in a handler call with the use of synctasks). This case is explicitly covered by POSIX: the behavior is undefined.
  http://pubs.opengroup.org/onlinepubs/9699919799/functions/pthread_mutex_lock.html
  When unlocking from a thread that is not the owner, Linux seems to be fine (though you never know with undefined behavior), while NetBSD returns EPERM, causing a spurious error in tests/basic/pump.
  To fix this, we use synclock_t, which was meant precisely for this. synclock is a pthread_mutex_t-like synchronization object which uses the synctask handle as the owner, and is immune to the task being run on multiple threads during its lifetime.
  Change-Id: Idca15190d42f32a843088cc8236138f676377586
  BUG: 1129939
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/9212
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Tested-by: Kaushal M <kaushal@redhat.com>
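  A standalone sketch of the ownership idea behind synclock_t (tasklock_t and its functions are illustrative, not gluster's API): the lock records an opaque task handle as owner instead of the OS thread, so lock and unlock may legally run on different threads of the same synctask.

    #include <pthread.h>

    typedef struct {
        pthread_mutex_t guard;   /* init with PTHREAD_MUTEX_INITIALIZER */
        pthread_cond_t  cond;    /* init with PTHREAD_COND_INITIALIZER  */
        void           *owner;   /* task handle, NOT a thread id        */
    } tasklock_t;

    static void tasklock_lock(tasklock_t *l, void *task)
    {
        pthread_mutex_lock(&l->guard);
        while (l->owner != NULL)
            pthread_cond_wait(&l->cond, &l->guard);
        l->owner = task;         /* ownership is bound to the task */
        pthread_mutex_unlock(&l->guard);
    }

    static void tasklock_unlock(tasklock_t *l, void *task)
    {
        pthread_mutex_lock(&l->guard);
        if (l->owner == task) {  /* any thread running this task may unlock */
            l->owner = NULL;
            pthread_cond_signal(&l->cond);
        }
        pthread_mutex_unlock(&l->guard);
    }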
* mgmt/glusterd: Out of bounds access to fs_info struct | Petr Medonos | 2014-12-01 | 1 | -1/+1
  Change-Id: Ifa0d4ac17f9da94660a7b7f567a0f07b5cec7aec
  BUG: 1164775
  Signed-off-by: Petr Medonos <petr.medonos@etnetera.cz>
  Reviewed-on: http://review.gluster.org/9138
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd/uss: Create rebalance volfile. | Avra Sengupta | 2014-11-30 | 5 | -21/+117
  Create a new rebalance volfile, which will not contain the snap-view client translators, irrespective of the status of USS. This volfile will be created and regenerated every time the fuse-volfile is generated, and will be consumed by the rebalance process.
  Change-Id: I514a8e88d06c0b8fb6949c3a3e6dc4dbe55e38af
  BUG: 1164711
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/9190
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd/uss: if snapd is not running, return success from glusterd_handle_snapd_option | Atin Mukherjee | 2014-11-30 | 1 | -0/+3
  glusterd_handle_snapd_option was returning failure if snapd is not running, because of which gluster commands were failing.
  Change-Id: I22286f4ecf28b57dfb6fb8ceb52ca8bdc66aec5d
  BUG: 1168803
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/9206
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd: logging improvement in txn_opinfo getter/setter function | Atin Mukherjee | 2014-11-30 | 1 | -11/+12
  There is a code path (__glusterd_handle_stage_op) where glusterd_get_txn_opinfo may fail to get a valid transaction id if no volume name is provided in the command. However, if this function fails to get a txn id inside the op state machine, that is a serious issue and op-sm is impacted. From a debuggability aspect, gf_log() can never identify the consumer of this function, so logging these failures with gf_log_calling_fn is a must here.
  Change-Id: I4937a9fb20cc6a747fd30dcd9fd4936398d0602a
  BUG: 1168809
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/9207
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* core: fix remaining *printf format warnings on 32-bit | Kaleb S. KEITHLEY | 2014-11-26 | 1 | -2/+2
  This fixes a few lingering size_t problems. Of particular note are some uses of off_t for size params in function calls. There is no _portable_ way to correctly print an off_t. The best you can do is use a scratch int64_t/PRId64 or uint64_t/PRIu64.
  Change-Id: I86f3cf4678c7dbe5cad156ae8d540a66545f000d
  BUG: 1110916
  Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
  Reviewed-on: http://review.gluster.org/8105
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
  Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
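  For instance, the portable pattern the message prescribes (plain C, nothing gluster-specific): never hand an off_t straight to *printf; widen it to a fixed 64-bit type first.

    #include <inttypes.h>
    #include <stdio.h>
    #include <sys/types.h>

    void print_offset(off_t off)
    {
        /* correct on both 32-bit and 64-bit off_t platforms */
        printf("offset=%" PRId64 "\n", (int64_t)off);
    }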
* core: fix Ubuntu code audit (cppcheck) results | Kaleb S. KEITHLEY | 2014-11-25 | 5 | -16/+42
  See also http://review.gluster.org/#/c/7693/, BZ 1091677
  AFAICT these are false positives:
  [geo-replication/src/gsyncd.c:100]: (error) Memory leak: str
  [geo-replication/src/gsyncd.c:403]: (error) Memory leak: argv
  [xlators/nfs/server/src/nlm4.c:1201]: (error) Possible null pointer dereference: fde
  [xlators/cluster/afr/src/afr-self-heal-common.c:138]: (error) Possible null pointer dereference: __ptr
  [xlators/cluster/afr/src/afr-self-heal-common.c:140]: (error) Possible null pointer dereference: __ptr
  [xlators/cluster/afr/src/afr-self-heal-common.c:331]: (error) Possible null pointer dereference: __ptr
  Test program:
  [extras/test/test-ffop.c:27]: (error) Buffer overrun possible for long command line arguments.
  [tests/basic/fops-sanity.c:55]: (error) Buffer overrun possible for long command line arguments.
  The remainder are fixed with this change-set:
  [cli/src/cli-rpc-ops.c:8883]: (error) Possible null pointer dereference: local
  [cli/src/cli-rpc-ops.c:8886]: (error) Possible null pointer dereference: local
  [contrib/uuid/gen_uuid.c:369]: (warning) %ld in format string (no. 2) requires 'long *' but the argument type is 'unsigned long *'.
  [contrib/uuid/gen_uuid.c:369]: (warning) %ld in format string (no. 3) requires 'long *' but the argument type is 'unsigned long *'.
  [xlators/cluster/dht/src/dht-rebalance.c:1734]: (error) Possible null pointer dereference: ctx
  [xlators/cluster/stripe/src/stripe.c:4940]: (error) Possible null pointer dereference: local
  [xlators/mgmt/glusterd/src/glusterd-geo-rep.c:1718]: (error) Possible null pointer dereference: command
  [xlators/mgmt/glusterd/src/glusterd-replace-brick.c:942]: (error) Resource leak: file
  [xlators/mgmt/glusterd/src/glusterd-replace-brick.c:1026]: (error) Resource leak: file
  [xlators/mgmt/glusterd/src/glusterd-sm.c:249]: (error) Possible null pointer dereference: new_ev_ctx
  [xlators/mgmt/glusterd/src/glusterd-snapshot.c:6917]: (error) Possible null pointer dereference: volinfo
  [xlators/mgmt/glusterd/src/glusterd-utils.c:4517]: (error) Possible null pointer dereference: this
  [xlators/mgmt/glusterd/src/glusterd-utils.c:6662]: (error) Possible null pointer dereference: this
  [xlators/mgmt/glusterd/src/glusterd-utils.c:7708]: (error) Possible null pointer dereference: this
  [xlators/mount/fuse/src/fuse-bridge.c:4687]: (error) Uninitialized variable: finh
  [xlators/mount/fuse/src/fuse-bridge.c:3080]: (error) Possible null pointer dereference: state
  [xlators/nfs/server/src/nfs-common.c:89]: (error) Dangerous usage of 'volname' (strncpy doesn't always null-terminate it).
  [xlators/performance/quick-read/src/quick-read.c:586]: (error) Possible null pointer dereference: iobuf
  Rerunning cppcheck after fixing the above:
  As before, test program:
  [extras/test/test-ffop.c:27]: (error) Buffer overrun possible for long command line arguments.
  [tests/basic/fops-sanity.c:55]: (error) Buffer overrun possible for long command line arguments.
  As before, false positives:
  [geo-replication/src/gsyncd.c:100]: (error) Memory leak: str
  [geo-replication/src/gsyncd.c:403]: (error) Memory leak: argv
  [xlators/nfs/server/src/nlm4.c:1201]: (error) Possible null pointer dereference: fde
  [xlators/cluster/afr/src/afr-self-heal-common.c:138]: (error) Possible null pointer dereference: __ptr
  [xlators/cluster/afr/src/afr-self-heal-common.c:140]: (error) Possible null pointer dereference: __ptr
  [xlators/cluster/afr/src/afr-self-heal-common.c:331]: (error) Possible null pointer dereference: __ptr
  False positive after fix:
  [xlators/performance/quick-read/src/quick-read.c:584]: (error) Possible null pointer dereference: iobuf
  Change-Id: I20e0e3ac1d600b2f2120b8d8536cd6d9e17023e8
  BUG: 1109180
  Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
  Reviewed-on: http://review.gluster.org/8064
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli: .cmd_log_history should not be hidden | Atin Mukherjee | 2014-11-24 | 1 | -1/+1
  Change-Id: I4513a2c260530855e09be64083e9344108c7a6c0
  BUG: 1165996
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/9150
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: Add hostname/ip-address along with host's UUID in glusterd log message | Gaurav Kumar Garg | 2014-11-20 | 1 | -2/+3
  Previously, when a host disconnected from the cluster, glusterd logs identified the host only by its UUID. With this fix, the host's IP address is present alongside the UUID in the glusterd log message when a peer disconnects from the cluster, improving the readability of the log file.
  Change-Id: I3b7eaf1b1a8963ef2096e67a78cf69f67d5d5166
  BUG: 1101382
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/9136
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* core: use gf_time_fmt() instead of localtime()+strftime() | Kaleb S. KEITHLEY | 2014-11-20 | 1 | -24/+7
  gf_time_fmt() has existed since 3.3; it provides consistent timestamps (i.e. UTC times) throughout the implementation. (BTW, the other name for UTC is GMT.) N.B. many (all?) commercial storage solutions use UTC time for logging. This makes for easier debugging across geographically distributed systems.
  Also adding a "%s" fmt for portably printing time as a simple numeric value on systems regardless of whether time_t is 32-bit or 64-bit. Plus a minor tweak to return a ptr to the dest-string to allow gf_time_fmt() to be passed as a param in a *printf().
  Someday we should pick the "one true" timestamp format and revise all calls to gf_time_fmt() to use it instead of the five or six different formats.
  Change-Id: I78202ae14b7246fa424efeea56bf2463e14abfb0
  BUG: 1109917
  Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
  Reviewed-on: http://review.gluster.org/8085
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
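  A self-contained sketch of the same idea (fmt_time and its format choices are illustrative, not gluster's gf_time_fmt()): format in UTC, offer a raw numeric mode that is safe for 32- and 64-bit time_t, and return the destination pointer so the call can nest inside a *printf() argument list.

    #include <inttypes.h>
    #include <stdio.h>
    #include <time.h>

    static char *fmt_time(char *dst, size_t len, time_t when, int numeric)
    {
        if (numeric) {
            /* the "%s"-style mode: raw seconds, width-agnostic */
            snprintf(dst, len, "%" PRId64, (int64_t)when);
        } else {
            struct tm tm;
            gmtime_r(&when, &tm);                  /* UTC, thread-safe */
            strftime(dst, len, "%Y-%m-%d %H:%M:%S", &tm);
        }
        return dst;   /* allows printf("%s", fmt_time(buf, sizeof buf, t, 0)) */
    }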
* rdma: Client volfile name change for supporting rdma | Anoop C S | 2014-11-19 | 2 | -4/+22
  For rdma-only volumes, daemons like snapd, glustershd etc. make use of the tcp transport for their operations. This patch introduces support for rdma by default for those daemons in rdma-only volumes. In order to accommodate this change, we rename the tcp client volfile labels from <volname>-fuse.vol to <volname>.tcp-fuse.vol.
  Change-Id: Id9727b97d00e62a4a1556b9c0c56653d45c8fe1d
  BUG: 1164079
  Signed-off-by: Anoop C S <achiraya@redhat.com>
  Reviewed-on: http://review.gluster.org/9146
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* rdma: mount fails for nfs protocol in rdma volumes | Jiffin Tony Thottan | 2014-11-19 | 6 | -19/+21
  When we mount an rdma-only volume or a tcp,rdma volume using a newly peer-probed IP (nfs-server on new nodes) through the nfs protocol, the mount fails for the rdma-only volume, and happens with the help of the tcp protocol in the case of tcp,rdma volumes. That is, newly added servers will always get the transport type "socket". This is because nfs_transport_type is exported correctly but imported wrongly.
  This can be verified by the following:
  * Create an rdma-only volume or a tcp,rdma volume.
  * Add a new server into the trusted pool.
  * Check the client transport type specified in the nfs-server volgraph; it will always be tcp (socket type) instead of rdma.
  * Also, for the rdma-only volume, the nfs log shows a 'connection refused' message for every reconnect between the nfs server and glusterfsd.
  BUG: 1157381
  Change-Id: I6bd4979e31adfc72af92c1da06a332557b6289e2
  Author: Jiffin Tony Thottan <jthottan@redhat.com>
  Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
  Reviewed-on: http://review.gluster.org/8975
  Reviewed-by: Meghana M <mmadhusu@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  Tested-by: Niels de Vos <ndevos@redhat.com>
* rdma: Wrong volfile fetch on fuse mounting tcp,rdma volume via rdma | Anoop C S | 2014-11-18 | 4 | -82/+128
  As of now, for both tcp-only volumes and rdma-only volumes, volfile names are in the format <volname>-fuse.vol. This patch changes the client volfile namings as shown below:
  * TCP mounts always use <volname>-fuse.vol
  * RDMA mounts always use <volname>.rdma-fuse.vol
  Following the above naming convention, for tcp,rdma volumes both volfiles will be present under /var/lib/glusterd/vols/<volname>/, such that an rdma-only volume can be mounted as
      mount -t glusterfs -o transport=rdma <server/ip>:/<volname> <mount-point>
  OR
      mount -t glusterfs <server/ip>:/<volname>.rdma <mount-point>
  The above command format can also be used to fuse mount a tcp,rdma volume via the rdma transport. When we try to fuse mount a tcp,rdma volume with transport-type rdma, it silently mounts via tcp. This change also makes sure that the correct volfile is fetched based on the transport-type specified from the client side.
  BUG: 1131502
  Change-Id: I34da4b01ac813b69494a43188f51145457412923
  Signed-off-by: Anoop C S <achiraya@redhat.com>
  Reviewed-on: http://review.gluster.org/8498
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  Tested-by: Raghavendra G <rgowdapp@redhat.com>
* rdma: client connection establishment takes more time | Mohammed Rafi KC | 2014-11-18 | 3 | -16/+39
  For an rdma-only volume, client connection establishment with the server takes more than three seconds. A tcp,rdma volume has two ports, one for tcp and one for rdma; the tcp port is stored under the brick name and the rdma port is stored as "brickname.rdma" during pmap_signin. During the handshake, when trying to get the brick port for rdma clients, since we are not aware of the server transport type, we append '.rdma' to the brick name. For a tcp,rdma volume there will be an entry with '.rdma', but the lookup fails for an rdma-only volume. So we try again, this time without appending '.rdma', using a flag variable need_different_port, and it succeeds, but the reconnection happens only after 3 seconds.
  In this patch, for an rdma-only volume we append '.rdma' during the pmap_signin itself. So during the handshake we get the correct port on the first try, and since we don't need to retry, we can remove the need_different_port flag variable.
  Change-Id: Ie8e3a7f532d4104829dbe995e99b35e95571466c
  BUG: 1153569
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/8934
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  Tested-by: Raghavendra G <rgowdapp@redhat.com>
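  A sketch of the naming fix (pmap_name is a hypothetical helper, not glusterd's API): an rdma-only brick signs in under "<brick>.rdma" up front, so a client that appends ".rdma" during the handshake finds the entry on the first try.

    #include <stdio.h>
    #include <string.h>

    static void pmap_name(char *dst, size_t len,
                          const char *brickpath, const char *transport)
    {
        if (strcmp(transport, "rdma") == 0)
            snprintf(dst, len, "%s.rdma", brickpath); /* rdma-only volume */
        else
            snprintf(dst, len, "%s", brickpath);      /* tcp entry */
    }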
* rdma: rdma fuse mount hangs for tcp,rdma volumes if brick is down. | Mohammed Rafi KC | 2014-11-17 | 1 | -6/+14
  When we try to mount a tcp,rdma volume as rdma transport using the FUSE protocol, the mount will hang if the brick is down. When we kill a process, the signal is received by the glusterfsd process and it calls pmap_signout with the port listening on tcp only. In the tcp,rdma case there are two ports, and the port listening for rdma is never signed out. So the mount process tries to connect to a port which is not open and keeps trying to connect.
  This patch calls pmap_signout for the rdma port also, so when the mount tries to get the brick port, it will fail.
  Change-Id: I23676f65f96eb90b69b76478f7a21412a6aba70f
  BUG: 1143886
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/8762
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  Tested-by: Raghavendra G <rgowdapp@redhat.com>
* USS : Kill snapd during glusterd restart if USS is disabled | Sachin Pandit | 2014-11-17 | 1 | -5/+25
  Problem: When glusterd is down on one of the nodes and during that time if USS is disabled, then snapd will still be running in the node where glusterd was down.
  Solution: During restart of glusterd, check if USS is disabled; if so, then issue a kill for snapd.
  NOTE: The test case which I wrote in my previous patchset is facing some spurious failures, hence I thought of removing that test case. I'll add the test case once the issue is resolved.
  Change-Id: I2870ebb4b257d863cdfc319e8485b19e932576e9
  BUG: 1161015
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/9062
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* mgmt/glusterd: Validate the options of uss | vmallika | 2014-11-14 | 2 | -7/+15
  Change-Id: Id13dc4cd3f5246446a9dfeabc9caa52f91477524
  BUG: 1111554
  Signed-off-by: Varun Shastry <vshastry@redhat.com>
  Signed-off-by: vmallika <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/8133
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd/snapshot: Don't append nouuid mount option for snapshot brick | vmallika | 2014-11-13 | 2 | -1/+37
  ... if the original brick already has this option.
  Change-Id: I2841d2ac371a3e9505f6061f35d1d447946c0bae
  BUG: 1133456
  Signed-off-by: vmallika <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/8526
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
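  A standalone sketch of the dedup check (names are illustrative): a real implementation would tokenize the comma-separated option string, but strstr() shows the intent.

    #include <stdio.h>
    #include <string.h>

    static void build_snap_mnt_opts(char *dst, size_t len, const char *orig_opts)
    {
        if (strstr(orig_opts, "nouuid"))
            snprintf(dst, len, "%s", orig_opts);        /* already present */
        else
            snprintf(dst, len, "%s,nouuid", orig_opts); /* append exactly once */
    }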
* glusterd/snapshot: Snapshot should be deactivated when it is created | vmallika | 2014-11-12 | 2 | -134/+203
  By default a snapshot should be deactivated, and this should be a configurable option. This behaviour can be configured by the command below:
      gluster snapshot config activate-on-create <enable|disable>
  Change-Id: I1911595c32beed43bb2fca4bf99f0d264b422513
  BUG: 1157991
  Signed-off-by: vmallika <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/8985
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd/snapshot: Check if LVM device path exists before delete. | Avra Sengupta | 2014-11-12 | 1 | -46/+59
  Check if the LV is present before deleting the LV. In the case where the LV is absent (already deleted?), we need not fail the snap delete operation. Also check if the LV is mounted before trying umount; if it isn't mounted, only remove the LV.
  Change-Id: I0f5b2674797299d8748c6fac5b091f0caba65ca4
  BUG: 1104714
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/8954
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* uss/gluster: Move all uss related logs into subfolder | vmallika | 2014-11-12 | 1 | -4/+12
  For USS we have one snapd log per volume and as many snap logs as there are snapshots of the volume. For example, if there are 4 volumes having 256 snaps each and USS is enabled, the total number of logs under /var/log/glusterfs for USS would be 1028 logs:
      Total logs = (4 (snapd per volume) + 4 (volumes) * 256 (snaps)) = 1028
  Hence, it makes sense to move them into a sub-folder structure like /var/log/glusterfs/snaps/<vol-name>/<snapd + snaps logs>.
  Change-Id: I29262e6458c3906916923cd67d1145d6ae10bec3
  BUG: 1160534
  Signed-off-by: vmallika <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/9050
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* USS : Display only the activated snapshots | Sachin Pandit | 2014-11-12 | 1 | -0/+6
  Instead of displaying all the snapshots in the uss world, it is better if we display only the activated snapshots.
  Change-Id: I70d3ec212b62ec15956ae3e826bc4201d8dedd17
  BUG: 1155042
  Signed-off-by: Sachin Pandit <spandit@redhat.com>
  Reviewed-on: http://review.gluster.org/8958
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd : release cluster wide locks in op-sm during failures | Atin Mukherjee | 2014-11-06 | 4 | -69/+183
  The glusterd op-sm infrastructure has some loopholes in handling error cases in the locking/unlocking phases, which end up leaving stale locks that restrict further transactions from going through. This patch still doesn't handle all possible unlocking error cases, as the framework has neither a retry mechanism nor a lock timeout. For example, if unlocking fails on one of the peers, the cluster-wide lock is not released and no further transaction can be made until and unless the originator node, or the node where unlocking failed, is restarted.
  The following test cases were executed (with the help of gdb) after applying this patch:
  * RPC times out in lock cbk
  * Decoding of the RPC response in lock cbk fails
  * RPC response is received from an unknown peer in lock cbk
  * Setting peerinfo in the dictionary fails while sending the lock request for the first peer in the list
  * Setting peerinfo in the dictionary fails while sending the lock request for other peers
  * Lock RPC could not be sent for peers
  For all the above test cases the success criterion is not to have any stale locks.
  Change-Id: Ia1550341c31005c7850ee1b2697161c9ca04b01a
  BUG: 1154635
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/9012
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* uss/gluster: Fix typo error in the description for USS under "gluster volume set help" | vmallika | 2014-11-05 | 1 | -1/+1
  "gluster volume set help" for uss shows "User Servicable Snapshots" whereas it should be "User Serviceable Snapshots".
  Change-Id: I3cc8b3ea2cb6d209e1a12678eb7d0e68f4160d99
  BUG: 1160236
  Signed-off-by: vmallika <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/9041
  Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* features/quota: Use per-volume log file for crawler | Krutika Dhananjay | 2014-11-03 | 1 | -6/+8
  Change-Id: I195b3309bae7e684b7dbf771e4f3b4778d0dac4c
  BUG: 1146377
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-on: http://review.gluster.org/8843
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd/geo-rep: Fix glusterd crash in non-originator slave node. | Kotresh HR | 2014-11-02 | 1 | -0/+1
  Problem: glusterd crashes on a non-originator slave node during geo-rep create push-pem.
  Cause: In glusterd_op_copy_file, the value of the key "common_pem_contents" is freed explicitly even after dict_set has succeeded, when it is already taken care of by dict_free.
  Solution: Free only in failure cases, before dict_set.
  Change-Id: I65b5f32ee2b946107ad279b1fe3d728ec699bc7e
  BUG: 1159119
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-on: http://review.gluster.org/9018
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Poornima G <pgurusid@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
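  An in-tree sketch of the ownership rule behind the fix, assuming glusterfs' dict.h and mem-pool.h (the wrapper function itself is illustrative): dict_set_dynstr() transfers ownership of the value on success, so the caller frees only on the failure path.

    static int
    set_pem_contents (dict_t *dict, const char *pem_contents)
    {
        char *pem = gf_strdup (pem_contents);
        int   ret = -1;

        if (!pem)
            return -1;
        ret = dict_set_dynstr (dict, "common_pem_contents", pem);
        if (ret)
            GF_FREE (pem);  /* set failed: the value is still ours */
        /* on success, never free here: the dict frees it when destroyed */
        return ret;
    }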
* glusterd: add option support for own-thread | Jeff Darcy | 2014-10-30 | 1 | -0/+12
  Like enabling SSL, enabling own-thread has to be done separately for clients and servers:
  * client.own-thread for clients (including internal clients like self-heal)
  * server.own-thread for servers (including e.g. glusterd)
  It's very unlikely that you would ever want to set one without the other, but they're separate anyway just in case. Check for "private polling thread" in the relevant logs to make sure the option took effect, because otherwise you might not notice any difference besides increased performance. ;)
  Change-Id: Ifaee8de52f0b959bcdf7f6b56faeee549ee56604
  BUG: 1158648
  Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
  Reviewed-on: http://review.gluster.org/8931
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
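  A hypothetical session using the option names above ("myvol" is an illustrative volume name):

    gluster volume set myvol client.own-thread on
    gluster volume set myvol server.own-thread on
    grep "private polling thread" /var/log/glusterfs/*.log   # verify it took effect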
* glusterd: Store rebalance state on all peers | Kaushal M | 2014-10-29 | 1 | -1/+11
  The rebalance state was being saved only on the peers participating in the rebalance on a rebalance start. This change makes sure all nodes save the rebalance state.
  Change-Id: I436e5c34bcfb88f7da7378cec807328ce32397bc
  BUG: 1157979
  Signed-off-by: Kaushal M <kaushal@redhat.com>
  Reviewed-on: http://review.gluster.org/8998
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd: op state machine shouldn't use global peer list | Atin Mukherjee | 2014-10-28 | 4 | -11/+42
  Problem: The op state machine was relying on the global peer list while sending lock/stage/unlock/commit rpc requests to the peers in the cluster. Trusting the global peer list structure is dangerous, as this structure gets modified if any peer-modification command is attempted in the cluster while an ongoing transaction is going through the state machine. A typical case of this problem is when rebalance is in progress and a peer probe is executed: the rebalance op-sm and the peer probe may race, making the peerinfo structure go for a toss.
  Solution: Use a local copy of the peer list (xaction_peers) in the glusterd op-sm.
  Change-Id: I1ff7118dc6a9a72633e2e87b7ab7bae1796595e0
  BUG: 1152890
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/8932
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: really get the inode size for a brick | Niels de Vos | 2014-10-27 | 1 | -12/+17
  The device to get the inode size from does not get passed to the tool (tune2fs, xfs_info or the like) that is called. This is probably just an oversight. While correcting this, cleanup some bits of the function too.
  Change-Id: Ida45852cba061631fb304bc7dd5286df1a808010
  BUG: 1130462
  Signed-off-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-on: http://review.gluster.org/8492
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
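  A self-contained sketch of "really pass the device" for the extN case; glusterd itself builds the command through its runner framework, and a real version must validate the device string before handing it to a shell:

    #include <stdio.h>

    static int get_ext_inode_size(const char *device)
    {
        char  cmd[256], line[256];
        int   size = -1;
        FILE *fp;

        /* the device argument actually reaches the probing tool now */
        snprintf(cmd, sizeof cmd, "tune2fs -l %s", device);
        fp = popen(cmd, "r");
        if (!fp)
            return -1;
        while (fgets(line, sizeof line, fp))
            if (sscanf(line, "Inode size: %d", &size) == 1)
                break;
        pclose(fp);
        return size;   /* -1 if the tool failed or printed no inode size */
    }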
* geo-rep/glusterd: Enable changelog and marker during geo-rep create. | Kotresh HR | 2014-10-27 | 1 | -10/+10
  PROBLEM: Geo-rep misses a few files to sync when I/O happened during geo-rep start.
  ANALYSIS: To use the available changelogs to handle deletes/renames, an 'xsync upper limit' was introduced which limits the xsync crawl till the changelog register time. But there is a small time interval between the changelog register time and the time the changelog is actually enabled. If there is I/O in this interval, it will not be synced through xsync, as it is beyond the changelog register time, and not through changelog either, as the changelog is not actually enabled yet.
  SOLUTION: Enable changelog and marker during geo-rep create instead of geo-rep start, so that entries are captured in the changelog and the above-said interval is nullified.
  Change-Id: Ic5f0457a4b67a335cbbb37d34db5f8cb8bc901c4
  BUG: 1139196
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-on: http://review.gluster.org/8650
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Aravinda VK <avishwan@redhat.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-by: Venky Shankar <vshankar@redhat.com>
  Tested-by: Venky Shankar <vshankar@redhat.com>
* glusterd: statedump support | Atin Mukherjee | 2014-10-15 | 3 | -4/+261
  Although glusterd currently has statedump support, it doesn't dump its context information. Implementing a glusterd_dump_priv function to export per-node glusterd information would be useful for debugging bugs. Once implemented, we could enhance sos-report to fetch this information. This would potentially reduce our time to root cause, and the data needed for debuggability can be dumped gradually.
  The following are the main items of the dump list targeted in this patch:
  * Supported max/min op-version and current op-version
  * Information about the peer list
  * Information about the peer list involved while a transaction is going on (xaction_peers)
  * The option dictionary in glusterd_conf_t
  * mgmt_v3_lock in glusterd_conf_t
  * List of connected clients
  * uuid of glusterd
  * A section of rpc-related information like live connections and their statistics
  A couple of issues were found during the implementation and testing phase:
  - xaction_peers of glusterd_conf_t was not initialized in init, because of which traversing this list head crashed when there was no active transaction
  - gf_free was not setting the typestr to NULL if the alloc count becomes 0 for a mem-type allocated earlier
  Change-Id: Ic9bce2d57682fc1771cd2bc6af0b7316ecbc761f
  BUG: 1139682
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/8665
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
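  An in-tree sketch of such a dump hook, assuming glusterfs' statedump helper gf_proc_dump_write(); the keys simply mirror the first items listed above, and the field names are assumptions about glusterd_conf_t:

    static int
    glusterd_dump_priv_sketch (glusterd_conf_t *priv)
    {
        /* one key/value pair per line of the statedump file */
        gf_proc_dump_write ("glusterd.my-uuid", "%s", uuid_utoa (priv->uuid));
        gf_proc_dump_write ("glusterd.max-op-version", "%d", GD_OP_VERSION_MAX);
        gf_proc_dump_write ("glusterd.min-op-version", "%d", GD_OP_VERSION_MIN);
        gf_proc_dump_write ("glusterd.current-op-version", "%d", priv->op_version);
        return 0;
    }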
* glusterd/geo-rep: Fix race in updating status file | Kotresh HR | 2014-10-12 | 1 | -18/+26
  When geo-rep is in the paused state and a node in the cluster is rebooted, the geo-rep status goes to "faulty (Paused)" and no worker processes are started on that node yet. In this state, when geo-rep is resumed, there is a race between glusterd and gsyncd in updating the status file, as geo-rep is resumed first and the status is updated afterwards. glusterd tries to update it to the previous state, and gsyncd on restart tries to update it to "Initializing...(Paused)" because it was paused previously. If gsyncd wins, the state is always paused even though the process is not actually paused. So the solution is for glusterd to update the status file first and then resume.
  Change-Id: I348761a6e8c3ad2630c79833bc86587d062a8f92
  BUG: 1149982
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-on: http://review.gluster.org/8911
  Reviewed-by: Aravinda VK <avishwan@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Venky Shankar <vshankar@redhat.com>
  Tested-by: Venky Shankar <vshankar@redhat.com>
* glusterd: print the peer name instead of a null UUID in a rpc failure message | Atin Mukherjee | 2014-10-09 | 3 | -111/+125
  This patch improves the failure message by printing the correct peer name instead of a blank uuid in case the rpc connection is lost/broken.
  Change-Id: Ia232792051f23896883b239982cb48130e3ce60e
  BUG: 1146902
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/8597
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: make bricks respect 'transport.socket.bind-address' | Niels de Vos | 2014-10-08 | 1 | -0/+8
  When GlusterD starts the brick processes, these will listen on all interfaces. When the 'transport.socket.bind-address' option is set in glusterd.vol, the brick processes should only listen on the specified hostname or IP-address.
  Change-Id: I8e7d1f294904081137c23f3446261329d0d13bba
  BUG: 1149863
  Signed-off-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-on: http://review.gluster.org/8910
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
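  A sketch of the relevant glusterd.vol stanza; the bind address is illustrative:

    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option transport-type socket
        option transport.socket.bind-address 192.0.2.10
    end-volume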
* glusterd: pass the bind-address to starting services | Niels de Vos | 2014-10-07 | 1 | -3/+15
  When the transport.socket.bind-address option is set to a hostname or ip-address, the services started by GlusterD fail to connect to the management daemon. GlusterD always forces the services to connect to the "localhost" hostname, even if it is not listening on that address. GlusterD should take the transport.socket.bind-address option into consideration, and pass that to the glusterfs-clients with the -s or --volfile commandline parameter.
  Note that this is not a change that removes all hard-coded dependencies on "localhost". This change merely makes it possible to start required services when the transport.socket.bind-address option is set.
  Change-Id: I36a0ed6c69342e6327adc258fea023929055d7f2
  BUG: 1149863
  Signed-off-by: Niels de Vos <ndevos@redhat.com>
  Reviewed-on: http://review.gluster.org/8908
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
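  A hypothetical resulting service invocation, with the bind-address from the earlier example substituted for the hard-coded "localhost" (paths and flags are sketched from common glusterfs usage, not copied from the patch):

    /usr/sbin/glusterfs -s 192.0.2.10 --volfile-id gluster/nfs \
        -p /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log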
* Do not hardcode umount(8) path, emulate lazy umount | Emmanuel Dreyfus | 2014-10-03 | 5 | -74/+26
  1) Use a system-dependent macro for the umount(8) location instead of relying on $PATH to find it, for security and portability's sake.
  2) Introduce gf_umount_lazy() to replace umount -l invocations (-l for lazy), which is only supported on Linux. On Linux the behavior is unchanged; on other systems, we fork an external process (umountd) that takes care of periodically attempting the unmount, and optionally the rmdir.
  BUG: 1129939
  Change-Id: Ia91167c0652f8ddab85136324b08f87c5ac1e51d
  Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
  Reviewed-on: http://review.gluster.org/8649
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Csaba Henk <csaba@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
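  A sketch of the fallback path (umount_lazily is illustrative; the real umountd also gives up after a timeout): a detached child keeps retrying until the filesystem is idle, which is what "lazy" amounts to. Linux headers shown; on NetBSD the call is unmount(2).

    #include <stdlib.h>
    #include <sys/mount.h>
    #include <unistd.h>

    static void umount_lazily(const char *path, int do_rmdir)
    {
        if (fork() != 0)
            return;                  /* parent (or fork failure): return at once */
        while (umount(path) != 0)    /* child: retry until no longer busy */
            sleep(1);
        if (do_rmdir)
            rmdir(path);
        _exit(0);
    }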
* glusterd/quota: Heal pgfid xattr on existing data when quota is enabled | vmallika | 2014-09-30 | 1 | -3/+3
  The pgfid extended attributes are used to construct the ancestry path (from the file to the volume root) for nameless lookups on files. As NFS relies on nameless lookups heavily, quota enforcement through NFS would be inconsistent if quota were to be enabled on a volume with existing data.
  The solution is to heal the pgfid extended attributes as a part of the lookup performed by the quota-crawl process: in a posix lookup, check for the pgfid xattr and, if it is missing, set the xattr.
  Change-Id: I5912ea96787625c496bde56d43ac9162596032e9
  BUG: 1147378
  Signed-off-by: vmallika <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/8878
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>
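  A sketch of the heal step using the Linux xattr API (heal_pgfid is illustrative; the "trusted.pgfid.<parent-gfid>" key follows the commit's description, and the link-count value layout is an assumption):

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/xattr.h>

    static int heal_pgfid(const char *path, const char *parent_gfid_str)
    {
        char     key[128];
        uint32_t nlink = 1;   /* assumed layout: link count from this parent */

        snprintf(key, sizeof key, "trusted.pgfid.%s", parent_gfid_str);
        if (lgetxattr(path, key, NULL, 0) >= 0)
            return 0;                      /* already healed */
        if (errno != ENODATA)              /* ENOATTR on the BSDs */
            return -1;
        return lsetxattr(path, key, &nlink, sizeof nlink, XATTR_CREATE);
    }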
* glusterd: Perform brick order check in originator node. | Gaurav Kumar Garg | 2014-09-29 | 1 | -16/+19
  Currently, in a multi-node cluster, the brick-order check for replicate volumes is done on every node. It is a waste of time to perform the brick-order check on every node. This change performs the brick-order check only on the originator node.
  Change-Id: I8687fd28e587de8a280a9003b015ccd5729c9740
  BUG: 1091935
  Signed-off-by: ggarg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/8881
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Tested-by: Kaushal M <kaushal@redhat.com>
* glusterd: file-snapshot and features-encryption options should be validated correctly | Gaurav Kumar Garg | 2014-09-25 | 1 | -2/+3
  Giving a non-boolean value to the volume set command for the features.file-snapshot and features.encryption options made the command fail, after which subsequent volume set requests with valid values for any existing volume set option also failed. Previously, when a user supplied a non-boolean value in a volume set command for the features.file-snapshot or features.encryption option, validation of that value was done against volinfo->dict, while the actual value of the option was stored in the input dictionary. With this change, the correct dictionary is referred to for validating the supplied value.
  Change-Id: I4a93d8be848cd33fdf4b4eb9b1a8d15ec9d1e66a
  BUG: 1140162
  Reviewed-on: http://review.gluster.org/8688
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>