path: root/xlators
Commit message | Author | Age | Files | Lines
* glusterd/snapshot : Barrier code integration with snapshot codebase. (Sachin Pandit, 2014-05-01, 4 files, -9/+125)

    As we now have the new barrier translator in place, we make use of it
    during the snapshot phase. During snapshot create (pre-commit) we enable
    the barrier feature, and after the commit we disable it.

    Change-Id: I94212b1c06b0d9b12255ee98313e2d8549b34b17
    BUG: 1061685
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/7561
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* mgmt/glusterd: Fix wrong usage of snprintf (Pranith Kumar K, 2014-05-01, 1 file, -1/+1)

    Change-Id: I7db752390bb742fb9f6cacce84563ff782ae352b
    BUG: 1087677
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/7608
    Reviewed-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    Tested-by: Vijay Bellur <vbellur@redhat.com>
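    The commit message does not say which snprintf call was wrong, so the
    snippet below only illustrates the most common class of snprintf misuse
    (passing the size of a pointer instead of the destination buffer);
    build_brick_path() is a hypothetical helper, not the actual glusterd code.

        #include <stdio.h>

        /* Illustrative only: a typical snprintf pitfall and its fix. */
        static void
        build_brick_path (char *dst, size_t dst_size, const char *dir,
                          const char *brick)
        {
            /* Wrong: sizeof (dst) is the size of the pointer, not the buffer:
             *     snprintf (dst, sizeof (dst), "%s/%s", dir, brick);
             * Right: use the destination size supplied by the caller. */
            snprintf (dst, dst_size, "%s/%s", dir, brick);
        }

        int
        main (void)
        {
            char path[256];

            build_brick_path (path, sizeof (path), "/var/lib/glusterd", "brick1");
            printf ("%s\n", path);
            return 0;
        }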
* storage/posix: Remove dead code in xattrop (Pranith Kumar K, 2014-05-01, 1 file, -32/+0)

    Change-Id: I48ba81ad342e3c8fe1d81f8e57e61676dd356fb0
    BUG: 1092749
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/7318
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* logging: Introduce suppression of repetitive log messages (Krutika Dhananjay, 2014-04-30, 3 files, -1/+162)

    Change-Id: I8efa08cc9832ad509fba65a88bb0cddbaf056404
    BUG: 1075611
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/7475
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* [glusterd/snapshot] snapshot create force option (Joseph Fernandes, 2014-04-30, 1 file, -7/+27)

    Implement the force option in snapshot create:
    1) Without force, creation of a snapshot fails if the original volume's
       bricks are down.
    2) With the force option, creation of the snapshot continues even if the
       original volume's bricks are down.

    This is the fix for bugs 1089527 and 1083502.

    Change-Id: I8de0242adf8ee0af00db9fa8701d86fabc12e7fc
    BUG: 1090042
    Signed-off-by: Joseph Fernandes <josferna@redhat.com>
    Reviewed-on: http://review.gluster.org/7520
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
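    A minimal sketch of the validation described above, with hypothetical
    helper names (all_bricks_online() and snap_create_prevalidate() are
    illustrative, not the actual glusterd functions):

        #include <stdbool.h>
        #include <stdio.h>

        /* Hypothetical: reports whether every brick of the volume is up. */
        static bool
        all_bricks_online (void)
        {
            return false;   /* pretend one brick is down */
        }

        /* Without "force", reject snapshot create when a brick is down. */
        static int
        snap_create_prevalidate (bool force)
        {
            if (!all_bricks_online () && !force) {
                fprintf (stderr, "snapshot create failed: one or more bricks "
                         "are down (use force to override)\n");
                return -1;
            }
            return 0;
        }

        int
        main (void)
        {
            printf ("without force: %d\n", snap_create_prevalidate (false));
            printf ("with force:    %d\n", snap_create_prevalidate (true));
            return 0;
        }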
* glusterd/snapshot : Fix for snapshot delete failure (Vijaikumar M, 2014-04-30, 1 file, -20/+21)

    Problem : snapshot delete used to fail when executed in a loop, as there
    was a race between the process kill and the umount.

    Solution : Before an umount is issued, check if the process is still
    running; if so, issue a process termination. Give the umount operation
    three tries.

    Change-Id: I7f4315e5d7d4a156dd513ec77443ead6ccd37b2e
    BUG: 1090449
    Signed-off-by: Vijaikumar M <vmallika@redhat.com>
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/7532
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
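    A minimal sketch of the "terminate, then retry umount" sequence described
    above (Linux-only, hypothetical helper; the actual glusterd code uses its
    own process and umount wrappers):

        #include <errno.h>
        #include <signal.h>
        #include <stdio.h>
        #include <sys/mount.h>
        #include <unistd.h>

        static int
        stop_and_umount (pid_t brick_pid, const char *mount_point)
        {
            int tries;

            /* kill (pid, 0) only checks whether the process still exists. */
            if (kill (brick_pid, 0) == 0)
                kill (brick_pid, SIGTERM);

            for (tries = 0; tries < 3; tries++) {
                if (umount (mount_point) == 0)
                    return 0;
                fprintf (stderr, "umount of %s failed (errno %d), retrying\n",
                         mount_point, errno);
                sleep (1);
            }
            return -1;
        }

        int
        main (void)
        {
            /* Illustrative values only. */
            return stop_and_umount (12345, "/run/gluster/snaps/example") ? 1 : 0;
        }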
* [glusterd/snapshot] Glusterd crashes when the same command, e.g. snapshot create, is fired simultaneously on a node (Joseph Fernandes, 2014-04-30, 1 file, -1/+1)

    Cause: In glusterd_mgmt_v3_initiate_snap_phases(), the function
    glusterd_mgmt_v3_post_validate() asserts on the NULL value of req_dic.
    req_dic is not initialized because glusterd_mgmt_v3_initiate_lockdown()
    is not able to acquire the lock and jumps to the "out" section before
    initializing req_dic (via glusterd_mgmt_v3_build_payload).

    Fix: Call glusterd_mgmt_v3_post_validate() only if the lock is acquired.

    Change-Id: I7cb55b6c0013ad1c8bbb922a62c34aab097bafe9
    BUG: 1090047
    Signed-off-by: Joseph Fernandes <josferna@redhat.com>
    Reviewed-on: http://review.gluster.org/7500
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
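    The shape of the fix, reduced to a standalone sketch (acquire_lock(),
    build_payload() and post_validate() are stand-ins for the glusterd
    phases named above, not the real functions):

        #include <stdbool.h>
        #include <stdio.h>

        static bool acquire_lock (void)  { return false; /* lock already held */ }
        static void build_payload (void) { printf ("payload built\n"); }
        static void post_validate (void) { printf ("post-validate\n"); }

        /* Only run post-validation when the lock was actually acquired, so
         * it never operates on an uninitialized payload. */
        static int
        initiate_snap_phases (void)
        {
            bool locked = acquire_lock ();
            int  ret    = -1;

            if (!locked)
                goto out;

            build_payload ();
            ret = 0;
        out:
            if (locked)
                post_validate ();
            return ret;
        }

        int
        main (void)
        {
            return initiate_snap_phases () ? 1 : 0;
        }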
* glusterd/snapshot : Clean up of old barrier code. (Sachin Pandit, 2014-04-30, 4 files, -478/+0)

    As the new barrier translator has been introduced, we don't require the
    old barrier code any more. Hence cleaning that up.

    Change-Id: Ieedca6f33a746898f0d2332fda1f1d4c86fff98f
    BUG: 1061685
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/7577
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* snapshot/config : Fix for bug which states gluster snapshot config command should only accept a decimal numeric value (Sachin Pandit, 2014-04-29, 1 file, -5/+4)

    Syntax : gluster snapshot config [volname]
                 [snap-max-hard-limit <count>] [snap-max-soft-limit <percentage>]

    Problem : Snapshot config used to treat an alphanumeric value starting
    with a digit as an integer (example: "9abc" is converted to "9").

    Solution : Refined the code to check whether the entered value is numeric.

    This patch also fixes some minor problems related to snapshot config:
    1) Output correction in gluster snapshot config snap-max-soft-limit.
    2) Setting the soft limit to greater than 100% displayed
       "Invalid snap-max-soft-limit 0"; the error message showed "zero"
       instead of the entered value. Changed this to display the relevant
       value.
    3) The output for setting snap-max-hard-limit greater than the allowed
       value needs a space in between.

    Change-Id: Ie7c7045722fe57b2b3c50c873664b67c28eb3853
    BUG: 1087203
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/7457
    Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
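    A minimal illustration of the difference between lenient and strict
    decimal parsing (not the actual CLI code; parse_limit() is hypothetical):

        #include <errno.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* atoi()-style parsing accepts "9abc" as 9; checking strtol()'s end
         * pointer rejects any trailing non-digit characters. */
        static int
        parse_limit (const char *value, long *out)
        {
            char *end = NULL;

            errno = 0;
            *out = strtol (value, &end, 10);
            if (errno || end == value || *end != '\0' || *out < 0)
                return -1;   /* not a plain decimal number */
            return 0;
        }

        int
        main (void)
        {
            long limit = 0;

            printf ("\"256\"  -> %d\n", parse_limit ("256", &limit));  /* accepted */
            printf ("\"9abc\" -> %d\n", parse_limit ("9abc", &limit)); /* rejected */
            return 0;
        }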
* glusterd: make sure that mntent.h is conditionally included (Harshavardhana, 2014-04-29, 1 file, -0/+5)

    Change-Id: I39c362c0908166707e10e8820cc1ee9a0989dcbe
    BUG: 1089172
    Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
    Reviewed-on: http://review.gluster.org/7584
    Reviewed-by: Anand Avati <avati@redhat.com>
    Tested-by: Anand Avati <avati@redhat.com>
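    The usual autoconf-style guard for a header that is missing on some
    platforms (for example OS X) looks like the following; HAVE_MNTENT_H is
    the conventional macro name and is shown as an assumption about how the
    check is wired up, not a quote of the actual change:

        /* With autotools, config.h defines HAVE_MNTENT_H when the header is
         * present, so platforms without it still build. */
        #if defined(HAVE_CONFIG_H)
        #include "config.h"
        #endif

        #ifdef HAVE_MNTENT_H
        #include <mntent.h>
        #endif

        #include <stdio.h>

        int
        main (void)
        {
        #ifdef HAVE_MNTENT_H
            puts ("mntent.h available: getmntent() can be used");
        #else
            puts ("mntent.h not available: fall back to another interface");
        #endif
            return 0;
        }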
* glusterd: Ping timer implementation (Krishnan Parthasarathi, 2014-04-29, 9 files, -296/+12)

    This patch refactors the existing client ping timer implementation, and
    makes use of the common code for implementing both the client ping timer
    and the glusterd ping timer.

    A new gluster rpc program for ping is introduced. The ping timer is only
    started for peers that have this new program. The default glusterd ping
    timeout is 30 seconds. It is configurable by setting the option
    'ping-timeout' in glusterd.vol.

    Also, this patch introduces changes in the glusterd-handshake path. The
    client programs for a peer are now set in the callback of dump_versions,
    for both the older handshake and the newer op-version handshake. This is
    the only place in the handshake process where we know what programs a
    peer supports.

    Change-Id: I035815ac13449ca47080ecc3253c0a9afbe9016a
    BUG: 1038261
    Signed-off-by: Vijaikumar M <vmallika@redhat.com>
    Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-on: http://review.gluster.org/5202
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
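    An illustrative glusterd.vol entry for the option named above; the
    surrounding volume block reflects the usual layout of that file and is
    shown only for context, while the commit itself only documents the
    'ping-timeout' option (default 30 seconds):

        volume management
            type mgmt/glusterd
            option working-directory /var/lib/glusterd
            option ping-timeout 30
        end-volume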
* storage/posix: add list xattr capability to lookup (Pranith Kumar K, 2014-04-29, 2 files, -16/+94)

    BUG: 1078061
    Change-Id: Ie26d28b8a74aa0d1eceff14a84c3cd3e302dcdb5
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/7293
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* feature/quota: Logging corrections (Varun Shastry, 2014-04-29, 1 file, -21/+36)

    This patch fixes the inconsistent quota usage logging when the soft limit
    is reached.

    Change-Id: I47e7f1e65ed4b8306a999a20cc8f6b1772d47627
    BUG: 1087198
    Signed-off-by: Varun Shastry <vshastry@redhat.com>
    Reviewed-on: http://review.gluster.org/7451
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli: Add a cli command to enable/disable barrier (Kaushal M, 2014-04-29, 6 files, -3/+195)

    This patch adds a new 'gluster volume barrier <VOLNAME> {enable|disable}'
    cli command. This helps in testing the brick op code path when testing
    the barrier xlator.

    This patch can be reverted later if not required for end users.

    Change-Id: Icd86a2d13e7f276dda1ecbb2593d60638ece7dcd
    BUG: 1060002
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/6958
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Add a barrier brick-op (Kaushal M, 2014-04-29, 4 files, -1/+70)

    This patch introduces a new 'barrier' brick-op which will be used to
    activate/deactivate the barriering on the bricks. This includes
    barriering in the barrier xlator and in the changelog xlator. All the
    required code has been added, including a brick select function, a
    payload builder and a brick-op handler.

    Change-Id: I91d9d77f691c2e89823f7dc4e84900ec40dc4dd2
    BUG: 1060002
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/6943
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd/snapshot : Don't acquire a lock if config is of type DISPLAY. (Sachin Pandit, 2014-04-29, 1 file, -205/+258)

    Problem : Currently we acquire a lock when "gluster snapshot config
    <volname>" is given. As this is just a read-only command, we need not
    acquire a lock.

    Solution : This patch checks if the given command is of type DISPLAY.
    If so, the glusterd_v3_mgmt framework is not called, as reading
    information from the local node is enough.

    This patch also fixes "Assertion failed: volname" seen while doing a
    system config change when snap create was in progress.

    Change-Id: Ie8991f2cd746987b11152006e113e8706516138b
    BUG: 1087677
    Signed-off-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-on: http://review.gluster.org/7458
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* debug/trace: Fix length issue found in compilation (Pranith Kumar K, 2014-04-29, 1 file, -44/+45)

    Change-Id: I1cd7fd6464d0912294009c2293ed70f3f6744930
    BUG: 1092196
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/7586
    Reviewed-by: Anand Avati <avati@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* nlm: fix rpc.statd typo (Harshavardhana, 2014-04-28, 1 file, -1/+1)

    Change-Id: I701fd82a8dd5a72727b8035bc6c2861465f1aa1f
    BUG: 1089172
    Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
    Reviewed-on: http://review.gluster.org/7585
    Reviewed-by: Anand Avati <avati@redhat.com>
    Tested-by: Anand Avati <avati@redhat.com>
* cluster/afr: Fix inode_forget assert failure (Pranith Kumar K, 2014-04-28, 2 files, -28/+47)

    Problem: If two self-heals are triggered on the same inode in parallel,
    then one inode will be linked and the other inode will not be linked,
    as an inode with that gfid is already linked in the inode table.
    Calling inode-forget on the unlinked inode leads to an assert failure.

    Fix: Always use the linked inode for performing self-heal. Added
    inode-forgets in other places as well, even though it's not really a
    memory leak.

    Change-Id: Ib84bf080c8cb6a4243f66541ece587db28f9a052
    BUG: 1091597
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/7567
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: Help does not show performance.nfs.* opt (Santosh Kumar Pradhan, 2014-04-28, 1 file, -7/+31)

    Gluster does not display performance.nfs.* options in help. In Gluster
    NFS, write-behind is the only performance xlator which gets loaded.
    'gluster volume set help' should display all the options provided by the
    write-behind xlator.

    Change-Id: I4a41151a6c15eeed8e8d123a6044c6f0c42b56b0
    BUG: 1090826
    Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
    Reviewed-on: http://review.gluster.org/7546
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* features/libgfchangelog: APIs to process history changelogs. (Kotresh H R, 2014-04-28, 4 files, -2/+336)

    1. Create directories in the following fashion for the history APIs'
       usage when a consumer is registered with the libgfchangelog shared
       library through gf_changelog_register:
           scratch_dir/.history
           scratch_dir/.history/.current
           scratch_dir/.history/.processed
           scratch_dir/.history/.processing

    2. Added a new file 'gf-history-changelog.c'; the following APIs are
       provided for consumers to process history changelogs:
       1. gf_history_changelog_scan: Move processed history changelog files
          from .processing to .processed.
       2. gf_history_changelog_next_change: Return the next history changelog
          file entry. Zero means all history changelogs are consumed.
       3. gf_history_changelog_done: Scan the .processing directory and
          generate a list of change entries.
       4. gf_history_changelog_start_fresh: For a set of changelogs, start
          from the beginning.

    NOTE: Though this patch provides the above functionalities, it is
    considered functionally complete only with the patch
    (http://review.gluster.org/#/c/6930/).

    Change-Id: I200780c7278e0a6c008910d93faad5858a4b3e76
    Original-author: Kotresh H R <khiremat@redhat.com>
    Signed-off-by: Kotresh H R <khiremat@redhat.com>
    Signed-off-by: Ajeet Jha <ajha@redhat.com>
    Reviewed-on: http://review.gluster.org/6998
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Venky Shankar <vshankar@redhat.com>
    Tested-by: Venky Shankar <vshankar@redhat.com>
* glusterd/snapshot: Compare and update snapshots during peer handshake (Avra Sengupta, 2014-04-28, 7 files, -126/+984)

    During a peer handshake, after the volumes have synced and the list of
    missed snapshots has synced, the node will perform the pending deletes
    and restores on this list. At this point the current snapshot list in
    the node will be updated, and hence in case of conflicts arising during
    snapshot handshake, the peer hosting the bricks will be given precedence.

    Likewise, if there is a conflict and both peers are in the same state,
    i.e. either both are hosting bricks or both are not hosting bricks, then
    a decision can't be taken and a peer-reject will happen.

    glusterd_compare_and_update_snap() implements the following algorithm to
    perform the above task:

    Step 1: Start.
    Step 2: Check if the peer is missing a delete on the said snap.
            If yes, goto step 6.
    Step 3: Check if there is a conflict between the peer's data and the
            local snap. If no, goto step 5.
    Step 4: As there is a conflict, check if both the peer and the local node
            are hosting bricks. Based on the result, perform the following:

            Peer Hosts Bricks   Local Node Hosts Bricks   Action
            Yes                 Yes                       Goto Step 7
            No                  No                        Goto Step 7
            Yes                 No                        Goto Step 8
            No                  Yes                       Goto Step 6

    Step 5: Check if the local node is missing the peer's data.
            If yes, goto step 9.
    Step 6: It's a no-op. Goto step 10.
    Step 7: Peer Reject. Goto step 10.
    Step 8: Delete local node's data.
    Step 9: Accept Peer Data.
    Step 10: Stop.

    Change-Id: I79be0f0f5f2a4f5c72277a4e77c2be732af432e1
    BUG: 1061685
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/7525
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
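    The decision table above, condensed into a standalone sketch (the enum
    and decide() are hypothetical; the real glusterd_compare_and_update_snap()
    works on glusterd's own structures):

        #include <stdbool.h>
        #include <stdio.h>

        enum snap_action {
            SNAP_NO_OP,         /* step 6 */
            SNAP_PEER_REJECT,   /* step 7 */
            SNAP_DELETE_LOCAL,  /* step 8 */
            SNAP_ACCEPT_PEER    /* step 9 */
        };

        static enum snap_action
        decide (bool peer_missed_delete, bool conflict,
                bool local_missing_peer_data,
                bool peer_hosts_bricks, bool local_hosts_bricks)
        {
            if (peer_missed_delete)
                return SNAP_NO_OP;                         /* step 2 -> 6 */
            if (conflict) {
                if (peer_hosts_bricks == local_hosts_bricks)
                    return SNAP_PEER_REJECT;               /* step 4 -> 7 */
                return peer_hosts_bricks ? SNAP_DELETE_LOCAL   /* -> 8 */
                                         : SNAP_NO_OP;         /* -> 6 */
            }
            if (local_missing_peer_data)
                return SNAP_ACCEPT_PEER;                   /* step 5 -> 9 */
            return SNAP_NO_OP;
        }

        int
        main (void)
        {
            /* Conflict where only the peer hosts bricks: local data is deleted. */
            printf ("action = %d\n",
                    decide (false, true, false, true, false));
            return 0;
        }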
* glusterd: Rename the export dictionary as peer_data (Avra Sengupta, 2014-04-28, 3 files, -121/+144)

    During a glusterd handshake, a dictionary is passed among the peers
    which contains info of volumes, global opts, and now also info of snaps
    and the list of missed snaps.

    As it now contains more than just volume specific data, renaming the
    dict in the code-base from "vols" to "peer_data".

    Change-Id: Ib457172789ddd0d8978b08bceab0988c48e9eea7
    BUG: 1061685
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/7524
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd/snapshot: Recreate the mount dirs and mount the lvm snapshots on node reboot. (Avra Sengupta, 2014-04-28, 4 files, -70/+415)

    The lvm snapshots of the bricks are mounted at /var/run/gluster/snaps/
    or /run/gluster/snaps. These paths, being on a tmpfs, are removed on
    reboot. So when glusterd starts, we need to recreate these paths,
    activate the respective logical volumes (lvm snapshots of the bricks),
    and mount these logical volumes at their respective paths.

    Change-Id: Ic5ef61e79a25d9830df717c592391965fe09db62
    BUG: 1061685
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/7452
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd/snapshot: Perform missed snap deletes and restores. (Avra Sengupta, 2014-04-28, 8 files, -171/+374)

    Replacing is_volume_restored (gf_boolean_t) with restored_from_snap
    (uuid_t) in glusterd_volinfo_.

    Also moved gd_restore_snap_volume from glusterd-volgen.c to
    glusterd-snapshot.c.

    Change-Id: Ic615a1658cfaffa98d4590506ac82f20bf709ad6
    BUG: 1089906
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/7455
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* Barrier: Barrier translator options configuration (Atin Mukherjee, 2014-04-27, 3 files, -3/+17)

    Barrier enable/disable and barrier-timeout configuration in the barrier
    translator.

    Change-Id: I7cbf9cd4f5e55d42dcc6b7cd6827234566c7b6f3
    BUG: 1060002
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-on: http://review.gluster.org/7177
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* glusterd/snapshot: Adding snap_vol_id and snap_uuid to missed_snap_list (Avra Sengupta, 2014-04-27, 7 files, -156/+305)

    Persisting missing snapshot info on disk as well as in memory in the
    following format:

    -------------NODE-UUID--------------:--------------SNAP-UUID-------------=---------SNAP-VOL-ID------------:BRICKNUM:-------BRICKPATH--------:OPERATION:STATUS
    927cb5fe-63da-48f5-82f6-e6a09ddc81c4:8258b18f-d408-483d-8239-204039dc6397=a17b4fe42c5a45f7a916438643edaa13: 3 :/brick/brick-dirs/brick3: 1 : 1
    927cb5fe-63da-48f5-82f6-e6a09ddc81c4:8258b18f-d408-483d-8239-204039dc6397=a17b4fe42c5a45f7a916438643edaa13: 3 :/brick/brick-dirs/brick3: 3 : 1
    927cb5fe-63da-48f5-82f6-e6a09ddc81c4:8258b18f-d408-483d-8239-204039dc6397=83a3cc05453b46b2a7eda4c9a9208638: 3 :/brick/brick-dirs/brick3: 1 : 1

    This data will be stored on disk at /var/lib/glusterd/snaps/missed_snaps_list.

    In memory we maintain the data as a list of glusterd_missed_snap_info in
    conf; the key for this list is the first two fields, i.e.
    NODE-UUID:SNAP-UUID. For every NODE-UUID:SNAP-UUID there can be multiple
    operations missed on multiple bricks, so we maintain a list of
    glusterd_snap_op_t for every node of glusterd_missed_snap_info.

    This list is maintained or updated during snapshot create, delete, and
    restore operations, which are the only operations that, if missed, are
    recorded in this list.

    During snapshot create, if a node is down or a brick is down, we don't
    receive their mount point infos. The snap_status of such bricks is marked
    as -1, and their brick details are added to this list.

    During snapshot delete, we check from the originator node if any other
    nodes holding bricks of the said snap are down. Those are also added to
    the list. Also, if the node is up but the snapshot was pending for a snap
    brick and its snap_status is -1, we add that to the list too. When a
    subsequent delete entry is processed for an already existing create
    entry, we just mark the create entry's status as done (2), and don't add
    the delete entry to the list.

    During snapshot restore, we check from the originator node if any other
    nodes holding bricks of the said snap are down. Those are also added to
    the list. Also, if the node is up but the snapshot was pending for a snap
    brick and its snap_status is -1, we add that to the list too. Like
    delete, when a subsequent restore entry is processed for an already
    existing create entry, we just mark the create entry's status as done
    (2), and don't add the restore entry to the list.

    Change-Id: I54f63e28d3c40555d0f84528f38227103171f594
    BUG: 1061685
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/7454
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* Update references to the mailing list to gluster-devel@gluster.org (Niels de Vos, 2014-04-27, 3 files, -3/+3)

    gluster-devel@nongnu.org has moved to gluster-devel@gluster.org. All
    occurrences in the current (non-legacy) documentation and code have been
    adjusted.

    Change-Id: I053162e633f7ea14fd3eed239ded017df165147c
    BUG: 1091705
    Signed-off-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-on: http://review.gluster.org/7573
    Reviewed-by: Justin Clift <justin@gluster.org>
    Reviewed-by: Anand Avati <avati@redhat.com>
    Tested-by: Anand Avati <avati@redhat.com>
* fuse: minor improvements for readdir(plus) (Niels de Vos, 2014-04-27, 1 file, -17/+16)

    Instead of using 'int' for the sizes, use a 'size_t' as it is more
    correct. Save the size of a fuse_dirent in a temporary variable so that
    strlen() on the filename is called fewer times.

    Also correcting some typos in comments.

    Change-Id: Ic62d9d729a86a1a6a53ed1354fce153bac01d860
    BUG: 1074023
    Reported-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Signed-off-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-on: http://review.gluster.org/7547
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* gNFS: Log properly for pmap_getport() fail in NLM (Santosh Kumar Pradhan, 2014-04-26, 1 file, -2/+4)

    In the NLM callback path, if pmap_getport() FAILs, it just logs an error
    message saying "Is firewall running on the client". It may also happen
    that RPC services are not running, e.g. "rpcbind" is not running, or
    nlockmgr (NLM) is not registered with the portmapper; both can be checked
    using the "rpcinfo -p" command.

    FIX: Modify the log message to include the latter case mentioned above.

    Change-Id: If422275b2ab59d1e974a6caa37132f31e9a34329
    BUG: 1090782
    Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
    Reviewed-on: http://review.gluster.org/7544
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
* features/locks: Remove stale entrylk objects from 'blocked_locks' list (Krutika Dhananjay, 2014-04-26, 2 files, -4/+48)

    * In the event of a DISCONNECT from a client, as part of cleanup,
      entrylk objects are not removed from the blocked_locks list before
      being unref'd and freed, causing the brick process to crash at some
      point when the (now) stale object is accessed again in the list.

    * Also during cleanup, it is pointless to try and grant lock to a
      previously blocked entrylk (say L1) as part of releasing another
      conflicting lock (L2), (which is a side-effect of L1 not being deleted
      from blocked_locks list before grant_blocked_entry_locks() in cleanup)
      if L1 is also associated with the DISCONNECTing client.

    This patch fixes the problem.

    Change-Id: I3d684c6bafc7e6db89ba68f0a2ed1dcb333791c6
    BUG: 1089470
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/7560
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
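    The failure mode described above, reduced to a generic sketch: the lock
    object must be unlinked from the blocked list before it is freed, or a
    later traversal dereferences freed memory. The struct and helper below
    are simplified stand-ins, not the locks xlator's actual types.

        #include <stdlib.h>

        struct blocked_lock {
            int                  client_id;
            struct blocked_lock *prev;
            struct blocked_lock *next;
        };

        /* On client DISCONNECT, unlink that client's blocked locks from the
         * list before freeing them, so later traversals never touch freed
         * memory. */
        static void
        cleanup_client_locks (struct blocked_lock **head, int client_id)
        {
            struct blocked_lock *l = *head;

            while (l) {
                struct blocked_lock *next = l->next;

                if (l->client_id == client_id) {
                    if (l->prev)
                        l->prev->next = l->next;
                    else
                        *head = l->next;
                    if (l->next)
                        l->next->prev = l->prev;
                    free (l);   /* safe: no longer reachable from the list */
                }
                l = next;
            }
        }

        int
        main (void)
        {
            struct blocked_lock *a = calloc (1, sizeof (*a));
            struct blocked_lock *b = calloc (1, sizeof (*b));

            a->client_id = 1;
            b->client_id = 2;
            a->next = b;
            b->prev = a;

            cleanup_client_locks (&a, 1);   /* client 1 disconnects */
            cleanup_client_locks (&a, 2);
            return 0;
        }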
* glusterd/snapshot-handshake: Perform handshake of missed_snaps_list. (Avra Sengupta, 2014-04-25, 6 files, -0/+151)

    In a handshake, create a union of the missed_snap_lists of the two peers.
    If an entry is already present, it's a no-op. If an entry is pending and
    the peer entry is done, mark the own entry as done. If an entry is done
    and the peer entry is pending, it's a no-op. If it's a new entry, add it.

    Change-Id: Idbfa49cc34871631ba8c7c56d915666311024887
    BUG: 1061685
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/7453
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* rpcgen: Remove autogenerated files, instead build on demand (Harshavardhana, 2014-04-25, 4 files, -24/+53)

    Avoid modifying autogenerated files and keeping them in the repository;
    autogenerate them on demand from the ".x" files.

    Change-Id: I2cdb1fe9b99768ceb80a8cb100fa00bd1d8fe2c6
    BUG: 1090807
    Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
    Reviewed-on: http://review.gluster.org/7526
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* feature/changelog: Change default rollover and fsync-interval time (Kotresh H R, 2014-04-25, 1 file, -2/+2)

    This will change the following default configurables:
    1. rollover-time: from 60 to 15.
    2. fsync-interval: from 0 to 5.

    Change-Id: I9c8db01376967c4f19547ec87f54833f8b139d29
    Signed-off-by: Kotresh H R <khiremat@redhat.com>
    Reviewed-on: http://review.gluster.org/7545
    Reviewed-by: Venky Shankar <vshankar@redhat.com>
    Tested-by: Venky Shankar <vshankar@redhat.com>
* osx: Compilation fixes (Harshavardhana, 2014-04-25, 1 file, -1/+3)

    Change-Id: I822936cbeb4ec8af46be8e94644ea666b919ae5c
    BUG: 1089172
    Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
    Reviewed-on: http://review.gluster.org/7556
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* features/locks: Remove stale inodelk objects from 'blocked_locks' list (Krutika Dhananjay, 2014-04-25, 2 files, -6/+52)

    * In the event of a DISCONNECT from a client, as part of cleanup,
      inodelk objects are not removed from the blocked_locks list before
      being unref'd and freed, causing the brick process to crash at some
      point when the (now) stale object is accessed again in the list.

    * Also during cleanup, it is pointless to try and grant lock to a
      previously blocked inodelk (say L1) as part of releasing another
      conflicting lock (L2), (which is a side-effect of L1 not being deleted
      from blocked_locks list before grant_blocked_inode_locks() in cleanup)
      if L1 is also associated with the DISCONNECTing client.

    This patch fixes the problem.

    * Also, the codepath in cleanup of entrylks seems to be granting blocked
      inodelks, when it should be attempting to grant blocked entrylks, which
      is fixed in this patch.

    Change-Id: I8493365c33020333b3f61aa15f505e4e7e6a9891
    BUG: 1089470
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/7512
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* build: MacOSX Porting fixes (Harshavardhana, 2014-04-24, 67 files, -529/+1356)

    git@forge.gluster.org:~schafdog/glusterfs-core/osx-glusterfs

    Working functionality on MacOSX:
    - GlusterD (management daemon)
    - GlusterCLI (management cli)
    - GlusterFS FUSE (using OSXFUSE)
    - GlusterNFS (without NLM - issues with rpc.statd)

    Change-Id: I20193d3f8904388e47344e523b3787dbeab044ac
    BUG: 1089172
    Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
    Signed-off-by: Dennis Schafroth <dennis@schafroth.com>
    Tested-by: Harshavardhana <harsha@harshavardhana.net>
    Tested-by: Dennis Schafroth <dennis@schafroth.com>
    Reviewed-on: http://review.gluster.org/7503
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* gNFS: Support wildcard in RPC auth allow/reject (Santosh Kumar Pradhan, 2014-04-22, 1 file, -4/+4)

    RFE: Support wildcard in "nfs.rpc-auth-allow" and "nfs.rpc-auth-reject",
    e.g.

        *.redhat.com
        192.168.1[1-5].*
        192.168.1[1-5].*, *.redhat.com, 192.168.21.9

    Along with wildcard, support for subnetwork or IP range, e.g.

        192.168.10.23/24

    The option will be validated for the following categories:
    1) Anonymous, i.e. "*"
    2) Wildcard pattern, i.e. a string containing any of '*', '?', '['
    3) IPv4 address
    4) IPv6 address
    5) FQDN
    6) Subnetwork or IPv4 range

    Currently this does not support IPv6 subnetworks.

    Change-Id: Iac8caf5e490c8174d61111dad47fd547d4f67bf4
    BUG: 1086097
    Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
    Reviewed-on: http://review.gluster.org/7485
    Reviewed-by: Poornima G <pgurusid@redhat.com>
    Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
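    For the wildcard category, the matching behaviour can be illustrated with
    POSIX fnmatch(); this is only an illustration of how such patterns match,
    not the nfs xlator's actual validation code (which also has to handle IP
    ranges and subnets):

        #include <fnmatch.h>
        #include <stdio.h>

        static int
        addr_matches (const char *pattern, const char *addr)
        {
            return fnmatch (pattern, addr, 0) == 0;
        }

        int
        main (void)
        {
            printf ("%d\n", addr_matches ("*.redhat.com", "client1.redhat.com")); /* 1 */
            printf ("%d\n", addr_matches ("192.168.1[1-5].*", "192.168.13.7"));   /* 1 */
            printf ("%d\n", addr_matches ("192.168.1[1-5].*", "192.168.20.7"));   /* 0 */
            return 0;
        }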
* glusterfs-server : barrier timeout tuning fix (Atin Mukherjee, 2014-04-22, 1 file, -12/+70)

    Problem : Reconfiguration of the barrier timeout through gluster volume
    set shows success but never changes the default timeout value, which is
    120 seconds. Digging deeper into the code, it was found that the timeout
    is never modified in reconfigure(), as the first check, i.e. whether
    barrier is already enabled or disabled, always fails since the barrier
    option is not modified in this request.

    Fix : Introduced notify() in the barrier translator which takes care of
    the rpc request to enable/disable barrier. reconfigure() now simply sets
    the barrier enable/disable and timeout options without any validation.

    Please note this patch only contains the changes in the barrier
    translator; however, from a complete code flow perspective, the caller in
    the glusterfsd mgmt should call notify instead of reconfigure to fix this
    problem.

    Change-Id: I1371b294935f6054da7c1dc6a9a19f1d861e60fb
    BUG: 1085671
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-on: http://review.gluster.org/7428
    Reviewed-by: Varun Shastry <vshastry@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* snapshot: use volume's brick_ids for the snaps (Ravishankar N, 2014-04-21, 2 files, -9/+4)

    brickinfo->brick_id was introduced to establish persistence of client
    xlator names and AFR changelog attributes (http://review.gluster.org/7155).

    The snapshot volumes must also use the same IDs during snapshot create
    and restore to maintain persistence.

    Change-Id: I13d66d19b63520061ba9ec5f0ce661cf3b9eeafe
    BUG: 1066778
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-on: http://review.gluster.org/7477
    Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
    Reviewed-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: pass the right argument for perf subgraph (Krishnan Parthasarathi, 2014-04-21, 1 file, -1/+1)

    Change-Id: Ic292dcd8e477066c1079f0f1e170f5153459b029
    Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-on: http://review.gluster.org/7514
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* fuse: prevent READDIR(P) from writing too much data to /dev/fuse (Niels de Vos, 2014-04-21, 1 file, -14/+46)

    In an environment with mixed architectures (32-bit servers, 64-bit
    client), it is possible that the on-wire reply of a READDIR(P) procedure
    contains more direntries than the client can fit in the maximum size
    that the fuse-request indicated.

    A direntry is a dynamically sized structure, because the structure
    contains the name of the entry. The client sends a maximum size in the
    READDIR(P) call to the server, and the server uses this size to limit
    the number of direntries to return. In case the server can pack more
    direntries in the requested maximum size (due to alignment differences
    between the architectures), it can happen that the client unpacks the
    list of direntries into a buffer that exceeds the maximum size that was
    given in the initial fuse-request.

    This change introduces a check for the maximum requested size of the
    fuse-response in fuse_readdir_cbk() and fuse_readdirp_cbk(). When the
    conversion from gluster-direntries to the fuse-direntry format takes
    place, the maximum size is checked, and the 'extra' direntries are
    discarded. The next readdir()/getdents() that is done will fetch the
    just-discarded direntries again.

    In addition to this bugfix, some extra logging in send_fuse_iov() and
    send_fuse_data() has been added to help diagnose similar issues.

    Change-Id: If2eecfcdf9c248f3820035601446d2c89ff9d1a1
    BUG: 1074023
    Signed-off-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-on: http://review.gluster.org/7278
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
    Reviewed-by: Anand Avati <avati@redhat.com>
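    The size check described above, reduced to a simplified, self-contained
    sketch (the struct and the header size are stand-ins, not the fuse dirent
    layout or the actual fuse_readdir_cbk() code):

        #include <stddef.h>
        #include <stdio.h>
        #include <string.h>

        struct entry {
            const char *name;
        };

        /* Stop packing entries as soon as the next one would exceed the size
         * the kernel asked for; the discarded entries are re-requested by
         * the next readdir()/getdents() call. */
        static size_t
        pack_entries (const struct entry *entries, size_t count,
                      size_t max_size, char *buf)
        {
            size_t used = 0;
            size_t i;

            for (i = 0; i < count; i++) {
                size_t name_len   = strlen (entries[i].name);
                size_t entry_size = sizeof (size_t) + name_len; /* header + name */

                if (used + entry_size > max_size)
                    break;   /* would overflow what the fuse request allows */

                memcpy (buf + used, entries[i].name, name_len);
                used += entry_size;
            }
            return used;
        }

        int
        main (void)
        {
            struct entry dir[] = { { "file-one" }, { "file-two" },
                                   { "a-much-longer-name" } };
            char   buf[64];
            size_t used = pack_entries (dir, 3, 32, buf);

            printf ("packed %zu bytes (limit 32)\n", used);
            return 0;
        }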
* fuse: allow xlators to request for direct-io-mode on virtual files (Anand Avati, 2014-04-20, 1 file, -2/+14)

    Translators like meta create virtual files with dynamic content generated
    only at the time of open(). Therefore the file size returned in lookup or
    stat is 0 (just like files in /proc). However the VFS does not read
    beyond the size, and if the size is 0, no READ ever reaches gluster for
    that file -- unless direct-io-mode is enabled.

    This patch allows translators to return a "direct-io-mode" flag for such
    0-byte virtual files in the xdata of open_cbk/create_cbk.

    Change-Id: I3fe3312cd96baa4eecfe1247ab7255b4f455f049
    BUG: 1089216
    Signed-off-by: Anand Avati <avati@redhat.com>
    Reviewed-on: http://review.gluster.org/7506
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* fuse-resolve: loc_wipe() after inode_link() (Anand Avati, 2014-04-20, 1 file, -2/+2)

    The inode to be linked may have the last ref. loc_wipe() can destroy it
    before inode_link() gets to ref it.

    Change-Id: Ic2d44084e6e9c8289f35cae82c8e4575af105398
    BUG: 1089216
    Signed-off-by: Anand Avati <avati@redhat.com>
    Reviewed-on: http://review.gluster.org/7505
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* cluster/dht: force set dir inode ctx cached time in setattr() (Anand Avati, 2014-04-17, 3 files, -1/+34)

    In setattr, the inode times may have been explicitly set "back in time".
    In such cases, if the inode ctx times are not force set, then they
    continue to be higher and continue serving the higher/older value in
    future calls to dht_inode_ctx_time_update().

    Change-Id: I9cbfa7cf7c4069b0106d1f462de08c5d59bc91b5
    BUG: 1083324
    Signed-off-by: Anand Avati <avati@redhat.com>
    Reviewed-on: http://review.gluster.org/7378
    Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
    Tested-by: Harshavardhana <harsha@harshavardhana.net>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* mount.glusterfs: return 32 in case of double mounting (Niels de Vos, 2014-04-17, 1 file, -3/+3)

    The mount.nfs helper returns 32 for this particular error case. It makes
    sense for mount.glusterfs to return the same error value.

    Change-Id: I628f4c93bc796bb096e91857195ffd3d296eaae9
    BUG: 1031973
    Reported-by: Deepak C Shetty <deepakcs@redhat.com>
    Signed-off-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-on: http://review.gluster.org/7469
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* dict: add dict_set_dynstr_with_alloc (Krishnan Parthasarathi, 2014-04-14, 1 file, -24/+12)

    There is an overwhelming no. of instances of the following pattern in the
    glusterd module:

        ...
        char *dynstr = gf_strdup (str);
        if (!dynstr)
                goto err;
        ret = dict_set_dynstr (dict, key, dynstr);
        if (ret)
                goto err;
        ...

    With this change it would look as below:

        ret = dict_set_dynstr_with_alloc (dict, key, str);
        if (ret)
                goto err;

    Change-Id: I6a47b1cbab4834badadc48c56d0b5c8c06c6dd4d
    Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-on: http://review.gluster.org/7379
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* logging: Make logger and log format configurable through cli (Krutika Dhananjay, 2014-04-11, 3 files, -6/+170)

    Change-Id: Ic4b701a6621578848ff67ae4ecb5a10b5f32f93b
    BUG: 1075611
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/7372
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* mount.glusterfs: return an error when mounting failed (Niels de Vos, 2014-04-11, 1 file, -21/+26)

    When mounting fails, mount.glusterfs incorrectly returns 0 for some error
    cases; it should return 1 instead. Also make sure that error messages are
    redirected to /dev/stderr and not printed to stdout.

    Unfortunately it is not possible with the current test-scripts to test
    commands like 'mount -t glusterfs ...'. Any mounting of Gluster volumes
    is done directly with the 'glusterfs' command instead.

    Change-Id: Ica9d45b6d5ae537de869a1fa0f6c3edab47225d1
    BUG: 1031973
    Signed-off-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-on: http://review.gluster.org/7441
    Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli,glusterd: Improve detach check validation (Kaushal M, 2014-04-11, 4 files, -25/+55)

    This patch improves the validation for the 'peer detach' command. A check
    for whether any volumes exist with bricks on the peer being detached is
    added to the peer detach code flow (even force goes through this
    validation). This patch also guarantees that peer detach doesn't fail for
    a volume with all its bricks on the peer that is getting detached, when
    there are no other bricks on this peer.

    The following steps need to be followed for removing a downed and
    unrecoverable peer:
    * If a replacement system is available,
      - add it to the cluster
      - use replace-brick to migrate bricks of the downed peer to the new
        peer (since data cannot be recovered anyway, use the
        'replace-brick commit force' command)
    * If no replacement system is available,
      - remove bricks of the downed peer using 'remove-brick'

    Change-Id: Ie85ac5b66e87bec365fdedd8352b645bb25e1c33
    BUG: 983590
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-on: http://review.gluster.org/5325
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>