Commit message | Author | Age | Files | Lines
* doc/snapshot : admin guide update.Sachin Pandit2014-09-241-0/+36
| | | | | | | | | | | Change-Id: I1980bc0984f941cd415fbc754495353f561155a8 BUG: 1145084 Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/8817 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com> Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/8829
* cluster/afr: Fixed mem leaks in self-heal code path.Anuradha2014-09-242-1/+17
| | | | | | | | | | | | | | | | | backport of: http://review.gluster.org/8821 AFR_STACK_RESET previously didn't cleanup afr_local_t, leading to memory leaks. With this patch, cleanup is done. All credit goes to Pranith Kumar Karampuri. Change-Id: I26506dfd9273b917eff5127c3e0cf9421e60f228 BUG: 1145914 Reviewed-on: http://review.gluster.org/8831 Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com>
* Do not hardcode setfattr(1) pathEmmanuel Dreyfus2014-09-241-1/+13
| | | | | | | | | | | | | | Turn the setfattr(1) absolute path into an OS-dependent macro. Let a compiler option override it to fit custom installations if needed. Backport of I8f469c5741a85b6e8d8f6299a9540b3d64611d2f BUG: 1138897 Change-Id: I279752f2ec5db1abc25830cb9a23290cc401d517 Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org> Reviewed-on: http://review.gluster.org/8828 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
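As a rough illustration of the approach (the macro name, default path and build flag below are placeholders, not the ones used in the actual patch):

    #include <stdio.h>

    /* Let the build override the setfattr(1) location, e.g.
     * CPPFLAGS='-DSETFATTR_PATH=\"/usr/pkg/bin/setfattr\"' on NetBSD.
     * Macro name and default are illustrative only. */
    #ifndef SETFATTR_PATH
    #define SETFATTR_PATH "/usr/bin/setfattr"
    #endif

    int main (void)
    {
        char cmd[256];
        /* build the command from the macro instead of a hard-coded path */
        snprintf (cmd, sizeof (cmd), "%s -n user.test -v 1 /tmp/f", SETFATTR_PATH);
        puts (cmd);
        return 0;
    }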
* glusterd: Add last successful glusterd lock backtraceKrishnan Parthasarathi2014-09-246-23/+146
| | | | | | | | | | | | | | | Also, moved the backtrace fetching logic to a separate function. Modified the backtrace fetching logic to be able to work under memory-pressure conditions. Change-Id: Ie38bea425a085770f41831314aeda95595177ece BUG: 1145093 Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com> Reviewed-on: http://review.gluster.org/8794 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-by: Niels de Vos <ndevos@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
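One common way to make backtrace capture work under memory pressure is to avoid allocating while collecting it. The sketch below uses glibc's backtrace()/backtrace_symbols_fd() and is only an illustration of that idea, not the glusterd helper itself:

    #include <execinfo.h>
    #include <unistd.h>

    /* Dump a backtrace without calling malloc(), so it still works
     * when the process is under memory pressure. */
    static void dump_backtrace (int fd)
    {
        void *frames[64];                       /* fixed, stack-allocated buffer */
        int   count = backtrace (frames, 64);

        /* backtrace_symbols_fd() writes straight to the fd and does not
         * allocate, unlike backtrace_symbols() which returns malloc'd data. */
        backtrace_symbols_fd (frames, count, fd);
    }

    int main (void)
    {
        dump_backtrace (STDERR_FILENO);
        return 0;
    }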
* glusterd: Authenticate management handshake requestsKaushal M2014-09-245-4/+125
| | | | | | | | | | | | | | | | | | | | | | Backport of 371bb42 glusterd: Authenticate management handshake requests from master. Management handshake requests, which are used to validate the op-version supported by the peers, are now only allowed if (a) the glusterd doesn't have any other peer, or (b) the request was sent by another peer. This prevents the op-version of a peer from being changed because of a connection attempt by an invalid peer. BUG: 1144978 Change-Id: I5a909dad37e9873efe8b75dad41b7af71ce91c3d Signed-off-by: Kaushal M <kaushal@redhat.com> Reviewed-on: http://review.gluster.org/8819 Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
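The acceptance rule described above boils down to a simple predicate; a standalone sketch (names are generic, not the actual glusterd symbols):

    #include <stdbool.h>
    #include <stdio.h>

    /* A handshake is allowed when this glusterd has no peers yet (initial
     * probe), or when the request comes from a node that is already a peer. */
    static bool handshake_allowed (int peer_count, bool sender_is_peer)
    {
        return peer_count == 0 || sender_is_peer;
    }

    int main (void)
    {
        printf ("%d\n", handshake_allowed (0, false));  /* 1: first probe         */
        printf ("%d\n", handshake_allowed (2, false));  /* 0: unknown node denied */
        printf ("%d\n", handshake_allowed (2, true));   /* 1: existing peer       */
        return 0;
    }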
* ec: Add config information in an xattrXavier Hernandez2014-09-237-1/+185
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | To simplify backward compatibility of the ec xlator when some parameter or the implementation itself is changed, a new xattr is added to each file with the configuration needed to recover it. The new attribute is called 'trusted.ec.config', and it's a 64-bit value containing the following information: 8 bits: version of the config information (currently always 0) 8 bits: algorithm used to encode the file (currently always 0) 8 bits: size of the galois field (currently always 8) 8 bits: number of bricks 8 bits: redundancy 24 bits: chunk size (currently 512) This new xattr could allow, in a future version, to have different configurations per file. This is a backport of http://review.gluster.org/8770/ Change-Id: I8c12d40ff546cc201fc66caa367484be3d48aeb4 BUG: 1140862 Signed-off-by: Xavier Hernandez <xhernandez@datalab.es> Reviewed-on: http://review.gluster.org/8825 Reviewed-by: Dan Lambright <dlambrig@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
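One possible way to pack the fields listed above into a single 64-bit value (the actual bit positions are defined by the ec xlator; this sketch only illustrates the 8/8/8/8/8/24 split):

    #include <stdint.h>
    #include <stdio.h>

    static uint64_t ec_config_pack (uint8_t version, uint8_t algorithm,
                                    uint8_t gf_word_size, uint8_t bricks,
                                    uint8_t redundancy, uint32_t chunk_size)
    {
        return ((uint64_t)version      << 56) |
               ((uint64_t)algorithm    << 48) |
               ((uint64_t)gf_word_size << 40) |
               ((uint64_t)bricks       << 32) |
               ((uint64_t)redundancy   << 24) |
               ((uint64_t)(chunk_size & 0xFFFFFF));   /* 24-bit chunk size */
    }

    int main (void)
    {
        /* defaults mentioned in the commit: version 0, algorithm 0, GF(2^8),
         * e.g. 6 bricks with redundancy 2, 512-byte chunks */
        printf ("%016llx\n",
                (unsigned long long)ec_config_pack (0, 0, 8, 6, 2, 512));
        return 0;
    }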
* cluster/afr: Do not reset pending xattrs on gfid or type mismatch in entry-shKrutika Dhananjay2014-09-231-18/+79
| | | | | | | | | | | | Backport of: http://review.gluster.org/8816 Change-Id: I8463a579f542a2336b02edba8f5fbfea0edbbffe BUG: 1136829 Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com> Reviewed-on: http://review.gluster.org/8823 Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cluster/afr: Don't start heal when lookup succeeds on < 2 childrenPranith Kumar K2014-09-236-8/+29
| | | | | | | | | | | | | | | | | | | | | | Backport of http://review.gluster.org/8698 Problem: When the self-heal code doesn't see at least 2 successes on looking up children, self-heal can't be done. Currently, if all the lookups fail, the pending changelog is all zeros in the xattrs, so all the children become sources, leading to crashes when the code paths further assume that some data structures are populated properly. Fix: Don't proceed with self-heals when < 2 children succeed lookups. BUG: 1145726 Change-Id: I65465843f0e554c8ccdd8fa930ab42ac123ec023 Signed-off-by: Pranith Kumar K <pkarampu@redhat.com> Reviewed-on: http://review.gluster.org/8824 Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* features/marker: Fill loc->path before sending the control to healingVarun Shastry2014-09-232-24/+42
| | | | | | | | | | | | | | | | | | | Backport of: http://review.gluster.org/8296 Problem: The xattr healing part of the marker requires path to be present in the loc. Currently path is not filled while triggering from the readdirp_cbk. Solution: Current patch tries to fill the loc with path. Change-Id: I2e2589ecfa6b6a6e27407c9541fa90a314649bec BUG: 1145623 Signed-off-by: Varun Shastry <vshastry@redhat.com> Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com> Reviewed-on: http://review.gluster.org/8820 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* test : Fix for spurious failureSachin Pandit2014-09-232-6/+8
| | | | | | | | | | | | | | | | | Problem : Once features.uss is enabled, the test does not wait for the snapd process to be created. If we then check for the pid of snapd, it will not be present, which causes a failure. Solution : Add an EXPECT_WITHIN which keeps checking for the pid up to a certain time period. Change-Id: I5fdda9beecf867b7544f2e4b830f698ddf6e3bec BUG: 1145189 Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/8809 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Set the rlimit for Open FDs to higher value.Vijaikumar M2014-09-231-0/+18
| | | | | | | | | | | | | | | | | | | | The default 'open FD limit' is 1024. As the number of volumes/bricks increases, the number of brick-to-glusterd socket FDs in glusterd also increases and runs into this limit. The solution is to set the 'Open FD' limit to a higher value in glusterd. Change-Id: Iaa60b2155df2fa5a0759e054bdebffbc09f63ec1 BUG: 1145095 Signed-off-by: Vijaikumar M <vmallika@redhat.com> Reviewed-on: http://review.gluster.org/8578 Reviewed-by: Rajesh Joseph <rjoseph@redhat.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com> Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/8807
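Raising the per-process FD limit is typically done with setrlimit(RLIMIT_NOFILE) at startup; a minimal sketch (the value 65536 is illustrative, not necessarily what glusterd uses):

    #include <sys/resource.h>
    #include <stdio.h>

    int main (void)
    {
        struct rlimit lim;

        if (getrlimit (RLIMIT_NOFILE, &lim) == 0) {
            lim.rlim_cur = 65536;
            if (lim.rlim_max != RLIM_INFINITY && lim.rlim_cur > lim.rlim_max)
                lim.rlim_cur = lim.rlim_max;    /* cap at the hard limit */
            if (setrlimit (RLIMIT_NOFILE, &lim) != 0)
                perror ("setrlimit");
        }
        return 0;
    }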
* cli/snapshot : Add confirmation dialog to snapshot restore operation.Sachin Pandit2014-09-231-2/+19
| | | | | | | | | | | | | | | | | | | When restoring a volume, the user is not prompted for confirmation. Since restoring a volume rolls back the data to a previous point in time, there is the potential for updates to be lost. Hence it is better to display a confirmation dialogue during snapshot restore operation. Change-Id: I7b23eaeb43ad2aafa508e2ca5750d9b0fc7d6e36 BUG: 1145092 Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/8525 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com> Reviewed-by: Rajesh Joseph <rjoseph@redhat.com> Reviewed-by: Kaushal M <kaushal@redhat.com> Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/8806
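A generic sketch of such a (y/n) confirmation prompt (wording and helper name are illustrative, not the actual CLI code):

    #include <stdio.h>

    static int confirm (const char *question)
    {
        char answer[8] = "";

        printf ("%s (y/n) ", question);
        if (fgets (answer, sizeof (answer), stdin) == NULL)
            return 0;
        return answer[0] == 'y' || answer[0] == 'Y';
    }

    int main (void)
    {
        if (!confirm ("Restoring will roll the volume back to the snapshot. Continue?"))
            puts ("snapshot restore aborted");
        return 0;
    }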
* cli/snapshot : update of a snapshot delete syntax in documentation.Sachin Pandit2014-09-231-2/+2
| | | | | | | | | | | | Change-Id: Id1a4b9684a8dd5750ee6eed841e3d5195407fb7e BUG: 1145084 Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/8534 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaushal M <kaushal@redhat.com> Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/8805 Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
* glusterd/snapshot : Fail the snapshot create operation if geo-rep is runningSachin Pandit2014-09-231-1/+11
| | | | | | | | | | | | | | | | | | | As one of the recommendations for taking a snapshot is not to have an active geo-replication session, it's better to display an error saying the session is active when the snapshot create command is issued. Change-Id: I94593dbd2659610e033ca316176dda1ac8dc5ce6 BUG: 1145091 Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/8461 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Rajesh Joseph <rjoseph@redhat.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com> Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/8804
* glusterd: Parsing key/value pair with value containing '='.Vijaikumar M2014-09-231-1/+1
| | | | | | | | | | | | | | | | | | | | | | | | | Parsing a key/value pair in the brickinfo file where the value itself contains '=' would truncate everything after that '=' in the value. For example: the key/value pair mnt-opts=rw,noatime,allocsize=1MiB,noattr2 is parsed as: mnt-opts=rw,noatime,allocsize Change-Id: I4e279c2a356a8a16eb20d4358d7c8a8cc5724b65 BUG: 1145090 Signed-off-by: Vijaikumar M <vmallika@redhat.com> Reviewed-on: http://review.gluster.org/8507 Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-by: Rajesh Joseph <rjoseph@redhat.com> Reviewed-by: Sachin Pandit <spandit@redhat.com> Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com> Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/8803 Tested-by: Gluster Build System <jenkins@build.gluster.com>
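The usual fix for this class of bug is to split only at the first '=' so the rest of the value is preserved; a standalone sketch (not the actual brickinfo parser):

    #include <stdio.h>
    #include <string.h>

    int main (void)
    {
        char  line[] = "mnt-opts=rw,noatime,allocsize=1MiB,noattr2";
        char *eq     = strchr (line, '=');    /* split at the first '=' only */

        if (eq) {
            *eq = '\0';
            printf ("key='%s' value='%s'\n", line, eq + 1);
        }
        return 0;
    }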
* glusterd/snapshot: Inherit the mount options of a original brick when ↵Vijaikumar M2014-09-237-122/+250
| | | | | | | | | | | | | | | | | | | | | | | | | creating snapshots. When creating a snapshot an LVM volume is created at the backend and is mounted under /var/run/gluster/snaps/... However, this mount does not inherit the mount options of the original brick acting as the parent for the snap. If the snap is restored, this could lead to performance degradations, functional limitations, or in extreme scenarios even potential data loss. Change-Id: I67d70fd83430d83dacc5380c6c928e27fb9c9e1b BUG: 1145088 Signed-off-by: Vijaikumar M <vmallika@redhat.com> Reviewed-on: http://review.gluster.org/8394 Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com> Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/8802 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Error msg for snapshot status for no existing snaps should be ↵Vijaikumar M2014-09-231-35/+42
| | | | | | | | | | | | | | | | | | | | aligned with error messages of info and list. When a snapshot operation like status, info or list is performed on a non-existing snapshot, the error message displayed for status is 'Snap not found', while for list and info it is 'Snapshot does not exist'. Use a consistent error message in all places. Change-Id: I7b241217dba62fda844481731a6858e4ecb12897 BUG: 1145087 Signed-off-by: Vijaikumar M <vmallika@redhat.com> Reviewed-on: http://review.gluster.org/8309 Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com> Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/8801 Tested-by: Gluster Build System <jenkins@build.gluster.com>
* glusterd/snapshot: Proper err msg for snapshot create command.Rajesh Joseph2014-09-231-0/+77
| | | | | | | | | | | | | | | | | | | problem: Snapshot command fails if one or more bricks are not thinly provisioned. But the error message is a generic error message which is confusing to the user. fix: Provide correct error message in case of failure. Change-Id: Iad247f966423a8f73ef6da57cab7ed6cddc05861 BUG: 1145086 Signed-off-by: Rajesh Joseph <rjoseph@redhat.com> Reviewed-on: http://review.gluster.org/8377 Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com> Tested-by: Krishnan Parthasarathi <kparthas@redhat.com> Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/8800 Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com>
* doc : man page and admin-guide for newly introduced snapshot delete optionSachin Pandit2014-09-232-4/+7
| | | | | | | | | | | | | Change-Id: Iab5e7f63d673a2040e8d83ab7280121ff468836e BUG: 1145084 Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/8378 Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaushal M <kaushal@redhat.com> Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/8799 Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
* feature/snapshot : Interface to delete all snapshots belonging to a system ↵Sachin Pandit2014-09-238-96/+588
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | as-well-as to a particular volume Problem : With the current design we can only delete a single snapshot. And the deletion of volume which contains snapshot is not allowed. Because of that user might be forced to delete all the snapshots manually before he is allowed to delete a volume. Solution: Following is the interface with which user can delete all the snapshots of a system or belonging to a particular volume. Syntax : gluster snapshot delete all *To delete all the snapshots present in a system Syntax : gluster snapshot delete volume <volname> *To deletes all the snapshot present in a volume specified. ======================================================================== Sample Output: Case 1 : Deleting a single snapshot. [root@snapshot-24 glusterfs]# gluster snapshot delete snap1 Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y snapshot delete: snap1: snap removed successfully ----------------------------------------------------------------- Case 2 : Deleting all the snapshots in a Volume. [root@snapshot-24 glusterfs]# gluster snapshot delete volume vol1 Volume (vol1) contains 9 snapshot(s). Do you still want to continue and delete them? (y/n) y snapshot delete: snap2: snap removed successfully snapshot delete: snap3: snap removed successfully snapshot delete: snap4: snap removed successfully snapshot delete: snap5: snap removed successfully . . . ----------------------------------------------------------------- Case 3 : Deleting all the snapshots in a system. [root@snapshot-24 glusterfs]# gluster snapshot delete all System contains 4 snapshot(s). Do you still want to continue and delete them? (y/n) y snapshot delete: snap7: snap removed successfully snapshot delete: snap8: snap removed successfully snapshot delete: snap9: snap removed successfully snapshot delete: snap10: snap removed successfully ======================================================================== Change-Id: Ifec8e128ab2011cbbba208376b9c92cfbe7d8d71 BUG: 1145083 Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/8162 Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Avra Sengupta <asengupt@redhat.com> Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com> Reviewed-by: Kaushal M <kaushal@redhat.com> Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/8798 Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
* doc : man page for snapshot commands.Sachin Pandit2014-09-232-61/+96
| | | | | | | | | | | | | | | This patch also contains few modifications in admin documentation. Change-Id: I7bc2a88e6cbcfe81dcfafc2956f5b7c5524b0f0b BUG: 1145084 Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/8357 Reviewed-by: Avra Sengupta <asengupt@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaushal M <kaushal@redhat.com> Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/8797 Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
* glusterd/snapshot: Print correct error message on cli for snapshot operation ↵Vijaikumar M2014-09-231-0/+10
| | | | | | | | | | | | | | | | | | | | | performed on a cluster with op-version less than 30600. Currently the cli shows the error message 'Another transaction is in progress Please try again after sometime' when a snapshot operation is performed on a cluster with op-version less than 30600. We need to print the correct error message in this case. Change-Id: I5f144428d928393c3796bde96ce6e3a40fca8141 BUG: 1145068 Signed-off-by: Vijaikumar M <vmallika@redhat.com> Reviewed-on: http://review.gluster.org/8371 Reviewed-by: Avra Sengupta <asengupt@redhat.com> Reviewed-by: Sachin Pandit <spandit@redhat.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaushal M <kaushal@redhat.com> Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/8796
* cli/snapshot : gluster volume info should not show the options which are not ↵Sachin Pandit2014-09-238-283/+289
| | | | | | | | | | | | | | | | | | | | | | | | | | | | set explicitly. Problem : Even though the snap-max-hard-limit, snap-max-soft-limit and auto-delete values were not set explicitly, they were being shown in the output of gluster volume info. Solution : Check if the value is already present in the dictionary (which means it is set); if the value is not present, consider the default value. NOTE : This patch doesn't solve the problem where values which are set globally are being displayed in gluster volume info Change-Id: I61445b3d2a12eb68c38a19bea53b9051ad028050 BUG: 1145020 Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/8191 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-by: Avra Sengupta <asengupt@redhat.com> Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com> Reviewed-by: Kaushal M <kaushal@redhat.com> Signed-off-by: Sachin Pandit <spandit@redhat.com> Reviewed-on: http://review.gluster.org/8793 Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
* porting: Remove unnecessary code from mount_darwin.cHarshavardhana2014-09-231-279/+180
| | | | | | | | | | | | | | Backport of http://review.gluster.org/8564 - Cleanup mount_darwin.c to make it cleaner - Restructure the code to be more readable - Avoid unnecessary delays invoking `mount_osxfusefs` Change-Id: I7f28875b0ec872a08bf8e77dfc8ebe5eca750d0e BUG: 1144163 Signed-off-by: Harshavardhana <harsha@harshavardhana.net> Reviewed-on: http://review.gluster.org/8772 Tested-by: Gluster Build System <jenkins@build.gluster.com>
* tests: regression, can't run `prove $t` in subdirsv3.6.0beta1Kaleb S. KEITHLEY2014-09-211-2/+7
| | | | | | | | | | | | | | | | | | | In various tests we already use the pattern: . $(dirname $0)/../include.rc to locate various .rc files. Use the same pattern we already use to also find the new env.rc Change-Id: Ib7878c3f7f6491fcec401e9adbaa696ace392fae BUG: 1142420 Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com> Reviewed-on: http://review.gluster.org/8753 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Harshavardhana <harsha@harshavardhana.net> Tested-by: Harshavardhana <harsha@harshavardhana.net> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cluster/afr: Set all the xattrs needed by index xlatorAnuradha2014-09-213-41/+88
| | | | | | | | | | | | | | | | | | | | | | | | Backport of: http://review.gluster.org/8652 Index xlator removes the index file from indices xattrop directory in case the value for keys sent are zero. If all the required keys are not set by afr then index file might be removed in an invalid way. With this change all the keys required by index xlator are set by afr such that invalid removal of files does not occur. Change-Id: I1b77904920c8566057415c52242179aec6a015e2 BUG: 1144744 Signed-off-by: Anuradha <atalur@redhat.com> Reviewed-on: http://review.gluster.org/8788 Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cluster/dht: Fix dict_t leaks in rebalance process' execution pathKrutika Dhananjay2014-09-201-4/+7
| | | | | | | | | | | | | | | | | | | | Backport of: http://review.gluster.org/8763 Two dict_t objects are leaked for every file migrated in success codepath. It is the caller's responsibility to unref dict that it gets from calls to syncop_getxattr(); and rebalance performs two syncop_getxattr()s per file without freeing them. Also, syncop_getxattr() on GF_XATTR_LINKINFO_KEY doesn't seem to be using the response dict. Hence, NULL is now passed as opposed to @dict to syncop_getxattr(). Change-Id: I48926389db965e006da151bf0ccb6bcaf3585199 BUG: 1144640 Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com> Reviewed-on: http://review.gluster.org/8785 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* syncop: Invoke dict_unref() in inodelk only if dictionary is not NULLVijay Bellur2014-09-201-2/+4
| | | | | | | | | | | | | | | | | | In the absence of this check, logs can get flooded with messages like this when rebalance is run: [2014-09-04 17:48:07.148262] W [dict.c:480:dict_unref] (-->/lib64/libc.so.6() [0x30daa47a00] (-->/usr/local/lib/libglusterfs.so.0(synctask_wrap+0x12) [0x7fa20b7c6ec2] (-->/usr/local/lib/glusterfs/3.7dev/xlator/cluster/distribute.so(dht_migrate_file+0x23f) [0x7fa200fdb58f]))) 0-dict: dict is NULL Change-Id: I4c93e4485293b35d86ba07df4d583d2758ec3f49 BUG: 1138395 Signed-off-by: Vijay Bellur <vbellur@redhat.com> Reviewed-on-master: http://review.gluster.org/8601 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com> Reviewed-on: http://review.gluster.org/8782
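The pattern behind the fix is simply to guard the unref; a tiny standalone stand-in (dict_unref_demo is a placeholder, not the GlusterFS dict API, whose real dict_unref() logs the "dict is NULL" warning seen above when handed NULL):

    #include <stdlib.h>

    struct dict { int refcount; };

    /* decrement the refcount and free at zero; must not be called with NULL */
    static void dict_unref_demo (struct dict *d)
    {
        if (--d->refcount == 0)
            free (d);
    }

    int main (void)
    {
        struct dict *xdata = NULL;  /* inodelk may legitimately carry no dict */

        if (xdata != NULL)          /* the guard added by the patch */
            dict_unref_demo (xdata);
        return 0;
    }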
* ec: Fix some size_t vars to 64 bits even on 32 bits machinesXavier Hernandez2014-09-196-21/+20
| | | | | | | | | | | | | | | | | | | The 64 bits 'trusted.ec.size' extended attribute was incorrectly computed on 32 bits machines due to an overflow on negative numbers. Also changed some potentially dangerous uses of size_t in other places. This is a backport of http://review.gluster.org/8738/ Change-Id: Id76cfe49a2f350e564b5c71d8c8644fb9ce86662 BUG: 1144407 Signed-off-by: Xavier Hernandez <xhernandez@datalab.es> Reviewed-on: http://review.gluster.org/8779 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Dan Lambright <dlambrig@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
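The overflow class described here is easy to reproduce in isolation: a negative 64-bit delta stored in a 32-bit-wide size_t wraps around. A standalone illustration (not the ec xlator code; uint32_t stands in for a 32-bit size_t):

    #include <stdint.h>
    #include <stdio.h>

    int main (void)
    {
        uint64_t size  = 4096;    /* current 64-bit size value              */
        uint64_t newsz = 1024;    /* file shrank, so the delta is negative  */

        uint64_t d64 = newsz - size;             /* wraps mod 2^64, still ok */
        uint32_t d32 = (uint32_t)(newsz - size); /* truncated: high bits lost */

        printf ("with 64-bit delta: %llu\n", (unsigned long long)(size + d64));
        printf ("with 32-bit delta: %llu\n", (unsigned long long)(size + d32));
        return 0;
    }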
* ec: Fix invalid inode lock in ftruncateXavier Hernandez2014-09-195-21/+21
| | | | | | | | | | | | | | | | | | | | | | The fops 'truncate' and 'ftruncate' share some code and inodelk() was always made against the inode inside the loc_t structure instead of that of fd_t. Since ftruncate has the loc initialized to NULL, this fop was executed without any lock, allowing some concurrent modifications in the file size. Also changed the way in which 'fop' and 'ffop' are differentiated in shared code. Now it uses 'id' field instead of checking if 'fd' is NULL. This is a backport of http://review.gluster.org/8695/ Change-Id: Ibd18accf2652193b395a841b9029729e5f4867c6 BUG: 1140847 Signed-off-by: Xavier Hernandez <xhernandez@datalab.es> Reviewed-on: http://review.gluster.org/8780 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Dan Lambright <dlambrig@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cluster/afr: perform list-xattr during lookupRavishankar N2014-09-196-12/+273
| | | | | | | | | | | | | Detect and heal mismatching user extended attributes during lookup. Backport of: http://review.gluster.org/8558 Change-Id: Id03c9746f083ffd3014711d0b3a2e5a71a45eed4 BUG: 1144274 Signed-off-by: Ravishankar N <ravishankar@redhat.com> Reviewed-on: http://review.gluster.org/8773 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* storage/posix: Log when mkdir is on an existing gfid but non-existentRaghavendra G2014-09-191-1/+26
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | path. consider following steps on a distribute volume 1. rename (src, dst) on hashed subvolume 2. snapshot taken 3. restore snapshots and do stat on src and dst Now, we end up with two directories src and dst having same gfid, because of distribute creating directories on non-existent subvolumes as part of directory healing. This can happen even with race between rename and directory healing in dht-lookup. This can lead to undefined behaviour while accessing any of both directories. Hence, we are logging paths of both directories, so that a sysadmin can take some corrective action when (s)he sees this log. One of the corrective action can be to copy contents of both directories from backend into a new directory and delete both directories. Since effort involved to fix this issue is non-trivial, giving this workaround till we come up with a fix. Change-Id: I38f4520e6787ee33180a9cd1bf2f36f46daea1ea BUG: 1144485 Signed-off-by: Raghavendra G <rgowdapp@redhat.com> Reviewed-on-master: http://review.gluster.org/8008 Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com> Tested-by: Vijay Bellur <vbellur@redhat.com> Reviewed-on: http://review.gluster.org/8783 Tested-by: Gluster Build System <jenkins@build.gluster.com>
* osx: Remove legacy extras/MacOSX directoryJustin Clift2014-09-183-116/+6
| | | | | | | | | | | | | Backport of http://review.gluster.org/8736 BUG: 1141683 Change-Id: Ic37df769db856196727d2799396c1dbaf4bcdd1a Signed-off-by: Justin Clift <justin@gluster.org> Reviewed-on: http://review.gluster.org/8737 Reviewed-by: Humble Devassy Chirammal <humble.devassy@gmail.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Niels de Vos <ndevos@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* Do not assume sizeof(size_t)Emmanuel Dreyfus2014-09-182-2/+2
| | | | | | | | | | | | | | | | | This fixes an assumption that sizeof(size_t) == sizeof(uint64_t), which is not guaranteed. At least on NetBSD/i386, size_t is 32 bit long. Caught by tests/basics/file-snapshot.t This is a backport of Ib7620a2ffe8758521886af37bc280101a040d860 BUG: 1138897 Change-Id: Ie0b80ee9ddbcccaf9fd4f5d28d80fcd080b0ed40 Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org> Reviewed-on: http://review.gluster.org/8631 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Humble Devassy Chirammal <humble.devassy@gmail.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cluster/afr : Mark pending changelog xattrs for new creationsAnuradha2014-09-189-90/+276
| | | | | | | | | | | | | | | | | Backport of: http://review.gluster.org/8555 Based on type of file, set appropriate pending changelogs for new entries. Change-Id: Icf9af866fe9a9e511210e8ad097e968e2307d8ee BUG: 1141787 Reviewed-on: http://review.gluster.org/8555 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com> Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com> Signed-off-by: Anuradha <atalur@redhat.com> Reviewed-on: http://review.gluster.org/8748 Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* snapview-server: get the handle if its absent before doing any fopRaghavendra Bhat2014-09-183-34/+177
| | | | | | | | | | | | | | | | | | | | | | | | * Now that NFS server does inode linking in readdirp, it can resolve the gfid (i.e. find the right inode from its inode table) present in the filehandle sent by the NFS client on which a fop came. So instead of sending the lookup on that entry, it directly sends the fop. But snapview-server does not get the handle for the entries in readdirp (because doing a lookup on each entry via gfapi would be costly. So it waits till a lookup is done on that inode, to get the handle and the fs instance and fill it in the inode context). So when NFS resolves the gfid and directly sends the fop, snapview-server will not be able to perform the fop as the inode context would not contain the fs instance and the handle. So fops should check for the handle before doing gfapi calls. If the handle and fs instance are not present in the inode context they should get them by doing an explicit lookup on the entry. rebase of the patch http://review.gluster.org/#/c/8324/ Change-Id: I70c9c8edb2e7ddad79cf6ade3e041b9d02241cd1 BUG: 1143961 Reviewed-on: http://review.gluster.org/8768 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* snapview-server: register a callback with glusterd to getRaghavendra Bhat2014-09-1812-1117/+1294
| | | | | | | | | | | | | | | | | | | notifications * As of now snapview-server is polling (sending rpc requests to glusterd) to get the latest list of snapshots at some regular time intervals (non configurable). Instead of that register a callback with glusterd so that glusterd sends notifications to snapd whenever a snapshot is created/deleted and snapview-server can configure itself. rebase of the patch http://review.gluster.org/#/c/8150/ Change-Id: Iee2582b1a823d50c79233a41cf2106f458b40691 BUG: 1143961 Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com> Reviewed-on: http://review.gluster.org/8767 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* USS: initialize a list before using it.Raghavendra G2014-09-181-1/+3
| | | | | | | | | | | | backport of the patch http://review.gluster.org/8569 by Raghavendra G <rgowdapp@redhat.com> Change-Id: I7b25fdf27c6d7ff66d24925bc73d9c6681259d37 BUG: 1143961 Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com> Reviewed-on: http://review.gluster.org/8764 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* Cluster/DHT: Changing rename log severityNithya Balachandran2014-09-181-6/+5
| | | | | | | | | | | | | | Changing log level for a rename message from debug to info to improve debuggability Change-Id: I53031fcf97fffd62095692477330ecde0cf47dcd BUG: 1138395 Signed-off-by: Nithya Balachandran <nbalacha@redhat.com> Reviewed-on-master: http://review.gluster.org/8582 Reviewed-by: Vijay Bellur <vbellur@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Signed-off-by: Shyam <srangana@redhat.com> Reviewed-on: http://review.gluster.org/8626
* cluster/dht: Changed log level to DEBUGVenkatesh Somyajulu2014-09-181-4/+4
| | | | | | | | | | | | Change-Id: I7a4ee0c5a6a94bd4f31aff510a2971750913ed45 BUG: 1142402 Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com> Reviewed-on-master: http://review.gluster.org/8621 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Xavier Hernandez <xhernandez@datalab.es> Reviewed-by: susant palai <spalai@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com> Reviewed-on: http://review.gluster.org/8749
* cluster/dht: Fixed double UNWIND in lookup everywhere codeShyam2014-09-161-4/+4
| | | | | | | | | | | | | | | | | | | | | | | In dht_lookup_everywhere_done, at line 1194 we call DHT_STACK_UNWIND and in the same if condition we go ahead and call goto unwind_hashed_and_cached, which at line 1371 calls another UNWIND. As is obvious, higher frames could clean up their locals and, on receiving the next unwind, could cause a coredump of the process. Fixed by returning right after the first unwind. Change-Id: Ic5d57da98255b8616a65b4caaedabeba9144fd49 BUG: 1142409 Signed-off-by: Shyam <srangana@redhat.com> Reviewed-on-master: http://review.gluster.org/8666 Reviewed-by: N Balachandran <nbalacha@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com> Reviewed-by: Jeff Darcy <jdarcy@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: susant palai <spalai@redhat.com> Reviewed-on: http://review.gluster.org/8751
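A standalone illustration of the bug class (a callback must be answered exactly once; names are generic, not the actual DHT functions):

    #include <stdio.h>

    static void reply (int op_ret) { printf ("reply sent, op_ret=%d\n", op_ret); }

    static void lookup_done (int found_cached, int found_hashed)
    {
        if (!found_cached && !found_hashed) {
            reply (-1);
            return;            /* the fix: stop here instead of falling     */
        }                      /* through to a second reply further below   */

        reply (0);
    }

    int main (void)
    {
        lookup_done (0, 0);    /* one reply, not two */
        lookup_done (1, 1);
        return 0;
    }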
* cluster/dht: fix memory corruption in locking api.Raghavendra G2014-09-161-2/+2
| | | | | | | | | | | | | | | | | | | | | | | | | | <man 3 qsort> The contents of the array are sorted in ascending order according to a comparison function pointed to by compar, which is called with two arguments that "point to the objects being compared". </man 3 qsort> qsort passes "pointers to members of the array" to the comparison function. Since the members of the array happen to be (dht_lock_t *), the arguments passed to dht_lock_request_cmp are of type (dht_lock_t **). Previously we assumed them to be of type (dht_lock_t *), which resulted in memory corruption. Change-Id: Iee0758704434beaff3c3a1ad48d549cbdc9e1c96 BUG: 1142406 Signed-off-by: Raghavendra G <rgowdapp@redhat.com> Reviewed-on-master: http://review.gluster.org/8659 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com> Reviewed-on: http://review.gluster.org/8750
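The same rule in a standalone example: when the array holds pointers, the comparator receives pointers to those pointers (generic names, not the DHT code):

    #include <stdlib.h>
    #include <stdio.h>

    struct lock { int lk_num; };

    /* qsort() hands the comparator pointers to the array *elements*; since
     * the elements are (struct lock *), each argument is a (struct lock **). */
    static int lock_cmp (const void *a, const void *b)
    {
        const struct lock *l1 = *(struct lock * const *)a;
        const struct lock *l2 = *(struct lock * const *)b;
        return l1->lk_num - l2->lk_num;
    }

    int main (void)
    {
        struct lock x = {3}, y = {1}, z = {2};
        struct lock *locks[] = { &x, &y, &z };

        qsort (locks, 3, sizeof (locks[0]), lock_cmp);
        printf ("%d %d %d\n", locks[0]->lk_num, locks[1]->lk_num, locks[2]->lk_num);
        return 0;
    }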
* features/quota: fixes to dentry management code in rename.Raghavendra G2014-09-161-10/+6
| | | | | | | | | | | | | | | | | | 1. After a successful rename (src, dst), the dentry <dst-parent, dst-basename> would be associated with src-inode. 2. It's the src inode that survives if both src and dst are present. The fixes are done based on the above two observations. Change-Id: I7492a512e3732b1455c243b02fae12d489532bfb BUG: 1142411 Signed-off-by: Raghavendra G <rgowdapp@redhat.com> Reviewed-on-master: http://review.gluster.org/8687 Reviewed-by: susant palai <spalai@redhat.com> Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com> Reviewed-on: http://review.gluster.org/8752
* cluster/dht: Rename should not fail post hardlink creationShyam2014-09-162-41/+99
| | | | | | | | | | | | | | | | | | | | | | | | | | | In the rename path, we wind the creation of the newname hardlink and the linkto file in dst hashed at the same time. If the linkto creation fails, but the link creation succeeds, we enter the failure code and clean up the created newname hardlink. In the interim, if another client looks up newname and finds it as a hardlink from FUSE, it could send an unlink for oldname instead of a rename. This, combined with the above cleanup code, could end up losing all the file's copies, and thereby losing data. This fix separates these steps into 2 parts, creating the linkto first and then the link file, so that post link file creation no failures would clean up the newname file. If linkto fails then link is not attempted, thereby not polluting the name space with newname. Change-Id: I61da8e906060da16a31ea1076eec2f01fd617f44 BUG: 1138395 Signed-off-by: Shyam <srangana@redhat.com> Reviewed-on-master: http://review.gluster.org/8570 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Jeff Darcy <jdarcy@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com> Reviewed-on: http://review.gluster.org/8615
* ec: Optimize read/write performanceXavier Hernandez2014-09-1613-268/+706
| | | | | | | | | | | | | | | | | | | | This patch significantly improves performance of read/write operations on a dispersed volume by reusing previous inodelk/ entrylk operations on the same inode/entry. This reduces the latency of each individual operation considerably. Inode version and size are also updated when needed instead of on each request. This gives an additional boost. This is a backport of http://review.gluster.org/8369/ Change-Id: I4b98d5508c86b53032e16e295f72a3f83fd8fcac BUG: 1140844 Signed-off-by: Xavier Hernandez <xhernandez@datalab.es> Reviewed-on: http://review.gluster.org/8746 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Jeff Darcy <jdarcy@redhat.com> Reviewed-by: Dan Lambright <dlambrig@redhat.com>
* ec: Only heal data/metadata when inode has enough informationXavier Hernandez2014-09-161-0/+8
| | | | | | | | | | | | | | | | Sometimes loc_t structure in a heal request doesn't contain enough information to do an inodelk call (basically the gfid is missing). In these cases, self heal only recovers entry information. This is a backport of http://review.gluster.org/8368/ Change-Id: I459990c7df728ff4baf164df046672ddcde3efa5 BUG: 1140626 Signed-off-by: Xavier Hernandez <xhernandez@datalab.es> Reviewed-on: http://review.gluster.org/8747 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Jeff Darcy <jdarcy@redhat.com> Reviewed-by: Dan Lambright <dlambrig@redhat.com>
* Do not call rpc_transport_unref() on NULL transEmmanuel Dreyfus2014-09-161-2/+4
| | | | | | | | | | | | | | | | | | rpc_clnt_disable() sets rpc->conn->trans to NULL, hence we should not call rpc_transport_unref() afterwards. I moved it before the rpc_clnt_disable() call, but I am not sure it should be called at all, perhaps it should just go away. This is a backport of I488d0207494e3a3fad52e64e67b2e740b236b864 BUG: 1138897 Change-Id: I001ab73b37f74ba98463bd9c32c01c670e9c7ad8 Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org> Reviewed-on: http://review.gluster.org/8629 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Humble Devassy Chirammal <humble.devassy@gmail.com> Reviewed-by: Raghavendra G <rgowdapp@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* gNFS: Fix memory leak in setacl code pathNiels de Vos2014-09-161-0/+2
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | If an ACL is set on a file in a Gluster NFS mount (setfacl command), and it succeeds, then the NFS call state data is leaked, though all the failure code paths free up the memory. Impact: There is an OOM kill, i.e. vdsm invoked the oom-killer during rebalance and killed process 4305, UID 0, (glusterfs nfs process). FIX: Make sure to deallocate the memory for call state in acl3_setacl_cbk() using nfs3_call_state_wipe(); Cherry picked from commit 5c869aea79c0f304150eac014c7177e74ce0852e: > Change-Id: I9caa3f851e49daaba15be3eec626f1f2dd8e45b3 > BUG: 1139195 > Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com> > Reviewed-on: http://review.gluster.org/8651 > Tested-by: Gluster Build System <jenkins@build.gluster.com> > Reviewed-by: Niels de Vos <ndevos@redhat.com> Change-Id: I9caa3f851e49daaba15be3eec626f1f2dd8e45b3 BUG: 1139244 Signed-off-by: Niels de Vos <ndevos@redhat.com> Reviewed-on: http://review.gluster.org/8654 Reviewed-by: Santosh Pradhan <spradhan@redhat.com> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cluster/afr: Propagate EIO on inode's type mismatchKrutika Dhananjay2014-09-168-121/+461
| | | | | | | | | | | | | | | | Backport of: http://review.gluster.org/8574, and http://review.gluster.org/8586 Original author of the test script: Pranith Kumar K <pkarampu@redhat.com> Change-Id: I0c32bdd8e666f8175c0a8fbf940934e6ce469931 BUG: 1136830 Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com> Reviewed-on: http://review.gluster.org/8706 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* statedump: Print curr_stdalloc in mempool statedump ...Krutika Dhananjay2014-09-161-0/+1
| | | | | | | | | | | | | | | | ...for, it is curr_stdalloc that is incremented for every mem_get() and decremented on every call to mem_put() and can be used to detect leaks, when cold_count is 0. Backport of: http://review.gluster.org/8557 Change-Id: I1f4aac3af1234b9a76be7cb86ef26d728788950c BUG: 1136831 Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com> Reviewed-on: http://review.gluster.org/8705 Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com> Reviewed-by: Vijay Bellur <vbellur@redhat.com>