path: root/xlators/mgmt
* libglusterfs: replace default functions with generated versions  (Jeff Darcy, 2015-10-22; 1 file, -1/+0)

  Replacing repetitive code like this with code generated from a more compact "canonical" definition carries several advantages:

  * Ease the process of adding new fops (e.g. GF_FOP_IPC).
  * Ease the process of making global changes to existing fops (e.g. adding "xdata").
  * Ensure strict consistency between all of the pieces that must be compatible with each other, through both kinds of changes.

  What we have right now is just a start. The above benefits will only truly be realized when we use the same definitions to generate stubs, syncops, and perhaps even parts of gfapi or glupy. This same infrastructure can also be used to reduce code duplication and potential for error in many of our translators. NSR already uses a similar technique, using a few hundred lines of templates to generate a few *thousand* lines of code. The ability to make a global "aspect" change (e.g. to quorum checking) in one place instead of seventy has already been demonstrated there. Other candidates for code generation include the AFR/EC transaction infrastructure, or stub creation/resumption in io-threads.

  Change-Id: If7d59de7a088848b557f5aea00741b4fe19017c1
  BUG: 1271325
  Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
  Reviewed-on: http://review.gluster.org/9411
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
  Reviewed-by: Vijay Bellur <vbellur@redhat.com>

* cluster/tier: add pause tier for snapshots  (Dan Lambright, 2015-10-21; 3 files, -4/+201)

  Snaps of tiered volumes cannot handle files undergoing migration. We implement a helper mechanism to "pause" migration: any files undergoing migration are aborted, and clean-up is done to remove sticky bits and data at the destination. Migration is restarted after the snap completes.

  For testing, an internal switch is added; it is not exposed externally:

      gluster volume set vol1 tier-pause [true|false]

  Change-Id: Ia85bbf89ac142e9b7e73fcbef98bb9da86097799
  BUG: 1267950
  Signed-off-by: Dan Lambright <dlambrig@redhat.com>
  Reviewed-on: http://review.gluster.org/12304
  Reviewed-by: N Balachandran <nbalacha@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>

* snapshot: Fix snapshot clone postvalidate  (Avra Sengupta, 2015-10-20; 4 files, -52/+62)

  In glusterd_snapshot_clone_postvalidate(), we were deleting the snap object and snap vol by looking up the snapname; hence it was deleting the original snapshot from which the clone was being created. Instead it should fetch the clonename, the respective clone vol, and its corresponding snap object, and delete them.

  Also, glusterd_snap_remove() needs to differentiate a clone snap object from a snapshot snap object: in the case of a clone snap object, we don't have any persisted data in /var/run/gluster/snaps/, and hence it shouldn't try to delete anything there.

  Change-Id: I02bb22a3898d5720e318a02d6cc32d25f75d317d
  BUG: 1272339
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/12364
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>

* cluster/tier: Changed tier xattr-name value  (N Balachandran, 2015-10-19; 2 files, -13/+15)

  Each tier layer (for future stacking implementations) must have a unique xattr name. We are currently using the name of the tier subvolume excluding the volume name.

  Change-Id: Id4adea61dc1c8473fb1d4d7364d1940278c6e129
  BUG: 1259298
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
  Reviewed-on: http://review.gluster.org/12350
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>

* libglusterfs: pass buffer size to gf_store_read_and_tokenize function  (Gaurav Kumar Garg, 2015-10-14; 1 file, -1/+1)

  Previously, if a user set an option whose key=value length went beyond PATH_MAX (4096) characters, tokenizing the option while reading the configuration file would fail, because fgets was restricted to reading at most PATH_MAX (4096) characters. As a consequence, after setting a key=value longer than PATH_MAX (4096) characters, glusterd would not restart. With this fix, instead of using PATH_MAX, the consumer of the gf_store_read_and_tokenize function decides the size of the buffer.

  Change-Id: I655a8ce982effdfff8f3e785ea31f543dbe39301
  BUG: 1271150
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/12346
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>

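  A minimal C sketch of the idea, assuming a hypothetical helper: the real gf_store_read_and_tokenize() signature and callers may differ, but the point is that the caller now supplies the line buffer and its size instead of relying on a hard-coded PATH_MAX.

      #include <stdio.h>
      #include <string.h>

      /* Caller-supplied buffer: a key=value pair longer than the old
       * PATH_MAX limit is no longer silently truncated as long as the
       * caller passes a large enough buffer. */
      static int
      read_and_tokenize (FILE *fp, char *buf, int buflen,
                         char **key, char **value)
      {
              char *saveptr = NULL;

              if (fgets (buf, buflen, fp) == NULL)
                      return -1;
              buf[strcspn (buf, "\n")] = '\0';

              *key = strtok_r (buf, "=", &saveptr);
              *value = strtok_r (NULL, "", &saveptr);

              return (*key && *value) ? 0 : -1;
      }
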
* glusterd: disabling enable-shared-storage option should not delete volume  (Gaurav Kumar Garg, 2015-10-13; 2 files, -6/+32)

  Previously, if a volume named "glusterd_shared_storage" was created and a user then disabled the enable-shared-storage option, gluster would delete the "glusterd_shared_storage" volume. With this fix, gluster validates the enable-shared-storage option appropriately and does not delete a volume named "glusterd_shared_storage" if it is a user-created volume.

  Change-Id: I2bd92f938fb3de6ef496a934933bdcea9f251491
  BUG: 1266818
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/12232
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>

* tier/shd: inline warning when compiled with gcc v.5  (Mohammed Rafi KC, 2015-10-13; 1 file, -1/+1)

  Change-Id: I487a26263d6e940eed364a831e99f9b8390bc96a
  BUG: 1226881
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/12342
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Anoop C S <anoopcs@redhat.com>
  Tested-by: Anoop C S <anoopcs@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>

* tier/shd: make shd commands compatible with tiering  (Mohammed Rafi KC, 2015-10-12; 6 files, -126/+331)

  Tiering volfiles may contain afr and disperse together, or multiple times, depending on the configuration, and the information for those configurations is stored in tier_info. So most of the volgen code generation needs to be changed to be compatible with it.

  Change-Id: I563d1ca6f281f59090ebd470b7fda1cc4b1b7e1d
  BUG: 1261276
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/12135
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>

* tier/shd: create shd volfile for tiering  (Mohammed Rafi KC, 2015-10-11; 3 files, -20/+262)

  Currently the shd graph starts only if the volume is a replicate or disperse volume. In the case of tiering, however, the volume type is tier, so we need to start shd if either the cold or the hot tier is compatible with an shd volume.

  Change-Id: Ic689746ac7d2fc6a9eccdabd8518dc9139829de2
  BUG: 1261276
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/11962
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>

* cluster/tier: add watermarks and policy driver  (Dan Lambright, 2015-10-10; 1 file, -13/+122)

  This fix introduces infrastructure to support different policies for promotion and demotion. Currently the tier feature automatically promotes and demotes files periodically based on access. This is good for testing but too stringent for most real workloads: it makes it difficult to fully utilize a hot tier, since data will be demoted before it is touched, and it is unlikely a 100GB hot SSD will have all its data touched in a window of time.

  A new parameter "mode" allows the user to pick promotion/demotion policies. The "test" mode will be used for *.t and other general testing; this is the current mechanism. The "cache" mode introduces watermarks, which represent levels of data residing on the hot tier.

  "cache" mode policy: the percentage the hot tier is full is called P. Do not promote or demote more than D MB or F files. A random number in [0-100] is called R. Rules for migration:

      if (P < watermark_low): don't demote, always promote.
      if (P >= watermark_low) && (P < watermark_hi): demote if R < P; promote if R > P.
      if (P > watermark_hi): always demote, don't promote.

      gluster volume set {vol} cluster.watermark-hi %
      gluster volume set {vol} cluster.watermark-low %
      gluster volume set {vol} cluster.tier-max-mb {D}
      gluster volume set {vol} cluster.tier-max-files {F}
      gluster volume set {vol} cluster.tier-mode {test|cache}

  Change-Id: I157f19667ec95aa1d53406041c1e3b073be127c2
  BUG: 1257911
  Signed-off-by: Dan Lambright <dlambrig@redhat.com>
  Reviewed-on: http://review.gluster.org/12039
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

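  As an illustration of the "cache" mode rules above, here is a minimal C sketch of the promote/demote decision; the struct and function names are assumptions, not the actual tier code.

      #include <stdbool.h>
      #include <stdlib.h>

      struct tier_conf {
              int watermark_low;      /* cluster.watermark-low, in % */
              int watermark_hi;       /* cluster.watermark-hi, in % */
      };

      /* p is the hot tier's fill level in %; r is a random number in [0,100]. */
      static bool
      should_promote (const struct tier_conf *c, int p)
      {
              int r = rand () % 101;

              if (p < c->watermark_low)
                      return true;    /* plenty of room: always promote */
              if (p > c->watermark_hi)
                      return false;   /* nearly full: never promote */
              return r > p;           /* mid band: promote less often as p grows */
      }

      static bool
      should_demote (const struct tier_conf *c, int p)
      {
              int r = rand () % 101;

              if (p < c->watermark_low)
                      return false;   /* below low watermark: don't demote */
              if (p > c->watermark_hi)
                      return true;    /* above high watermark: always demote */
              return r < p;           /* mid band: demote more often as p grows */
      }
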
* cluster/ec: Implement gfid-hash read-policy  (Pranith Kumar K, 2015-10-09; 1 file, -0/+8)

  Add a policy in ec that performs reads from the same bricks as long as they are good. Based on the gfid of the file/directory, it determines the bricks to be considered for reading.

  Change-Id: Ic97b5c54c086a28b5e07a330a4fd448551b49376
  BUG: 1261260
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/12133
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>

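  A minimal sketch of how a gfid-hash read policy can pick a stable set of read bricks; the hash function and layout below are assumptions, not the actual ec implementation.

      #include <stdint.h>

      #define GFID_SIZE 16

      /* Hash the 16-byte gfid (FNV-1a here) so every client derives the same
       * starting brick for a given file and keeps reading from the same set
       * of bricks while they are healthy. */
      static uint32_t
      gfid_read_start (const unsigned char gfid[GFID_SIZE], uint32_t num_bricks)
      {
              uint32_t hash = 2166136261u;
              int      i;

              for (i = 0; i < GFID_SIZE; i++) {
                      hash ^= gfid[i];
                      hash *= 16777619u;
              }
              return hash % num_bricks;
      }
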
* tiering/glusterd: keep afr/ec xlators name constant  (Mohammed Rafi KC, 2015-10-08; 3 files, -30/+112)

  afr uses the translator name for locking purposes, so it is mandatory to keep afr/ec xlator names constant across a graph change. Currently, when a tier is attached, afr names are appended with either hot or cold, which breaks the above-mentioned constraint.

  Change-Id: I3699dcdaa8190bab3ba81cbc01e8fa126d37ba0d
  BUG: 1261276
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/12134
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>

* xlators: add JSON FOP statistics dumps every N seconds  (Richard Wareing, 2015-10-08; 1 file, -0/+5)

  Summary:
  - Adds a thread to the io-stats translator which dumps out statistics every N seconds, where N is configurable via an option called "diagnostics.stats-dump-interval".
  - The thread cleanly starts/stops when the translator is unloaded.
  - Updates macros to use "Atomic Builtins" (e.g. Intel CPU extensions) with memory barriers to update counters instead of using locks. This should reduce overhead and prevent any deadlock bugs due to lock contention.

  Test Plan:
  - Test on a development machine.
  - Run prove -v tests/basic/stats-dump.t

  Change-Id: If071239d8fdc185e4e8fd527363cc042447a245d
  BUG: 1266476
  Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
  Reviewed-on: http://review.gluster.org/12209
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>

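  A small C sketch of the counter-update change described in the summary; the struct and function names are illustrative, not the io-stats macros themselves.

      #include <pthread.h>
      #include <stdint.h>

      struct fop_counter {
              pthread_mutex_t lock;
              uint64_t        hits;
      };

      /* Before: every update takes a lock. */
      static void
      counter_bump_locked (struct fop_counter *c)
      {
              pthread_mutex_lock (&c->lock);
              c->hits++;
              pthread_mutex_unlock (&c->lock);
      }

      /* After: one atomic read-modify-write (GCC atomic builtin) with full
       * memory ordering; nothing to contend on or deadlock against. */
      static void
      counter_bump_atomic (struct fop_counter *c)
      {
              __atomic_add_fetch (&c->hits, 1, __ATOMIC_SEQ_CST);
      }
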
* glusterd/add-brick: change add-brick implementation to v3 framework  (Mohammed Rafi KC, 2015-10-07; 2 files, -17/+134)

  The add-brick commit happens first on the local node, followed by the peers. As part of the local-host commit, glusterd sends the updated volfiles to the clients connected to the local host even before the commits on the peers happen. If any of the newly added bricks are hosted by a peer, those bricks won't yet be started when a client (connected to the local host) tries to send fops. By changing to the v3 framework we can send post-validate ops after the commit operation, which ensures the volfile fetch request is sent only after the commits complete on all nodes.

  Change-Id: Ib7312e01143326128c010c11fc2ed206f37409ad
  BUG: 1263549
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/12237
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* server/protocol: option for dynamic authorization of client permissions  (Prasanna Kumar Kalever, 2015-10-04; 1 file, -0/+4)

  Problem: assuming a gluster volume is already mounted (for gfapi, say the client transport connection has already been established), if somebody changes the volume permissions, say *.allow | *.reject for a client, gluster should immediately allow or terminate the client connection based on the fresh set of volume options. In the existing scenario we neither have an option to set this behaviour nor take any action unless the volume is remounted manually.

  Solution: introduce a 'dynamic-auth' option (default: on). If 'dynamic-auth' is 'on', gluster performs dynamic authentication to allow or terminate client transport connections immediately in response to *.allow | *.reject volume set options. Thus if volume permissions have changed for a particular client (say the client is added to the auth.reject list), its transport connection to the gluster volume is terminated immediately.

  Change-Id: I6243a6db41bf1e0babbf050a8e4f8620732e00d8
  BUG: 1245380
  Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
  Reviewed-on: http://review.gluster.org/12229
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>

* glusterd: validate function for replica volume options  (Sakshi, 2015-10-01; 1 file, -12/+42)

  Change-Id: I5b4a28db101e9f7e07f4b388c7a2594051c9e8dd
  BUG: 1265479
  Signed-off-by: Sakshi <sabansal@redhat.com>
  Reviewed-on: http://review.gluster.org/12215
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* glusterd, dht: volume set for use-readdirp in dht  (Pranith Kumar K, 2015-10-01; 1 file, -0/+6)

  Change-Id: Icab246b1d02808864d878d949fa56f9f889b538a
  BUG: 1265677
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/12221
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>

* build: export minimum symbols from xlators for correct resolution  (Kaleb S. KEITHLEY, 2015-09-24; 1 file, -1/+1)

  We've been lucky that we haven't had any symbol collisions until now. Now we have a collision between the snapview-client's svc_lookup() and libntirpc's svc_lookup() with nfs-ganesha's FSAL_GLUSTER and libgfapi.

  As a short term solution all the snapview-client's FOP methods were changed to static scope. See http://review.gluster.org/11805. This works in snapview-client because all the FOP methods are defined in a single source file. This solution doesn't work for other xlators with FOP methods defined in multiple source files.

  To address this we link with libtool's '-export-symbols $symbol-file' (a wrapper around `ld --version-script ...` on linux, anyway) and only export the minimum required symbols from the xlator sharedlib.

  N.B. the libtool man page says that the symbol file should be named foo.sym, thus the rename of *.exports to *.sym. While foo.exports worked, we will follow the documentation.

  Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
  BUG: 1248669
  Change-Id: I1de68b3e3be58ae690d8bfb2168bfc019983627c
  Reviewed-on: http://review.gluster.org/11814
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: soumya k <skoduri@redhat.com>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>

* glusterfsd: newly added brick receives fops only after it is started  (Sakshi, 2015-09-22; 1 file, -1/+4)

  When new bricks are added in the middle of an on-going fop like 'rm', the volfile changes without waiting for the newly added bricks to get a port. Fops are sent to all bricks and may fail on some with ENOTCONN, as these bricks may not have a port yet. This patch ensures that the volfile change happens only after all the bricks have a port.

  Change-Id: I7ed2413475f80d0cc8849fed33036ade8d75a191
  BUG: 1233151
  Signed-off-by: Sakshi <sabansal@redhat.com>
  Reviewed-on: http://review.gluster.org/11342
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: Atin Mukherjee <amukherj@redhat.com>

* glusterd: check if all bricks are started before performing remove-brick  (Sakshi, 2015-09-22; 1 file, -1/+10)

  Change-Id: Ie9e24e037b7a39b239a7badb983504963d664324
  BUG: 1225716
  Signed-off-by: Sakshi <sabansal@redhat.com>
  Reviewed-on: http://review.gluster.org/10954
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* glusterd/utils: glusterd_copy_file does not truncate target file  (Rajesh Joseph, 2015-09-22; 1 file, -1/+1)

  The glusterd_copy_file function copies a source file to a target. If the target file already exists and is bigger than the source file, this can cause file corruption. The target file should be truncated before copying the source content.

  Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
  Change-Id: Ie973f3e9fa06309ded6f69dcde41e1b60b3e028e
  BUG: 1261482
  Reviewed-on: http://review.gluster.org/12141
  Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

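  A generic POSIX sketch of the fix, not the glusterd_copy_file() implementation itself: opening the target with O_TRUNC discards any stale tail left over from a larger pre-existing file.

      #include <fcntl.h>
      #include <unistd.h>

      static int
      copy_file (const char *src, const char *dst)
      {
              char    buf[4096];
              ssize_t n;
              int     in, out, ret = 0;

              in = open (src, O_RDONLY);
              if (in < 0)
                      return -1;

              /* O_TRUNC is the crux: without it, a longer existing target
               * keeps its old tail and ends up corrupted. */
              out = open (dst, O_WRONLY | O_CREAT | O_TRUNC, 0600);
              if (out < 0) {
                      close (in);
                      return -1;
              }

              while ((n = read (in, buf, sizeof (buf))) > 0) {
                      if (write (out, buf, n) != n) {
                              ret = -1;
                              break;
                      }
              }
              if (n < 0)
                      ret = -1;

              close (in);
              close (out);
              return ret;
      }
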
* Tier/cli: Change detach-tier commit force to detach-tier force  (Mohammed Rafi KC, 2015-09-22; 1 file, -1/+1)

  The current detach-tier CLI command supports 'commit force'. This deprecates it in favour of 'force', so the new syntax is:

      volume detach-tier <VOLNAME> <start|stop|status|commit|force>

  Change-Id: Ie86dfd72341078c0a1be94767f523730911312ef
  BUG: 1261862
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/12151
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>

* Tiering: change in status for remove brick and rebalance  (hari gowtham, 2015-09-21; 4 files, -12/+28)

  When we trigger a detach-tier start on a tiered volume, the volume status task shows "remove brick" instead of "Detach tier".

      Status of volume: vol1
      Gluster process                             TCP Port  RDMA Port  Online  Pid
      ------------------------------------------------------------------------------
      Hot Bricks:
      Brick 10.70.42.171:/data/gluster/hbr1       49154     0          Y       25098
      Cold Bricks:
      Brick 10.70.42.171:/data/gluster/p1         49152     0          Y       25101
      Brick 10.70.42.171:/data/gluster/p2         49153     0          Y       25112
      NFS Server on localhost                     N/A       N/A        N       N/A

      Task Status of Volume vol1
      ------------------------------------------------------------------------------
      Task                 : Tier migrate
      ID                   : e11d5a3d-b1ae-4c3f-8f95-b28993c60939
      Status               : in progress

      Status of volume: vol1
      Gluster process                             TCP Port  RDMA Port  Online  Pid
      ------------------------------------------------------------------------------
      Hot Bricks:
      Brick 10.70.42.171:/data/gluster/hbr1       49154     0          Y       25098
      Cold Bricks:
      Brick 10.70.42.171:/data/gluster/p1         49152     0          Y       25101
      Brick 10.70.42.171:/data/gluster/p2         49153     0          Y       25112
      NFS Server on localhost                     N/A       N/A        N       N/A

      Task Status of Volume vol1
      ------------------------------------------------------------------------------
      Task                 : Detach tier
      ID                   : 76d700b1-5bbd-43ed-95fd-1640b2b4af31
      Status               : completed

  Change-Id: I4bd3b340d4e700e8afed00e1478b8a8b54dfe2e2
  BUG: 1261837
  Signed-off-by: hari gowtham <hgowtham@redhat.com>
  Signed-off-by: Hari Gowtham <hgowtham@redhat.com>
  Reviewed-on: http://review.gluster.org/12149
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>

* Tier/glusterd: Do not allow attach-tier if remove-brick is not committed  (Mohammed Rafi KC, 2015-09-18; 1 file, -0/+24)

  When attaching a tier, if there is a pending remove-brick task, attach-tier should not be allowed. Since we do not support add/remove-brick on a tiered volume, we won't be able to commit a pending remove-brick after attaching the tier.

  Change-Id: Ib434e2e6bc75f0908762f087ad1ca711e6b62818
  BUG: 1261819
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/12148
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>

* tier/glusterd: volume status failed after detach start  (Mohammed Rafi KC, 2015-09-18; 1 file, -3/+4)

  Volume status failed after triggering detach start on a tiered volume. This was because the brick count was being set wrongly in the rebal dictionary.

  Change-Id: I6a472bf2653a07522416699420161f2fb1746aef
  BUG: 1261757
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/12146
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>

* Tiering: Changing error message as detach-tier instead of "remove-brick"  (hari gowtham, 2015-09-16; 1 file, -4/+13)

  Change-Id: Id93424a08f601a8d7540d96a47ed2b0497d4a631
  BUG: 1263177
  Signed-off-by: hari gowtham <hgowtham@redhat.com>
  Reviewed-on: http://review.gluster.org/12177
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>

* tier/glusterd: Disable subvol match check during detach-tier  (Mohammed Rafi KC, 2015-09-14; 1 file, -5/+13)

  For tiering, the user does not get to choose which bricks to detach, so we don't need to check whether the subvols match for the bricks or not.

  Change-Id: I7e777ccc1aa261f652f9b158718fcd55185c7794
  BUG: 1261741
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/12145
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* glusterd: Do not allow "detach-tier commit" unnecessarilyGaurav Kumar Garg2015-09-071-9/+21
| | | | | | | | | | | | | | | | | Currently when user execute gluster v detach-tier commit command without starting detach-tier or without giving force option then gluster will success this operation. Detach-tier commit should not allow without giving "force" optioin. Change-Id: Id161c288f6f3e0f6b298878a5c35a49fcbd9c6e3 BUG: 1260185 Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com> Reviewed-on: http://review.gluster.org/12107 Tested-by: NetBSD Build System <jenkins@build.gluster.org> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Atin Mukherjee <amukherj@redhat.com> Reviewed-by: Dan Lambright <dlambrig@redhat.com>
* glusterd: volume status backward compatibility  (Hari Gowtham, 2015-09-07; 1 file, -0/+14)

  The volume status message of 3.7 does not display all the bricks in a mixed cluster (3.6 and 3.7): it displays the bricks on 3.7 and misses the bricks on 3.6, due to the key difference for ports.

      Status of volume: vol1
      Gluster process                                  TCP Port  RDMA Port  Online  Pid
      ------------------------------------------------------------------------------
      Brick 10.70.42.171:/data/gluster/tier/cbr2       49153     0          Y       13494
      Brick 10.70.42.203:/data/gluster/tier/cbr2       49154     0          Y       27686
      NFS Server on localhost                          N/A       N/A        N       N/A
      NFS Server on dhcp42-203.lab.eng.blr.redhat.com  N/A       N/A        N       N/A

      Task Status of Volume vol1
      ------------------------------------------------------------------------------
      There are no active volume tasks

  Change-Id: Icf0dc01a3d21d0889c43e2868c646a0c7e07ff25
  BUG: 1255694
  Signed-off-by: Hari Gowtham <hgowtham@redhat.com>
  Reviewed-on: http://review.gluster.org/11986
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* all: reduce "inline" usageJeff Darcy2015-09-017-10/+10
| | | | | | | | | | | | | | | | | | | | | | | | | There are three kinds of inline functions: plain inline, extern inline, and static inline. All three have been removed from .c files, except those in "contrib" which aren't our problem. Inlines in .h files, which are overwhelmingly "static inline" already, have generally been left alone. Over time we should be able to "lower" these into .c files, but that has to be done in a case-by-case fashion requiring more manual effort. This part was easy to do automatically without (as far as I can tell) any ill effect. In the process, several pieces of dead code were flagged by the compiler, and were removed. Change-Id: I56a5e614735c9e0a6ee420dab949eac22e25c155 BUG: 1245331 Signed-off-by: Jeff Darcy <jdarcy@redhat.com> Reviewed-on: http://review.gluster.org/11769 Tested-by: NetBSD Build System <jenkins@build.gluster.org> Reviewed-by: Dan Lambright <dlambrig@redhat.com> Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com> Reviewed-by: Niels de Vos <ndevos@redhat.com> Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com> Reviewed-by: Venky Shankar <vshankar@redhat.com>
* glusterd: Return better error messages for probe and detach failures  (Brad Hubbard, 2015-09-01; 1 file, -4/+4)

  We handle some specific errors and return good error messages for those, but for the default case, where the error code is not recognised, we just report "unknown errno". This patch attempts to at least return the output of strerror to provide more informative errors.

  BUG: 1257149
  Change-Id: I0027e74e41adac4ab0c0a929c6fff56878bf39c8
  Signed-off-by: Brad Hubbard <bhubbard@redhat.com>
  Reviewed-on: http://review.gluster.org/12021
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* gluster/cli: snapshot delete all does not work with xml  (Rajesh Joseph, 2015-08-28; 1 file, -5/+6)

  Problem: the snapshot delete all command fails with the --xml option.
  Fix: provide xml support for the delete all command.

  Change-Id: I77cad131473a9160e188c783f442b6a38a37f758
  BUG: 1257533
  Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-on: http://review.gluster.org/12027
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>

* glusterd: probing a new node, which is part of another cluster, should give an error  (Gaurav Kumar Garg, 2015-08-28; 1 file, -3/+5)

  If a user tries to add a node that is part of another cluster using the "gluster peer probe <ip/hostname>" command, the command fails but does not give the proper cause of the failure. This fix returns a proper error message when peer probing a node that already belongs to an extant cluster.

  Change-Id: I4f993e78c0e1b3e061153b984ec5e9b70085aef5
  BUG: 1252448
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/11884
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* snapshot: cleanup snaps during unprobe  (Mohammed Rafi KC, 2015-08-26; 4 files, -22/+104)

  When doing an unprobe, a volume that does not contain any brick on the particular node will be deleted, so the snaps associated with that volume should be deleted as well.

  Change-Id: I9f3d23bd11b254ebf7d7722cc1e12455d6b024ff
  BUG: 1203185
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/9930
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>

* glusterd: Don't allow remove-brick start/commit if glusterd is down on the host of the brick  (Atin Mukherjee, 2015-08-26; 2 files, -31/+89)

  The remove-brick stage blindly starts the remove-brick operation even if the glusterd instance of the node hosting the brick is down. Operationally this is incorrect and could result in an inconsistent rebalance status across the nodes: the originator of the command will keep the rebalance status at 'DEFRAG_NOT_STARTED', while the glusterd instances on the other nodes, once they come up, will trigger rebalance and set the status to completed when the rebalance finishes.

  This patch fixes two things:
  1. Add a validation in remove-brick to check whether all the peers hosting the bricks to be removed are up.
  2. Don't copy volinfo->rebal.dict from the stale volinfo during restore, as this might end up in an inconsistent node_state.info file, resulting in volume status command failure.

  Change-Id: Ia4a76865c05037d49eec5e3bbfaf68c1567f1f81
  BUG: 1245045
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/11726
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: N Balachandran <nbalacha@redhat.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>

* glusterd: Display status of Self Heal Daemon for disperse volume  (Ashish Pandey, 2015-08-25; 1 file, -16/+22)

  Problem: the status of the Self Heal Daemon is not displayed in "gluster volume status". As disperse volumes are self-heal compatible, show the status of the self heal daemon in the gluster volume status command.

  Change-Id: I83d3e6a2fd122b171f15cfd76ce8e6b6e00f92e2
  BUG: 1217311
  Signed-off-by: Ashish Pandey <aspandey@redhat.com>
  Reviewed-on: http://review.gluster.org/10764
  Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>

* glusterd: stop all the daemon services on peer detach  (Gaurav Kumar Garg, 2015-08-24; 2 files, -15/+41)

  Currently glusterd does not stop all the daemon services on peer detach. With this fix it does the peer-detach cleanup properly and stops all the daemons that were running on the node before the peer detach.

  Change-Id: Ifed403ed09187e84f2a60bf63135156ad1f15775
  BUG: 1255386
  Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Reviewed-on: http://review.gluster.org/11509
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* snapshot: Log deletion of snapshot, during auto-delete  (Avra Sengupta, 2015-08-23; 2 files, -2/+8)

  When auto-delete is enabled, and the soft-limit is reached, on creation of a snapshot the oldest snapshot for that volume is deleted. Display a warning log before deleting the oldest snapshot.

  Change-Id: I75f0366935966a223b63a4ec5ac13f9fe36c0e82
  BUG: 1255310
  Signed-off-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-on: http://review.gluster.org/11963
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>

* tiering/glusterd: start tier daemon during volume start  (Mohammed Rafi KC, 2015-08-17; 3 files, -1/+76)

  The tier daemon should always run with a tier volume. If the volume is stopped and started again, the tier daemon currently has to be started manually; with this patch the tier process is triggered automatically along with volume start.

  A snapshot-restored volume will not have node_state_info, so we need to create and store it dynamically.

  Change-Id: I659387c914bec7a1b6929ee5cb61f7b406402075
  BUG: 1238593
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Signed-off-by: Dan Lambright <dlambrig@redhat.com>
  Reviewed-on: http://review.gluster.org/11525
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>

* rpc: add owner xlator argument to rpc_clnt_new  (Krishnan Parthasarathi, 2015-08-12; 2 files, -2/+2)

  The @owner argument tells the RPC layer the xlator that owns the connection and to which xlator THIS needs to be set during network notifications like CONNECT and DISCONNECT.

  Code paths that originate from the head of a (volume) graph and use STACK_WIND ensure that the RPC local endpoint has the right xlator saved in the frame of the call (callback pair). This guarantees that the callback is executed in the right xlator context.

  The client handshake process, which includes fetching brick ports from glusterd and setting lk-version on the brick for the session, doesn't have the correct xlator set in its frames. The problem lies with RPC notifications: they don't have the provision to set THIS with the xlator that is registered with the corresponding RPC programs. e.g, the RPC_CLNT_CONNECT event received by protocol/client doesn't have THIS set to its xlator. This implies that call(-callbacks) originating from this thread don't have the right xlator set either.

  The fix is to save the xlator registered with the RPC connection during rpc_clnt_new. e.g, protocol/client's xlator would be saved with the RPC connection that it 'owns'. RPC notifications such as CONNECT, DISCONNECT, etc. inherit THIS from the RPC connection's xlator.

  Change-Id: I9dea2c35378c511d800ef58f7fa2ea5552f2c409
  BUG: 1235582
  Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-on: http://review.gluster.org/11436
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>

* glusterd: log improvement in glusterd_peer_rpc_notify  (Atin Mukherjee, 2015-08-11; 1 file, -4/+9)

  If ping timeout is enabled, glusterd can receive a disconnect event from a peer which has already been deleted, resulting in a critical log message being printed. This patch ensures that the critical message is logged only when it is a connect event.

  Change-Id: I67d9aa3f60195e08af7dfc8a42683422aaf90a00
  BUG: 1212437
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/10272
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>

* glusterd/rebalance: trusted rebalance volfile  (N Balachandran, 2015-08-11; 1 file, -9/+11)

  Creating the client volfiles with GF_CLIENT_OTHER overwrites the trusted rebalance volfile and causes rebalance to fail if auth.allow is set. Now, we always set the value of trusted-client to GF_CLIENT_TRUSTED for rebalance volfiles.

  Change-Id: I95eb510256d18dfa9048f96a1aeb71cca4811811
  BUG: 1248415
  Signed-off-by: N Balachandran <nbalacha@redhat.com>
  Reviewed-on: http://review.gluster.org/11819
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* Set nfs.disable to "on" when global NFS-Ganesha key is enabledMeghana M2015-08-101-0/+24
| | | | | | | | | | | | | | | | | | "nfs.disable" gets set to "on" for all the existing volumes, when the command "gluster nfs-ganesha enable" is executed. When a new volume is created,it gets exported via Gluster-NFS on the nodes outside the NFS-Ganesha. To fix this, the "nfs.disable" key is set to "on" before starting the volume, whenever the global option is set to "enable". Change-Id: I7ce58928c36eadb8c122cded5bdcea271a0a4ffa BUG: 1251857 Signed-off-by: Meghana M <mmadhusu@redhat.com> Reviewed-on: http://review.gluster.org/11871 Reviewed-by: jiffin tony Thottan <jthottan@redhat.com> Tested-by: NetBSD Build System <jenkins@build.gluster.org> Tested-by: Gluster Build System <jenkins@build.gluster.com> Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
* quota: volume-reset shouldn't remove quota-deem-statfs  (Manikandan Selvaganesh, 2015-08-07; 4 files, -2/+56)

  Volume-reset shouldn't remove quota-deem-statfs, unless explicitly specified, when quota is enabled.

  1) glusterd_op_stage_reset_volume(): 'gluster volume set/reset <VOLNAME> features.quota/features.inode-quota' should not be allowed as it is deprecated. Setting and resetting quota/inode-quota features should be allowed only through 'gluster volume quota <VOLNAME> enable/disable'.

  2) glusterd_enable_default_options(): the option 'features.quota-deem-statfs' should not be turned off with 'gluster volume reset <VOLNAME>', since quota features can be set/reset only with 'gluster volume quota <VOLNAME> enable/disable'. But 'gluster volume set features.quota-deem-statfs' can be turned on/off when quota is enabled.

  Change-Id: Ib5aa00a4d8c82819c08dfc23e2a86f43ebc436c4
  BUG: 1250582
  Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
  Reviewed-on: http://review.gluster.org/11839
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* glusterd: Stop/restart/notify daemons (svcs) during reset/set on a volume  (anand, 2015-08-06; 14 files, -171/+420)

  Problem: reset/set commands were not working properly. The reset command returns success but does not send a notification to the svcs if the corresponding graph was modified.

  Fix: whenever a reset/set command is issued, generate a temp graph, compare it with the original graph, and take the following actions:
  1) If both graphs are identical, nothing needs to be done with the svcs.
  2) If there are changes in the graph topology, restart/stop the service by calling the svc manager.
  3) If there are changes in options, send a notify signal by calling glusterd_fetchspec_notify.

  Change-Id: I852c4602eafed1ae6e6a02424814fe3a83e3d4c7
  BUG: 1209329
  Signed-off-by: anand <anekkunt@redhat.com>
  Reviewed-on: http://review.gluster.org/10850
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* SSL improvements: ECDH, DH, CRL, and accessible options  (Emmanuel Dreyfus, 2015-08-05; 3 files, -77/+90)

  - Introduce the ssl.dh-param option to specify a file containing DH parameters. If it is provided, EDH ciphers are available.
  - Introduce the ssl.ec-curve option to specify an elliptic curve name. If unspecified, ECDH ciphers are available using the prime256v1 curve.
  - Introduce the ssl.crl-path option to specify the directory where the CRL hash file can be found. Setting it to NULL disables CRL checking, just like the default.
  - Make all ssl.* options accessible through gluster volume set.
  - In the default cipher list, exclude weak ciphers instead of listing the strong ones.
  - Enforce server cipher preference.
  - Introduce the RPC_SET_OPT macro to factor repetitive code in glusterd-volgen.c.
  - Add the ssl-ciphers.t test to check all the features touched by this change.

  Change-Id: I7bfd433df6bbf176f4a58e770e06bcdbe22a101a
  BUG: 1247152
  Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
  Reviewed-on: http://review.gluster.org/11735
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>

* glusterd: fix op-version bump up flow  (Atin Mukherjee, 2015-08-03; 1 file, -10/+16)

  If a cluster is upgraded from 3.5 to the latest version, 'gluster volume set all cluster.op-version <VERSION>' will throw an error back to the user saying unlocking failed. This is because the unlock phase tries to release a volume-wise lock while the lock was taken cluster-wide. The problem surfaced because the op-version is updated in the commit phase and unlocking then works in the v3 framework where it should have used cluster unlock. The fix is to decide which lock/unlock is to be followed before invoking the lock phase.

  Change-Id: Iefb271a058431fe336a493c24d240ed833f279c5
  BUG: 1248298
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/11798
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>

* glusterd: Do not log failure if glusterd_get_txn_opinfo fails in gluster volume status  (Atin Mukherjee, 2015-08-02; 2 files, -15/+15)

  The first RPC call of gluster volume status fetches the list of volume names from GlusterD. At that point no volume name is set in the dictionary, so glusterd_get_txn_opinfo fails, resulting in a failure log which is annoying to the user considering this command is triggered frequently. The fix is to have callers log it depending on the need.

  Change-Id: Ib60a56725208182175513c505c61bcb28148b2d0
  BUG: 1238936
  Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
  Reviewed-on: http://review.gluster.org/11520
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Kaushal M <kaushal@redhat.com>

* rebalance/glusterd: Refactor rebalance volfile  (Mohammed Rafi KC, 2015-07-30; 1 file, -24/+74)

  The performance xlators loaded into the rebalance graph are dummy translators, since all fops start at the dht level. Removing the performance xlators from the rebalance volfile helps to minimize the chance of a graph switch. The new rebalance graph will look like:

               (io-stats)
                   ||
                   ||
              (----DHT----)
              //         \\
             //           \\
        (replica-1) ... (replica-n)
         //    \\         //    \\
        //      \\       //      \\
      client   client  client   client

  Change-Id: I3808e3b48fd0cb3e60ef386b8ac9fd994e2831e3
  BUG: 1240621
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/11565
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

* tiering: Error message change for detach-tier on a non-tier volume  (Hari Gowtham, 2015-07-29; 1 file, -5/+13)

  Change-Id: Ib350b201df14b105e475426d2ec20ff5da39a8a1
  BUG: 1245935
  Signed-off-by: Hari Gowtham <hgowtham@redhat.com>
  Reviewed-on: http://review.gluster.org/11745
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>