* quota/marker: dir_count accounting is not atomic (vmallika, 2015-10-12, 2 files changed, -72/+148)

  Consider the following scenario: quota is enabled on pre-existing data, so the quota-crawl process starts healing xattrs. If a write is performed while healing is not yet complete, the 'update txn' may start before the 'create xattr txn'; in that case the dir count can be missed on a directory where the quota size xattr has not yet been created.

  One solution is to get the size xattr and, if the xattr is missing, add 1 to dir_count; done in marker, this requires one additional fop during each update iteration. The better solution is to use the xattrop GF_XATTROP_ADD_ARRAY64_WITH_DEFAULT.

  Change-Id: Idc8978860a3914e70c98f96effeff52e9a24e6ba
  BUG: 1243798
  Signed-off-by: vmallika <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/11694
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
* tier/shd: create shd volfile for tiering (Mohammed Rafi KC, 2015-10-11, 3 files changed, -20/+262)

  Currently the shd graph starts only if the volume is a replicate or disperse volume. In the case of tiering, however, the volume type is tier, so we need to start shd if either the cold or the hot tier is compatible with an shd volume.

  Change-Id: Ic689746ac7d2fc6a9eccdabd8518dc9139829de2
  BUG: 1261276
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/11962
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* tier/ctr: CTR DB named lookup heal of cold tier during attach tier (Joseph Fernandes, 2015-10-10, 5 files changed, -6/+257)

  Heal hardlinks in the db for data already existing in the cold tier during attach-tier, i.e. during fix-layout do a lookup on files in the cold tier. The CTR xlator on the brick/server side does a db update/insert of the hardlink on a named lookup.

  Currently the named lookup is done synchronously with the fix-layout that is triggered by attach-tier. This is not performant, as it adds more time to fix-layout. The more performant approach is to record the hardlinks in a compressed datastore and then do the named lookup asynchronously later, giving the CTR db eventual consistency.

  Change-Id: I4ffc337fffe7d447804786851a9183a51b5044a9
  BUG: 1252586
  Signed-off-by: Joseph Fernandes <josferna@redhat.com>
  Reviewed-on: http://review.gluster.org/11828
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
* cluster/tier: add watermarks and policy driver (Dan Lambright, 2015-10-10, 7 files changed, -113/+589)

  This fix introduces infrastructure to support different policies for promotion and demotion.

  Currently the tier feature automatically promotes and demotes files periodically based on access. This is good for testing but too stringent for most real workloads: it makes it difficult to fully utilize a hot tier, since data will be demoted before it is touched, and it is unlikely that a 100GB hot SSD will have all its data touched within a window of time.

  A new parameter "mode" allows the user to pick promotion/demotion policies. The "test" mode will be used for *.t and other general testing; this is the current mechanism. The "cache" mode introduces watermarks. The watermarks represent levels of data residing on the hot tier.

  "cache" mode policy: the percentage the hot tier is full is called P. Do not promote or demote more than D MB or F files. A random number in [0-100] is called R.

  Rules for migration (a sketch of the decision follows below):
    if (P < watermark_low) don't demote, always promote.
    if (P >= watermark_low) && (P < watermark_hi) demote if R < P; promote if R > P.
    if (P > watermark_hi) always demote, don't promote.

  gluster volume set {vol} cluster.watermark-hi %
  gluster volume set {vol} cluster.watermark-low %
  gluster volume set {vol} cluster.tier-max-mb {D}
  gluster volume set {vol} cluster.tier-max-files {F}
  gluster volume set {vol} cluster.tier-mode {test|cache}

  Change-Id: I157f19667ec95aa1d53406041c1e3b073be127c2
  BUG: 1257911
  Signed-off-by: Dan Lambright <dlambrig@redhat.com>
  Reviewed-on: http://review.gluster.org/12039
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
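  A minimal C sketch of the cache-mode decision described above (a standalone illustration with hypothetical names, not the actual tier migration code):

      #include <stdlib.h>

      /* Decide whether to promote or demote one candidate file, given the
       * hot-tier fill percentage P and the configured watermarks.
       * R is a random number in [0, 100]. */
      typedef enum { MIGRATE_NONE, MIGRATE_PROMOTE, MIGRATE_DEMOTE } migrate_t;

      static migrate_t
      tier_decide(int p, int watermark_low, int watermark_hi, int is_hot_file)
      {
              int r = rand() % 101;

              if (p < watermark_low)          /* hot tier mostly empty */
                      return is_hot_file ? MIGRATE_NONE : MIGRATE_PROMOTE;
              if (p > watermark_hi)           /* hot tier nearly full */
                      return is_hot_file ? MIGRATE_DEMOTE : MIGRATE_NONE;
              /* in between: probabilistic, biased by how full the tier is */
              if (is_hot_file)
                      return (r < p) ? MIGRATE_DEMOTE : MIGRATE_NONE;
              return (r > p) ? MIGRATE_PROMOTE : MIGRATE_NONE;
      }

  The D MB and F file ceilings (cluster.tier-max-mb, cluster.tier-max-files) would be enforced by the caller driving the migration loop.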
* Porting developer guide to source code repo from glusterdocs project (Humble Devassy Chirammal, 2015-10-10, 30 files changed, -12/+3013)

  Change-Id: Ib8d9c668ebb05863918e6ec2b89908f206626f38
  BUG: 1206539
  Signed-off-by: Humble Devassy Chirammal <hchiramm@redhat.com>
  Reviewed-on: http://review.gluster.org/12227
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Prashanth Pai <ppai@redhat.com>
  Reviewed-by: Humble Devassy Chirammal <humble.devassy@gmail.com>
  Tested-by: Humble Devassy Chirammal <humble.devassy@gmail.com>
  Tested-by: Raghavendra Talur <rtalur@redhat.com>
* cluster/tier: fix transport endpoint not connected in tier.t (rare) (Dan Lambright, 2015-10-09, 1 file changed, -0/+4)

  The script did not cleanly unmount/mount gluster and change the current working directory when stopping and starting the volume. Most of the time this problem would self-resolve before subsequent tests, but very occasionally races would lead to the errors/failures.

  Change-Id: I128b913a71e2745512ee81c3d71852311e3b4a1b
  BUG: 1270328
  Signed-off-by: Dan Lambright <dlambrig@redhat.com>
  Reviewed-on: http://review.gluster.org/12327
  Reviewed-by: Joseph Fernandes
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
* glusterfsd: Initialize ctx, cmd_args (Pranith Kumar K, 2015-10-09, 1 file changed, -3/+3)

  Change-Id: I9c71ae264665b7bba609c7f86cf42a52a6b47260
  BUG: 1269696
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/12311
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* cluster/ec: Implement gfid-hash read-policy (Pranith Kumar K, 2015-10-09, 6 files changed, -11/+137)

  Add a policy in ec to perform reads from the same bricks as long as they are good. Based on the gfid of the file/directory, it determines the bricks to be considered for reading.

  Change-Id: Ic97b5c54c086a28b5e07a330a4fd448551b49376
  BUG: 1261260
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/12133
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
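  A rough illustration of the idea (hypothetical helper, not the actual ec implementation): hash the 16-byte gfid to pick a stable starting brick, so the same file is always read from the same set of good bricks.

      #include <stdint.h>

      /* Pick a deterministic starting brick for reads from a 16-byte gfid.
       * As long as that brick is healthy, every read of this file goes to
       * the same brick, improving server-side cache locality. */
      static int
      gfid_read_subvol(const uint8_t gfid[16], int num_bricks)
      {
              uint32_t hash = 0;
              for (int i = 0; i < 16; i++)
                      hash = hash * 31 + gfid[i];
              return (int)(hash % (uint32_t)num_bricks);
      }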
* gfapi: xattr key length check to avoid brick crash (Milind Changire, 2015-10-09, 3 files changed, -0/+86)

  Added a check to test whether the xattr key length exceeds the maximum allowed for the OS distribution, and return:
    EINVAL if the xattr name pointer is NULL or of 0 length
    ENAMETOOLONG if the xattr name length exceeds the maximum allowed for the distribution

  Typically the VFS does this for us in the kernel. But since we bypass the VFS by using libgfapi to talk directly to the brick process, we need to add such checks.

  Change-Id: I610a8440871200ae4640351902b752777a3ec0c2
  BUG: 1263056
  Signed-off-by: Milind Changire <mchangir@redhat.com>
  Reviewed-on: http://review.gluster.org/12207
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
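  A minimal sketch of such a validation, assuming the Linux limit XATTR_NAME_MAX (255); the gfapi code would use its own platform-dependent constant:

      #include <errno.h>
      #include <string.h>
      #include <linux/limits.h>   /* XATTR_NAME_MAX */

      /* Return 0 if the xattr name is acceptable, otherwise a negative errno
       * mirroring what the kernel VFS would have returned. */
      static int
      validate_xattr_name(const char *name)
      {
              if (!name || !*name)
                      return -EINVAL;
              if (strlen(name) > XATTR_NAME_MAX)
                      return -ENAMETOOLONG;
              return 0;
      }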
* features/shard: Regulate memory consumption by individual shards' inode_t objects (Krutika Dhananjay, 2015-10-08, 2 files changed, -18/+141)

  Shard translator will now maintain an lru list, of constant size, of inodes associated with individual shards, and will make sure that at no point does the number of these inodes exceed the configured limit. This is to keep the memory consumption by the thousands of shards of every large file from exploding.

  Change-Id: I5e60eea5dcf3130257fb431ca70cfaba53cae7f3
  BUG: 1252263
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-on: http://review.gluster.org/12254
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
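  A simplified sketch of a bounded LRU of inode references (hypothetical types; the shard translator builds its list on gluster's own inode and list primitives):

      #include <stddef.h>
      #include <stdlib.h>

      struct lru_entry {
              struct lru_entry *prev, *next;  /* doubly linked, head = most recent */
              void             *inode;        /* reference held on the shard inode */
      };

      struct lru_list {
              struct lru_entry *head, *tail;
              size_t            count;
              size_t            limit;        /* configured ceiling */
      };

      /* Record access to a shard inode (entry zero-initialized on first use);
       * evict the least recently used entry once the ceiling is crossed. */
      static void
      lru_touch(struct lru_list *lru, struct lru_entry *e)
      {
              if (e->prev || e->next || lru->head == e) {        /* unlink if listed */
                      if (e->prev) e->prev->next = e->next; else lru->head = e->next;
                      if (e->next) e->next->prev = e->prev; else lru->tail = e->prev;
                      lru->count--;
              }
              e->prev = NULL;                                    /* push to the front */
              e->next = lru->head;
              if (lru->head) lru->head->prev = e; else lru->tail = e;
              lru->head = e;
              lru->count++;

              if (lru->count > lru->limit && lru->tail && lru->tail != e) {
                      struct lru_entry *victim = lru->tail;      /* evict the oldest */
                      lru->tail = victim->prev;
                      if (lru->tail) lru->tail->next = NULL; else lru->head = NULL;
                      lru->count--;
                      /* an inode_unref()/inode_forget() on victim->inode would go here */
                      free(victim);
              }
      }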
* tiering/glusterd: keep afr/ec xlators name constant (Mohammed Rafi KC, 2015-10-08, 3 files changed, -30/+112)

  afr uses the translator name for locking purposes, so it is mandatory to keep the afr/ec xlator names constant across graph changes. Currently, when a tier is attached, afr names are appended with either hot or cold, which breaks the above-mentioned constraint.

  Change-Id: I3699dcdaa8190bab3ba81cbc01e8fa126d37ba0d
  BUG: 1261276
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/12134
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
* feature/quota: Make message-id for quota start from 120000 (Susant Palai, 2015-10-08, 1 file changed, -23/+23)

  Change-Id: I2076fcab51f4ecc529dffd89ca6ee9eb99d80f09
  BUG: 1265531
  Signed-off-by: Susant Palai <spalai@redhat.com>
  Reviewed-on: http://review.gluster.org/12218
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Manikandan Selvaganesh <mselvaga@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
* quota: fix crash in quota_fallocate (vmallika, 2015-10-08, 1 file changed, -0/+2)

  The list head was not initialized and the brick was crashing on fallocate. This patch fixes the issue.

  Change-Id: I9757b88eab61054892f0fe3de63af2683cd4fef7
  BUG: 1269754
  Signed-off-by: vmallika <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/12314
  Reviewed-by: Manikandan Selvaganesh <mselvaga@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
* tier/ctr: Solution for db locks for tier migrator and ctr using sqlite versions older than 3.7, i.e. RHEL 6.7 (Joseph Fernandes, 2015-10-08, 10 files changed, -141/+999)

  Problem: On RHEL 6.7 we have sqlite version 3.6.2, which doesn't support WAL journaling mode; that journaling mode is only available in sqlite 3.7 and above. As a result we cannot have two processes concurrently accessing sqlite without running into db locks. WAL is also needed for performance on the CTR side.

  Solution: Use the CTR db connection for doing queries when WAL mode is absent, i.e. the tier migrator sends sync_op ipc calls to CTR, which in turn does the query and creates/updates the query file suggested by the tier migrator.

  Pending: This solution stops the db locks, but performance is still an issue for CTR. We are developing an in-memory transaction log (iMeTaL) which will help boost CTR performance by doing in-memory updates on the IO path and later flushing the updates to the db in a batch/segment flush.

  Change-Id: Ie3149643ded159234b5cc6aa6cf93b9022c2f124
  BUG: 1240577
  Signed-off-by: Joseph Fernandes <josferna@redhat.com>
  Signed-off-by: Dan Lambright <dlambrig@redhat.com>
  Signed-off-by: Joseph Fernandes <josferna@redhat.com>
  Reviewed-on: http://review.gluster.org/12191
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Luis Pabon <lpabon@redhat.com>
* gluster v status --xml for a replicated hot tier volume (hari gowtham, 2015-10-08, 1 file changed, -11/+12)

  Sample output:

  <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
  <cliOutput>
    <opRet>0</opRet>
    <opErrno>0</opErrno>
    <opErrstr/>
    <volStatus>
      <volumes>
        <volume>
          <volName>tiervol</volName>
          <nodeCount>11</nodeCount>
          <hotBricks>
            <node>
              <hostname>10.70.42.203</hostname>
              <path>/data/gluster/tier/b5_2</path>
              <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
              <status>1</status>
              <port>49164</port>
              <ports>
                <tcp>49164</tcp>
                <rdma>N/A</rdma>
              </ports>
              <pid>8684</pid>
            </node>
            <node>
              <hostname>10.70.42.203</hostname>
              <path>/data/gluster/tier/b5_1</path>
              <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
              <status>1</status>
              <port>49163</port>
              <ports>
                <tcp>49163</tcp>
                <rdma>N/A</rdma>
              </ports>
              <pid>8687</pid>
            </node>
            <node>
              <hostname>10.70.42.203</hostname>
              <path>/data/gluster/tier/b4_2</path>
              <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
              <status>1</status>
              <port>49162</port>
              <ports>
                <tcp>49162</tcp>
                <rdma>N/A</rdma>
              </ports>
              <pid>8699</pid>
            </node>
            <node>
              <hostname>10.70.42.203</hostname>
              <path>/data/gluster/tier/b4_1</path>
              <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
              <status>1</status>
              <port>49161</port>
              <ports>
                <tcp>49161</tcp>
                <rdma>N/A</rdma>
              </ports>
              <pid>8708</pid>
            </node>
          </hotBricks>
          <coldBricks>
            <node>
              <hostname>10.70.42.203</hostname>
              <path>/data/gluster/tier/b1_1</path>
              <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
              <status>1</status>
              <port>49155</port>
              <ports>
                <tcp>49155</tcp>
                <rdma>N/A</rdma>
              </ports>
              <pid>8716</pid>
            </node>
            <node>
              <hostname>10.70.42.203</hostname>
              <path>/data/gluster/tier/b1_2</path>
              <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
              <status>1</status>
              <port>49156</port>
              <ports>
                <tcp>49156</tcp>
                <rdma>N/A</rdma>
              </ports>
              <pid>8724</pid>
            </node>
            <node>
              <hostname>NFS Server</hostname>
              <path>localhost</path>
              <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
              <status>1</status>
              <port>2049</port>
              <ports>
                <tcp>2049</tcp>
                <rdma>N/A</rdma>
              </ports>
              <pid>8678</pid>
            </node>
          </coldBricks>
          <tasks>
            <task>
              <type>Tier migration</type>
              <id>975bfcfa-077c-4edb-beba-409c2013f637</id>
              <status>1</status>
              <statusStr>in progress</statusStr>
            </task>
          </tasks>
        </volume>
      </volumes>
    </volStatus>
  </cliOutput>

  Change-Id: I69252a36b6e6b2f3cbe5db06e9a716f504a1dba4
  BUG: 1268810
  Signed-off-by: hari gowtham <hgowtham@redhat.com>
  Reviewed-on: http://review.gluster.org/12302
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
* fuse: resolve complete path after a graph switch (Mohammed Rafi KC, 2015-10-08, 4 files changed, -19/+156)

  If a graph switch has happened as part of an attach-tier, there is a chance that fops are hashed to the newly added brick before fix-layout. This causes ongoing I/O to fail. This patch resolves a path after a graph switch by sending recursive lookups on the parent directories; those lookups help to heal the directories.

  Change-Id: Ia2bb4b43a21e5cc6875ba1205628744c3f0ce4e5
  BUG: 1263549
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/12184
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
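  A standalone sketch of the ancestor-walk idea (illustrative only; the fuse resolver works on inodes and gfid handles rather than string paths): issue a lookup on each parent component from the root down, so each directory is healed on the new graph before the original fop is retried.

      #include <stdio.h>
      #include <string.h>

      /* Print every ancestor prefix of an absolute path in root-to-leaf order;
       * a real resolver would send a LOOKUP for each prefix. */
      static void
      lookup_ancestors(const char *path)
      {
              char prefix[4096];
              const char *p = path;

              while ((p = strchr(p + 1, '/')) != NULL) {
                      snprintf(prefix, sizeof(prefix), "%.*s", (int)(p - path), path);
                      printf("LOOKUP %s\n", prefix);   /* heal this directory */
              }
              printf("LOOKUP %s\n", path);             /* finally the target itself */
      }

  For "/a/b/c" this sends lookups on "/a", "/a/b" and "/a/b/c" in order.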
* xlators: add JSON FOP statistics dumps every N seconds (Richard Wareing, 2015-10-08, 13 files changed, -109/+581)

  Summary:
  - Adds a thread to the io-stats translator which dumps out statistics every N seconds, where N is configurable by an option called "diagnostics.stats-dump-interval"
  - Thread cleanly starts/stops when the translator is unloaded
  - Updates macros to use atomic builtins (e.g. Intel CPU extensions), which use memory barriers to update counters instead of locks. This should reduce overhead and prevent deadlock bugs due to lock contention.

  Test Plan:
  - Test on a development machine
  - Run prove -v tests/basic/stats-dump.t

  Change-Id: If071239d8fdc185e4e8fd527363cc042447a245d
  BUG: 1266476
  Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
  Reviewed-on: http://review.gluster.org/12209
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Avra Sengupta <asengupt@redhat.com>
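  A minimal sketch of the two mechanisms described above, assuming GCC atomic builtins and pthreads (hypothetical names; the io-stats translator has its own counter macros and option handling):

      #include <pthread.h>
      #include <stdint.h>
      #include <stdio.h>
      #include <unistd.h>

      static uint64_t fop_count;          /* updated lock-free on the fop path */
      static volatile int dump_running = 1;
      static int dump_interval = 5;       /* "diagnostics.stats-dump-interval" */

      /* On each fop, bump the counter with an atomic builtin instead of a lock. */
      static void fop_hit(void) { __sync_fetch_and_add(&fop_count, 1); }

      /* Dump thread: wake up every N seconds and emit the counters as JSON. */
      static void *
      stats_dump_thread(void *arg)
      {
              (void)arg;
              while (dump_running) {
                      sleep(dump_interval);
                      uint64_t c = __sync_fetch_and_add(&fop_count, 0); /* atomic read */
                      printf("{\"fop_count\": %llu}\n", (unsigned long long)c);
              }
              return NULL;
      }

  pthread_create() would start the thread in init(), and fini() would clear dump_running and join the thread so it stops cleanly when the translator is unloaded.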
* cluster/afr: Handle stack reset failures (Pranith Kumar K, 2015-10-07, 2 files changed, -0/+8)

  When all the bricks go down in the middle of a self-heal, afr_local_init in AFR_STACK_RESET will fail because all the bricks are down, so 'local' will remain NULL for the frame. This leads to crashes, as this failure is not handled in either entry or data self-heals.

  Change-Id: I71a02f161f2c4dbfdc8bb7f2a6f32807191ed253
  BUG: 1269470
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/12309
  Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
* glusterd/add-brick: change add-brick implementation to v3 framework (Mohammed Rafi KC, 2015-10-07, 2 files changed, -17/+134)

  The add-brick commit happens first on the local node, followed by the peers. As part of the commit on the local host, glusterd sends the updated volfiles to the clients connected to the local host even before the commit on the peers happens. If any of the newly added bricks is hosted by a peer, that brick won't yet be started when a client (connected to the local host) tries to send fops.

  By changing to the v3 framework we can send post-validate ops after the commit operation, which lets the volfile fetch request be sent only after the commits on all nodes have completed.

  Change-Id: Ib7312e01143326128c010c11fc2ed206f37409ad
  BUG: 1263549
  Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
  Reviewed-on: http://review.gluster.org/12237
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* quota: use copy_frame when creating new frame during quota_check_limit (vmallika, 2015-10-06, 2 files changed, -3/+3)

  DHT rebalance sets the frame root PID to a value less than 0, and quota_check_limit skips enforcement if this PID is less than 0. When creating a new frame for quota_check_limit we need to use copy_frame instead of create_frame, so that all auth information is copied from the original frame.

  Change-Id: Ib3b4a3744f8b0d72a8bc32826f6edae836d6faed
  BUG: 1267812
  Signed-off-by: vmallika <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/12265
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
* Tier/cli: number of bricks remains the same in v info --xml (hari gowtham, 2015-10-06, 1 file changed, -1/+1)

  The number-of-bricks count remains one for the cold type.

  Actual result:
    <numberOfBricks>1 x 2 = 2</numberOfBricks>
  Expected result:
    <numberOfBricks>3 x 2 = 6</numberOfBricks>

  Change-Id: I31480a7808b248ef9ea805cb64f7663d44647ddf
  BUG: 1268822
  Signed-off-by: hari gowtham <hgowtham@redhat.com>
  Reviewed-on: http://review.gluster.org/12303
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* dht/rebalance: fix layout and dict leaks (Susant Palai, 2015-10-06, 2 files changed, -0/+11)

  Change-Id: Ib3911dfa1f950ff9decbe249ad798e97226dd06d
  BUG: 1266877
  Signed-off-by: Susant Palai <spalai@redhat.com>
  Reviewed-on: http://review.gluster.org/12295
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  Tested-by: Raghavendra G <rgowdapp@redhat.com>
* cluster/ec: Mark new entry changelog in entry self-heal (Ashish Pandey, 2015-10-06, 5 files changed, -8/+148)

  Problem: When a new entry is created, dirty-mark xattrs are not created, so a full heal has to be performed even when there are only partial failures.

  Solution: Mark the new-entry changelog in self-heal.

  PS: Also fixed the erasing of dirty markers when no data heal is required.

  BUG: 1254121
  Signed-off-by: Ashish Pandey <aspandey@redhat.com>
  Change-Id: I156e3d3201afa77efe118e1aaace1d91c90a9613
  Reviewed-on: http://review.gluster.org/11938
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
* features/shard: Use the xattr rsp dict to pick shard xattrs in xattrop cbk (Krutika Dhananjay, 2015-10-05, 2 files changed, -2/+1)

  The change http://review.gluster.org/#/c/11938/ makes a fix in the posix translator which would cause sharding to fail fops after xattrop without this patch.

  Change-Id: If096965b319f393608b0f763402b9b90acb61492
  BUG: 1268796
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-on: http://review.gluster.org/12300
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* tier/cli: throw a warning when the user issues a detach-tier commit/force command (Manikandan Selvaganesh, 2015-10-05, 3 files changed, -3/+6)

  Change-Id: Idf7664d509156ce46ef4308ffc07fb556a0aedd2
  BUG: 1268755
  Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
  Reviewed-on: http://review.gluster.org/12297
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
* tests: Move tests/bugs/shard/bug-1245547.t to bad tests list (Krutika Dhananjay, 2015-10-05, 1 file changed, -0/+1)

  Change-Id: I389f88cefdeee87b99dcacbac48d2dcc70a97979
  BUG: 1268796
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-on: http://review.gluster.org/12299
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* tests: Adding bug-1221481-allow-fops-on-dir-split-brain.t to bad tests (Anuradha Talur, 2015-10-05, 1 file changed, -0/+1)

  Adding bug-1221481-allow-fops-on-dir-split-brain.t to the bad tests list as it is failing spuriously. It will be removed after the failure is root-caused and fixed.

  Change-Id: I26b634f01dfa2c60eed21a1286aa83ecaa75fa26
  BUG: 1268790
  Signed-off-by: Anuradha Talur <atalur@redhat.com>
  Reviewed-on: http://review.gluster.org/12298
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* server/protocol: option for dynamic authorization of client permissions (Prasanna Kumar Kalever, 2015-10-04, 6 files changed, -4/+65)

  Problem: Assume a gluster volume is already mounted (for gfapi, say the client transport connection is already established). If somebody now changes the volume permissions, say *.allow | *.reject, for a client, gluster should immediately allow or terminate the client connection based on the fresh set of volume options. In the existing scenario we neither have an option to set this behaviour nor take any action unless the volume is remounted manually.

  Solution: Introduce a 'dynamic-auth' option (default: on). If 'dynamic-auth' is 'on', gluster performs dynamic authentication to allow or terminate the client transport connection immediately in response to *.allow | *.reject volume-set options. Thus, if the volume permissions have changed for a particular client (say the client is added to the auth.reject list), its transport connection to the gluster volume is terminated immediately.

  Change-Id: I6243a6db41bf1e0babbf050a8e4f8620732e00d8
  BUG: 1245380
  Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
  Reviewed-on: http://review.gluster.org/12229
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
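  A rough sketch of the reconfigure-time behaviour described above (hypothetical names and types; the server translator operates on its own client and transport structures):

      /* On a volume-set of *.allow / *.reject, re-run authentication for every
       * connected client and drop the ones that no longer pass, but only when
       * dynamic-auth is enabled. */
      struct client { const char *id; void *transport; };

      static void
      reauthorize_clients(struct client *clients, int nclients, int dynamic_auth,
                          int (*auth_check)(struct client *),
                          void (*disconnect)(struct client *))
      {
              if (!dynamic_auth)
                      return;                        /* old behaviour: nothing until remount */
              for (int i = 0; i < nclients; i++) {
                      if (!auth_check(&clients[i]))
                              disconnect(&clients[i]);   /* e.g. now in auth.reject */
              }
      }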
* rpc: Remove unused function (Anoop C S, 2015-10-01, 1 file changed, -8/+0)

  Change-Id: I0b96b83ad8d06de9b2f5fc14073b94777885a775
  BUG: 1261927
  Signed-off-by: Anoop C S <anoopcs@redhat.com>
  Reviewed-on: http://review.gluster.org/12153
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
* glusterd: validate function for replica volume options (Sakshi, 2015-10-01, 2 files changed, -12/+109)

  Change-Id: I5b4a28db101e9f7e07f4b388c7a2594051c9e8dd
  BUG: 1265479
  Signed-off-by: Sakshi <sabansal@redhat.com>
  Reviewed-on: http://review.gluster.org/12215
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* cli/tier: fixes cli crash when user tries "gluster v tier" command (Manikandan Selvaganesh, 2015-10-01, 1 file changed, -1/+1)

  Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
  Change-Id: I919d8935c849f9be6b2cb43e8332afb821778d89
  BUG: 1267539
  Reviewed-on: http://review.gluster.org/12258
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
* dht/rebalance: fix mem-leak in migration code path (Susant Palai, 2015-10-01, 1 file changed, -5/+21)

  Change-Id: I37faf983fc02996541f3d96a17cb2a2c2cdb6781
  BUG: 1266877
  Signed-off-by: Susant Palai <spalai@redhat.com>
  Reviewed-on: http://review.gluster.org/12235
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  Tested-by: Raghavendra G <rgowdapp@redhat.com>
* storage/posix: Reduce number of getxattrs for internal xattrs (Pranith Kumar K, 2015-10-01, 1 file changed, -4/+53)

  Most of the gluster internal xattrs don't exceed 256 bytes, so try getxattr with a ~256-byte buffer first. If it returns ERANGE, fall back to the old way: getxattr with a NULL 'buf' to find the length, then getxattr with an allocated 'buf' to fill the data. This way we avoid a lot of getxattr calls.

  Change-Id: I716d484bc9ba67a81d0cedb5ee3e72a5ba661f6d
  BUG: 1265893
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/12240
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: N Balachandran <nbalacha@redhat.com>
  Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
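  A minimal sketch of that pattern using the Linux lgetxattr(2) call (illustrative; the posix translator wraps this in its own sys_lgetxattr helpers):

      #include <errno.h>
      #include <stdlib.h>
      #include <string.h>
      #include <sys/xattr.h>

      /* Fetch an xattr, trying a small stack buffer first and falling back to
       * the size-probe + malloc path only when the value is larger than 256
       * bytes. Returns a malloc'd copy (caller frees) or NULL on error. */
      static char *
      get_xattr_fast(const char *path, const char *key, ssize_t *len)
      {
              char small[256];
              char *val = NULL;
              ssize_t ret = lgetxattr(path, key, small, sizeof(small));

              if (ret < 0 && errno != ERANGE)
                      return NULL;                         /* real error */

              if (ret < 0) {                               /* value larger than 256 bytes */
                      ret = lgetxattr(path, key, NULL, 0); /* probe the real size */
                      if (ret < 0)
                              return NULL;
                      val = malloc(ret + 1);
                      if (!val || lgetxattr(path, key, val, ret) < 0) {
                              free(val);
                              return NULL;
                      }
              } else {                                     /* common case: one syscall */
                      val = malloc(ret + 1);
                      if (!val)
                              return NULL;
                      memcpy(val, small, ret);
              }
              val[ret] = '\0';
              *len = ret;
              return val;
      }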
* glusterd, dht: volume set for use-readdirp in dht (Pranith Kumar K, 2015-10-01, 4 files changed, -1/+51)

  Change-Id: Icab246b1d02808864d878d949fa56f9f889b538a
  BUG: 1265677
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/12221
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* quota: xml output modified to give exact available space in bytes (Manikandan Selvaganesh, 2015-09-30, 3 files changed, -36/+36)

  Currently, the 'gluster v quota <VOLNAME> list' command rounds off the available space and shows it to the user. Now, the 'gluster v quota <VOLNAME> list --xml' command is modified to show the exact available space in bytes.

  Change-Id: I3772e036a2537c1df12f22cf32dfe4ac7940988f
  BUG: 1261404
  Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
  Reviewed-on: http://review.gluster.org/12137
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
  Reviewed-by: Kaushal M <kaushal@redhat.com>
* quota/marker: marker code cleanup (vmallika, 2015-09-30, 3 files changed, -2350/+3)

  Marker has been re-factored with the syncop approach; remove the unused old code.

  Change-Id: I36e670e63b6c166db5e64d3149d2978981e2f7c2
  BUG: 1240581
  Signed-off-by: vmallika <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/11560
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
* tests: Move georep-basic-dr-tarssh.t to bad tests (Kotresh HR, 2015-09-30, 1 file changed, -0/+1)

  Geo-rep tests are failing spuriously on a few regression machines, hence moving this one to the bad tests list until the issue is root-caused and fixed.

  Change-Id: I25feb8d9c51e03aa9ac0fe70291dc9e54ad043f9
  BUG: 1227624
  Signed-off-by: Kotresh HR <khiremat@redhat.com>
  Reviewed-on: http://review.gluster.org/12248
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
* cluster/tier: re-enable tier.t in automatic tests (Dan Lambright, 2015-09-29, 2 files changed, -16/+29)

  Re-enable tier.t in the automatic tests. Disable the check on BSD until the recurring problem with SQLite there is understood.

  Change-Id: Ib13b269ab841a59a0a41d8478c8627b180b16c61
  BUG: 1231268
  Signed-off-by: Dan Lambright <dlambrig@redhat.com>
  Reviewed-on: http://review.gluster.org/12208
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
* Tier/cli: tier related information in volume info xml (hari gowtham, 2015-09-29, 2 files changed, -28/+266)

  gluster v info did not differentiate the hot bricks from the cold bricks and a few other values. Sample output with this change:

  <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
  <cliOutput>
    <opRet>0</opRet>
    <opErrno>0</opErrno>
    <opErrstr/>
    <volInfo>
      <volumes>
        <volume>
          <name>rmbr</name>
          <id>72d223fc-96ba-4f4a-ac6e-0d0bc16ef127</id>
          <status>1</status>
          <statusStr>Started</statusStr>
          <brickCount>3</brickCount>
          <distCount>1</distCount>
          <stripeCount>1</stripeCount>
          <replicaCount>1</replicaCount>
          <disperseCount>0</disperseCount>
          <redundancyCount>0</redundancyCount>
          <type>5</type>
          <typeStr>Tier</typeStr>
          <transport>0</transport>
          <xlators/>
          <bricks>
            <hotBricks>
              <hotBrickType>Distribute</hotBrickType>
              <numberOfBricks>1</numberOfBricks>
              <brick uuid="81">v1:/hb1<name>v1:/hb1</name><hostUuid>81</hostUuid></brick>
            </hotBricks>
            <coldBricks>
              <coldBrickType>Distribute</coldBrickType>
              <numberOfBricks>2</numberOfBricks>
              <brick uuid="81">v1:/br1<name>v1:/br1</name><hostUuid>81</hostUuid></brick>
              <brick uuid="81">v1:/br2<name>v1:/br2</name><hostUuid>81</hostUuid></brick>
              <count>0</count>
            </coldBricks>
          </bricks>
        </volume>
      </volumes>
    </volInfo>
  </cliOutput>

  Change-Id: I6e52541bb6d8a6a17e17bfcb42434beaac13db56
  BUG: 1261837
  Signed-off-by: hari gowtham <hgowtham@redhat.com>
  Reviewed-on: http://review.gluster.org/12158
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Dan Lambright <dlambrig@redhat.com>
  Tested-by: Dan Lambright <dlambrig@redhat.com>
* protocol/client: Remove dead code from client_rpc_notify (Anoop C S, 2015-09-28, 3 files changed, -20/+9)

  Normally GF_EVENT_CHILD_UP is dispatched after the client handshake. But we have some dead code in client_rpc_notify which is assumed to do the same on receiving RPC_CLNT_CONNECT. This dispatch is based on a condition on whether "disable-handshake" is enabled or not. Since we require a client handshake every time we connect, this check for "disable-handshake" is invalid and no longer required. Moreover, this option is never handled in any of the translators.

  Change-Id: Ic862d6ac08cd3b18cf231f50140cd00e84e52ca0
  BUG: 1227667
  Signed-off-by: Anoop C S <anoopcs@redhat.com>
  Reviewed-on: http://review.gluster.org/12170
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
* gfapi: transport and port are optional for glfs_set_volfile_server (Raghavendra Talur, 2015-09-28, 2 files changed, -12/+44)

  Only the server is a required argument for glfs_set_volfile_server; both transport and port are optional. When glfs_set_volfile_server is invoked multiple times, we were replacing port 0 with 24007 and transport NULL with "tcp" only on the first invocation. Hence, replacing the parameters in the entry function is the right way.

  Change-Id: If9f4a5f7fd9038eed140e2f47167a8fd11acc2f6
  BUG: 1260561
  Signed-off-by: Raghavendra Talur <rtalur@redhat.com>
  Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
  Reviewed-on: http://review.gluster.org/12114
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
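  A short usage sketch of the public API this fixes (volume and server names are placeholders): passing NULL for the transport and 0 for the port should now fall back to "tcp" and 24007 on every invocation, not just the first.

      #include <glusterfs/api/glfs.h>   /* header path may vary by install */

      int
      main(void)
      {
              glfs_t *fs = glfs_new("myvol");               /* "myvol" is a placeholder */
              if (!fs)
                      return 1;

              /* transport and port are optional: NULL/0 default to "tcp"/24007 */
              glfs_set_volfile_server(fs, NULL, "server1.example.com", 0);
              glfs_set_volfile_server(fs, "tcp", "server2.example.com", 24007);

              int ret = glfs_init(fs);
              glfs_fini(fs);
              return ret ? 1 : 0;
      }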
* posix: xattrop 'GF_XATTROP_ADD_ARRAY_WITH_DEFAULT' implementation (vmallika, 2015-09-28, 2 files changed, -11/+121)

  Implementation of the xattrop types:
    GF_XATTROP_ADD_ARRAY_WITH_DEFAULT
    GF_XATTROP_ADD_ARRAY64_WITH_DEFAULT

  These operations are similar to GF_XATTROP_ADD_ARRAY, except that they add a default value if the xattr is missing or its value is zero on disk.

  One use-case of this operation is in inode-quota. When a new directory is created, its default dir_count should be set to 1, so when an xattrop is performed setting inode xattrs, it should account for an initial dir_count of 1 if the xattrs are not present.

  Here is the usage of this operation. The value required in xdata for each key is:

    struct array {
            int32_t newvalue_1;
            int32_t newvalue_2;
            ...
            int32_t newvalue_n;
            int32_t default_1;
            int32_t default_2;
            ...
            int32_t default_n;
    };

  or

    struct array {
            int32_t value_1;
            int32_t value_2;
            ...
            int32_t value_n;
    } data[2];

    fill data[0] with the new value to add
    fill data[1] with the default value

  xattrop GF_XATTROP_ADD_ARRAY_WITH_DEFAULT:

    for i from 1 to n {
            if (xattr (dest_i) is zero or not set on disk)
                    dest_i = newvalue_i + default_i
            else
                    dest_i = dest_i + newvalue_i
    }

  The value in xdata after the xattrop succeeds is:

    struct array {
            int32_t dest_1;
            int32_t dest_2;
            ...
            int32_t dest_n;
    };

  Change-Id: Ic6a08473e99fd98299a839d4d8416081a7534efd
  BUG: 1243946
  Signed-off-by: vmallika <vmallika@redhat.com>
  Reviewed-on: http://review.gluster.org/11702
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
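  A compilable sketch of the per-element merge described above, assuming plain int32_t arrays in host byte order (the real posix implementation also handles the 64-bit variant and on-disk byte ordering):

      #include <stdint.h>

      /* disk[i]   : value currently stored in the xattr (0 if absent)
       * newval[i] : increment supplied by the client
       * defval[i] : default to seed with when the on-disk value is missing/zero
       * dest[i]   : result written back to the xattr and returned in xdata   */
      static void
      add_array_with_default(int32_t *dest, const int32_t *disk,
                             const int32_t *newval, const int32_t *defval, int n)
      {
              for (int i = 0; i < n; i++) {
                      if (disk[i] == 0)
                              dest[i] = newval[i] + defval[i];
                      else
                              dest[i] = disk[i] + newval[i];
              }
      }

  For the inode-quota case this means a brand-new directory, whose size xattr does not yet exist, gets dir_count seeded with its default of 1 plus the increment.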
* storage/posix: Prevent extra handle-path (Pranith Kumar K, 2015-09-28, 1 file changed, -12/+2)

  In readdirp_fill we already have the path of the file/directory. No need to construct the handle-path again. This saves two lstats and at least two readlink calls per directory.

  Change-Id: I8d1b2afeda3e053265a243d4e9a101192f5f509e
  BUG: 1265893
  Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
  Reviewed-on: http://review.gluster.org/12222
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
* features/shard: Port log messages to new framework (Krutika Dhananjay, 2015-09-27, 5 files changed, -93/+342)

  Change-Id: Iac01e6a89a0d0c37a12a5e47f17f7ced85a31590
  BUG: 1265516
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-on: http://review.gluster.org/12217
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* build: export minimum symbols from xlators for correct resolution (Kaleb S. KEITHLEY, 2015-09-24, 66 files changed, -65/+250)

  We've been lucky that we haven't had any symbol collisions until now. Now we have a collision between the snapview-client's svc_lookup() and libntirpc's svc_lookup() with nfs-ganesha's FSAL_GLUSTER and libgfapi.

  As a short-term solution, all the snapview-client's FOP methods were changed to static scope; see http://review.gluster.org/11805. This works in snapview-client because all the FOP methods are defined in a single source file, but it doesn't work for other xlators with FOP methods defined in multiple source files.

  To address this we link with libtool's '-export-symbols $symbol-file' (a wrapper around `ld --version-script ...`, on Linux anyway) and only export the minimum required symbols from the xlator shared lib.

  N.B. the libtool man page says that the symbol file should be named foo.sym, thus the rename of *.exports to *.sym. While foo.exports worked, we will follow the documentation.

  Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
  BUG: 1248669
  Change-Id: I1de68b3e3be58ae690d8bfb2168bfc019983627c
  Reviewed-on: http://review.gluster.org/11814
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: soumya k <skoduri@redhat.com>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
* extras/hookscripts: introducing additional check in S31ganesha-start.sh (Jiffin Tony Thottan, 2015-09-24, 1 file changed, -1/+4)

  A new export file with the default configuration is created for a volume when it is started again. This patch creates the new export file only when it is not already present. This change is required for scenarios such as snapshot restore, node reboot, etc.

  Change-Id: I34123911f176dcb29d5c016aa097af3a3b2c727b
  BUG: 1261444
  Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
  Reviewed-on: http://review.gluster.org/12159
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
  Reviewed-by: soumya k <skoduri@redhat.com>
  Reviewed-by: Niels de Vos <ndevos@redhat.com>
* heal: remove glfsh_print_brick() (Ravishankar N, 2015-09-23, 1 file changed, -34/+3)

  Use glfsh_print_brick_from_xl() instead, so that the hostname:brickpath displayed when heal info is run is consistent with other gluster cli commands like `gluster volume info`.

  Change-Id: I30ee3d76d0f68991a25bd678d40ec3bf7e0538c7
  BUG: 1265470
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
  Reviewed-on: http://review.gluster.org/12212
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-by: Anuradha Talur <atalur@redhat.com>
* features/shard: Performance improvements in IO path - Part 2 (Krutika Dhananjay, 2015-09-22, 1 file changed, -0/+80)

  This is change 2/2 of the performance improvements for sharding. The changes are with respect to maintaining up-to-date values of file attributes in the [f]stat, [f]setattr, link, and [f]truncate codepaths.

  Change-Id: Ia3ce4664fb33be869e4dc76494adbe9c314cc098
  BUG: 1258905
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-on: http://review.gluster.org/12138
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
* features/shard: Performance improvements in IO path (Krutika Dhananjay, 2015-09-22, 2 files changed, -70/+233)

  This is patch 1/2 of the performance improvement work for sharding in the IO path.

  What this patch does: since the primary use-case sharding is targeted at (a VM store) is a single-writer workload, instead of performing a lookup on the base file every time to gather the size and block count from the backend in reads, writes and truncate, the size and block count are now also cached in the inode ctx and kept up-to-date after every inode write.

  TO-DO: Make changes in rename, link, unlink, [f]setattr and [f]stat to keep the relevant iatt members up-to-date in the inode ctx.

  Change-Id: Ica87d020dabc3a3dbccec814b26b01d6a629ff4d
  BUG: 1258905
  Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
  Reviewed-on: http://review.gluster.org/12126
  Tested-by: NetBSD Build System <jenkins@build.gluster.org>
  Tested-by: Gluster Build System <jenkins@build.gluster.com>
  Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
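  A simplified sketch of the caching idea (hypothetical structure; the shard translator keeps this in its per-inode context and serializes updates with the inode lock):

      #include <stdint.h>

      /* Cached attributes of the base file, kept in the inode ctx so reads,
       * writes and truncates don't need a lookup on the base file each time. */
      struct shard_inode_ctx {
              uint64_t file_size;     /* logical size of the whole sharded file */
              uint64_t block_count;   /* blocks consumed across all shards      */
              int      attrs_valid;   /* set once the first lookup/write fills it */
      };

      /* Called after every successful write on a shard: fold the write's
       * effect into the cached size/blocks instead of re-fetching them. */
      static void
      shard_update_cached_attrs(struct shard_inode_ctx *ctx,
                                uint64_t write_end_offset, int64_t delta_blocks)
      {
              if (write_end_offset > ctx->file_size)
                      ctx->file_size = write_end_offset;
              ctx->block_count += delta_blocks;
              ctx->attrs_valid = 1;
      }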
* glusterfsd: newly added brick receives fops only after it is started (Sakshi, 2015-09-22, 3 files changed, -4/+27)

  When new bricks are added in the middle of an ongoing fop like 'rm', the volfile changes without waiting for the newly added bricks to get a port. Fops are sent to all bricks and may fail on some with ENOTCONN, as these bricks may not have a port yet. This patch ensures that the volfile change happens only after all the bricks have a port.

  Change-Id: I7ed2413475f80d0cc8849fed33036ade8d75a191
  BUG: 1233151
  Signed-off-by: Sakshi <sabansal@redhat.com>
  Reviewed-on: http://review.gluster.org/11342
  Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
  Tested-by: Atin Mukherjee <amukherj@redhat.com>