path: root/xlators/mgmt
* glusterd: fix tier-enabled flag op-version check (Atin Mukherjee, 2018-02-13; 1 file changed, -2/+2)

    The tier-enabled flag in the volinfo structure was introduced in 3.10;
    however, writing this value to the glusterd store was done with a
    wrong op-version check, which results in volume checksum failures
    during upgrades.

    > Change-Id: I4330d0c4594eee19cba42e2cdf49a63f106627d4
    > BUG: 1544600
    > Signed-off-by: Atin Mukherjee <amukherj@redhat.com>

    Change-Id: I4330d0c4594eee19cba42e2cdf49a63f106627d4
    BUG: 1544637
    Signed-off-by: hari gowtham <hgowtham@redhat.com>
* glusterd: process pmap sign in only when port is marked as free (Atin Mukherjee, 2018-02-02; 1 file changed, -0/+15)

    Because of a race in the volume start code path, friend handshaking on
    volumes with quorum enabled can leave glusterd in a situation where it
    starts a brick, gets a disconnect, and then immediately tries to start
    the same brick instance based on another friend update request. Then,
    even if the process for the very first brick never comes up, a sign-in
    event is sent at the end and we end up with two duplicate portmap
    entries for the same brick. Since brick start marks the previous port
    as free, it is better to treat a sign-in request as a no-op if the
    corresponding port type is marked as free.

    > mainline patch : https://review.gluster.org/#/c/19263/

    Change-Id: I995c348c7b6988956d24b06bf3f09ab64280fc32
    BUG: 1537346
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    (cherry picked from commit 9d708a3739c8201d23f996c413d6b08f8b13dd90)
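A minimal C sketch of that guard; the registry layout and enum names follow glusterd's pmap conventions, but this is illustrative, not the verbatim patch:

    /* Fragment; surrounding declarations omitted. Treat a stale brick
     * sign-in as a no-op when the port has already been returned to the
     * free pool by an earlier brick stop. */
    if (pmap->ports[port].type == GF_PMAP_PORT_FREE) {
        ret = 0;    /* ignore; avoids a duplicate portmap entry */
        goto out;
    }
    pmap->ports[port].type = GF_PMAP_PORT_BRICKSERVER;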
* glusterd: connect to an existing brick process when quorum status is NOT_APPLICABLE_QUORUM (Atin Mukherjee, 2018-01-12; 9 files changed, -15/+41)

    First of all, this patch reverts commit 635c1c3, as it causes a
    regression with bricks not coming up on time when a node is rebooted.
    This patch tries to fix the problem in a different way, by just trying
    to connect to an existing running brick when quorum status is not
    applicable.

    > mainline patch : https://review.gluster.org/#/c/19134/

    Change-Id: I0efb5901832824b1c15dcac529bffac85173e097
    BUG: 1511301
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
* quota: fixes issue in quota.conf when setting large number of limits (Sanoj Unnikrishnan, 2018-01-10; 1 file changed, -12/+33)

    Problem: It was not possible to configure more than 7712 quota limits.
    This was because a stack buffer of size 131072 was used to read from
    the quota.conf file. In the new quota.conf format each gfid entry
    takes 17 bytes (16-byte gfid + 1-byte type), so the buffer size was
    not a multiple of the gfid entry size, and as per the code this was
    treated as corruption.

    Solution: make the buffer size a multiple of the gfid entry size.

    Change-Id: Id036225505a47a4f6fa515a572ee7b0c958f30ed
    BUG: 1489043
    Signed-off-by: Sanoj Unnikrishnan <sunnikri@redhat.com>
    (cherry picked from commit 2899a4f125735636fe7cd8db73c0b8a13289df9b)
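The arithmetic behind the fix, as a short C sketch (the macro names here are illustrative, not the ones used in the patch):

    /* One quota.conf entry: 16-byte gfid + 1-byte type = 17 bytes. */
    #define GFID_ENTRY_SZ 17
    /* Round the old 131072-byte buffer down to a whole number of
     * entries: (131072 / 17) * 17 = 131070 bytes, so a full read always
     * ends exactly on an entry boundary and is never misread as
     * corruption. */
    #define QCONF_BUF_SZ ((131072 / GFID_ENTRY_SZ) * GFID_ENTRY_SZ)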
* glusterd: Nullify pmap entry for bricks belonging to same port (Atin Mukherjee, 2018-01-10; 1 file changed, -1/+1)

    Commit 30e0b86 tried to address all the stale port issues glusterd had
    when a brick is abruptly killed. In the brick multiplexing case,
    because of a bug, the portmap entry was not getting removed. This
    patch addresses the same.

    > mainline patch : https://review.gluster.org/#/c/19119/

    Change-Id: Ib020b967a9b92f1abae9cab9492f0cacec59aaa1
    BUG: 1530448
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
* glusterd: Free up svc->conn on volume delete (Atin Mukherjee, 2017-12-12; 1 file changed, -0/+4)

    Daemons like snapd, tierd and gfproxyd are maintained on a per-volume
    basis, so on a volume delete we should destroy the rpc connection
    established for them.

    > mainline patch : https://review.gluster.org/#/c/18957/

    Change-Id: Id1440e39da07b990fdb9b207df18da04b1ca8014
    BUG: 1523048
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    (cherry picked from commit 36ce4c614a3391043a3417aa061d0aa16e60b2d3)
* glusterd: display gluster volume status, when quorum type is server (Sanju Rakonde, 2017-11-30; 1 file changed, -0/+6)

    Problem: when server-quorum-type is server, after restarting glusterd
    on the node which is up, gluster volume status gives incorrect
    information.

    Fix: check whether the server is blank before adding other keys into
    the dictionary.

    Change-Id: I926ebdffab330ccef844f23f6d6556e137914047
    BUG: 1511782
    Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
    (cherry picked from commit 046c7e3199fca715592762e271e6061ac99b0c4b)
* glusterd: restart the brick if quorum status is NOT_APPLICABLE_QUORUM (Atin Mukherjee, 2017-11-10; 1 file changed, -1/+2)

    If a volume does not have server quorum enabled and, in a trusted
    storage pool, all the glusterd instances from other peers are down,
    then on restarting glusterd the brick start trigger doesn't happen,
    resulting in the brick not coming up.

    > mainline patch : https://review.gluster.org/#/c/18669/

    Change-Id: If1458e03b50a113f1653db553bb2350d11577539
    BUG: 1511301
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    (cherry picked from commit 635c1c3691a102aa658cf1219fa41ca30dd134ba)
* glusterd : introduce timer in mgmt_v3_lock (Gaurav Yadav, 2017-11-06; 4 files changed, -17/+241)

    Problem: In a multinode environment, if two op-sm transactions are
    initiated on one of the receiver nodes at the same time, glusterd may
    end up holding a stale lock.

    Solution: During mgmt_v3_lock, a registration is made with
    gf_timer_call_after, which releases the lock after a certain period
    of time.

    > mainline patch : https://review.gluster.org/#/c/18437/

    Change-Id: I16cc2e5186a2e8a5e35eca2468b031811e093843
    BUG: 1503239
    Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
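A hedged sketch of that registration against libglusterfs' timer API; the callback name, the lock-object fields and the timeout value are assumptions for illustration:

    /* Fragment; arm a one-shot timer when the lock is granted. If the
     * owner never unlocks, the callback releases the stale lock. */
    struct timespec delta = {.tv_sec = 120 /* assumed timeout */,
                             .tv_nsec = 0};

    lock_obj->timer = gf_timer_call_after(this->ctx, delta,
                                          gd_mgmt_v3_lock_timer_cbk,
                                          lock_obj);
    if (!lock_obj->timer)
        gf_log(this->name, GF_LOG_ERROR,
               "failed to register mgmt_v3 lock timer");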
* glusterd: clean up portmap on brick disconnect (Atin Mukherjee, 2017-11-06; 4 files changed, -11/+46)

    GlusterD's portmap entry for a brick is cleaned up when a PMAP_SIGNOUT
    event is initiated by the brick process at shutdown. But if the brick
    process crashes or is killed with SIGKILL, this event is never
    initiated and glusterd ends up with a stale port. Since GlusterD's
    portmap traversal happens both ways, forward for allocation and
    backward for registry search, glusterd might end up running with a
    stale port for a brick, which eventually causes clients to fail to
    connect to the bricks.

    The solution is to clean up the port entry when the process is down,
    as part of the brick disconnect event. Although this makes the
    handling of the PMAP_SIGNOUT event redundant in most cases, it is the
    safeguard that keeps glusterd out of stale port issues.

    > mainline patch : https://review.gluster.org/#/c/18541/

    Change-Id: I04c5be6d11e772ee4de16caf56dbb37d5c944303
    BUG: 1507747
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    (cherry picked from commit 30e0b86aae00430823f2523c6efa3c4ebbf0a478)
* glusterd: fix brick restart parallelism (Atin Mukherjee, 2017-11-06; 6 files changed, -32/+87)

    glusterd's brick restart logic is not always sequential, as there are
    at least three different ways bricks are restarted:
    1. through friend-sm and glusterd_spawn_daemons ()
    2. through friend-sm and handling volume quorum action
    3. through friend handshaking when there is a mismatch on quorum on
       friend import.

    In a brick multiplexing setup, glusterd ended up trying to spawn the
    same brick process a couple of times, as two threads hit
    glusterd_brick_start () within a fraction of a millisecond, and
    glusterd had no way of rejecting either since the brick start
    criteria were met in both cases. As a solution, this is controlled by
    two different flags: a boolean called start_triggered, which
    indicates that a brick start has been triggered and remains true
    until the brick dies or is killed, and a mutex lock to ensure that
    for a particular brick we don't get into glusterd_brick_start () more
    than once at the same point of time.

    Change-Id: I292f1e58d6971e111725e1baea1fe98b890b43e2
    BUG: 1508283
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    (cherry picked from commit 82be66ef8e9e3127d41a4c843daf74c1d8aec4aa)
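A minimal sketch of the two-flag guard; the start_triggered field follows the commit text, while the mutex name and the glusterd_brick_start() arguments are simplified assumptions:

    /* Fragment; surrounding declarations omitted. */
    pthread_mutex_lock(&brickinfo->restart_mutex);
    {
        if (!brickinfo->start_triggered) {
            /* stays true until the brick dies or is killed */
            brickinfo->start_triggered = _gf_true;
            ret = glusterd_brick_start(volinfo, brickinfo, wait);
        }
    }
    pthread_mutex_unlock(&brickinfo->restart_mutex);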
* glusterd: delete source brick only once in reset-brick commit force (Atin Mukherjee, 2017-11-02; 1 file changed, -1/+1)

    While stopping the brick which is to be reset and replaced, the
    delete_brick flag was passed as true, which caused glusterd to free
    up the source brick before the actual operation. This made commit
    force fail, since it could not find the source brickinfo.

    > mainline patch : https://review.gluster.org/#/c/18581/

    Change-Id: I1aa7508eff7cc9c9b5d6f5163f3bb92736d6df44
    BUG: 1507877
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    (cherry picked from commit 0fb8acaa6ff80c43e46deac0ce66b29ae0df0ca4)
* glusterd: persist brickinfo's port change into glusterd's store (Gaurav Yadav, 2017-11-02; 5 files changed, -10/+61)

    Problem: Consider a case where a node reboot is performed and, prior
    to the reboot, the brick was listening on 49153. Post reboot,
    glusterd assigned 49152 to the brick and started the brick process,
    but the new port was never persisted. Now when glusterd restarts, it
    always reads the port from its persisted store, i.e. 49153, while the
    pmap sign-in happens with the correct port, i.e. 49152.

    Fix: Make sure that when glusterd_brick_start is called,
    glusterd_store_volinfo is eventually invoked.

    Change-Id: Ic0efbd48c51d39729ed951a42922d0e59f7115a1
    BUG: 1507748
    Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
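Roughly, the ordering the fix enforces, as a hedged sketch (not the verbatim call sites; the version-action constant passed here is an assumption):

    /* Fragment; surrounding declarations omitted. */
    ret = glusterd_brick_start(volinfo, brickinfo, wait);
    if (!ret)
        /* write volinfo back so the newly assigned brick port
         * survives a glusterd restart */
        ret = glusterd_store_volinfo(volinfo,
                                     GLUSTERD_VOLINFO_VER_AC_NONE);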
* glusterd: documenting server.allow-insecure (Sanju Rakonde, 2017-10-25; 1 file changed, -1/+1)

    Problem: "server.allow-insecure" is invisible in gluster volume set
    help.

    Fix: "server.allow-insecure" is defined as NO_DOC type; changing it
    to DOC type solves the problem.

    Change-Id: I327f1e4c1684ff846deb8b7df07d4d8a09073274
    BUG: 1505373
    Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
    (cherry picked from commit c0b08f10ed07bfe06309e31a7fff85cadb733ce2)
* glusterd: Marking all the brick status as stopped when a process goes down in brick multiplexing (Sanju Rakonde, 2017-10-12; 1 file changed, -1/+58)

    In a brick multiplexing environment, if a brick process goes down,
    i.e., if we kill it with SIGKILL, only the status of the brick for
    which the process originally came up changes to stopped; all other
    brick statuses remain started. This happens because the process was
    killed abruptly with SIGKILL, so the signal handler wasn't invoked
    and no further cleanup was triggered.

    When we then try to start a volume using force, it fails with
    "Request timed out": since all the brickinfo->status fields are still
    in the started state, we wait for one of the brick processes to come
    up, which is never going to happen because the brick process was
    killed.

    To resolve this, in the disconnect event we check all processes to
    find which one the disconnected brick belongs to. Once we have the
    process, we call glusterd_mark_bricks_stopped_by_proc(), passing the
    brick_proc_t object as an argument. From the glusterd_brick_proc_t we
    can get all the bricks attached to that process, but these are
    duplicated copies; to get the original brickinfo we read the volinfo
    from each brick, since volinfo holds the original brickinfo copies.
    We then change brickinfo->status to stopped for all those bricks.

    > Change-Id: Ifb9054b3ee081ef56b39b2903ae686984fe827e7
    > BUG: 1499509
    > Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
    > Reviewed-on: https://review.gluster.org/#/c/18444/
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    > (cherry picked from commit 9422446d72bc054962d72ace9912ecb885946d49)

    Change-Id: Ifb9054b3ee081ef56b39b2903ae686984fe827e7
    BUG: 1501154
    Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
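A hedged sketch of that disconnect-side loop; the list fields and the lookup helper are assumptions based on the commit text:

    /* Fragment; mark every brick multiplexed into the dead process as
     * stopped. */
    cds_list_for_each_entry(brick, &brick_proc->bricks, brick_list)
    {
        /* reach the original brickinfo copy held in volinfo
         * (helper name hypothetical) */
        if (get_volinfo_and_brickinfo(brick, &volinfo, &brickinfo) == 0)
            brickinfo->status = GF_BRICK_STOPPED;
    }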
* glusterd: disallow replace brick for dist only volumes (Atin Mukherjee, 2017-10-12; 1 file changed, -1/+11)

    Allowing replace-brick on distribute-only volumes can lead to data
    loss. This patch makes replace-brick commit force fail if a volume is
    distribute-only. It also removes tests/basic/pump.t, as it is of no
    use per the discussion in
    http://lists.gluster.org/pipermail/gluster-devel/2017-September/053652.html

    > Reviewed-on: https://review.gluster.org/18226
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: N Balachandran <nbalacha@redhat.com>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > (cherry picked from commit 7f70d38b66ce755f848ff0197814457a28b321df)

    Change-Id: Iabb0c16f865f3fc361b64a19bfcf0c0fbb5c2682
    BUG: 1493975
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
* cli/afr: gluster volume heal info "healed" command output is not appropriate (Mohit Agrawal, 2017-10-11; 1 file changed, -0/+13)

    Problem: the "gluster volume heal info [healed] [heal-failed]"
    command output on the terminal is not appropriate when any brick of
    the volume is down.

    Solution: To make the message more appropriate, change the condition
    in the function "gd_syncop_mgmt_brick_op".

    Test: To verify the fix, follow the procedure below:
    1) Create a 2x3 distributed-replicate volume
    2) Set the self-heal daemon off
    3) Kill two bricks (3, 6)
    4) Create some files on the mount point
    5) Bring bricks 3 and 6 up
    6) Kill the other two bricks (2 and 4)
    7) Turn the self-heal daemon on
    8) Run "gluster v heal <vol-name>"

    Note: After applying the patch, the options (healed | heal-failed)
    will be deprecated from the command line.

    > BUG: 1388509
    > Change-Id: I229c320c9caeb2525c76b78b44a53a64b088545a
    > Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
    > (Cherry picked from commit d1f15cdeb609a1b720a04a502f7a63b2d3922f41)

    BUG: 1500662
    Change-Id: I229c320c9caeb2525c76b78b44a53a64b088545a
    Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
* glusterd: fix invalid memory reference returned (Xavier Hernandez, 2017-10-10; 1 file changed, -2/+9)

    > BUG: 1490897
    > Reviewed-on: https://review.gluster.org/18263
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
    > Reviewed-by: Gaurav Yadav <gyadav@redhat.com>

    Change-Id: I0823c7b33060b48040c1d86ad346a5f6e15bc190
    BUG: 1491178
    Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
* glusterd: fix client io-threads option for replicate volumes (Ravishankar N, 2017-10-09; 6 files changed, -34/+92)

    Backport of https://review.gluster.org/#/c/18430/

    Problem: Commit ff075a3d6f9b142911d25c27fd209838782bfff0 disabled
    loading client-io-threads for replicate volumes (it was set to on by
    default in commit e068c1997314046658dd502e9118dab32decf879) due to
    performance issues, but in doing so inadvertently failed to load the
    xlator even if the user explicitly enabled the option using the
    volume set command. This was despite returning success for the
    volume set.

    Fix: Modify the check in perfxl_option_handler() and add checks in
    the volume create/add-brick/remove-brick code paths, tying it all to
    GD_OP_VERSION_3_12_2.

    Change-Id: Ib612973a999a7da818cc926f5c2601b1f0794fcf
    BUG: 1499158
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* glusterd: spelling errors reported by Debian maintainer (Kaleb S. KEITHLEY, 2017-10-06; 2 files changed, -4/+4)

    Reported-by: "Patrick Matthäi" <pmatthaei@debian.org>

    master: https://review.gluster.org/18185

    Change-Id: I0dd6b7d88ddf3c98e8083b75f8dd848babcfd30a
    BUG: 1494523
    Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
* glusterd: retrieve uuid under mutex lock (Atin Mukherjee, 2017-10-05; 1 file changed, -7/+15)

    In a multi-node cluster, if one of the glusterd instances goes down
    and comes back, there can be a race where glusterd needs to retrieve
    its uuid (glusterd_retrieve_uuid) while, as part of receiving a
    friend handshake from another peer, it simultaneously iterates over
    the volume information received from the remote node and checks
    whether a brick is local by calling MY_UUID, which in turn calls
    glusterd_retrieve_uuid. The same applies to the
    glusterd_store_global_info () function too. This can end up in a
    situation where, for the same node, glusterd generates two UUID files
    in /var/lib/glusterd.

    The following log snippet confirms the above:

    [2017-09-01 03:09:24.458030] I [glusterd.c:146:glusterd_uuid_init] 0-management: retrieved UUID: fd46a495-7e33-468f-88f6-63c815fac640  // thread 1 retrieves the uuid from glusterd.info
    [2017-09-01 03:09:24.458034] E [glusterd-store.c:2109:glusterd_retrieve_uuid] 0-: No previous uuid is present  // thread 2 cannot retrieve the uuid, because in thread 1 the file pointer has already reached eof
    [2017-09-01 03:09:24.458041] E [glusterd-store.c:2117:glusterd_retrieve_uuid] 0-: Returning -1
    [2017-09-01 03:09:24.458076] I [glusterd.c:176:glusterd_uuid_generate_save] 0-management: generated UUID: 190bb292-a296-4125-96da-42b247511cc4
    [2017-09-01 03:09:24.458129] E [store.c:367:gf_store_save_value] 0-: Able to store key: UUID,value: 190bb292-a296-4125-96da-42b247511cc4

    The fix is to retrieve the uuid under a mutex lock.

    Credits : cynthia.zhou@nokia-sbell.com

    > Reviewed-on: https://review.gluster.org/#/c/18333/
    > (cherry picked from commit 898f0b7ce31ddf8ec02e572c5d22eff2e4205b4c)

    Change-Id: Ib9a5e159c3febf2aef13aa5e38f0a51fe409dadb
    BUG: 1495162
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
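The shape of the fix, as a minimal hedged sketch; the lock and wrapper names are illustrative:

    static pthread_mutex_t uuid_mutex = PTHREAD_MUTEX_INITIALIZER;

    int
    glusterd_retrieve_uuid_safe(void) /* hypothetical wrapper */
    {
        int ret;

        pthread_mutex_lock(&uuid_mutex);
        /* only one thread at a time may read, or on a miss generate
         * and save, the UUID in glusterd.info */
        ret = glusterd_retrieve_uuid();
        pthread_mutex_unlock(&uuid_mutex);
        return ret;
    }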
* feature/posix: Enabled gfid2path by default (tag: v3.12.0) (Kotresh HR, 2017-08-29; 1 file changed, -0/+1)

    Enable the gfid2path feature by default. The basic performance tests
    were carried out and they don't show significant degradation. The
    results are updated in the issue.

    Updates: #139
    Change-Id: I5f1949a608d0827018ef9d548d5d69f3bb7744fd

    > Signed-off-by: Kotresh HR <khiremat@redhat.com>
    > Reviewed-on: https://review.gluster.org/17950
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Aravinda VK <avishwan@redhat.com>
    > Reviewed-by: Amar Tumballi <amarts@redhat.com>

    (cherry picked from commit 3ec63650bb7fd874a5013e7be4a2def3b519c9b2)
    Reviewed-on: https://review.gluster.org/18133
    Reviewed-by: Amar Tumballi <amarts@redhat.com>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Tested-by: Shyamsundar Ranganathan <srangana@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
* glusterd: replace-brick executing successfully when quorum is not met (Gaurav Yadav, 2017-08-29; 1 file changed, -0/+9)

    Problem: the replace-brick command executes successfully on a setup
    where quorum is not met.

    Fix: With the fix, glusterd validates whether the server is in quorum
    during replace-brick staging.

    > Reviewed-on: https://review.gluster.org/18068
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

    Change-Id: I8017154bb62bdcc6c6490e720ecfe9cde090c161
    BUG: 1486110
    Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
    Reviewed-on: https://review.gluster.org/18125
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-by: Prashanth Pai <ppai@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* glusterd: disable rpc_clnt_t after rebalance process disconnection (Milind Changire, 2017-08-25; 1 file changed, -1/+1)

    Problem: glusterd continues to connect to the rebalance process even
    after the socket connection has disconnected.

    Solution: rpc_clnt_disable() disables the rpc_clnt_t object, disarms
    all relevant timers, and drops refs to the rpc_clnt_t object as well
    as the transport.

    > Reviewed-on: https://review.gluster.org/18114
    > Reviewed-by: MOHIT AGRAWAL <moagrawa@redhat.com>
    > Tested-by: Atin Mukherjee <amukherj@redhat.com>
    > Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > (cherry picked from commit a894d44427649e99d4344a241dc2f9d584a9a691)

    Change-Id: I981d6f1cc0087037f1927062c2770a4d5026a619
    BUG: 1484885
    Signed-off-by: Milind Changire <mchangir@redhat.com>
    Reviewed-on: https://review.gluster.org/18117
    Tested-by: Atin Mukherjee <amukherj@redhat.com>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
* glusterd : glusterd fails to start when peer's network interface is down (Gaurav Yadav, 2017-08-21; 3 files changed, -2/+17)

    Problem: glusterd fails to start on nodes where it tries to come up
    even before the network is up.

    Fix: On startup glusterd tries to resolve brick paths, which are
    based on hostname/ip; in the above scenario, when the network
    interface is not up, glusterd is not able to resolve the brick path
    using the IP address or hostname. With this fix glusterd will use the
    UUID to resolve the brick path.

    > Reviewed-on: https://review.gluster.org/17813
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Prashanth Pai <ppai@redhat.com>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    > (cherry picked from commit 1477fa442a733d7b1a5ea74884cac8f29fbe7e6a)

    Change-Id: Icfa7b2652417135530479d0aa4e2a82b0476f710
    BUG: 1482835
    Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
    Reviewed-on: https://review.gluster.org/18061
    Tested-by: Atin Mukherjee <amukherj@redhat.com>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
* glusterd: disallow volume specific options to be set with all as volume name (Atin Mukherjee, 2017-08-21; 1 file changed, -0/+8)

    All the .validate_fn callbacks defined in the volume map entry table
    refer to a volinfo object, so if we end up trying to set a
    volume-level option cluster-wide, glusterd crashes.

    > Reviewed-on: https://review.gluster.org/18052
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Prashanth Pai <ppai@redhat.com>
    > Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
    > Reviewed-by: Gaurav Yadav <gyadav@redhat.com>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > (cherry picked from commit 01abf7ee37702407403afcf9aa6c9019a0316e1d)

    Change-Id: I7c877aee0ff5c8c1d8c95662fdc8c8923355ae7b
    BUG: 1482804
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-on: https://review.gluster.org/18060
    Reviewed-by: Prashanth Pai <ppai@redhat.com>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
* mgmt/glusterd: Provide more information in command message (Ashish Pandey, 2017-08-12; 1 file changed, -3/+5)

    Problem: When more than one brick is present on the same node while
    creating a volume, we get a warning message that the setup is not
    optimal. We need to add more information to this error/warning.

    Solution: Add the following line to the current message: "Bricks
    should be on different nodes to have best fault tolerant
    configuration."

    > Change-Id: Ica72bd6e68dff7e41c37617f3b775a981fa40c69
    > BUG: 1480099
    > Signed-off-by: Ashish Pandey <aspandey@redhat.com>
    > Reviewed-on: https://review.gluster.org/18014
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    > Signed-off-by: Ashish Pandey <aspandey@redhat.com>

    Change-Id: Ica72bd6e68dff7e41c37617f3b775a981fa40c69
    BUG: 1480448
    Signed-off-by: Ashish Pandey <aspandey@redhat.com>
    Reviewed-on: https://review.gluster.org/18022
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* glusterd: Gluster should keep PID file in correct location (Gaurav Kumar Garg, 2017-08-12; 6 files changed, -24/+152)

    Currently Gluster keeps the process pid information of all the
    daemons and brick processes in the Gluster configuration file
    directory (i.e., /var/lib/glusterd/*). These pid files should be kept
    separate from configuration files, since deletion of the
    configuration file directory might result in serious problems. Also,
    /var/run/gluster is the default placeholder directory for pid files.
    So, with this fix, Gluster will keep the pid information of all its
    processes in the /var/run/gluster/* directory.

    > BUG: 1258561
    > Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
    > Signed-off-by: Saravanakumar Arumugam <sarumuga@redhat.com>
    > Reviewed-on: https://review.gluster.org/13580
    > Tested-by: MOHIT AGRAWAL <moagrawa@redhat.com>
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    > (cherry picked from commit 220d406ad13d840e950eef001a2b36f87570058d)

    BUG: 1480459
    Change-Id: Idb09e3fccb6a7355fbac1df31082637c8d7ab5b4
    Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
    Reviewed-on: https://review.gluster.org/18023
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* glusterd: Block brick attach request till the brick's ctx is set (Mohit Agrawal, 2017-08-12; 3 files changed, -24/+57)

    Problem: In a multiplexing setup in a container environment we hit a
    race where, before the first brick finished its handshake with
    glusterd, the subsequent attach requests went through; they actually
    failed, and glusterd had no mechanism to realize it. This resulted in
    all such bricks not being active, so clients were not able to
    connect.

    Solution: Introduce a new flag port_registered in glusterd_brickinfo
    to make sure the pmap_signin has finished before the subsequent
    attach bricks can be processed.

    Test: To reproduce the issue, follow the steps below:
    1) Create 100 volumes on 3 nodes (1x3) in a CNS environment
    2) Enable brick multiplexing
    3) Reboot one container
    4) Run the command below:
       for v in `gluster v list`
       do
           glfsheal $v | grep -i "transport"
       done
    After applying the patch the command should not fail.

    Note: A big thanks to Atin for suggesting the fix.

    > Reviewed-on: https://review.gluster.org/17984
    > Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
    > (cherry picked from commit c13d69babc228a2932994962d6ea8afe2cdd620a)

    BUG: 1479662
    Change-Id: I8e1bd6132122b3a5b0dd49606cea564122f2609b
    Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
    Reviewed-on: https://review.gluster.org/18004
    Tested-by: Atin Mukherjee <amukherj@redhat.com>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
* tier: separation of attach-tier from add-brick (hari gowtham, 2017-08-04; 7 files changed, -5/+352)

    PROBLEM: Both attach-tier and add-brick have the same RPC and set of
    code. This becomes a hurdle while trying to implement add-brick on a
    tiered volume.

    FIX: This patch separates add-brick and attach-tier, giving them
    separate RPCs.

    > Change-Id: Iec57e972be968a9ff00b15b507e56a4f6dc398a2
    > BUG: 1376326
    > Signed-off-by: hari gowtham <hgowtham@redhat.com>
    > Reviewed-on: https://review.gluster.org/15503
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > Tested-by: hari gowtham <hari.gowtham005@gmail.com>
    > Reviewed-by: Samikshan Bairagya <samikshan@gmail.com>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

    Signed-off-by: hari gowtham <hgowtham@redhat.com>
    Change-Id: Iec57e972be968a9ff00b15b507e56a4f6dc398a2
    BUG: 1478276
    Reviewed-on: https://review.gluster.org/17974
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Tested-by: hari gowtham <hari.gowtham005@gmail.com>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* glusterd: Add geo-replication session details to get-state output (Samikshan Bairagya, 2017-08-04; 3 files changed, -1/+132)

    This commit adds support to the get-state CLI to capture details on
    geo-replication sessions, as obtained from `gluster volume
    geo-replication status detail`, in its output.

    > Reviewed-on: https://review.gluster.org/17941
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Shubhendu Tripathi <shtripat@redhat.com>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Atin Mukherjee <amukherj@redhat.com>

    (cherry picked from commit 2e7daeffef05c6100cbcc39f1be62935711db3eb)

    Fixes: #291
    Change-Id: I2fbcba70bfdaf439522637234805545194777ed4
    Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
    Reviewed-on: https://review.gluster.org/17971
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
* core: remove experimental xlators and associated tests (Kaleb S. KEITHLEY, 2017-08-03; 1 file changed, -22/+0)

    Experimental xlators are not included in 3.12.

    Cherry picked from 4231c40973c60999f5ef759db450d25e129ef6ba:

    > Change-Id: I547480ee5e7912664784643e436feb198b6d16d0
    > BUG: 1447543
    > Signed-off-by: Kaushal M <kaushal@redhat.com>
    > Reviewed-on: https://review.gluster.org/17154
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    > Reviewed-by: Amar Tumballi <amarts@redhat.com>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

    Change-Id: I34419ce22ca09b7626b8f9382c377a614fd9fed8
    BUG: 1477381
    Signed-off-by: ShyamsundarR <srangana@redhat.com>
    Reviewed-on: https://review.gluster.org/17953
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* logging: localtime logging, cmdline, volume set option (N Balachandran, 2017-08-03; 8 files changed, -4/+155)

    Despite the fact that appliances generally use UTC, some users really
    want log entries in localtime.

    fixes gluster/glusterfs#272
    feature page: https://review.gluster.org/17807

    Backport from master: https://review.gluster.org/#/c/16911/

    Change-Id: I5fbf2c3eedd9eb128fb3f851dd67b2f4081c8bba
    Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: https://review.gluster.org/17928
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
* glusterd: add rebal estimates time in get-state (Atin Mukherjee, 2017-08-02; 1 file changed, -0/+2)

    > Reviewed-on: https://review.gluster.org/17862
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Samikshan Bairagya <samikshan@gmail.com>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > (cherry picked from commit 1431786305055e0fe90e012e03278f504a2d8d14)

    Fixes: #279
    Change-Id: If62fa59042604c9450749d3012c7a962ed0eb374
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-on: https://review.gluster.org/17871
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
* glusterd: Add option to get all volume options through get-state CLI (Samikshan Bairagya, 2017-07-31; 1 file changed, -7/+38)

    This commit makes the get-state CLI capable of returning the values
    of all volume options for all volumes, similar to what you get when
    you issue a `gluster volume get <volname> all` command.

    This is the new usage for the get-state CLI:

    # gluster get-state [<daemon>] [[odir </path/to/output/dir/>] \
      [file <filename>]] [detail|volumeoptions]

    > Reviewed-on: https://review.gluster.org/17858
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    > Reviewed-by: Gaurav Yadav <gyadav@redhat.com>
    > Smoke: Gluster Build System <jenkins@build.gluster.org>

    (cherry picked from commit 8dcf91660e0bd10eb75ef25a29ca02ec51c81be4)

    Change-Id: Ice52d936a5a389c6fa0ba5ab32416a65cdfde46d
    Fixes: #277
    Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
    Reviewed-on: https://review.gluster.org/17874
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
* glusterd: highlight arbiter brick in get-state (Atin Mukherjee, 2017-07-31; 1 file changed, -1/+25)

    > Reviewed-on: https://review.gluster.org/17864
    > Reviewed-by: Ravishankar N <ravishankar@redhat.com>
    > Reviewed-by: Samikshan Bairagya <samikshan@gmail.com>
    > Reviewed-by: Shubhendu Tripathi <shtripat@redhat.com>
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > (cherry picked from commit 555990188ae7fabd4ca36c07ddaa92a39dccc813)

    Fixes: #278
    Change-Id: I1af5255127457a70e6362a2c20c53ee533e27c29
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-on: https://review.gluster.org/17875
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
* storage/posix: Add virtual xattr to fetch path from gfid (Kotresh HR, 2017-07-31; 1 file changed, -0/+5)

    The gfid2path infra stores "pargfid/bname" as an xattr value for each
    non-directory entry. Hardlinks have a separate xattr. This xattr key
    is internal and is not exposed to applications. A virtual xattr is
    exposed for applications to fetch the path from the gfid.

    Internal xattr: trusted.gfid2path.<xxhash>
    Virtual xattr: glusterfs.gfidtopath

    getfattr -h -n glusterfs.gfidtopath /<aux-mnt>/.gfid/<gfid>

    If there are hardlinks, it returns all the paths separated by ':'.

    A volume set option is introduced to change the delimiter to a
    required string of max length 7:

    gluster vol set <VOLNAME> gfid2path-separator ":::"

    > Updates: #139
    > Change-Id: Ie3b0c3fd8bd5333c4a27410011e608333918c02a
    > Signed-off-by: Kotresh HR <khiremat@redhat.com>
    > Reviewed-on: https://review.gluster.org/17785
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>

    Updates: #139
    Change-Id: Ie3b0c3fd8bd5333c4a27410011e608333918c02a
    Signed-off-by: Kotresh HR <khiremat@redhat.com>
    Reviewed-on: https://review.gluster.org/17921
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
* posix: option to handle the shared bricks for statvfs() (Amar Tumballi, 2017-07-31; 8 files changed, -12/+97)

    Currently the 'storage/posix' xlator has an option called
    `export-statfs-size no`, which exports zero as the value for a few
    fields in `struct statvfs`. In the case of a backend brick shared
    between multiple brick processes, the values of these fields should
    instead be `field_value / number-of-bricks-at-node`. This way, even
    the 'min-free-disk' logic at different layers is handled properly
    when the statfs() syscall is made.

    Fixes #241

    > Reviewed-on: https://review.gluster.org/17618
    > Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
    > Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    > (cherry picked from commit febf5ed4848ad705a34413353559482417c61467)

    Change-Id: I2e320e1fdcc819ab9173277ef3498201432c275f
    Signed-off-by: Amar Tumballi <amarts@redhat.com>
    Reviewed-on: https://review.gluster.org/17903
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
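The intended arithmetic, as an illustrative C sketch; the shared_brick_count variable and its plumbing through storage/posix are assumptions:

    /* Fragment; report each brick's share of the common backend
     * filesystem so consumers such as min-free-disk see consistent
     * totals. */
    struct statvfs buf;

    if (sys_statvfs(export_path, &buf) == 0 && shared_brick_count > 1) {
        buf.f_blocks /= shared_brick_count;
        buf.f_bfree  /= shared_brick_count;
        buf.f_bavail /= shared_brick_count;
    }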
* glusterd: fix brick start race (Atin Mukherjee, 2017-07-20; 1 file changed, -0/+19)

    Problem: Another race seen when glusterd was restarted:
    glusterd_brick_start () is called multiple times due to friend
    handshaking, and in one instance, when one of the bricks was
    attempted to be attached to the existing brick process,
    send_attach_req failed as the first brick itself was still not up. We
    then did a synclock_unlock () followed by a sleep of 1 sec, and
    before the same thread woke up, another thread tried to start the
    same brick process and assumed that it had to start a fresh brick
    process.

    Solution:
    1. If a brick is in the starting phase (brickinfo->status ==
       GF_BRICK_STARTING), there is no need to reattempt to start the
       brick.
    2. While initiating attach_req, set brickinfo->status to
       GF_BRICK_STARTING.

    Change-Id: Ib007b6199ec36fdab4214a1d37f99d7f65ef64da
    BUG: 1465559
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-on: https://review.gluster.org/17840
    Reviewed-by: Amar Tumballi <amarts@redhat.com>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
* glusterd: Set default value for cluster.max-bricks-per-process to 0 (Samikshan Bairagya, 2017-07-19; 3 files changed, -15/+31)

    When brick multiplexing is enabled and
    "cluster.max-bricks-per-process" isn't explicitly set, multiplexing
    happens without any limit, yet the default value for that tunable is
    1, which is confusing. This commit sets the default value to 0 and
    prevents the user from setting this value to 1 when brick
    multiplexing is enabled. The default value of 0 denotes that brick
    multiplexing can happen without any limit on the number of bricks per
    process.

    Change-Id: I4647f7bf5837d520075dc5c19a6e75bc1bba258b
    BUG: 1472417
    Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
    Reviewed-on: https://review.gluster.org/17819
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* libglusterfs: Name threads on creation (Raghavendra Talur, 2017-07-19; 1 file changed, -2/+2)

    Set names on threads at creation for easier debugging.

    Output of top -H -p <PID-OF-GLUSTERFSD>

    Before: all 20 threads of the brick process show up as "glusterfsd".

    After: the same threads show up with individual names: glusterfsd,
    glustertimer, glustermemsweep, glustersproc0, glustersproc1,
    glusterepoll0, glusteridxwrker, glusteriotwr0, glusterbrssign,
    glusterbrswrker, glusterclogecon, glusterclogd0, glusterclogd1,
    glusterclogd2, glusterposixjan, glusterposixfsy, glusterepoll1,
    glusterepoll2, glusterposixhc.

    Change-Id: Id5f333755c1ba168a2ffaa4fce6e71c375e10703
    BUG: 1254002
    Updates: #271
    Signed-off-by: Raghavendra Talur <rtalur@redhat.com>
    Reviewed-on: https://review.gluster.org/11926
    Reviewed-by: Prashanth Pai <ppai@redhat.com>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
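On Linux this boils down to pthread_setname_np(3), with names capped at 15 characters plus the terminating NUL. A self-contained sketch, not glusterfs' actual thread-create wrapper:

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <unistd.h>

    static void *
    worker(void *arg)
    {
        for (;;)
            pause();
        return NULL;
    }

    int
    main(void)
    {
        pthread_t t;

        pthread_create(&t, NULL, worker, NULL);
        /* the name shows up in top -H, /proc/<pid>/task/<tid>/comm
         * and gdb thread listings */
        pthread_setname_np(t, "glusterepoll0");
        pthread_join(t, NULL);
        return 0;
    }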
* glusterd: Add description field to global options for brick-mux (Samikshan Bairagya, 2017-07-17; 1 file changed, -2/+12)

    Currently the "cluster.brick-multiplex" and
    "cluster.max-bricks-per-process" options do not show anything in the
    description field when gluster volume set help is called. This commit
    adds the description fields for these 2 options.

    Change-Id: I3d162c61fa2774dd994f046e305d457f0fd43192
    BUG: 1471790
    Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
    Reviewed-on: https://review.gluster.org/17790
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Gaurav Yadav <gyadav@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* core: miscellaneous cleanup (Kaleb S. KEITHLEY, 2017-07-14; 4 files changed, -12/+11)

    Clean up things that I tripped over while doing other changes:

    1) Fix the mishmash of random spacing in struct decls in glusterfs.h.
       Not technically a problem, just ugly to look at.

    2) Replace open-coded string constants with existing #define
       constants. A disaster waiting to happen.

    3) Use sys_access() instead of sys_stat() or sys_lstat() to test the
       simple existence of a file. Why copy dozens of bytes from kernel
       to user space that aren't going to be used by anything? There are
       probably more instances like these. (See the sketch after this
       message.)

    Change-Id: I28089bef4cc93d5e4e4213045fb1a2649d110f82
    Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: https://review.gluster.org/17769
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Prashanth Pai <ppai@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
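Item 3 in code form, a hedged sketch; sys_access and sys_stat are glusterfs' thin syscall wrappers, and the path variable is illustrative:

    /* Before: fills a struct stat that nobody reads. */
    struct stat st;
    if (sys_stat(pidfile_path, &st) == 0) {
        /* file exists */
    }

    /* After: only asks the kernel "does it exist?". */
    if (sys_access(pidfile_path, F_OK) == 0) {
        /* file exists */
    }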
* storage/posix: New gfid2path infra (Kotresh HR, 2017-07-10; 1 file changed, -0/+5)

    With this infra, a new xattr is stored on each entry creation as
    below:

    trusted.gfid2path.<xxhash> = <pargfid>/<basename>

    If there are hardlinks, multiple xattrs would be present.

    Fops which are impacted: create, mknod, link, symlink, rename, unlink

    Option to enable:
    gluster vol set <VOLNAME> storage.gfid2path on

    Updates: #139
    Change-Id: I369974cd16703c45ee87f82e6c2ff5a987a6cc6a
    Signed-off-by: Kotresh HR <khiremat@redhat.com>
    Reviewed-on: https://review.gluster.org/17488
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Aravinda VK <avishwan@redhat.com>
    Reviewed-by: Amar Tumballi <amarts@redhat.com>
* glusterd: Introduce option to limit no. of muxed bricks per process (Samikshan Bairagya, 2017-07-10; 11 files changed, -58/+483)

    This commit introduces a new global option that can be set to limit
    the number of multiplexed bricks in one process. Usage:

    # gluster volume set all cluster.max-bricks-per-process <value>

    If this option is not set then multiplexing will happen for now with
    no limitations set; i.e. a brick process will have as many bricks
    multiplexed to it as possible. In other words, the current
    multiplexing behaviour won't change if this option isn't set to any
    value.

    This commit also introduces a brick process instance that contains
    information about brick processes, like the number of bricks handled
    by the process (which is 1 in non-multiplexing cases), the list of
    bricks, and the port number, which also serves as a unique identifier
    for each brick process instance. The brick process list is maintained
    in 'glusterd_conf_t'.

    Updates: #151
    Change-Id: Ib987d14ab0a4f6034dac01b73a4b2839f7b0b695
    Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
    Reviewed-on: https://review.gluster.org/17469
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
* core: assorted typos and spelling mistakes from Debian lintian (Kaleb S. KEITHLEY, 2017-07-03; 1 file changed, -4/+5)

    Plus minor readability improvements.

    Reported-by: pmatthaei@debian.org

    Change-Id: I5393819a2fc9f240a19811143bb57b127df717cf
    BUG: 1466785
    Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: https://review.gluster.org/17660
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* glusterd: mark brickinfo to started on successful attach (Atin Mukherjee, 2017-06-28; 1 file changed, -5/+4)

    brickinfo's port & status should be filled in only when the attach
    brick request is successful.

    Change-Id: I68b181be37cb94d176f0f4692e8d9dac5493181c
    BUG: 1465559
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-on: https://review.gluster.org/17640
    Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Samikshan Bairagya <samikshan@gmail.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
* glusterd: brick process fails to restart after gluster pod failure (Mohit Agrawal, 2017-06-27; 1 file changed, -10/+31)

    Problem: In a container environment, sometimes after deleting a
    gluster pod and creating a new one, the brick process doesn't come
    up.

    Solution: On the basis of the logs, it seems glusterd is trying to
    attach to a non-glusterfs process. Change the code of the function
    glusterd_get_sock_from_brick_pid to fetch the socketpath from the
    arguments of the running brick process.

    BUG: 1464072
    Change-Id: Ida6af00066341b683bbb4440d7a0d8042581656a
    Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
    Reviewed-on: https://review.gluster.org/17601
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
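A hedged sketch of fetching a running brick's socket path from its own argument list via /proc; simplified, with the -S flag convention assumed from how glusterd launches brick processes:

    /* Fragment; /proc/<pid>/cmdline is argv[] joined by NUL bytes. */
    char proc_path[64], buf[4096];
    ssize_t len = -1;
    int fd;

    snprintf(proc_path, sizeof(proc_path), "/proc/%d/cmdline", pid);
    fd = open(proc_path, O_RDONLY);
    if (fd >= 0)
        len = read(fd, buf, sizeof(buf) - 1);
    if (len > 0) {
        char *arg;
        buf[len] = '\0';
        for (arg = buf; arg < buf + len; arg += strlen(arg) + 1) {
            if (strcmp(arg, "-S") == 0) {
                arg += strlen(arg) + 1; /* next arg is the path */
                if (arg < buf + len)
                    snprintf(sockpath, sockpath_len, "%s", arg);
                break;
            }
        }
    }
    if (fd >= 0)
        close(fd);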
* index: Do not proceed with init if brick is not mounted (Ravishankar N, 2017-06-19; 3 files changed, -4/+30)

    ...or else, when a volume start force is given, we end up creating
    the /brick-path/.glusterfs/indices folder and various subdirs under
    it, and eventually starting the brick process. As a part of this
    patch, glusterd_get_index_basepath() is added in glusterd, which is
    then used to create the basepath during volume-create, add-brick,
    replace-brick and reset-brick. It is also used to set the
    'index-base' xlator option for the index translator.

    Change-Id: Id018cf3cb6f1e2e35b5c4cf438d1e939025cb0fc
    BUG: 1457202
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-on: https://review.gluster.org/17426
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* debug/io-stats: Provide option to select stats output format (Krutika Dhananjay, 2017-06-15; 1 file changed, -0/+5)

    ...as opposed to hardcoding it to "json" always.

    Change-Id: I5e79473a514373145ad764f24bb6219a6983a4c6
    BUG: 1458197
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: https://review.gluster.org/17451
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>