Commit messages
|
Change-Id: I75bca55901849cf725e02c782f75ff1e6054fddd
BUG: 1294448
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Reviewed-on: http://review.gluster.org/13097
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Tested-by: Dan Lambright <dlambrig@redhat.com>
|
When the user executes the bitrot scrub status command while the scrubber
has not yet performed any scrubbing, the value of the last_scrub time will
be NULL. The CLI currently dereferences this NULL pointer, which can lead
to a crash.
The fix is to use a proper check condition while printing the scrub status.
Change-Id: I3c4be8e25d089451c6ab77b16737c01d0348ee70
BUG: 1293558
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/13060
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
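As an illustration only (not the exact code from the patch; the dict key and
message text here are made up), the kind of guard that avoids the NULL
dereference looks like this:

    char *last_scrub = NULL;
    int   ret        = dict_get_str (dict, "last-scrub-time", &last_scrub);

    if (ret || !last_scrub)
            cli_out ("Last completed scrub time: scrubber pending to complete");
    else
            cli_out ("Last completed scrub time: %s", last_scrub);

dict_get_str() and cli_out() are the existing CLI helpers; the point is simply
to test the pointer before formatting it.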
|
The start command does not restart the tier daemon if the daemon
is already running on one node. Hence, to bring up tierd on the nodes
where the daemon is down, the force command is implemented.
It skips the check for whether tierd is running.
Change-Id: I0037d3e5ecfe56637d0da201a97903c435d26436
BUG: 1292112
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Reviewed-on: http://review.gluster.org/12983
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Tested-by: Dan Lambright <dlambrig@redhat.com>
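Illustrative usage, assuming the force variant simply extends the existing
tier start syntax (the volume name is a placeholder):

    # gluster volume tier <VOLNAME> start          # no-op when tierd already runs on a node
    # gluster volume tier <VOLNAME> start force    # brings up tierd on nodes where it is down

As described above, force only skips the "is tierd already running" check.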
|
Currently, when the hot tier type is distributed-replicate and the cold
tier type is disperse, the '# gluster volume info --xml' command does not
give correct output: in the hot tier case it displays the wrong volume type.
With this fix it shows correct xml output for a tier volume,
irrespective of the types of its tiers.
Change-Id: If1de8d52d1e0ef3d0523163abed37b2b571715e8
BUG: 1292084
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/12982
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kotresh HR <khiremat@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
|
For detach tier, the validation was done using the string "detach-tier",
but the new commands use the string "tier". Comparing against the string
"tier" creates a problem, because both tier status and tier detach contain
the keyword "tier". So tier detach and tier status were separated,
and strtok was used to prevent the condition from passing when the
volume name has "tier" as a substring (only the second word of the
string is taken and checked to see whether the feature is tier).
Problem: the new detach tier command doesn't print warnings like
"not a tier volume" or "detach tier not started"; instead it prints
empty output.
Fix: during validation, the volume is checked to see whether it is a tiered
volume; if so, it is checked whether detach tier has been started, otherwise
the appropriate warning is printed.
Change-Id: I94246d53b18ab0e9406beaf459eaddb7c5b766c2
BUG: 1288517
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Reviewed-on: http://review.gluster.org/12883
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
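A minimal sketch of the word-extraction idea (shown with the reentrant
strtok_r; the variable names are illustrative, not the actual patch):

    char *dup     = gf_strdup (cmd_str);   /* e.g. "volume tier demo-tier-vol status" */
    char *saveptr = NULL;
    char *second  = NULL;

    (void) strtok_r (dup, " ", &saveptr);      /* skip the first word */
    second = strtok_r (NULL, " ", &saveptr);   /* the feature keyword  */

    if (second && strcmp (second, "tier") == 0) {
            /* genuinely a tier command, not just a volume name containing "tier" */
    }
    GF_FREE (dup);

Comparing only the second word keeps a volume named, say, "mytier" from being
mistaken for a tier operation.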
|
When the user executes the bitrot scrub status command, gluster
does not give correct values for Number of Scrubbed files,
Number of Unsigned files, Last completed scrub time, and
Duration of last scrub.
With this patch, scrub status gives correct values for
all of the above fields.
Change-Id: Ic966f76d22db5b0c889e6386a1c2219afbda1f49
BUG: 1285989
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/12776
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
|
Problem:
As of now, quota 'list/list-objects' lists the usage only if a limit is
set on a directory; otherwise it fails with ENOATTR (if inode/inode-quota
has already been configured for the first time).
Feature:
With this patch the command is enhanced to list the usage even
if no quota limit is set, although the user still has to configure
inode/inode-quota for the first time.
Example:
Consider /client/dir and /client1 (absolute paths from the mount point):
a quota limit is set only on /client. When we try listing /client/dir or /client1,
it shows "Limit not set".
Fix:
The patch fixes this by showing "used space" in the case of the list command and
"file_count" & "dir_count" in the case of the list-objects command. This works
fine with xml output as well.
Change-Id: I68b08ec77a583b3c7f39fe4d6b15d3d77adb095a
BUG: 1284752
Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Reviewed-on: http://review.gluster.org/12741
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
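A hedged illustration of the intended behaviour (the path and size values are
made up, and the output columns are abbreviated):

    # gluster volume quota <VOLNAME> list /client/dir
                  Path    Hard-limit  Soft-limit      Used  Available  ...
    /client/dir                  N/A         N/A     1.0MB        N/A  ...

i.e. instead of "Limit not set", the used space is reported for a directory
without its own limit, and list-objects similarly reports file_count and
dir_count.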
|
When glusterd is bound to a specific IP, quota fails, since the server is
hardcoded to localhost. The IP can be assigned in the glusterd part of quota,
but the IP is not populated in the cli part. So quota now makes use of
glusterfsd's unix domain socket transport type.
Change-Id: Ib03332cc203795456ee6087017cea08eed3d7417
BUG: 1277105
Signed-off-by: Mohamed Ashiq <mliyazud@redhat.com>
Signed-off-by: Humble Devassy Chirammal <hchiramm@redhat.com>
Reviewed-on: http://review.gluster.org/12489
Reviewed-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
|
Change-Id: I8a8e27b4d6c35ea5e57bd0b556fd2c6ab7b496ab
BUG: 1285968
Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Reviewed-on: http://review.gluster.org/12771
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Saravanakumar Arumugam <sarumuga@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
|
If the volume was not a tiered volume, an empty status was printed
instead of an error message.
Change-Id: I13ccb16e1562966976a48d9365ced4c8a124de59
BUG: 1284357
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Reviewed-on: http://review.gluster.org/12713
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: Dan Lambright <dlambrig@redhat.com>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
|
Enhances the cli output for arbiter volumes as requested in the BZ.
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Change-Id: I28cc34d7d19def043d54291cede25a58dbcc5051
BUG: 1285288
Reviewed-on: http://review.gluster.org/12747
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
|
Currently the scrub status command does not display the list of all the bad
files. All the bad files are available in the bitd daemon.
With this patch the scrub status command displays the list of all the bad
files.
Change-Id: If09babafaf5d7cf158fa79119abbf5b986027748
BUG: 1207627
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/12720
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
|
The CLI command for bitrot scrub status will be:
gluster volume bitrot <VOLNAME> scrub status
The above command shows the statistics of the bitrot scrubber.
Upon execution it shows some common scrubber tunable values of
volume <VOLNAME>, followed by the scrubber statistics of the
individual nodes.
Sample output for a single node:
Volume name : <VOLNAME>
State of scrub: Active
Scrub frequency: biweekly
Bitrot error log location: /var/log/glusterfs/bitd.log
Scrubber error log location: /var/log/glusterfs/scrub.log
=========================================================
Node name:
Number of Scrubbed files:
Number of Unsigned files:
Last completed scrub time:
Duration of last scrub:
Error count:
=========================================================
This is just the infrastructure. The list of bad files, last scrub
time and error count values are taken care of by the
http://review.gluster.org/#/c/12503/ and
http://review.gluster.org/#/c/12654/ patches.
Change-Id: I3ed3c7057c9d0c894233f4079a7f185d90c202d1
BUG: 1207627
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/10231
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
|
Geo-replication uses the default ssh port 22 for setup,
i.e., to distribute ssh keys to slaves. In container
environments, a custom port number might be used.
Hence, to support a custom ssh port number, an option
is provided in the geo-rep create command to take it.
Change-Id: I0fb61959b1c085342b8e4c21ac4e076fba5462f1
BUG: 1276028
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/12504
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Aravinda VK <avishwan@redhat.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
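An illustrative invocation (hosts and port are placeholders, and the exact
placement of the option may differ from what is shown here):

    # gluster volume geo-replication <MASTERVOL> <SLAVEHOST>::<SLAVEVOL> \
          create ssh-port 50022 push-pem

With this, key distribution to the slaves uses port 50022 instead of the
default port 22.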
|
Change-Id: Id6d5263eb7b1c53e72a7668e716e9cc4e34b82cd
Reported-by: Milind Changire <mchangir@redhat.com>
BUG: 1198849
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/12553
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Milind Changire <mchangir@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
|
Change-Id: I15a1a637090f1cc2f200d5c3582317e4aa3cf334
BUG: 1278927
Signed-off-by: Mohamed Ashiq <mliyazud@redhat.com>
Reviewed-on: http://review.gluster.org/12532
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Tested-by: Dan Lambright <dlambrig@redhat.com>
|
Problem:
1) Glusterd doesn't remember the arbiter information of a replica volume in
the store. When glusterd goes down and comes back up, arbiter volumes
become plain replica volumes.
2) Glusterd doesn't import/export arbiter information to/from the other peers.
3) Volume info doesn't show any arbiter count in the output.
Fix:
1) Persist arbiter information in the glusterd store.
2) Import/export arbiter information of the volume.
3) Change the volume info output to show the arbiter count.
Change-Id: I2db81e73d2694b01f7d07b08a17b41ad5a55c361
BUG: 1276675
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/12475
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
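For context, an illustrative arbiter volume and the arbiter-aware info output
this change persists (hostnames and brick paths are placeholders):

    # gluster volume create <VOLNAME> replica 3 arbiter 1 h1:/b1 h2:/b2 h3:/arb
    # gluster volume info <VOLNAME> | grep 'Number of Bricks'
    Number of Bricks: 1 x (2 + 1) = 3

After this fix the "(2 + 1)" arbiter count survives a glusterd restart and is
propagated to peers.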
|
The message shown after attach tier referred to rebalance.
It is changed to refer to tiering.
Change-Id: I1834511f86483fa60f404d7defe5be59c025e9d6
BUG: 1277081
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Reviewed-on: http://review.gluster.org/12488
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Tested-by: Dan Lambright <dlambrig@redhat.com>
|
When quota is disabled and the clean-up process terminates
without completely cleaning up the quota xattrs,
the accounting can get messed up when quota is enabled again.
A version number is now suffixed to all quota xattrs, and this version
number is specific to the marker xlator, i.e., when quota xattrs are
requested by quotad/client, marker will remove the version suffix from the
key before sending the response.
Change-Id: I1ca2c11460645edba0f6b68db70d476d8d26e1eb
BUG: 1272411
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/12386
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
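A rough sketch of the idea; the exact xattr key name below is hypothetical,
only the versioning scheme is what matters:

    # getfattr -d -m 'trusted.glusterfs.quota' -e hex /brick/dir
    trusted.glusterfs.quota.size.2=0x...    # version-suffixed key written by marker

The marker xlator strips the ".2" suffix before answering quotad/clients, so
stale xattrs left behind by an interrupted disable carry an old version and no
longer pollute the accounting once quota is re-enabled.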
|
'gluster volume help' output is not sorted alphabetically.
This makes it a little harder for the user to search for, or learn the
usage of, gluster volume commands just from the gluster cli.
Change-Id: I855da2e4748a5c2ff3be319c50fa9548d676ee8a
BUG: 1242894
Signed-off-by: Mohamed Ashiq <mliyazud@redhat.com>
Reviewed-on: http://review.gluster.org/11663
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
|
Various xlators and other components invoke system calls
directly instead of using the libglusterfs/syscall.[ch] wrappers.
If the system call wrappers are not used, there should be a comment
in the source explaining why the wrapper isn't used.
Change-Id: I1f47820534c890a00b452fa61f7438eb2b3f667c
BUG: 1267967
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
Reviewed-on: http://review.gluster.org/12276
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
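A small, hedged example of the convention (the wrappers live in
libglusterfs/src/syscall.[ch]):

    /* direct system call: now needs a justifying comment */
    fd = open (path, O_RDONLY);

    /* preferred: go through the wrapper */
    fd = sys_open (path, O_RDONLY, 0);

The sys_*() wrappers forward to the underlying calls but give a single place
for portability fixes and instrumentation.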
|
The warning message about tiering being under experimental status is removed.
Change-Id: I7d1d535d380b672c70f03ecc0d24a113600ea43f
BUG: 1273726
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Reviewed-on: http://review.gluster.org/12407
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Tested-by: Dan Lambright <dlambrig@redhat.com>
|
Currently, when the 'gluster v quota <VOLNAME> list' command is issued
after an rm -rf on /run/gluster/vol/<directory>, the quota output header is
not shown. This is because the list_count was calculated properly only with
'gluster v quota <VOLNAME> remove /path' and not with an rm -rf. The patch
fixes this issue.
Change-Id: I5266a8b0b9322b7db1b9e1d6b0327065931f4bcb
BUG: 1269375
Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Reviewed-on: http://review.gluster.org/12345
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
|
Change-Id: Ibcbad94c091a9c24fe5aff2d7e8bcd9ac88da7bf
BUG: 1248521
Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Reviewed-on: http://review.gluster.org/12337
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
|
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
<opRet>0</opRet>
<opErrno>0</opErrno>
<opErrstr/>
<volStatus>
<volumes>
<volume>
<volName>tiervol</volName>
<nodeCount>11</nodeCount>
<hotBricks>
<node>
<hostname>10.70.42.203</hostname>
<path>/data/gluster/tier/b5_2</path>
<peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
<status>1</status>
<port>49164</port>
<ports>
<tcp>49164</tcp>
<rdma>N/A</rdma>
</ports>
<pid>8684</pid>
</node>
<node>
<hostname>10.70.42.203</hostname>
<path>/data/gluster/tier/b5_1</path>
<peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
<status>1</status>
<port>49163</port>
<ports>
<tcp>49163</tcp>
<rdma>N/A</rdma>
</ports>
<pid>8687</pid>
</node>
<node>
<hostname>10.70.42.203</hostname>
<path>/data/gluster/tier/b4_2</path>
<peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
<status>1</status>
<port>49162</port>
<ports>
<tcp>49162</tcp>
<rdma>N/A</rdma>
</ports>
<pid>8699</pid>
</node>
<node>
<hostname>10.70.42.203</hostname>
<path>/data/gluster/tier/b4_1</path>
<peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
<status>1</status>
<port>49161</port>
<ports>
<tcp>49161</tcp>
<rdma>N/A</rdma>
</ports>
<pid>8708</pid>
</node>
</hotBricks>
<coldBricks>
<node>
<hostname>10.70.42.203</hostname>
<path>/data/gluster/tier/b1_1</path>
<peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
<status>1</status>
<port>49155</port>
<ports>
<tcp>49155</tcp>
<rdma>N/A</rdma>
</ports>
<pid>8716</pid>
</node>
<node>
<hostname>10.70.42.203</hostname>
<path>/data/gluster/tier/b1_2</path>
<peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
<status>1</status>
<port>49156</port>
<ports>
<tcp>49156</tcp>
<rdma>N/A</rdma>
</ports>
<pid>8724</pid>
</node>
<node>
<hostname>NFS Server</hostname>
<path>localhost</path>
<peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
<status>1</status>
<port>2049</port>
<ports>
<tcp>2049</tcp>
<rdma>N/A</rdma>
</ports>
<pid>8678</pid>
</node>
</coldBricks>
<tasks>
<task>
<type>Tier migration</type>
<id>975bfcfa-077c-4edb-beba-409c2013f637</id>
<status>1</status>
<statusStr>in progress</statusStr>
</task>
</tasks>
</volume>
</volumes>
</volStatus>
</cliOutput>
Change-Id: I69252a36b6e6b2f3cbe5db06e9a716f504a1dba4
BUG: 1268810
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Reviewed-on: http://review.gluster.org/12302
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Tested-by: Dan Lambright <dlambrig@redhat.com>
|
The number-of-bricks count remains one for the cold tier.
Actual result:
<numberOfBricks>1 x 2 = 2</numberOfBricks>
Expected result:
<numberOfBricks>3 x 2 = 6</numberOfBricks>
Change-Id: I31480a7808b248ef9ea805cb64f7663d44647ddf
BUG: 1268822
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Reviewed-on: http://review.gluster.org/12303
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
|
command
Change-Id: Idf7664d509156ce46ef4308ffc07fb556a0aedd2
BUG: 1268755
Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Reviewed-on: http://review.gluster.org/12297
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
|
Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Change-Id: I919d8935c849f9be6b2cb43e8332afb821778d89
BUG: 1267539
Reviewed-on: http://review.gluster.org/12258
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: Dan Lambright <dlambrig@redhat.com>
|
Currently, the 'gluster v quota <VOLNAME> list' command rounds off the
available space and shows it to the user. Now, the 'gluster v quota
<VOLNAME> list --xml' command is modified to show the exact available
space in bytes.
Change-Id: I3772e036a2537c1df12f22cf32dfe4ac7940988f
BUG: 1261404
Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Reviewed-on: http://review.gluster.org/12137
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
|
gluster v info did not differentiate the hot bricks from the cold bricks,
along with a few other values:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
<opRet>0</opRet>
<opErrno>0</opErrno>
<opErrstr/>
<volInfo>
<volumes>
<volume>
<name>rmbr</name>
<id>72d223fc-96ba-4f4a-ac6e-0d0bc16ef127</id>
<status>1</status>
<statusStr>Started</statusStr>
<brickCount>3</brickCount>
<distCount>1</distCount>
<stripeCount>1</stripeCount>
<replicaCount>1</replicaCount>
<disperseCount>0</disperseCount>
<redundancyCount>0</redundancyCount>
<type>5</type>
<typeStr>Tier</typeStr>
<transport>0</transport>
<xlators/>
<bricks>
<hotBricks>
<hotBrickType>Distribute</hotBrickType>
<numberOfBricks>1</numberOfBricks>
<brick uuid="81">v1:/hb1<name>v1:/hb1</name><hostUuid>81</hostUuid></brick>
</hotBricks>
<coldBricks>
<coldBrickType>Distribute</coldBrickType>
<numberOfBricks>2</numberOfBricks>
<brick uuid="81">v1:/br1<name>v1:/br1</name><hostUuid>81</hostUuid></brick>
<brick uuid="81">v1:/br2<name>v1:/br2</name><hostUuid>81</hostUuid></brick>
<count>0</count>
</coldBricks>
</bricks>
</volume>
</volumes>
</volInfo>
</cliOutput>
Change-Id: I6e52541bb6d8a6a17e17bfcb42434beaac13db56
BUG: 1261837
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Reviewed-on: http://review.gluster.org/12158
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Tested-by: Dan Lambright <dlambrig@redhat.com>
|
The current detach-tier cli command supports 'commit force'.
This is deprecated in favour of 'force'.
So the new syntax would be:
volume detach-tier <VOLNAME> <start|stop|status|commit|force>
Change-Id: Ie86dfd72341078c0a1be94767f523730911312ef
BUG: 1261862
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/12151
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Tested-by: Dan Lambright <dlambrig@redhat.com>
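In other words (the volume name is a placeholder):

    # gluster volume detach-tier <VOLNAME> commit force    # old form, now deprecated
    # gluster volume detach-tier <VOLNAME> force           # new form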
|
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
<opRet>0</opRet>
<opErrno>0</opErrno>
<opErrstr/>
<volStatus>
<volumes>
<volume>
<volName>v1</volName>
<nodeCount>5</nodeCount>
<hotBrick>
<node>
<hostname>10.70.42.203</hostname>
<path>/data/gluster/tier/hbr1</path>
<peerid>137e2a4f-2bde-4a97-b3f3-470a2e092155</peerid>
<status>1</status>
<port>49154</port>
<ports>
<tcp>49154</tcp>
<rdma>N/A</rdma>
</ports>
<pid>6535</pid>
</node>
</hotBrick>
<coldBrick>
<node>
<hostname>10.70.42.203</hostname>
<path>/data/gluster/tier/cb1</path>
<peerid>137e2a4f-2bde-4a97-b3f3-470a2e092155</peerid>
<status>1</status>
<port>49152</port>
<ports>
<tcp>49152</tcp>
<rdma>N/A</rdma>
</ports>
<pid>6530</pid>
</node>
</coldBrick>
<coldBrick>
<node>
<hostname>NFS Server</hostname>
<path>10.70.42.203</path>
<peerid>137e2a4f-2bde-4a97-b3f3-470a2e092155</peerid>
<status>1</status>
<port>2049</port>
<ports>
<tcp>2049</tcp>
<rdma>N/A</rdma>
</ports>
<pid>6519</pid>
</node>
</coldBrick>
<tasks>
<task>
<type>Rebalance</type>
<id>8da729f2-f1b2-4f55-9945-472130be93f7</id>
<status>4</status>
<statusStr>failed</statusStr>
</task>
</tasks>
</volume>
<tasks/>
</volume>
</volumes>
</volStatus>
</cliOutput>
Change-Id: Idfdbce47d03ee2cdbf407c57159fd37a2900ad2c
BUG: 1263100
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Reviewed-on: http://review.gluster.org/12176
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Tested-by: Dan Lambright <dlambrig@redhat.com>
|
Currently, 'gluster v tier/attach-tier/detach-tier help' command
shows the usage, and then prints 'Tier command failed'. With this
patch the error message is removed.
Change-Id: I1679fe3303d73ba6b6fdbb7ee18028062d446f39
BUG: 1263224
Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Reviewed-on: http://review.gluster.org/12181
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
|
Currently the tier feature piggybacks on the rebalance command
syntax to obtain status, and this is clumsy. Introduce a new
tier command that can do tier-specific operations, starting
with volume status to display counters.
Old commands:
gluster volume attach-tier <vol> [replica count] {bricklist..}
gluster volume detach-tier <vol> {start|stop|commit}
New commands:
gluster volume tier <vol> attach [replica count] {bricklist} |
detach {start|stop|commit} |
status
Change-Id: Ic07b3c6260588162de7d34380f8cbd3d8a7f35d3
BUG: 1255693
Signed-off-by: Dan Lambright <dlambrig@redhat.com>
Reviewed-on: http://review.gluster.org/11984
Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
|
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
<opRet>0</opRet>
<opErrno>0</opErrno>
<opErrstr/>
<volRebalance>
<task-id>34f47e29-2193-4a86-9b1e-c7e56bdae3d4</task-id>
<op>7</op>
<nodeCount>1</nodeCount>
<node>
<nodeName>localhost</nodeName>
<promotedfiles>0</promotedfiles>
<demotedfiles>0</demotedfiles>
<statusStr>in progress</statusStr>
</node>
</volRebalance>
</cliOutput>
Change-Id: I61083f7b9b0b3bd840982b8c5d6ea4b42e27c9b3
BUG: 1252737
Signed-off-by: Hari Gowtham <hgowtham@redhat.com>
Reviewed-on: http://review.gluster.org/11890
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
|
There are three kinds of inline functions: plain inline, extern inline,
and static inline. All three have been removed from .c files, except
those in "contrib" which aren't our problem. Inlines in .h files, which
are overwhelmingly "static inline" already, have generally been left
alone. Over time we should be able to "lower" these into .c files, but
that has to be done in a case-by-case fashion requiring more manual
effort. This part was easy to do automatically without (as far as I can
tell) any ill effect.
In the process, several pieces of dead code were flagged by the
compiler, and were removed.
Change-Id: I56a5e614735c9e0a6ee420dab949eac22e25c155
BUG: 1245331
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: http://review.gluster.org/11769
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
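The mechanical change, illustrated on a made-up function:

    /* before, in some .c file */
    static inline int
    foo_count_children (xlator_t *this);

    /* after: the qualifier is dropped; the compiler may still inline it */
    static int
    foo_count_children (xlator_t *this);

Inlines in headers were left as "static inline", as noted above.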
|
Problem: snapshot delete all command fails with --xml option
Fix: Provided xml support for delete all command
Change-Id: I77cad131473a9160e188c783f442b6a38a37f758
BUG: 1257533
Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-on: http://review.gluster.org/12027
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
|
There is a problem in the current CLI framework:
the CLI holds the lock while processing a command.
When processing the quota list command, the below sequence of steps executes
in the same thread and causes a deadlock:
1) CLI holds the lock
2) It sends an rpc_clnt_submit request to quotad for quota usage
3) If quotad is down, rpc_clnt_submit invokes the cbk function with an error
4) The cbk function cli_quotad_getlimit_cbk tries to take the lock to broadcast
the results and hangs, because the same thread is already holding the lock
This patch fixes the problem by creating a separate thread for
broadcasting the result.
Change-Id: I53be006eadf6aaf348083d9168535530d70a8ab3
BUG: 1242819
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/11990
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
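The shape of the fix, sketched in C (the thread function name is illustrative,
not the actual patch; cli_cmd_broadcast_response() is the CLI's existing
broadcast helper):

    static void *
    cli_broadcast_ret (void *data)
    {
            int ret = (int) (intptr_t) data;

            /* runs outside the RPC callback's thread, so taking the CLI
             * lock to broadcast cannot deadlock against the caller */
            cli_cmd_broadcast_response (ret);
            return NULL;
    }

    /* in cli_quotad_getlimit_cbk, instead of broadcasting inline: */
    pthread_t tid;
    pthread_create (&tid, NULL, cli_broadcast_ret, (void *) (intptr_t) ret);

Because the broadcast now happens on its own thread, the callback can return
even when quotad is down and the command lock is already held.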
|
Display description field with (null) if
no description is present for the snapshot, instead
of removing the field altogether.
Change-Id: I965b08cd6e54eea56c32e2712fab7daa8a663f11
BUG: 1250387
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/11834
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
|
Display the size equivalent to the soft limit percentage
in the gluster v quota <volname> list <path> and
gluster v quota <volname> list-objects <path> commands.
Change-Id: I31ee82e9e836068348cf9458dcaf13f043d9fd87
BUG: 1248521
Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Reviewed-on: http://review.gluster.org/11808
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
|
Display <opErrstr/> in case of no operrstr for
all xml output of gluster commands.
Change-Id: Ie16f749f90b4642357c562012408c434cd38661f
BUG: 1245895
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/11835
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
|
CID: 1124702
Change-Id: I6366834224a8176824070150b7f2af76b4d65b7f
BUG: 789278
Signed-off-by: Hari Gowtham <hgowtham@redhat.com>
Reviewed-on: http://review.gluster.org/11665
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Humble Devassy Chirammal <humble.devassy@gmail.com>
Reviewed-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
|
The @owner argument tells RPC layer the xlator that owns
the connection and to which xlator THIS needs be set during
network notifications like CONNECT and DISCONNECT.
Code paths that originate from the head of a (volume) graph and use
STACK_WIND ensure that the RPC local endpoint has the right xlator saved
in the frame of the call (callback pair). This guarantees that the
callback is executed in the right xlator context.
The client handshake process, which includes fetching brick ports from
glusterd and setting the lk-version on the brick for the session, doesn't have
the correct xlator set in its frames. The problem lies with RPC
notifications. It doesn't have the provision to set THIS with the xlator
that is registered with the corresponding RPC programs. e.g,
RPC_CLNT_CONNECT event received by protocol/client doesn't have THIS set
to its xlator. This implies, call(-callbacks) originating from this
thread don't have the right xlator set too.
The fix would be to save the xlator registered with the RPC connection
during rpc_clnt_new. e.g, protocol/client's xlator would be saved with
the RPC connection that it 'owns'. RPC notifications such as CONNECT,
DISCONNECT, etc inherit THIS from the RPC connection's xlator.
Change-Id: I9dea2c35378c511d800ef58f7fa2ea5552f2c409
BUG: 1235582
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/11436
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
|
Currently, if an absolute path is not entered in
"gluster volume quota <vol-name> list <path>",
it just shows the header (Path Hard-limit Soft-limit...)
instead of showing an error message.
With this patch, it shows an error asking the user to enter the absolute path.
Change-Id: I2c3d34bfdc7b924d00b11f8649b73a5069cbc2dc
BUG: 1245558
Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Reviewed-on: http://review.gluster.org/11738
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
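A hedged example of the behaviour change (the volume, path and error wording
are illustrative):

    # gluster volume quota <VOLNAME> list dir1
    Please enter the absolute path

    # gluster volume quota <VOLNAME> list /dir1
    Path   Hard-limit   Soft-limit   Used   Available  ...

The point is that a relative path now yields an error instead of a bare header.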
|
Change-Id: I74417471d7d2a86f198037d88dbf7d072c4349c3
BUG: 1218960
Signed-off-by: Sakshi <sabansal@redhat.com>
Reviewed-on: http://review.gluster.org/10475
Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: N Balachandran <nbalacha@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
|
When the gluster peer probe command is given an invalid IP, its usage
message does not report that IPs can also be valid input.
Change-Id: I8f58341a2b76369ccf62f88ca0ecd8a9a9529af6
BUG: 1242742
Signed-off-by: Mohamed Ashiq Liyazudeen <mliyazud@redhat.com>
Reviewed-on: http://review.gluster.org/11657
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Humble Devassy Chirammal <humble.devassy@gmail.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
|
Change-Id: I14c049c84c468b6415a1de45441b2fed94e8ed4b
BUG: 1240654
Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Reviewed-on: http://review.gluster.org/11566
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
|
Since logbuf_pool was not created via glusterfs_ctx_defaults_init(),
the following error was present in the cli logs repeatedly for each
and every execution of a gluster command.
E [mem-pool.c:417:mem_get0] (-->/usr/local/lib/libglusterfs.so.0(+0x7e262)
[0x7fdbc0b1f262] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x804)
[0x7fdbc0ac7844] -->/usr/local/lib/libglusterfs.so.0(mem_get0+0x78)
[0x7fdbc0af5b48] ) 0-mem-pool: invalid argument [Invalid argument]
This change creates ctx->logbuf_pool via glusterfs_ctx_defaults_init()
in cli.c so that the above error is no longer logged in cli logs.
Change-Id: I3fcd9cfefa06ddd52e1989b039ff5637372c3235
BUG: 1243753
Signed-off-by: Anoop C S <anoopcs@redhat.com>
Reviewed-on: http://review.gluster.org/11691
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
|
During the quota-update process, if inode info is present in the size xattr
but missing in the contri xattrs, then in the function '_mq_get_metadata' we
set the contri-size to zero (on error -2, which means the usage info is
present but the inode info is missing).
With this we calculate a wrong delta and update it.
With this patch we ignore errors if the inode info in the xattrs is missing.
Change-Id: I7940a0e299b8bb425b5b43746b1f13f775c7fb92
BUG: 1241153
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/11583
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
|
Resource creation for the added node referenced a variable
new_node that was never passed. This led to a wrong schema
type in the cib file, and hence the added node always ended
up in a failed state. Also, resources were wrongly
created twice, which led to more errors. I have fixed the variable
name and deleted the repetitive invocation of the recreate-resource
function.
The new node has to be added to the existing ganesha-ha config
file for correct behaviour during subsequent add-node operations.
This edited file has to be copied to all the other cluster nodes.
I have added a fix for this as well.
Change-Id: Ie55138e2657d22298d89db1c08f2e17930686bd6
BUG: 1233246
Signed-off-by: Meghana M <mmadhusu@redhat.com>
Reviewed-on: http://review.gluster.org/11316
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: soumya k <skoduri@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>