| Commit message | Author | Age | Files | Lines |
The following are added:
1. "<arbiterCount>1</arbiterCount>" and
"<coldarbiterCount>1</coldarbiterCount>"
2. "<isArbiter>0</isArbiter>" on the brick info, like so:
<brick
uuid="cafa8612-d7d4-4007-beea-72ae7477f3bb">127.0.0.2:/home/ravi/bricks/brick1
<name>127.0.0.2:/home/ravi/bricks/brick1</name>
<hostUuid>cafa8612-d7d4-4007-beea-72ae7477f3bb</hostUuid>
<isArbiter>0</isArbiter>
</brick>
Also fix a bug in gluster vol info where, after performing a tier-attach,
the wrong brick of the cold tier was shown as the arbiter brick.
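A hedged sketch of where the new count elements sit in the volume info
XML (placement alongside replicaCount is an assumption based on other
samples in this log):
  <volume>
    ...
    <replicaCount>3</replicaCount>
    <arbiterCount>1</arbiterCount>
    <coldarbiterCount>1</coldarbiterCount>
    ...
  </volume>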
Change-Id: Id978325d02b04f1a08856427827320e169169810
BUG: 1297750
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/13229
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Currently, when the hot tier type is distributed-replicate and the cold
tier type is disperse, the gluster volume info --xml command does not
give correct output: it displays the wrong volume type for the hot tier.
With this fix it shows correct XML output for a tier volume
irrespective of the types of its constituent volumes.
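For illustration, reusing the <hotBricks> layout shown later in this
log (the type string and counts here are assumed values): after the fix
a distributed-replicate hot tier reports its own type, e.g.
  <hotBricks>
    <hotBrickType>Distributed-Replicate</hotBrickType>
    <numberOfBricks>2 x 2 = 4</numberOfBricks>
    ...
  </hotBricks>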
Change-Id: If1de8d52d1e0ef3d0523163abed37b2b571715e8
BUG: 1292084
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/12982
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kotresh HR <khiremat@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Problem:
As of now quota 'list/list-objects' lists the usage only if a limit is
set on every directory; otherwise it fails with ENOATTR (if
inode/inode-quota has already been configured for the first time).
Feature:
With this patch the command is enhanced to list the usage even if no
quota limit is set, though the user still has to configure
inode/inode-quota for the first time.
Example:
Consider we have /client/dir and /client1 (absolute paths from the
mount point), and a quota limit is set only on /client. When we try
listing /client/dir or /client1, it shows "Limit not set".
Fix:
The patch fixes this by showing "used space" for the list command and
"file_count" & "dir_count" for the list-objects command. This works
fine with XML output as well.
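A sketch of the intended behaviour (volume name and the usage figure
are hypothetical; the N/A rendering of the unset limits is an
assumption):
  # gluster volume quota myvol list /client/dir
  Path          Hard-limit  Soft-limit  Used    Available
  /client/dir   N/A         N/A         1.0KB   N/A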
Change-Id: I68b08ec77a583b3c7f39fe4d6b15d3d77adb095a
BUG: 1284752
Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Reviewed-on: http://review.gluster.org/12741
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
<opRet>0</opRet>
<opErrno>0</opErrno>
<opErrstr/>
<volStatus>
<volumes>
<volume>
<volName>tiervol</volName>
<nodeCount>11</nodeCount>
<hotBricks>
<node>
<hostname>10.70.42.203</hostname>
<path>/data/gluster/tier/b5_2</path>
<peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
<status>1</status>
<port>49164</port>
<ports>
<tcp>49164</tcp>
<rdma>N/A</rdma>
</ports>
<pid>8684</pid>
</node>
<node>
<hostname>10.70.42.203</hostname>
<path>/data/gluster/tier/b5_1</path>
<peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
<status>1</status>
<port>49163</port>
<ports>
<tcp>49163</tcp>
<rdma>N/A</rdma>
</ports>
<pid>8687</pid>
</node>
<node>
<hostname>10.70.42.203</hostname>
<path>/data/gluster/tier/b4_2</path>
<peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
<status>1</status>
<port>49162</port>
<ports>
<tcp>49162</tcp>
<rdma>N/A</rdma>
</ports>
<pid>8699</pid>
</node>
<node>
<hostname>10.70.42.203</hostname>
<path>/data/gluster/tier/b4_1</path>
<peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
<status>1</status>
<port>49161</port>
<ports>
<tcp>49161</tcp>
<rdma>N/A</rdma>
</ports>
<pid>8708</pid>
</node>
</hotBricks>
<coldBricks>
<node>
<hostname>10.70.42.203</hostname>
<path>/data/gluster/tier/b1_1</path>
<peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
<status>1</status>
<port>49155</port>
<ports>
<tcp>49155</tcp>
<rdma>N/A</rdma>
</ports>
<pid>8716</pid>
</node>
<node>
<hostname>10.70.42.203</hostname>
<path>/data/gluster/tier/b1_2</path>
<peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
<status>1</status>
<port>49156</port>
<ports>
<tcp>49156</tcp>
<rdma>N/A</rdma>
</ports>
<pid>8724</pid>
</node>
<node>
<hostname>NFS Server</hostname>
<path>localhost</path>
<peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
<status>1</status>
<port>2049</port>
<ports>
<tcp>2049</tcp>
<rdma>N/A</rdma>
</ports>
<pid>8678</pid>
</node>
</coldBricks>
<tasks>
<task>
<type>Tier migration</type>
<id>975bfcfa-077c-4edb-beba-409c2013f637</id>
<status>1</status>
<statusStr>in progress</statusStr>
</task>
</tasks>
</volume>
</volumes>
</volStatus>
</cliOutput>
Change-Id: I69252a36b6e6b2f3cbe5db06e9a716f504a1dba4
BUG: 1268810
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Reviewed-on: http://review.gluster.org/12302
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Tested-by: Dan Lambright <dlambrig@redhat.com>
The number-of-bricks count remains one for the cold tier type.
Actual result:
<numberOfBricks>1 x 2 = 2</numberOfBricks>
Expected result:
<numberOfBricks>3 x 2 = 6</numberOfBricks>
Change-Id: I31480a7808b248ef9ea805cb64f7663d44647ddf
BUG: 1268822
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Reviewed-on: http://review.gluster.org/12303
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Currrently, 'gluster v quota <VOLNAME> list' command rounds off the
available space and shows it to the user. Now, 'gluster v quota
<VOLNAME> list --xml' command is modified to show the exact available
space in bytes.
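For illustration (element name as in the quota XML samples elsewhere in
this log; the exact rendering is an assumption), a rounded 10.0MB would
now be reported as raw bytes:
  Before: <avail_space>10.0MB</avail_space>
  After:  <avail_space>10485760</avail_space>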
Change-Id: I3772e036a2537c1df12f22cf32dfe4ac7940988f
BUG: 1261404
Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Reviewed-on: http://review.gluster.org/12137
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
gluster v info did not differentiate the hot bricks from the cold
bricks, along with a few other values.
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
<opRet>0</opRet>
<opErrno>0</opErrno>
<opErrstr/>
<volInfo>
<volumes>
<volume>
<name>rmbr</name>
<id>72d223fc-96ba-4f4a-ac6e-0d0bc16ef127</id>
<status>1</status>
<statusStr>Started</statusStr>
<brickCount>3</brickCount>
<distCount>1</distCount>
<stripeCount>1</stripeCount>
<replicaCount>1</replicaCount>
<disperseCount>0</disperseCount>
<redundancyCount>0</redundancyCount>
<type>5</type>
<typeStr>Tier</typeStr>
<transport>0</transport>
<xlators/>
<bricks>
<hotBricks>
<hotBrickType>Distribute</hotBrickType>
<numberOfBricks>1</numberOfBricks>
<brick uuid="81">v1:/hb1<name>v1:/hb1</name><hostUuid>81</hostUuid></brick>
</hotBricks>
<coldBricks>
<coldBrickType>Distribute</coldBrickType>
<numberOfBricks>2</numberOfBricks>
<brick uuid="81">v1:/br1<name>v1:/br1</name><hostUuid>81</hostUuid></brick>
<brick uuid="81">v1:/br2<name>v1:/br2</name><hostUuid>81</hostUuid></brick>
<count>0</count>
</coldBricks>
</bricks>
</volume>
</volumes>
</volInfo>
</cliOutput>
Change-Id: I6e52541bb6d8a6a17e17bfcb42434beaac13db56
BUG: 1261837
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Reviewed-on: http://review.gluster.org/12158
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Tested-by: Dan Lambright <dlambrig@redhat.com>
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
<opRet>0</opRet>
<opErrno>0</opErrno>
<opErrstr/>
<volStatus>
<volumes>
<volume>
<volName>v1</volName>
<nodeCount>5</nodeCount>
<hotBrick>
<node>
<hostname>10.70.42.203</hostname>
<path>/data/gluster/tier/hbr1</path>
<peerid>137e2a4f-2bde-4a97-b3f3-470a2e092155</peerid>
<status>1</status>
<port>49154</port>
<ports>
<tcp>49154</tcp>
<rdma>N/A</rdma>
</ports>
<pid>6535</pid>
</node>
</hotBrick>
<coldBrick>
<node>
<hostname>10.70.42.203</hostname>
<path>/data/gluster/tier/cb1</path>
<peerid>137e2a4f-2bde-4a97-b3f3-470a2e092155</peerid>
<status>1</status>
<port>49152</port>
<ports>
<tcp>49152</tcp>
<rdma>N/A</rdma>
</ports>
<pid>6530</pid>
</node>
</coldBrick>
<coldBrick>
<node>
<hostname>NFS Server</hostname>
<path>10.70.42.203</path>
<peerid>137e2a4f-2bde-4a97-b3f3-470a2e092155</peerid>
<status>1</status>
<port>2049</port>
<ports>
<tcp>2049</tcp>
<rdma>N/A</rdma>
</ports>
<pid>6519</pid>
</node>
</coldBrick>
<tasks>
<task>
<type>Rebalance</type>
<id>8da729f2-f1b2-4f55-9945-472130be93f7</id>
<status>4</status>
<statusStr>failed</statusStr>
</task>
</tasks>
</volume>
</volumes>
</volStatus>
</cliOutput>
Change-Id: Idfdbce47d03ee2cdbf407c57159fd37a2900ad2c
BUG: 1263100
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Reviewed-on: http://review.gluster.org/12176
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Tested-by: Dan Lambright <dlambrig@redhat.com>
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
<opRet>0</opRet>
<opErrno>0</opErrno>
<opErrstr/>
<volRebalance>
<task-id>34f47e29-2193-4a86-9b1e-c7e56bdae3d4</task-id>
<op>7</op>
<nodeCount>1</nodeCount>
<node>
<nodeName>localhost</nodeName>
<promotedfiles>0</promotedfiles>
<demotedfiles>0</demotedfiles>
<statusStr>in progress</statusStr>
</node>
</volRebalance>
</cliOutput>
Change-Id: I61083f7b9b0b3bd840982b8c5d6ea4b42e27c9b3
BUG: 1252737
Signed-off-by: Hari Gowtham <hgowtham@redhat.com>
Reviewed-on: http://review.gluster.org/11890
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Problem: the 'snapshot delete all' command fails with the --xml option.
Fix: Provided XML support for the 'delete all' command.
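For illustration, the now-working invocation (the output sketch assumes
the standard cliOutput envelope seen throughout this log):
  # gluster snapshot delete all --xml
  <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
  <cliOutput>
    <opRet>0</opRet>
    ...
  </cliOutput>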
Change-Id: I77cad131473a9160e188c783f442b6a38a37f758
BUG: 1257533
Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-on: http://review.gluster.org/12027
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Display description field with (null) if
no description is present for the snapshot, instead
of removing the field altogether.
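Presumably rendered along these lines in the snapshot info XML (a
sketch, not verified output):
  <description>(null)</description>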
Change-Id: I965b08cd6e54eea56c32e2712fab7daa8a663f11
BUG: 1250387
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/11834
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Display <opErrstr/> when there is no operrstr, for all XML output of
gluster commands.
Change-Id: Ie16f749f90b4642357c562012408c434cd38661f
BUG: 1245895
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/11835
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Fix for handling cli output for attach-tier and
detach-tier
Change-Id: I4d17f4b09612754fe1b8cec6c2e14927029b9678
BUG: 1211562
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/10284
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
The replace-brick operation with data migration support has been
deprecated in gluster.
With this fix the replace-brick command supports only one form:
gluster volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK> {commit force}
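For illustration (volume name, hosts and brick paths hypothetical):
  # gluster volume replace-brick myvol host1:/bricks/b1 host2:/bricks/b1 commit force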
Change-Id: Ib81d49e5d8e7eaa4ccb5830cfec2bc081191b43b
BUG: 1094119
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/10101
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Discussion in gluster-devel
http://www.gluster.org/pipermail/gluster-devel/2015-April/044301.html
MASTER NODE - Master Volume Node
MASTER VOL - Master Volume name
MASTER BRICK - Master Volume Brick
SLAVE USER - Slave User to which the Geo-rep session is established
SLAVE - <SLAVE_NODE>::<SLAVE_VOL> used in the Geo-rep create command
SLAVE NODE - Slave Node to which the Master worker is connected
STATUS - Worker Status (Created, Initializing, Active, Passive, Faulty,
Paused, Stopped)
CRAWL STATUS - Crawl type (Hybrid Crawl, History Crawl, Changelog Crawl)
LAST_SYNCED - Last Synced Time (local time in CLI output, UTC in XML output)
ENTRY - Number of Entry operations pending (resets on worker restart)
DATA - Number of Data operations pending (resets on worker restart)
META - Number of Meta operations pending (resets on worker restart)
FAILURES - Number of Failures
CHECKPOINT TIME - Checkpoint set time (local time in CLI output, UTC
in XML output)
CHECKPOINT COMPLETED - Yes/No or N/A
CHECKPOINT COMPLETION TIME - Checkpoint completed time (local time in
CLI output, UTC in XML output)
XML output:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
cliOutput>
geoRep>
volume>
name>
sessions>
session>
session_slave>
pair>
master_node>
master_brick>
slave_user>
slave/>
slave_node>
status>
crawl_status>
entry>
data>
meta>
failures>
checkpoint_completed>
master_node_uuid>
last_synced>
checkpoint_time>
checkpoint_completion_time>
BUG: 1212410
Change-Id: I944a6c3c67f1e6d6baf9670b474233bec8f61ea3
Signed-off-by: Aravinda VK <avishwan@redhat.com>
Reviewed-on: http://review.gluster.org/10121
Tested-by: NetBSD Build System
Reviewed-by: Kotresh HR <khiremat@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
XML parsing of voltype should be in line with the CLI.
Change-Id: I41ddddac00d07f03b56a041e1c3f5a132fbd7393
BUG: 1212398
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/10271
Tested-by: NetBSD Build System
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
==========================================================================
Inode quota
==========================================================================
= Currently, the only way to retrieve the number of files/objects in a =
= directory or volume is to do a crawl of the entire directory/volume. =
= This is expensive and is not scalable. =
= =
= The proposed mechanism will provide an easier alternative to determine =
= the count of files/objects in a directory or volume. =
= =
= The new mechanism proposes to store count of objects/files as part of =
= an extended attribute of a directory. Each directory's extended =
= attribute value will indicate the number of files/objects present =
= in a tree with the directory being considered as the root of the tree. =
= =
= The count value can be accessed by performing a getxattr(). =
= Cluster translators like afr, dht and stripe will perform aggregation =
= of count values from various bricks when getxattr() happens on the key =
= associated with file/object count. =
A new interface is introduced:
------------------------------
limit-objects : limit the number of inodes at directory level
list-objects : list the directories where the limit is set
remove-objects : remove the limit from the directory
==========================================================================
CLI COMMAND:
gluster volume quota <volname> limit-objects <path> <number> [<percent>]
* <number> is the hard limit on the number of objects for path "<path>".
  If the hard limit is exceeded, creation of files/directories is no
  longer permitted.
* <percent> is the soft limit on object creation for path "<path>".
  If the soft limit is exceeded, a warning is issued on each creation.
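For illustration, matching the sample output below (whether the percent
sign is part of the argument is an assumption):
  # gluster volume quota vol1 limit-objects /dir 10 80%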
CLI COMMAND:
gluster volume quota <volname> remove-objects [path]
==========================================================================
CLI COMMAND:
gluster volume quota <volname> list-objects [path] ...
Sample output:
------------------
Path   Hard-limit   Soft-limit   Used   Available   Soft-limit exceeded?   Hard-limit exceeded?
-----------------------------------------------------------------------------------------------
/dir   10           80%          10     0           Yes                    Yes
==========================================================================
[root@snapshot-28 dir]# ls
a b file11 file12 file13 file14 file15 file16 file17
[root@snapshot-28 dir]# touch a1
touch: cannot touch `a1': Disk quota exceeded
* Nine files are created in directory "dir", and the directory itself
  is included in the count too. Hence the limit "10" is reached and
  further file creation fails.
==========================================================================
Note: We have also done some re-factoring in cli for volume name
validation. New function cli_validate_volname is created
==========================================================================
Change-Id: I1823497de4f790a2a20ebb1770293472ea33ee2b
BUG: 1190108
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/9769
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Sample output:
---------------
Sample 1)
----------
[root@snapshot-28 glusterfs]# gluster volume quota vol1 list /dir1 /dir4 /dir5 --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
<opRet>0</opRet>
<opErrno>0</opErrno>
<opErrstr/>
<volQuota>
<limit>
<path>/dir1</path>
<hard_limit>10.0MB</hard_limit>
<soft_limit>80%</soft_limit>
<used_space>0Bytes</used_space>
<avail_space>10.0MB</avail_space>
<hl_exceeded>No</hl_exceeded>
<sl_exceeded>No</sl_exceeded>
</limit>
<limit>
<path>/dir4</path>
<path>No such file or directory</path>
</limit>
<limit>
<path>/dir5</path>
<path>No such file or directory</path>
</limit>
</volQuota>
</cliOutput>
Sample 2)
---------
gluster volume quota vol1 list --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
<opRet>0</opRet>
<opErrno>0</opErrno>
<opErrstr/>
<volQuota/>
</cliOutput>
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
<volQuota>
<limit>
<path>/dir</path>
<hard_limit>10.0MB</hard_limit>
<soft_limit>80%</soft_limit>
<used_space>0Bytes</used_space>
<avail_space>10.0MB</avail_space>
<hl_exceeded>No</hl_exceeded>
<sl_exceeded>No</sl_exceeded>
</limit>
<limit>
<path>/dir1</path>
<hard_limit>10.0MB</hard_limit>
<soft_limit>80%</soft_limit>
<used_space>0Bytes</used_space>
<avail_space>10.0MB</avail_space>
<hl_exceeded>No</hl_exceeded>
<sl_exceeded>No</sl_exceeded>
</limit>
</volQuota>
</cliOutput>
Change-Id: I8a8d83cff88f778e5ee01fbca07d9f94c412317a
BUG: 1185259
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/9481
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
For tcp,rdma type volumes there will be two ports, one for tcp and one
for rdma, but the volume status command displays only the tcp port.
This change adds an extra column for the rdma port and renames the
existing port column to TCP Port.
Eg:
>gluster volume status patchy
>For tcp,rdma type volume
Status of volume: patchy
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick brickname 49152 49153 Y 14158
>For rdma type volume
Status of volume: patchy
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick brickname 0 49153 Y 14158
>For tcp type volume
Status of volume: patchy
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick brickname 49152 0 Y 14158
>gluster volume status patchy detail
Status of volume: xcube2
------------------------------------------------------------------------------
Brick : Brick brickname
TCP Port : 49152
RDMA Port : 49153
Online : Y
Pid : 14158
File System : ext4
Device :
/dev/mapper/luks-2099dd4a-0050-4cae-ad7b-c6a0498c4e88
Mount Options : rw,seclabel,relatime,data=ordered
Inode Size : 256
Disk Space Free : 31.1GB
Total Disk Space : 47.9GB
Inode Count : 3203072
Free Inodes : 2926789
>gluster volume status xcube --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
<opRet>0</opRet>
<opErrno>0</opErrno>
<opErrstr>(null)</opErrstr>
<volStatus>
<volumes>
<volume>
<volName>xcube</volName>
<nodeCount>2</nodeCount>
<node>
<hostname>hostname</hostname>
<path>/home/brick1</path>
<peerid>2d7bcb95-3d26-4d4f-b3c6-e2ee01b71662</peerid>
<status>1</status>
<port>49152</port>
<ports>
<tcp>49152</tcp>
<rdma>N/A</rdma>
</ports>
<pid>5657</pid>
</node>
<node>
<hostname>NFS Server</hostname>
<path>localhost</path>
<peerid>2d7bcb95-3d26-4d4f-b3c6-e2ee01b71662</peerid>
<status>1</status>
<port>2049</port>
<ports>
<tcp>2049</tcp>
<rdma>N/A</rdma>
</ports>
<pid>5665</pid>
</node>
<tasks/>
</volume>
</volumes>
</volStatus>
</cliOutput>
Change-Id: I81aab226edbd400d29cd3f510af4f344dd99ba51
BUG: 1164079
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/9191
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
New column introduced in Status output: "SLAVE USER".
The slave user is not "root" in a non-root Geo-replication setup.
Added an additional tag in the XML output: <slave_user>
BUG: 1180459
Change-Id: Ia48a5a8eb892ce883b9ec114be7bb2d46eff8535
Signed-off-by: Aravinda VK <avishwan@redhat.com>
Reviewed-on: http://review.gluster.org/9409
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kotresh HR <khiremat@redhat.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Tested-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Vijay Bellur <vbellur@redhat.com>
1. cli used to log messages at all log levels. This is related to a
peculiar behavior of gcc, which treats any enum type whose defined
values are all non-negative as an unsigned integer. This caused cli's
log level (defaulted from state->loglevel) to be set to (uint)-1 and
hence log messages across all levels would appear in the cli log file.
Now cli's log level defaults to INFO.
2. Changed the logging level of messages of the form "Returning %d" to
DEBUG.
Change-Id: I05094e523fc277d9ad32cc9d76aee873c0fd3859
BUG: 1168809
Signed-off-by: Vijay Bellur <vbellur@redhat.com>
Reviewed-on: http://review.gluster.org/9383
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
By default a snapshot should be deactivated, and this should be a
configurable option.
This behaviour can be configured by the command below:
gluster snapshot config activate-on-create <enable|disable>
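For example, to have snapshots activated as soon as they are created:
  # gluster snapshot config activate-on-create enable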
Change-Id: I1911595c32beed43bb2fca4bf99f0d264b422513
BUG: 1157991
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/8985
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Do not fail gluster --xml volume status detail with an empty output
if some field is missing.
Required for NetBSD to pass tests/bugs/bug-861542.t
BUG: 1129939
Change-Id: I7e2097dee5e18a5f3bb8fe6f14be53b33f4aa6b1
Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-on: http://review.gluster.org/9024
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
1) Use a system-dependent macro for the umount(8) location instead of
relying on $PATH to find it, for security and portability's sake.
2) Introduce gf_umount_lazy() to replace umount -l (-l for lazy)
invocations, which are only supported on Linux; on Linux the behavior
is unchanged. On other systems, we fork an external process (umountd)
that takes care of periodically attempting the unmount, and optionally
the rmdir.
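A conceptual sketch of what the umountd helper does on non-Linux
systems (not the actual implementation):
  # retry until the unmount succeeds, then optionally remove the mount point
  while ! umount /mnt/point; do
      sleep 1
  done
  rmdir /mnt/point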
BUG: 1129939
Change-Id: Ia91167c0652f8ddab85136324b08f87c5ac1e51d
Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-on: http://review.gluster.org/8649
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Csaba Henk <csaba@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
This patch introduces a cli command to display a specific volume option/all
volume options of a specific volume with the following usage:
Usage: volume get <VOLNAME> <key|all>
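For example (volume and option names illustrative):
  # display every option of a volume
  gluster volume get myvol all
  # display a single option
  gluster volume get myvol performance.cache-size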
Change-Id: Ic88edb33c5509d7a37cd5ade6341e45e3cdbf59d
BUG: 983317
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/8305
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
This patch adds xml output for geo-replication status
and status detail command.
sample:
--------------------------------------------------------------
<geoRep>
<volume>
<name>master</name>
<sessions>
<session>
<session_slave>:2a301d66-b9d2-44b4-b827-d680d67123eb:ssh://XXXXXXXXXX::slave</session_slave>
<pair>
<master_node>localhost.localdomain</master_node>
<master_node_uuid>2a301d66-b9d2-44b4-b827-d680d67123eb</master_node_uuid>
<master_brick>/root/master_b1</master_brick>
<slave>ssh://XXXXXXXXXXX::slave</slave>
<status>faulty</status>
<checkpoint_status>N/A</checkpoint_status>
<crawl_status>N/A</crawl_status>
</pair>
</session>
</sessions>
</volume>
</geoRep>
-------------------------------------------------------------
Change-Id: Ia19dbe751c3ab1ec7cb8923cdd6c8b99c374072f
BUG: 1121518
Signed-off-by: ndarshan <dnarayan@redhat.com>
Reviewed-on: http://review.gluster.org/8089
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
BUG: 1122186
Change-Id: Ib887f2194258e85d40f65a758b6a963a17911395
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/8363
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
This patch adds xml output for geo-replication config
command.
sample:
---------------------------------------------------------------------
<cliOutput>
<opRet>0</opRet>
<opErrno>0</opErrno>
<opErrstr/>
<geoRep>
<config>
<parameter1_name>value</parameter1_name>
<parameter2_name>value</parameter2_name>
...
...
...
</config>
</geoRep>
</cliOutput>
---------------------------------------------------------------------
Change-Id: Iac0451983ae5d0e65b95604eb1c29b968e1ee22f
BUG: 1121518
Signed-off-by: ndarshan <dnarayan@redhat.com>
Reviewed-on: http://review.gluster.org/8270
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
This patch improves the peer identification mechanism in glusterd and
lays down the framework for further improvements, including better multi
network support in glusterd.
This patch mainly does two things,
1. Extend the peerinfo object to store a list of addresses instead of a
single hostname as it does now. This also includes changes to make the
peer update behaviour of 'peer probe' to add to the list.
2. Improve glusterd_friend_find_by_hostname() to perform better matching
of hostnames. glusterd_friend_find_by_hostname() now does an initial
quick string compare against all the peer addresses known to glusterd,
after which it tries a more thorough search using address resolution
and matching of the struct sockaddrs.
The above two changes together improve the peer identification situation
in glusterd a lot.
More information regarding the problem this patch attempts to resolve
and the approach chosen can be found at
http://www.gluster.org/community/documentation/index.php/Features/Better_peer_identification
This commit is a squashed commit of the following changes, the
development branch of which can be viewed at,
https://github.com/kshlm/glusterfs/tree/better-peer-identification or,
https://forge.gluster.org/~kshlm/glusterfs-core/kshlms-glusterfs/commits/better-peer-identification
commit 198f86e60fab74faf082eaa02657a4d8f60b92f0
Author: Kaushal M <kaushal@redhat.com>
Date: Tue Jul 15 14:34:06 2014 +0530
Update gluster.8
commit 35d597f3a6b3248373e727f7b7e889c92554d56c
Author: Kaushal M <kaushal@redhat.com>
Date: Tue Jul 15 09:01:01 2014 +0530
Address review comments
https://review.gluster.org/#/c/8238/3
commit 47b5331e17304477322bd2daed5bbed503c34ca1
Merge: c71b12c 78128af
Author: Kaushal M <kaushal@redhat.com>
Date: Tue Jul 15 08:41:39 2014 +0530
Merge branch 'master' into better-peer-identification
commit c71b12c164330e8d19d1df4734ab34ef9a8caad2
Merge: 57bc9de 0f5719a
Author: Kaushal M <kaushal@redhat.com>
Date: Thu Jul 10 19:50:19 2014 +0530
Merge branch 'master' into better-peer-identification
commit 57bc9de9e4f49ff2b1620df9906cda50a3527a25
Author: Kaushal M <kaushal@redhat.com>
Date: Thu Jul 10 19:49:08 2014 +0530
More fixes to review comments
commit 5482cc363a687a9e246a0780ec88acd53e218501
Author: Kaushal M <kaushal@redhat.com>
Date: Thu Jul 10 18:36:40 2014 +0530
Code refactoring in peer-utils based on review comments
https://review.gluster.org/#/c/8238/2/xlators/mgmt/glusterd/src/glusterd-peer-utils.c
commit 89b22c34757178f64d5fbaffa31e6302f841c060
Author: Kaushal M <kaushal@redhat.com>
Date: Thu Jul 10 12:30:00 2014 +0530
Hostnames in peer status
commit 63ebf9485cf50d736cf640238a1ab241671fcaf1
Merge: c8c8fdd f5f9721
Author: Kaushal M <kaushal@redhat.com>
Date: Thu Jul 10 12:06:33 2014 +0530
Merge remote-tracking branch 'origin/master' into better-peer-identification
commit c8c8fdd2104b5b6b8a1af739b1dd952b74e6dd66
Author: Kaushal M <kaushal@redhat.com>
Date: Wed Jul 9 18:35:27 2014 +0530
Hostnames in xml output
commit 732a92a0167ad7b1d70edbc35ebd8307c2766ae1
Author: Kaushal M <kaushal@redhat.com>
Date: Wed Jul 9 15:12:10 2014 +0530
Add hostnames to cli rsp dict during list-friends
commit fcf43e3e317508f0c225024738a988a4af8e9205
Merge: c0e2624 72d96e2
Author: Kaushal M <kaushal@redhat.com>
Date: Wed Jul 9 12:53:03 2014 +0530
Merge branch 'master' into better-peer-identification
commit c0e262416728a3c536a8347a216e471eb2251535
Author: Kaushal M <kaushal@redhat.com>
Date: Mon Jul 7 16:11:19 2014 +0530
Use list_for_each_entry_safe when cleaning peer hostnames
commit 6132e60224eb592f3657e535a12a3e72c772da42
Author: Kaushal M <kaushal@redhat.com>
Date: Mon Jul 7 15:52:19 2014 +0530
Fix crash in gd_add_friend_to_dict
commit 88ffa9a508fd5aac0b2a76e6e76487ce0cab786a
Author: Kaushal M <kaushal@redhat.com>
Date: Mon Jul 7 13:19:44 2014 +0530
gd_peerinfo_destroy -> glusterd_peerinfo_destroy
commit 4b36930a715b1e13cd1a77d136ef1cf78a06d574
Author: Kaushal M <kaushal@redhat.com>
Date: Mon Jul 7 12:50:12 2014 +0530
More refactoring
commit ee559b081d608c6501c10ae22166f26eeb65690e
Author: Kaushal M <kaushal@redhat.com>
Date: Mon Jul 7 12:14:40 2014 +0530
Major refactoring of code based on review comments at
https://review.gluster.org/#/c/8238/1/xlators/mgmt/glusterd/src/glusterd-peer-utils.h
commit e96dbc7bbb05fad2a9c424de41a394b8023fe48d
Merge: 2613d1d 83c09b7
Author: Kaushal M <kaushal@redhat.com>
Date: Mon Jul 7 09:47:05 2014 +0530
Merge remote-tracking branch 'origin/master' into better-peer-identification
commit 2613d1daebff0c56812de821c06ed4c16bb9d447
Merge: b242cf6 9a50211
Author: Kaushal M <kaushal@redhat.com>
Date: Fri Jul 4 15:28:57 2014 +0530
Merge remote-tracking branch 'origin/master' into better-peer-identification
commit b242cf66d95dd3dd5e3975aa430baa6bd74b8a29
Author: Kaushal M <kaushal@redhat.com>
Date: Fri Jul 4 15:08:18 2014 +0530
Fix a silly mistake, if (ctx->req) => if (ctx->req == NULL)
commit c835ed26433830ceed57289143f596cf60421558
Author: Kaushal M <kaushal@redhat.com>
Date: Fri Jul 4 14:58:23 2014 +0530
Fix reverse probe.
commit 9ede17f9329b854b02e8ad159f173244789fd08c
Author: Kaushal M <kaushal@redhat.com>
Date: Fri Jul 4 13:31:32 2014 +0530
Fix friend import for existing peers
commit 891bf74c7350064dfb008d1b7294bcec28d680fd
Author: Kaushal M <kaushal@redhat.com>
Date: Fri Jul 4 13:08:36 2014 +0530
Set first hostname in peerinfo->hostnames to peerinfo->hostname
commit 9421d6a217381a7427a7d84f369280883ca4297a
Author: Kaushal M <kaushal@redhat.com>
Date: Fri Jul 4 12:21:40 2014 +0530
Fix gf_asprintf return val check in glusterd_store_peer_write
commit defac978c1d94011ce8195e311839b9ffce057e7
Author: Kaushal M <kaushal@redhat.com>
Date: Fri Jul 4 11:16:13 2014 +0530
Fix store_retrieve_peers to correctly cleanup.
commit 00a799f5de1121b0cb7421da8285f9407063e1bd
Author: Kaushal M <kaushal@redhat.com>
Date: Fri Jul 4 10:52:11 2014 +0530
Update address list in glusterd_probe_cbk only when needed.
commit 7a628e8a9c562d85709c69cfa13fb1774c521b75
Merge: d191985 dc46d5e
Author: Kaushal M <kaushal@redhat.com>
Date: Fri Jul 4 09:24:12 2014 +0530
Merge remote-tracking branch 'origin/master' into better-peer-identification
commit d1919858e6639d2b54d716a61f662d9752ec5ff1
Author: Kaushal M <kaushal@redhat.com>
Date: Tue Jul 1 18:59:49 2014 +0530
gf_compare_addrinfo -> gf_compare_sockaddr
commit 31d8ef730d408f8d9ba8f504fa648f7dcd59da87
Merge: 93bbede 86ee233
Author: Kaushal M <kaushal@redhat.com>
Date: Tue Jul 1 18:16:13 2014 +0530
Merge remote-tracking branch 'origin/master' into better-peer-identification
commit 93bbedeac5181e29f59b2acd08f638146812ec41
Author: Kaushal M <kaushal@redhat.com>
Date: Tue Jul 1 18:15:16 2014 +0530
Improve glusterd_friend_find_by_hostname
glusterd_friend_find_by_hostname will now do an initial quick search for
the peerinfo performing string comparisions on the given host string. It
follows it with a more thorough match, by resolving the addresses and
comparing addrinfos instead of strings.
commit 2542cdbc45aa9cfcaf1f174686158d5565cdd07b
Author: Kaushal M <kaushal@redhat.com>
Date: Tue Jul 1 17:21:10 2014 +0530
New utility gf_compare_addrinfo
commit 338676e8389a44bd91136eebd110197429c2566c
Author: Kaushal M <kaushal@redhat.com>
Date: Tue Jul 1 14:55:56 2014 +0530
Use gd_peer_has_address instead of strcmp
commit 28d45be51f594328741c44455bd80ac9d64ca501
Merge: 728266e 991dd5e
Author: Kaushal M <kaushal@redhat.com>
Date: Tue Jul 1 14:54:40 2014 +0530
Merge branch 'master' into better-peer-identification
commit 728266eb16d5f5a4bf36266044425ae164337f99
Merge: 7d9b87b 2417de9
Author: Kaushal M <kaushal@redhat.com>
Date: Tue Jul 1 09:55:13 2014 +0530
Merge remote-tracking branch 'origin/master' into better-peer-identification
commit 7d9b87b84955ec17daeaf88a3e7462914039430f
Merge: b890625 e02275c
Author: Kaushal M <kshlmster@gmail.com>
Date: Tue Jul 1 08:41:40 2014 +0530
Merge pull request #4 from vpshastry/better-peer-identification
Better peer identification
commit e02275c52fb83c72ad082c098fd3e432c2b9c526
Merge: 75ee90d b890625
Author: Varun Shastry <vshastry@redhat.com>
Date: Mon Jun 30 16:44:29 2014 +0530
Merge branch 'better-peer-identification' of https://github.com/kshlm/glusterfs into better-peer-identification-kaushal-github
commit 75ee90d2f272e49b94d24c9ca4571e89a83055ff
Author: Varun Shastry <vshastry@redhat.com>
Date: Mon Jun 30 15:36:10 2014 +0530
glusterd: add to the list if the probed uuid pre-exists
Signed-off-by: Varun Shastry <vshastry@redhat.com>
commit b890625d8164c660695daef3285c67979eef723e
Merge: 04c5d60 187a7a9
Author: Kaushal M <kaushal@redhat.com>
Date: Mon Jun 30 11:44:13 2014 +0530
Merge remote-tracking branch 'origin/master' into better-peer-identification
commit 04c5d60cb938c8d94b214689580b40abb1b0ffcd
Merge: 3a5bfa1 e01edb6
Author: Kaushal M <kshlmster@gmail.com>
Date: Sat Jun 28 19:23:33 2014 +0530
Merge pull request #3 from vpshastry/better-peer-identification
glusterd: search through the list of hostnames in the peerinfo
commit 0c64f3346a977f9165ac55a84a1e03c40a7573a7
Merge: e01edb6 3a5bfa1
Author: Varun Shastry <vshastry@redhat.com>
Date: Sat Jun 28 10:43:29 2014 +0530
Merge branch 'better-peer-identification' of https://github.com/kshlm/glusterfs into better-peer-identification-kaushal-github
commit e01edb63153a1008db70b8fa76ae5b535e099326
Author: Varun Shastry <vshastry@redhat.com>
Date: Fri Jun 27 12:29:36 2014 +0530
glusterd: search through the list of hostnames in the peerinfo
Signed-off-by: Varun Shastry <vshastry@redhat.com>
commit 3a5bfa15855e660db2bfde644727371dd2d618cc
Merge: cda6d31 371ea35
Author: Kaushal M <kshlmster@gmail.com>
Date: Fri Jun 27 11:31:17 2014 +0530
Merge pull request #1 from vpshastry/better-peer-identification
glusterd: Add hostname to list instead of replaceing upon update
commit 371ea354f198b4182382d5403c5960c0b2add6b6
Author: Varun Shastry <vshastry@redhat.com>
Date: Fri Jun 27 11:24:54 2014 +0530
glusterd: Add hostname to list instead of replaceing upon update
Signed-off-by: Varun Shastry <vshastry@redhat.com>
commit cda6d3152886623ecbf46baf0048ebe0119b30b6
Author: Kaushal M <kaushal@redhat.com>
Date: Thu Jun 26 19:52:52 2014 +0530
Import address lists
commit 6649b54aa0440130c08e827e0a1d1bbfb840eca9
Author: Kaushal M <kaushal@redhat.com>
Date: Thu Jun 26 19:15:37 2014 +0530
Implement export address list
commit 55990034eead92bc9b936240029e460a4bf152d5
Author: Kaushal M <kaushal@redhat.com>
Date: Thu Jun 26 18:11:59 2014 +0530
Use first address in list to when setting up the peer RPC.
commit a35fde8d19b9988eb04c652fb3a5e4f84d90ad00
Author: Kaushal M <kaushal@redhat.com>
Date: Thu Jun 26 18:03:04 2014 +0530
Properly free addresses on glusterd_peer_destroy
commit 1988081db09ac9205f3dc7268cef8be267f3ce8b
Author: Kaushal M <kaushal@redhat.com>
Date: Thu Jun 26 17:52:35 2014 +0530
Restore peerinfo with address list implemented.
commit 66f524d5749a12f4910dd6b06c9d91f37e1d831e
Author: Kaushal M <kaushal@redhat.com>
Date: Mon Jun 23 13:02:23 2014 +0530
Move out all peer related utilities from glusterd-utils to glusterd-peer-utils
commit 14a2a326a4dff11b55490dca2a14f39320931340
Author: Kaushal M <kaushal@redhat.com>
Date: Tue May 27 12:16:41 2014 +0530
Compilation fix
commit c59cd351d0a102d0d5f3ea9001fd33c4edcb262f
Author: Kaushal M <kaushal@redhat.com>
Date: Mon May 5 12:51:11 2014 +0530
Add store support for hostname list
commit b70325f0beb884ad12645ef40185f0bf6cedd741
Author: Kaushal M <kaushal@redhat.com>
Date: Fri May 2 15:58:07 2014 +0530
Add a hostnames list to glusterd_peerinfo_t
glusterd_peerinfo_new will now init this list and add the given hostname
as the lists first member.
Signed-off-by: Kaushal M <kaushal@redhat.com>
Signed-off-by: Varun Shastry <vshastry@redhat.com>
Change-Id: Ief3c5d6d6f16571ee2fab0a45e638b9d6506a06e
BUG: 1119547
Reviewed-on: http://review.gluster.org/8238
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Now the --xml option can be used with all snapshot commands; it
produces the CLI output in XML form.
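For example, using snapshot commands listed elsewhere in this log:
  # gluster snapshot list --xml
  # gluster snapshot info --xml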
Change-Id: Ifc0ac31d2a9f91e136e87f3b51a629df7dba94e8
BUG: 1096610
Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-on: http://review.gluster.org/7663
Reviewed-by: Sachin Pandit <spandit@redhat.com>
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Two new options have been added to the 'create' command of the cli
interface:
disperse [<count>] redundancy <count>
Both are optional. A dispersed volume is created by specifying, at
least, one of them. If 'disperse' is missing or it's present but
'<count>' does not, the number of bricks enumerated in the command
line is taken as the disperse count.
If 'redundancy' is missing, the lowest optimal value is assumed. A
configuration is considered optimal (for most workloads) when the
disperse count - redundancy count is a power of 2. If the resulting
redundancy is 1, the volume is created normally, but if it's greater
than 1, a warning is shown to the user and he/she must answer yes/no
to continue volume creation. If there isn't any optimal value for
the given number of bricks, a warning is also shown and, if the user
accepts, a redundancy of 1 is used.
If 'redundancy' is specified and the resulting volume is not optimal,
another warning is shown to the user.
A distributed-disperse volume can be created using a number of bricks
multiple of the disperse count.
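For illustration (hostnames and paths hypothetical): six bricks with
disperse count 3 and redundancy 1 give a 2 x (2 + 1)
distributed-disperse volume, which is optimal since 3 - 1 = 2 is a
power of 2:
  # gluster volume create dispvol disperse 3 redundancy 1 \
        host1:/b1 host2:/b1 host3:/b1 host4:/b1 host5:/b1 host6:/b1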
Change-Id: Iab93efbe78e905cdb91f54f3741599f7ea6645e4
BUG: 1118629
Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
Reviewed-on: http://review.gluster.org/7782
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
With the initial design, where the snap volume used to be displayed in
gluster volume info, we used "Snap Volume: yes/no" to indicate whether
a volume is a snap volume or the original volume.
But with the new design the snap volumes are not listed in the volume
info, hence this entry (Snap Volume:) no longer makes sense to show.
Change-Id: Ic5b9948bf4ef74e89a611742c74a8989cb406866
BUG: 1098910
Signed-off-by: Vijaikumar M <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/7794
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Sachin Pandit <spandit@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
This patch introduces pause and resume cli command
for geo-replication.
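For illustration (session endpoints hypothetical; the syntax is assumed
to follow the existing geo-replication CLI):
  # gluster volume geo-replication mastervol slavehost::slavevol pause
  # gluster volume geo-replication mastervol slavehost::slavevol resume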
Signed-off-by: Kotresh H R <khiremat@redhat.com>
Change-Id: I4f5e58e9175fe85077d56088473252391fb57de7
BUG: 1093602
Signed-off-by: Kotresh H R <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/7643
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Tested-by: Venky Shankar <vshankar@redhat.com>
git@forge.gluster.org:~schafdog/glusterfs-core/osx-glusterfs
Working functionality on MacOSX
- GlusterD (management daemon)
- GlusterCLI (management cli)
- GlusterFS FUSE (using OSXFUSE)
- GlusterNFS (without NLM - issues with rpc.statd)
Change-Id: I20193d3f8904388e47344e523b3787dbeab044ac
BUG: 1089172
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Signed-off-by: Dennis Schafroth <dennis@schafroth.com>
Tested-by: Harshavardhana <harsha@harshavardhana.net>
Tested-by: Dennis Schafroth <dennis@schafroth.com>
Reviewed-on: http://review.gluster.org/7503
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
This is the initial patch for the Snapshot feature. Current patch
includes following features:
* Snapshot create
* Snapshot delete
* Snapshot restore
* Snapshot list
* Snapshot info
* Snapshot status
* Snapshot config
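For illustration (snapshot and volume names hypothetical; argument
order assumed):
  # gluster snapshot create snap1 vol1
  # gluster snapshot restore snap1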
Change-Id: I2f46920c0d61c515f6a60e0f8b46fff886d9f6a9
BUG: 1061685
Signed-off-by: shishir gowda <sgowda@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Signed-off-by: Vijaikumar M <vmallika@redhat.com>
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
Signed-off-by: Joseph Fernandes <josferna@redhat.com>
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/7128
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Fix: For the remove-brick operation, the skipped count is included in
the failure count.
CLI XML output: the skipped count will always be zero for remove-brick
status.
Change-Id: Ic0bb23b89e0cf5b884b6d1ae42bbf98deedc9173
BUG: 1060209
Signed-off-by: Susant Palai <spalai@redhat.com>
Reviewed-on: http://review.gluster.org/6889
Reviewed-by: Kaushal M <kaushal@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
"volume profile info" automatically clears incremental stats. There
isn't a command to:
- fetch stats without clearing incremental stats and
- clear cumulative and incremental stats
This change introduces two arguments (i.e. peek and clear). 'clear'
will wipe both incremental and cumulative stats. 'peek' fetches stats
without wiping incremental stats.
'volume profile info peek' - fetches incremental and cumulative stats
without wiping incremental stats
'volume profile info incremental peek' - fetches incremental stats
without wiping incremental stats
'volume profile info clear' - clears both incremental and cumulative
stats
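For example (volume name hypothetical):
  # gluster volume profile myvol info peek
  # gluster volume profile myvol info incremental peek
  # gluster volume profile myvol info clear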
Change-Id: I91834515ad672eca5f882809941147d7d997c4c9
BUG: 1047416
Signed-off-by: Dawit Alemu <dalemu@redhat.com>
Reviewed-on: http://review.gluster.org/6620
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Added new child elements, name and hostUuid, under brick in the volume
info XML, where the name and host UUID of the bricks are stored.
This does not break backward compatibility, as the old value under
brick is not removed.
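The resulting shape, matching the volume info samples elsewhere in this
log (UUID and brick path are placeholders):
  <brick uuid="UUID">host:/brick
    <name>host:/brick</name>
    <hostUuid>UUID</hostUuid>
  </brick>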
Change-Id: Ib9e388889c8dc0c7cd34dcc1871a59003f982f36
Signed-off-by: ndarshan <dnarayan@redhat.com>
Reviewed-on: http://review.gluster.org/6604
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
The XML output for volume status was malformed when one of the nodes is
down, leading to outputs like
-------
<node>
<node>
<hostname>NFS Server</hostname>
<path>localhost</path>
<peerid>63ca3d2f-8c1f-4b84-b797-b4baddab81fb</peerid>
<status>1</status>
<port>2049</port>
<pid>2130</pid>
</node>
-----
This was happening because we were starting the <node> element before
determining if node was present, and were not closing it or clearing it
when not finding the node in the dict.
To fix this, the <node> element is only started once a node has been
found in the dict.
Change-Id: I6b6205f14b27a69adb95d85db7b48999aa48d400
BUG: 1046020
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/6571
Reviewed-by: Aravinda VK <avishwan@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Previously, glusterd used to just send back the local status of a task
in a 'volume status [tasks]' command. As the rebalance operation is
distributed and asynchronous, this meant that different peers could
give different status values for a rebalance or remove-brick task.
With this patch, all the peers will send back the tasks status as a part
of the 'volume status' commit op, and the origin peer will aggregate
these to arrive at a final status for the task.
The aggregation is only done for rebalance or remove-brick tasks. The
replace-brick task will have the same status on all the peers (see
comment in glusterd_volume_status_aggregate_tasks_status() for more
information) and need not be aggregated.
The rebalance process has 5 states,
NOT_STARTED - rebalance process has not been started on this node
STARTED - rebalance process has been started and is still running
STOPPED - rebalance process was stopped by a 'rebalance/remove-brick
stop' command
COMPLETED - rebalance process completed successfully
FAILED - rebalance process failed to complete successfully
The aggregation is done using the following precedence,
STARTED > FAILED > STOPPED > COMPLETED > NOT_STARTED
The new changes make the 'volume status tasks' command a distributed
command as we need to get the task status from all peers.
The following tests were performed,
- Start a remove-brick task and do a status command on a peer which
doesn't have the brick being removed. The remove-brick status was
given correctly as 'in progress' and 'completed', instead of 'not
started'
- Start a rebalance task, run the status command. The status moved to
'completed' only after rebalance completed on all nodes.
Also, change the CLI xml output code for rebalance status to use the
same algorithm for status aggregation.
Change-Id: Ifd4aff705aa51609a612d5a9194acc73e10a82c0
BUG: 1027094
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/6230
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
'volume profile info' fetches both cumulative and incremental
I/O statistics. There isn't a way to fetch just cumulative or
incremental statistics.
This change introduces two optional arguments, namely "incremental"
and "cumulative", that can be tacked on to 'volume profile info'.
In other words, the new command format is
volume profile <VOLNAME> {start | info [incremental | cumulative]
| stop} [nfs]
'volume profile info incremental' - fetches incremental stats
'volume profile info cumulative' - fetches cumulative stats
'volume profile info' - fetches incremental and cumulative stats
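For example (volume name hypothetical):
  # gluster volume profile myvol info cumulative
  # gluster volume profile myvol info incremental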
Change-Id: I5ddb45d990542ea611d23d251efebfec46f472d0
BUG: 1030580
Signed-off-by: Dawit Alemu <dalemu@redhat.com>
Reviewed-on: http://review.gluster.org/6264
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
When a glusterd in the cluster is down, rebalance/remove-brick status
--xml fails to get the status and returns null.
This patch skips collecting status from a node whose glusterd is down
and collects status from all the other up nodes.
Change-Id: I6df0feef41b5cc817cc8d7820ee2acac95176a98
BUG: 1036564
Signed-off-by: Aravinda VK <avishwan@redhat.com>
Reviewed-on: http://review.gluster.org/6391
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Listing the nodes on which rebalance hasn't been started is just giving
out extraneous information.
Also, refactor the rebalance status printing code into a single function
and use it for both rebalance and remove-brick status.
BUG: 1031887
Change-Id: I47bd561347dfd6ef76c52a1587916d6a71eac369
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/6300
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Compiling GlusterFS without the xml package results in the following build error
cli-rpc-ops.o: In function `gf_cli_status_cbk':
/home/mohan/Work/glusterfs/cli/src/cli-rpc-ops.c:6430: undefined
reference to `cli_xml_output_vol_status_tasks_detail'
Change-Id: I49b3c46ac3340c40e372bef4690cedb41df20e8a
Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
Reviewed-on: http://review.gluster.org/6295
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
This patch adds <peerid> tag to bricks and nfs/shd like services to
volume status xml output.
BUG: 955548
Change-Id: I9aaa9266e4d56f632235eaeef565e92d757c0694
Signed-off-by: Bala.FA <barumuga@redhat.com>
Reviewed-on: http://review.gluster.org/6162
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Current BD xlator (block backend) has a few limitations such as
* Creation of directories not supported
* Supports only single brick
* Does not use extended attributes (and client gfid) like posix xlator
* Creation of special files (symbolic links, device nodes etc) not
supported
Basic limitation of not allowing directory creation is blocking
oVirt/VDSM to consume BD xlator as part of Gluster domain since VDSM
creates multi-level directories when GlusterFS is used as storage
backend for storing VM images.
To overcome these limitations a new BD xlator with following
improvements is suggested.
* New hybrid BD xlator that handles both regular files and block device
files
* The volume will have both POSIX and BD bricks. Regular files are
created on POSIX bricks, block devices are created on the BD brick (VG)
* BD xlator leverages the existing POSIX xlator for most POSIX calls
and hence sits above the POSIX xlator
* Block device file is differentiated from regular file by an extended
attribute
* The xattr 'user.glusterfs.bd' (BD_XATTR) plays a role in mapping a
posix file to Logical Volume (LV).
* When a client sends a request to set BD_XATTR on a posix file, a new
LV is created and mapped to that posix file. So every block device
will have a representative file in the POSIX brick with
'user.glusterfs.bd' (BD_XATTR) set.
* Hereafter, all operations on this file result in LV-related
operations.
For example, opening a file that has BD_XATTR set opens the
corresponding LV block device, and reading the file reads from that LV
block device.
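For instance, assuming a mapped file /media/image/lv1 (as in the usage
example further below), an ordinary read of the file reads from the
underlying LV:
# dd if=/media/image/lv1 of=/dev/null bs=1M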
When the BD xlator gets a request to set BD_XATTR via a setxattr call,
it creates an LV and places information about this LV in the xattr of
the posix file. The xattr "user.glusterfs.bd" is used to identify that
a posix file is mapped to a BD.
Usage:
Server side:
[root@host1 ~]# gluster volume create bdvol host1:/storage/vg1_info?vg1 host2:/storage/vg2_info?vg2
It creates a distributed gluster volume 'bdvol' with Volume Group vg1
using posix brick /storage/vg1_info in host1 and Volume Group vg2 using
/storage/vg2_info in host2.
[root@host1 ~]# gluster volume start bdvol
Client side:
[root@node ~]# mount -t glusterfs host1:/bdvol /media
[root@node ~]# touch /media/posix
This creates a regular posix file 'posix' in either the host1:/vg1 or the host2:/vg2 brick
[root@node ~]# mkdir /media/image
[root@node ~]# touch /media/image/lv1
This also creates a regular posix file 'lv1' in either the host1:/vg1
or the host2:/vg2 brick
[root@node ~]# setfattr -n "user.glusterfs.bd" -v "lv" /media/image/lv1
[root@node ~]#
The above setxattr creates a new LV in the corresponding brick's VG
and sets 'user.glusterfs.bd' to the value 'lv:<default-extent-size>'
[root@node ~]# truncate -s5G /media/image/lv1
This resizes LV 'lv1' to 5G.
New BD xlator code is placed in xlators/storage/bd directory.
Also, a volume-uuid tag is added to the VG so that the same VG can't be
used for other bricks/volumes. After deleting a gluster volume, one has
to manually remove the associated tag using vgchange <vg-name> --deltag
<trusted.glusterfs.volume-id:<volume-id>>
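For example, for a hypothetical VG named vg1 (the actual volume-id has
to be substituted):
# vgchange vg1 --deltag "trusted.glusterfs.volume-id:<volume-id>"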
Changes from previous version V5:
* Removed support for delayed deleting of LVs
Changes from previous version V4:
* Consolidated the patches
* Removed usage of BD_XATTR_SIZE and consolidated it in BD_XATTR.
Changes from previous version V3:
* Added support in FUSE to support full/linked clone
* Added support to merge snapshots and provide information about origin
* bd_map xlator removed
* iatt structure used in inode_ctx. iatt is cached and updated during
fsync/flush
* aio support
* Type and capabilities of volume are exported through getxattr
Changes from version 2:
* Used inode_context for caching BD size and to check if loc/fd is BD or
not.
* Added GlusterFS server offloaded copy and snapshot through setfattr
FOP. As part of this libgfapi is modified.
* BD xlator supports stripe
* During unlinking, if an LV file is already open, it is added to a
delete list, and bd_del_thread tries to delete entries from this list
when the last reference to that file is closed.
Changes from previous version:
* gfid is used as name of LV
* ? is used to specify VG name for creating BD volume in volume
create, add-brick. gluster volume create volname host:/path?vg
* open-behind issue is fixed
* A replicate brick can be added dynamically and LVs from source brick
are replicated to destination brick
* A distribute brick can be added dynamically and rebalance operation
distributes existing LVs/files to the new brick
* Thin provisioning support added.
* bd_map xlator support retained
* setfattr -n user.glusterfs.bd -v "lv" creates a regular LV and
setfattr -n user.glusterfs.bd -v "thin" creates thin LV
* Capability and backend information added to gluster volume info (and
--xml) so that management tools can exploit the BD xlator.
* tracing support for bd xlator added
TODO:
* Add support to display snapshots for a given LV
* Display posix filename for list-origin instead of gfid
Change-Id: I00d32dfbab3b7c806e0841515c86c3aa519332f2
BUG: 1028672
Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
Reviewed-on: http://review.gluster.org/4809
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
|
oVirt's Gluster Integration needs an inexpensive command that can be
executed every 10 seconds to monitor async tasks and their parameters,
for all volumes.
The solution involves adding a 'tasks' sub-command to 'volume status'
to fetch only the async task IDs, type and other relevant parameters.
Only the originator glusterd participates in this command, as all the
information needed is available on every node. This keeps the command
suitable for being executed every 10 seconds.
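A hypothetical invocation suited to such periodic polling (covering
all volumes, as noted above):
# gluster volume status all tasks --xml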
Change-Id: I1edc607baf29b001a5585079dec681d7c641b3d1
BUG: 1012346
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/6006
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
|
This patch adds the node uuid to the rebalance/remove-brick status xml
output. The output XML will look like:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
<opRet>0</opRet>
<opErrno>0</opErrno>
<opErrstr/>
<volRebalance>
<op>3</op>
<nodeCount>1</nodeCount>
<node>
<nodeName>localhost</nodeName>
==>> <id>883626f8-4d29-4d02-8c5d-c9f48c5b2445</id>
<files>0</files>
<size>0</size>
<lookups>0</lookups>
<failures>0</failures>
<status>3</status>
<statusStr>completed</statusStr>
</node>
<aggregate>
<files>0</files>
<size>0</size>
<lookups>0</lookups>
<failures>0</failures>
<status>3</status>
<statusStr>completed</statusStr>
</aggregate>
</volRebalance>
</cliOutput>
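The XML above would come from a query such as (volume name assumed):
# gluster volume rebalance testvol status --xml
(the remove-brick status --xml output follows the same structure)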
Change-Id: I5a1d4f9043b33b9e88150647a243ddb16154e843
BUG: 1012296
Signed-off-by: Bala.FA <barumuga@redhat.com>
Reviewed-on: http://review.gluster.org/6005
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
|
"runtime in secs" is available in the CLI output of
rebalance status and remove-brick status, but not available
in xml output when --xml is passed.
runtime in aggregate section will be max of all nodes runtimes.
Example output:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
<opRet>0</opRet>
<opErrno>0</opErrno>
<opErrstr/>
<volRebalance>
<op>3</op>
<nodeCount>1</nodeCount>
<node>
<nodeName>localhost</nodeName>
<files>0</files>
<size>0</size>
<lookups>0</lookups>
<failures>0</failures>
<skipped>0</skipped>
<runtime>1.00</runtime>
<status>3</status>
<statusStr>completed</statusStr>
</node>
<aggregate>
<files>0</files>
<size>0</size>
<lookups>0</lookups>
<failures>0</failures>
<skipped>0</skipped>
<runtime>1.00</runtime>
<status>3</status>
<statusStr>completed</statusStr>
</aggregate>
</volRebalance>
</cliOutput>
BUG: 1012773
Change-Id: I8deaba08922a53cd2d3b411e097a7b3cf591b127
Signed-off-by: Aravinda VK <avishwan@redhat.com>
Reviewed-on: http://review.gluster.org/5997
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
|
The skipped-files count is available in the CLI output of rebalance
status and remove-brick status, but not in the xml output.
Example output:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
<opRet>0</opRet>
<opErrno>0</opErrno>
<opErrstr/>
<volRebalance>
<op>3</op>
<nodeCount>1</nodeCount>
<node>
<nodeName>localhost</nodeName>
<files>0</files>
<size>0</size>
<lookups>0</lookups>
<failures>0</failures>
<skipped>0</skipped>
<status>0</status>
<statusStr>completed</statusStr>
</node>
<aggregate>
<files>0</files>
<size>0</size>
<lookups>0</lookups>
<failures>0</failures>
<skipped>0</skipped>
<status>0</status>
<statusStr>completed</statusStr>
</aggregate>
</volRebalance>
</cliOutput>
BUG: 1012772
Change-Id: I05191293403e66e0d681f0cd0422aa3c78a2d91d
Signed-off-by: Aravinda VK <avishwan@redhat.com>
Reviewed-on: http://review.gluster.org/6000
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
|