Commit log
Change-Id: Ie9e24e037b7a39b239a7badb983504963d664324
BUG: 1225716
Signed-off-by: Sakshi <sabansal@redhat.com>
Reviewed-on: http://review.gluster.org/10954
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
be same.
Problem:
After replacing a brick using the "replace-brick" command and running "heal
full", the version of the root directory of the newly added brick is not
getting healed. Heal runs on the dentries of the root but does not
run on the root directory itself.
Solution:
Run heal on the root directory.
Change-Id: Ifd42a3fb341b049c895817e892e5b484a5aa6f80
BUG: 1243382
Signed-off-by: Ashish Pandey <aspandey@redhat.com>
Reviewed-on: http://review.gluster.org/11676
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Currently glusterd does not stop all the daemon services on peer detach.
With this fix it will do the peer detach cleanup properly and will stop all
the daemons which were running on the node before the peer detach.
Change-Id: Ifed403ed09187e84f2a60bf63135156ad1f15775
BUG: 1255386
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/11509
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Problem: Reset/set commands were not working properly. The reset command returns
success but does not send a notification to the svcs if the corresponding graph was modified.
Fix: Whenever a reset/set command is issued, generate the temp graph, compare it
with the original graph, and take the following actions:
1) If both graphs are identical, nothing needs to be done for the svcs.
2) If the graph topology changed, restart/stop the service by calling the
svc manager.
3) If only options changed, send a notify signal by calling glusterd_fetchspec_notify.
Change-Id: I852c4602eafed1ae6e6a02424814fe3a83e3d4c7
BUG: 1209329
Signed-off-by: anand <anekkunt@redhat.com>
Reviewed-on: http://review.gluster.org/10850
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
The command below is the wrong way of executing
multiple commands with | (pipe):
local cmd="$CLI volume quota $V0 list $QUOTA_PATH | grep $QUOTA_PATH |
awk '{print \$$FIELD}'"
$cmd
This patch fixes the issue (see the sketch after this entry).
It also fixes the test case inode-quota.t, which was checking
quota values in the wrong fields.
Change-Id: If28732e6a76ea4bf75560f6496c8f56670915cf9
BUG: 1229297
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/11673
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
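
A minimal bash illustration of the failure mode and one safer pattern; $V0, $QUOTA_PATH and $FIELD mirror the snippet above, while the quota_usage helper below is illustrative rather than the actual patch.

    V0=patchy; QUOTA_PATH=/dir; FIELD=4

    # Broken: the pipeline lives in a string, so "|", "grep" and "awk" become
    # literal arguments of the first command instead of forming a pipe.
    cmd="gluster volume quota $V0 list $QUOTA_PATH | grep $QUOTA_PATH | awk '{print \$$FIELD}'"
    $cmd

    # Safer: run the pipeline directly inside a helper function.
    quota_usage () {
            gluster volume quota "$V0" list "$QUOTA_PATH" \
                    | grep "$QUOTA_PATH" | awk -v f="$FIELD" '{print $f}'
    }
    quota_usage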
On restarting glusterd, the quota daemon is not started when more than one
volume is configured and quota is enabled only on the 2nd volume.
This is because while restarting, glusterd restarts all the bricks.
During brick restart it starts the respective daemons by passing the volinfo of
the first volume. Passing that volinfo to glusterd_svc_manager implies that the
daemon managers take action based on the same volume's configuration, which
is incorrect for per-node daemons.
The fix is to pass a NULL volinfo while restarting bricks.
Change-Id: I2602002a8ba7762fc1eb08123e79fbcf568ecab4
BUG: 1242875
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/11658
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Problem: In this test case a file is created
which exceeds the quota limit. Once the limit is reached,
that file is deleted. At the same moment we are
testing inode-quota. It can happen that before the
marker updates the information related to the deletion of the
file, a new file creation operation arrives and sees that the
quota limit is still exceeded.
Solution: Introduce a check to see whether the marker update
completed successfully (see the sketch after this entry).
Updated all the test cases which use a similar
mechanism and also moved the "usage" function
to a common place, "volume.rc".
Change-Id: I36ddbc5ebbf1b74c9d326a0d1d5f3b32f20a906a
BUG: 1229297
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/11125
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
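
A hedged sketch of the kind of check described above: poll with EXPECT_WITHIN until the marker has accounted the deleted file instead of racing with it. The helper body, field number, expected value and the timeout variable are assumptions; the real "usage" function in volume.rc may differ.

    # Assumed helper: print the 'Used' column of the quota list for a path.
    usage () {
            $CLI volume quota $V0 list "$1" | grep "$1" | awk '{print $4}'
    }

    # Wait for the marker update to finish before creating the next file.
    EXPECT_WITHIN $MARKER_UPDATE_TIMEOUT "0Bytes" usage "/dir"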
During the quota-update process, if inode info is present in the size xattr and
missing in the contri xattrs, then in the function '_mq_get_metadata' we set the
contri-size to zero (on error -2, which means usage info is present but inode info is missing).
With this we calculate a wrong delta and update it.
With this patch we ignore such errors if the inode info in the xattrs is missing.
Change-Id: I7940a0e299b8bb425b5b43746b1f13f775c7fb92
BUG: 1241153
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/11583
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
Provide options to control the number of active background heals and the queue
length (see the usage sketch after this entry).
Change-Id: Idc2419219d881f47e7d2e9bbc1dcdd999b372033
BUG: 1237381
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/11473
Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
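
A usage sketch for the new knobs; the option names and values below are my assumption of what this change exposes for the disperse translator and may not match the patch exactly.

    # Cap concurrent background heals and the length of the heal wait queue.
    gluster volume set patchy disperse.background-heals 8
    gluster volume set patchy disperse.heal-wait-qlength 128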
Test fails with:
not ok 28 Got "Binary file (standard input) matches" instead of "qwerty"
FAILED COMMAND: qwerty get_text_xattr user.test
/d/backends/patchy1_new/file5.txt
not ok 29 Got "Binary file (standard input) matches" instead of "qwerty"
FAILED COMMAND: qwerty get_text_xattr user.test
/d/backends/patchy0/file5.txt
Failed 2/29 subtests
Fix:
Pass the -a flag to grep so the input is treated as text (see the example after this entry).
Change-Id: I69626fbf95a9ff756046363c5627cf98ea3f1df8
BUG: 1207829
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/11416
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
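
For illustration only: grep treats input containing NUL bytes as binary and prints "Binary file (standard input) matches"; -a forces it to process the stream as text so the expected value comes through. The getfattr pipeline is an assumption about how the test reads the xattr.

    # Without -a, grep may report: Binary file (standard input) matches
    getfattr -n user.test -e text /d/backends/patchy0/file5.txt | grep user.test

    # With -a, the matching line (containing "qwerty") is printed as text.
    getfattr -n user.test -e text /d/backends/patchy0/file5.txt | grep -a user.test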
Change-Id: Ibb4b503e7d723c86ac381ad3747b1198334bd6ad
BUG: 1231619
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Reviewed-on: http://review.gluster.org/11290
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
1) Avoid hangs on unmounting NFS on NetBSD
NetBSD umount(8) on an NFS mount whose server is gone will wait forever
because umount(8) calls realpath(3) and tries to access the mount before
it calls unmount(2). The non-portable, NetBSD-specific umount -R flag
prevents that behavior.
We therefore introduce UMOUNT_F, defined as "umount -f" on Linux and
"umount -f -R" on NetBSD, to take care of forced unmounts, especially
in the NFS case.
2) Enforce usage of the force_umount wrapper with a timeout
Whenever umount is used it should be wrapped in force_umount with
timeout handling. That saves us from timing issues, and it handles the
NetBSD NFS case.
3) Clean up the kernel cache flush.
We used (cd $M0 && umount $M0) as a portable kernel cache flush
trick, but it does not flush everything we need on Linux. Introduce
a drop_cache() shell function that reverts to the previously used
echo 3 > /proc/sys/vm/drop_caches on Linux, and keeps
(cd $M0 && umount $M0) on other systems (see the sketch after this entry).
BUG: 1129939
Change-Id: Iab1f5a023405f1f7270c42b595573702ca1eb6f3
Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-on: http://review.gluster.org/11114
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
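
A sketch of the portable helpers assembled from the description above; the exact definitions in the test harness may differ.

    case $(uname -s) in
    NetBSD)
            UMOUNT_F="umount -f -R"    # -R avoids realpath(3) on a dead NFS server
            drop_cache () { ( cd "$1" && umount "$1" ) ; }
            ;;
    Linux)
            UMOUNT_F="umount -f"
            drop_cache () { echo 3 > /proc/sys/vm/drop_caches ; }
            ;;
    *)
            UMOUNT_F="umount -f"
            drop_cache () { ( cd "$1" && umount "$1" ) ; }
            ;;
    esac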
If a (f)xattrop is issued with a value that only contains 0's,
then we don't modify or create the extended attribute. This
is useful to avoid ctime modifications when the only purpose
of the xattrop was to get the current value.
Change-Id: Ia62494e9009962e683c8276783f671da17a8b03a
BUG: 1211123
Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
Reviewed-on: http://review.gluster.org/10886
Tested-by: NetBSD Build System
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Wait for AFR's children to be up in glustershd process before attempting heal.
Also, grep (version 2.21) detects statedump files as binary, causing tests
to succeed incorrectly. Hence the -a switch is added to force it to treat them as
text files. Thanks to Vijay Bellur for identifying the issue
(http://lists.gnu.org/archive/html/bug-grep/2015-05/msg00000.html) and the
workaround.
Change-Id: Ie3d9591ffaf44baa0cd8c2baa327aed24378e3df
BUG: 1163543
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/10833
Tested-by: NetBSD Build System
Tested-by: Niels de Vos <ndevos@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
This patch solves problems caused by XFS with the speculative preallocation feature on:
The test EXPECT "1" has_holes $B0/${V0}0/big2bigger would fail when XFS has not freed the preallocated blocks.
It is caused by the XFS speculative preallocation feature. The test would pass if this feature is disabled.
Speculative preallocation is reclaimed faster under Linux 3.8 (and later).
Otherwise, the test would pass by dropping caches manually to speed up reclaim of the speculative preallocation.
As in http://review.gluster.org/#/c/10411/, "( cd $M0 ; umount $M0 )" is used to drop caches, which is
better than "echo 3 > /proc/sys/vm/drop_caches".
The drop-caches operation was added in the test:
tests/basic/afr/sparse-file-self-heal.t
BUG: 1206461
Change-Id: Ie2c9d1b92fa8307c44498752fdd100eb86f9689c
Signed-off-by: zhoushicheng <madaozhou@gmail.com>
Reviewed-on: http://review.gluster.org/10253
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: I55bd62480b7ee38cf7b29aeba67b19b0c5bbe2fb
BUG: 1220016
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/10702
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: If57d08f3446755ea41f66ca258efcc8ea5a89063
BUG: 1217701
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/10480
Tested-by: NetBSD Build System
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Earlier, both changelog on/off and brick restart were considered
to be changelog breakage and treated as the changelog not being
continuous. As a result, a new HTIME.TSTAMP file was created in
both the above cases. Now the change is made such that only
on changelog enable/disable is the changelog considered to be
discontinuous. A new HTIME.TSTAMP file is not created on brick
restart; the changelog files are appended to the last HTIME.TSTAMP
file.
Treating the changelog as continuous in the above scenario is important
as the changelog history API will fail otherwise. It can successfully
get changes between start and end timestamps only when the changelog
is continuous (changelogs in a single HTIME.TSTAMP file are treated
as continuous). Without this change, the changelog history API would
fail, and it would become necessary to fall back to other mechanisms
like xsync FSCrawl in the case of geo-rep to detect changes in this time
window. But xsync FSCrawl would not be applicable to other
consumers like glusterfind.
Rationale:
1. In a plain distributed volume, if a brick goes down, no I/O can
happen on that brick. Hence the changelog is intact with the data
on disk.
2. In a distributed replicate volume, if a brick goes down, self-heal
traffic is captured in the changelog. Eventually, the I/O that
happened while the brick was down is captured in the changelog.
Change-Id: I2eb66efe6ee9a9228fb1fcb38d6e7696b9559d5b
BUG: 1211327
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/10222
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Tested-by: Venky Shankar <vshankar@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System
In performance/open-behind.t, a test for open fop reaching the brick is done by
switching off open-behind and performing a read operation. If the read operation
is performed before a graph switch, the read happens on the old graph and hence
open does not get accounted in the brick.
To overcome this, an EXPECT_WITHIN of 10 seconds has now been added to ensure that a
graph switch has happened. The read operation happens subsequently after the
graph switch.
Cleaned up a "No volumes present" message from stderr while doing this.
Change-Id: I1e1c0d7e4bd2057520b4dd46157d18f30837b8c9
BUG: 1213066
Signed-off-by: Vijay Bellur <vbellur@redhat.com>
Reviewed-on: http://review.gluster.org/10293
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
Tested-by: Raghavendra Bhat <raghavendra@redhat.com>
This patch introduces an upstream regression suite for geo-replication
* Modifies cleanup (tests/include.rc) to remove everything but
hook-scripts.
Prerequisites:
* Passwordless SSH from root to root of current host.
* Export /build/install/sbin and /build/install/bin to PATH
variable for root user.
Change-Id: I433dd8bbb17edba9baaf516fe0dce3133ba39184
BUG: 1101111
Signed-off-by: Vijaykumar Koppad <vkoppad@redhat.com>
Signed-off-by: Ajeet Jha <ajha@redhat.com>
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/7392
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Aravinda VK <avishwan@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Fixes the spurious volume-snapshot-clone.t regression failures. In
brief, the problem is that the script wasn't waiting for config commands
to complete, and would *sometimes* query the status of a volume while
that volume was still being deleted.
It turns out that "!" doesn't work properly from EXPECT_WITHIN, so there
was a choice between changing that or changing volume_exists. This
seemed less risky. Because of code duplication, two instances of the
function had to be changed, and the other caller (volume-snapshot.t) did
too.
Change-Id: I766d4dc7c5b11038ede8e45d9d1f29cd02a622a0
BUG: 1163543
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: http://review.gluster.org/10053
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
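
A hypothetical sketch of the approach described above: since EXPECT_WITHIN cannot negate its command with a leading "!", have volume_exists itself print a stable token to compare against. The output values and the timeout are placeholders, not the patch itself.

    volume_exists () {
            if gluster volume info "$1" >/dev/null 2>&1; then
                    echo "Y"
            else
                    echo "N"
            fi
    }

    # Wait until the volume is really gone before querying snapshot status.
    EXPECT_WITHIN 20 "N" volume_exists $V1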
It turns out that "pidof" is unreliable on some platforms (e.g. Fedora
21) because it will show spurious entries for processes using the same
inode under a different name. Use "pgrep" instead because it's
name-based and doesn't get confused by glusterd/glusterfs being links
to glusterfsd.
Also changed bug-913555.t because it had the same mistake in its own
version of the same function. Now it uses the common version.
Change-Id: I5d70edd5655faa5470e0f378b8c16a6adacbd4b4
BUG: 1163543
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: http://review.gluster.org/9948
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
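
A small illustration of the difference; the commands are generic, not the harness function itself.

    # pidof matches by inode, so on some platforms it also lists glusterfsd
    # processes because glusterd and glusterfs are links to the same binary:
    pidof glusterd

    # pgrep matches on the command name alone:
    pgrep -x glusterd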
- Changed the implementation of marker xattr handling to take just a
function which populates important data that is different from
default 'gauge' values and subvolumes where the call needs to be
wound.
- Removed duplicate code I found while reading the code and moved it to
cluster_marker_unwind. Removed unused structure members.
- Changed dht/afr/stripe implementations to follow the new implementation
- Implemented marker xattr handling for ec.
Change-Id: Ib0c3626fe31eb7c8aae841eabb694945bf23abd4
BUG: 1200372
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/9892
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Part 2/2 patch to enable users to analyze and resolve
split-brain.
This patch enables:
1) Users to inspect the files in data and metadata split-brain.
2) Users to resolve the split-brain.
Both are done using a series of setfattr commands.
Consider a volume "test" with 2 bricks.
1) To inspect a file f1:
setfattr -n replica.split-brain-choice -v test-client-0 f1
After the execution of this command, if no read_subvol
is found, reads will be served from test-client-0 (corresponding
to brick-0).
2) To resolve split-brain :
setfattr -n replica.split-brain-heal-finalize -v test-client-0 f1
Execution of this command will lead to the resolution
of data and metadata split-brain with subvol mentioned in the
command (test-client-0 here) as the source and the rest as sink.
Change-Id: Ia20f3ee5abd3119e3d54fcc599f1e55ac65fd179
BUG: 1191396
Signed-off-by: Anuradha <atalur@redhat.com>
Reviewed-on: http://review.gluster.org/9743
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Provide a way of disabling reads when quorum is not met.
Change-Id: Ic4f57c2b87a0b8514600759de3a7a47e217fe3b5
BUG: 1187885
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/9543
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
For tcp,rdma type volumes, there will be two ports, one for tcp
and one for rdma. But the volume status command only displays the tcp port.
This change adds an extra column for the rdma port and changes the existing
port column to show the tcp port.
Eg:
>gluster volume status patchy
>For tcp,rdma type volume
Status of volume: patchy
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick brickname 49152 49153 Y 14158
>For rdma type volume
Status of volume: patchy
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick brickname 0 49153 Y 14158
For tcp type volume
Status of volume: patchy
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick brickname 49152 0 Y 14158
>gluster volume status patchy detail
Status of volume: xcube2
------------------------------------------------------------------------------
Brick : Brick brickname
TCP Port : 49152
RDMA Port : 49153
Online : Y
Pid : 14158
File System : ext4
Device :
/dev/mapper/luks-2099dd4a-0050-4cae-ad7b-c6a0498c4e88
Mount Options : rw,seclabel,relatime,data=ordered
Inode Size : 256
Disk Space Free : 31.1GB
Total Disk Space : 47.9GB
Inode Count : 3203072
Free Inodes : 2926789
>gluster volume status xcube --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
<opRet>0</opRet>
<opErrno>0</opErrno>
<opErrstr>(null)</opErrstr>
<volStatus>
<volumes>
<volume>
<volName>xcube</volName>
<nodeCount>2</nodeCount>
<node>
<hostname>hostname</hostname>
<path>/home/brick1</path>
<peerid>2d7bcb95-3d26-4d4f-b3c6-e2ee01b71662</peerid>
<status>1</status>
<port>49152</port>
<ports>
<tcp>49152</tcp>
<rdma>N/A</rdma>
</ports>
<pid>5657</pid>
</node>
<node>
<hostname>NFS Server</hostname>
<path>localhost</path>
<peerid>2d7bcb95-3d26-4d4f-b3c6-e2ee01b71662</peerid>
<status>1</status>
<port>2049</port>
<ports>
<tcp>2049</tcp>
<rdma>N/A</rdma>
</ports>
<pid>5665</pid>
</node>
<tasks/>
</volume>
</volumes>
</volStatus>
</cliOutput>
Change-Id: I81aab226edbd400d29cd3f510af4f344dd99ba51
BUG: 1164079
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/9191
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
entry->inode to NULL
That way a lookup would be forced on the entry, and its attributes will
always be selected from its read subvol.
Change-Id: Iaba25e2cd5f83e983fc8b1a1f48da3850808e6b8
BUG: 1179169
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/9477
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Implementation of heal info split-brain command with
glfs-heal.
Change-Id: I233eb790de6eb5468a4cbb12a1cef0f97db2a1d2
BUG: 1183019
Signed-off-by: Anuradha <atalur@redhat.com>
Reviewed-on: http://review.gluster.org/9459
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
For volumes with replicate or disperse xlators, the self-heal daemon should do
healing. This patch provides enable/disable functionality for the xlators to be
part of self-heal-daemon. Replicate already had this functionality with
'gluster volume set cluster.self-heal-daemon on/off'. But this patch makes it
uniform for both types of volumes. Internally it still does 'volume set' based
on the volume type.
Change-Id: Ie0f3799b74c2afef9ac658ef3d50dce3e8072b29
BUG: 1177601
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/9358
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Some scripts (e.g. features/weighted-rebalance.t) try to unmount
multiple mountpoints at once, using UMOUNT_LOOP. This dutifully
passes the $* list through to force_umount, which (prior to this
fix) would only unmount $1 instead of the whole set. This would
leave those devices mounted, which would not only be a resource
leak itself but would cause other cleanup actions to fail.
Change-Id: I2e3379c85792765025540f10be7cb37b8a4c1bcf
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: http://review.gluster.org/9386
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
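
A minimal sketch of the fix described above: iterate over every argument instead of unmounting only $1. The retry/timeout handling of the real wrappers is elided and the function bodies are illustrative.

    force_umount () {
            local m
            for m in "$@"; do
                    umount -f "$m" || return 1
            done
    }

    UMOUNT_LOOP () {
            # callers such as weighted-rebalance.t pass several mountpoints
            force_umount "$@"
    }

    UMOUNT_LOOP "$M0" "$M1"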
Change-Id: I74d08797b791ea6649d9aba585996e9ec680e3f8
BUG: 1128721
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/8538
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Change-Id: I4504f3050674dde217e79af28cb4d2b5370fe2d5
BUG: 1148010
Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
Reviewed-on: http://review.gluster.org/8891
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Emmanuel Dreyfus <manu@netbsd.org>
Tested-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
- Use 'getfattr' properly to avoid redundant options during xattr queries
- Untabify certain parts of tests (remove tabs)
- Avoid backtick evaluation for certain values to make code more portable.
- Use awk on FreeBSD/Darwin, since 'wc' implementation is broken and adds
spurious spaces in its output.
Change-Id: I7dcc0b70874e43b4cda8c306ed18a31b7a3f990a
BUG: 1131713
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-on: http://review.gluster.org/8520
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Emmanuel Dreyfus <manu@netbsd.org>
Tested-by: Emmanuel Dreyfus <manu@netbsd.org>
Linux uses stat -c, stat --printf= or stat --printf.
NetBSD uses stat -f with different format strings. This change set
changes all stat usage to stat -c and introduces a shell stat()
function to perform the format string translation (sketched after this entry).
BUG: 764655
Change-Id: I024fca7c1b736b053f5888cbf21da0a72489ef63
Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-on: http://review.gluster.org/8424
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
Tested-by: Harshavardhana <harsha@harshavardhana.net>
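
A hedged sketch of such a stat() wrapper: translate a Linux "stat -c" format string into a NetBSD "stat -f" one and fall through to the system stat otherwise. Only the %s (size) conversion is shown; the real function handles more sequences.

    stat () {
            if [ "$(uname -s)" = "NetBSD" ] && [ "$1" = "-c" ]; then
                    local fmt="$2"
                    shift 2
                    fmt=$(printf '%s' "$fmt" | sed -e 's/%s/%z/g')   # file size
                    command stat -f "$fmt" "$@"
            else
                    command stat "$@"
            fi
    }

    stat -c '%s' /etc/hosts    # same invocation works on Linux and NetBSD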
- Break away from the previously hard-coded '/var/lib/glusterd';
instead rely on the 'configure' value of 'localstatedir'
- Provide 's/lib/db' as the default working directory for the gluster
management daemon for BSD and Darwin based installations
- loff_t is really off_t on Darwin
- Fix the warnings generated by clang on FreeBSD/Darwin
- Now 'tests/*' use GLUSTERD_WORKDIR, a common variable for all
platforms.
- Define proper environment for running tests, define correct PATH
and LD_LIBRARY_PATH when running tests, so that the desired version
of glusterfs is used, regardless where it is installed.
(Thanks to manu@netbsd.org for this additional work)
Change-Id: I2339a0d9275de5939ccad3e52b535598064a35e7
BUG: 1111774
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-on: http://review.gluster.org/8246
Tested-by: Gluster Build System <jenkins@build.gluster.com>
ps aux output is truncated to the terminal width on NetBSD. Use ps auxww to
avoid that.
BUG: 764655
Change-Id: I28a2fc23e2823dd6524a72da30111b86fc4bfa7b
Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-on: http://review.gluster.org/8281
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
Tested-by: Harshavardhana <harsha@harshavardhana.net>
Write operations on directories with quota enabled used to fail with
EINVAL on stripe volumes. This was due to an assert failure in
stripe_lookup(), meant to ensure loc->path is not NULL. However,
in nameless lookup (in this particular case triggered by quotad, which
has stripe xlator in its graph), loc->path can be legitimately NULL.
The fix involves removing this check in stripe_lookup().
Change-Id: Ibbd4f68763fdd8a85f29da78b3937cef1ee4fd1e
BUG: 1100050
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/8145
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: I2b5784c48eedcccb17690de438addd29075926bd
BUG: 1092850
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/8104
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: I9266bbf4f67a2135f9a81b32fe88620be11af6ea
BUG: 1109889
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/8084
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Kaushal M <kaushal@redhat.com>
Replaced sleep with EXPECT_WITHIN.
Heal full doesn't generate indices until the files/dirs are
re-created. So wait until they are re-created and then
wait for heal completion.
Change-Id: I82399f6a17f94ecc101db45b83d8ef7bfa9c64dd
BUG: 1092850
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/8069
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Problem:
A directory rename while a brick is down can cause the gfid handle of that
directory to be deleted until the next lookup happens on that directory.
*) Self-heal does not have the intelligence to detect renames at the moment. So it
has to delete the directory 'd' using special flags, because it has to perform
'rm -rf' of that directory as it is not empty. The posix xlator implements this by
renaming the deleted directory into the 'landfill' directory in '.glusterfs', where
the janitor thread will perform the actual rm -rf by traversing the directory. The
janitor thread wakes up every 10 minutes to check if there are any directories to be
deleted and deletes them. As part of deleting it also deletes the gfid-handles.
Steps to hit the problem:
1) On a replicate volume create a directory 'd' and a file in 'd' called 'f', so the
directory 'd' is not empty.
2) Bring one of the bricks down (let's call it brick-a; the other one is brick-b).
3) Rename d to d1.
4) When brick-a comes online again, self-heal deletes directory 'd' and creates
directory 'd1' on brick-a for performing self-heal. So on brick-a, the
gfid-handle of 'd' is deleted and recreated to point to 'd1'.
5) The deleted directory 'd' with all its directory hierarchy (for now just the file
'f') will be under the 'landfill' directory.
6) The janitor thread wakes up and deletes directory 'd' and the gfid-handle of
'd' without realizing that it now points to 'd1'. Thus 'd1' loses its
gfid-handle.
Fix:
Delete the gfid-handle for a directory only when the gfid-handle is stale.
Change-Id: I21265b3bd3852f0967d916aaa21108ae5c9e7373
BUG: 1101143
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/7879
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Change-Id: I635fc0fa955b33590f1c5b4dfec22d591ea8575c
BUG: 1032894
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/6592
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
This is the initial patch for the Snapshot feature. The current patch
includes the following features:
* Snapshot create
* Snapshot delete
* Snapshot restore
* Snapshot list
* Snapshot info
* Snapshot status
* Snapshot config
Change-Id: I2f46920c0d61c515f6a60e0f8b46fff886d9f6a9
BUG: 1061685
Signed-off-by: shishir gowda <sgowda@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Signed-off-by: Vijaikumar M <vmallika@redhat.com>
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
Signed-off-by: Joseph Fernandes <josferna@redhat.com>
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/7128
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
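
A usage sketch for the commands listed above; the syntax is assumed from the feature list, and "patchy"/"snap1" are placeholder names.

    gluster snapshot create snap1 patchy
    gluster snapshot list patchy
    gluster snapshot info snap1
    gluster snapshot status snap1
    gluster snapshot config patchy
    gluster snapshot restore snap1
    gluster snapshot delete snap1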
- Fix boundary condition for offset
- Honour data-self-heal-algorithm option
- Added tests for sparse file self-healing
Change-Id: I14bb1c9d04118a3df4072f962fc8f2f197391d95
BUG: 1080707
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/7339
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
- Remove client side self-healing completely (opendir, openfd, lookup)
- Re-work readdir-failover to work reliably in case of NFS
- Remove unused/dead lock recovery code
- Consistently use xdata in both calls and callbacks in all FOPs
- Per-inode event generation, used to force inode ctx refresh
- Implement dirty flag support (in place of pending counts)
- Eliminate inode ctx structure, use read subvol bits + event_generation
- Implement inode ctx refreshing based on event generation
- Provide backward compatibility in transactions
- remove unused variables and functions
- make code more consistent in style and pattern
- regularize and clean up inode-write transaction code
- regularize and clean up dir-write transaction code
- regularize and clean up common FOPs
- reorganize transaction framework code
- skip setting xattrs in pending dict if nothing is pending
- re-write self-healing code using syncops
- re-write simpler self-heal-daemon
Change-Id: I1e4080c9796c8a2815c2dab4be3073f389d614a8
BUG: 1021686
Signed-off-by: Anand Avati <avati@redhat.com>
Reviewed-on: http://review.gluster.org/6010
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
stats
"volume profile info" automatically clears incremental stats. There
isn't a command to:
- fetch stats without clearing incremental stats and
- clear cumulative and incremental stats
This change introduces two arguments (i.e. peek and clear). 'clear'
will wipe both incremental and cumulative stats. 'peek' fetches stats
without wiping incremental stats.
'volume profile info peek' - fetches incremental and cumulative stats
without wiping incremental stats
'volume profile info incremental peek' - fetches incremental stats
without wiping incremental stats
'volume profile info clear' - clears both incremental and cumulative
stats
Change-Id: I91834515ad672eca5f882809941147d7d997c4c9
BUG: 1047416
Signed-off-by: Dawit Alemu <dalemu@redhat.com>
Reviewed-on: http://review.gluster.org/6620
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Problem:
Quota contributions of a file/directory are tracked by quota
xlator using xattrs on the file. Quota allows these xattrs to be
healed as part of metadata self-heal. This leads to
wrong quota calculations on this brick after self-heal because
quota xattrs don't represent the actual contributions on the
brick anymore.
Fix:
Prevent these xattrs from being healed as part of metadata self-heal
by filtering quota xattrs out of listxattr on the file.
Change-Id: Iea68a116595ba271e58c6fdcc3dd21c7bb55ebb3
BUG: 1035576
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/6374
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Problem:
Currently the CLI rebalance status command output does not indicate the
'type' of rebalance, i.e. whether a full rebalance or only a fix-layout
was carried out.
Fix: After the rebalance status of all peers is received by the
originator glusterd, alter it to reflect the type of rebalance
before passing it on to the CLI process.
Change-Id: I1940ffda0d36e25e5b33c84a0ea210394cc9e1d3
BUG: 1004744
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/5826
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Currently, remove-brick supports removal of only one distributed
stripe/ replica pair at a time. Fix it to support removal of multiple
pairs. This is consistent with add-brick behaviour which supports adding
multiple stripe/replica pairs simultaneously.
Removal is successful irrespective of the order of the bricks given at
the CLI, as long as the bricks are from the same subvolume(s).
Change-Id: I7c11c1235ce07b124155978b9d48d0ea65396103
BUG: 974007
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/5210
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Currently rebalance/remove-brick ops display the migration failed count even
for files which failed due to space issues (not enough space for the file, or
migration leading to cluster imbalance).
These will now be counted as skipped, and rebalance/remove-brick status
will display the additional counter.
Change-Id: I674904d380b5f8300e9ca9e6af557c3d30d6cff4
BUG: 989846
Signed-off-by: shishir gowda <sgowda@redhat.com>
Reviewed-on: http://review.gluster.org/5399
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>