| Commit message | Author | Age | Files | Lines |
|
Change-Id: Iceccef8f3f466c7ffb9991f8eb248b81e7b80efb
BUG: 1256580
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/12020
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
|
be same.
Problem:
After replacing a brick using the "replace-brick" command and running "heal
full", the version of the root directory of the newly added brick does not
get healed. Heal runs on the dentries of the root but not on the root
directory itself.
Solution:
Run heal on the root directory as well.
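For reference, the scenario above maps to the standard CLI (volume and brick paths below are placeholders, not taken from the original report):
$ gluster volume replace-brick VOLNAME host1:/bricks/old host1:/bricks/new commit force
$ gluster volume heal VOLNAME full
# before this fix, the version xattr of the root directory on host1:/bricks/new was left unhealed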
Change-Id: Ifd42a3fb341b049c895817e892e5b484a5aa6f80
BUG: 1243382
Signed-off-by: Ashish Pandey <aspandey@redhat.com>
Reviewed-on: http://review.gluster.org/11676
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
|
host of the brick
The remove-brick stage blindly starts the remove-brick operation even if the
glusterd instance of the node hosting the brick is down. Operationally this is
incorrect and could result in an inconsistent rebalance status across the
nodes: the originator of the command will always report the rebalance status
as 'DEFRAG_NOT_STARTED', whereas the glusterd instances on the other nodes,
once they come back up, will trigger rebalance and mark the status as
completed when the rebalance finishes.
This patch fixes two things:
1. Add a validation in remove-brick to check whether all the peers hosting the
bricks to be removed are up.
2. Don't copy volinfo->rebal.dict from the stale volinfo during restore, as this
might end up in an inconsistent node_state.info file, resulting in 'volume
status' command failure.
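An illustrative CLI sequence for the flow being validated (volume, host and brick names are placeholders):
$ gluster volume remove-brick VOLNAME host2:/bricks/b1 start
$ gluster volume remove-brick VOLNAME host2:/bricks/b1 status
# with this fix, the 'start' phase is rejected up front if the glusterd instance on host2 is down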
Change-Id: Ia4a76865c05037d49eec5e3bbfaf68c1567f1f81
BUG: 1245045
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/11726
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: N Balachandran <nbalacha@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
|
This is one more attempt to eliminate corruption
of files by our test scripts on NetBSD.
Changes done:
1. Give every local variable a unique name.
2. Change the date format to match gluster's.
3. Pass the parameters to G_LOG without interpretation,
hence the change from $* to $@.
Change-Id: I833a93555da93179a1b39a9e4e7086216c335c3d
BUG: 1251592
Signed-off-by: Raghavendra Talur <rtalur@redhat.com>
Reviewed-on: http://review.gluster.org/11993
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
|
During lookup and discover, read_subvol is currently decided based
only on data_readable. It should be decided based
on both data_readable and metadata_readable.
Credits to Ravishankar N for the logic of afr_first_up_child
from http://review.gluster.org/10905/ .
Change-Id: I98580b23c278172ee2902be08eeaafb6722e830c
BUG: 1240244
Signed-off-by: Anuradha Talur <atalur@redhat.com>
Reviewed-on: http://review.gluster.org/11551
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
|
In an AFR transaction, we need to consider something as failed only if the
failure (either in the pre-op or the FOP phase) occurs on the bricks on which a
transaction lock was obtained.
Without this, we would end up considering the transaction a failure even on the
bricks on which the lock was not obtained, resulting in unnecessary fsyncs
during the post-op phase of every write transaction for non-appending writes.
Change-Id: Iee79e5d85dc7b4c41459d8bdd04a8454bdaf9a9d
BUG: 1250170
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/11827
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
|
Currently glusterd does not stop all the daemon services on peer detach.
With this fix it performs the peer-detach cleanup properly and stops all
the daemons that were running on the node before the peer detach.
Change-Id: Ifed403ed09187e84f2a60bf63135156ad1f15775
BUG: 1255386
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/11509
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
|
Previously glfs_h_lookupat() would not follow symlinks; this patch
introduces a new `follow` flag which resolves them. Applications
linking against the new library will need to use the new glfs_h_lookupat
API call.
In order to stay compatible with existing binaries that use the previous
glfs_h_lookupat() function, the old symbol needs to stay available.
Verification that there are two versions of glfs_h_lookupat:
$ objdump -T /usr/lib64/libgfapi.so.0 | grep -w glfs_h_lookupat
0000000000015070 g DF .text 000000000000021e GFAPI_3.7.4 glfs_h_lookupat
0000000000015290 g DF .text 0000000000000008 (GFAPI_3.4.2) glfs_h_lookupat
Testing with a binary (based on anonymous_fd_read_write.c from ./tests/)
that was linked against the old library:
$ objdump -T ./lookupat | grep -w glfs_h_lookupat
0000000000000000 DF *UND* 0000000000000000 GFAPI_3.4.2 glfs_h_lookupat
Enable debugging for 'ld.so' so that we can check that the GFAPI_3.4.2
version of the symbol gets loaded:
$ export LD_DEBUG_OUTPUT=lookupat.ld.log LD_DEBUG=all
$ ./lookupat
$ grep -w glfs_h_lookupat lookupat.ld.log.2543
2543: symbol=glfs_h_lookupat; lookup in file=./lookupat [0]
2543: symbol=glfs_h_lookupat; lookup in file=/lib64/libgfapi.so.0 [0]
2543: binding file ./lookupat [0] to /lib64/libgfapi.so.0 [0]: normal symbol `glfs_h_lookupat' [GFAPI_3.4.2]
Change-Id: I8bf9b1c19a0585f681bc1a7f84aad1ccd0f75f6a
BUG: 1252410
Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/11883
Reviewed-by: soumya k <skoduri@redhat.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
|
Display description field with (null) if
no description is present for the snapshot, instead
of removing the field altogether.
Change-Id: I965b08cd6e54eea56c32e2712fab7daa8a663f11
BUG: 1250387
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/11834
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
|
volume-snapshot.t fails spuriously because it has additional test cases
which restart glusterd and are not really needed as far as test coverage
is concerned. Currently glusterd has no mechanism to indicate whether volume
handshaking has completed; because of this, even after peer handshaking
finishes and all the peers are back in the cluster, any command that accesses
the volume structure might end up in corruption while volume handshaking is
still in progress. This is because the volume list has not yet been made
URCU-protected.
Change-Id: Id8669c22584384f988be5e0a5a0deca7708a277d
BUG: 1255599
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/11972
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
|
There are three problems with marker-rename which
are fixed in this patch.
Problem 1)
1) mq_reduce_parent_size does not handle the inode-quota contribution.
2) When the destination file exists and IO is happening on it,
a rename will overwrite the existing file and
mq_reduce_parent_size is called on the destination file
with the saved contribution. This can be a problem because,
with IO still in progress, the contribution might
have changed in the meantime.
Problem 2)
There is a small race between rename and an in-progress write.
Consider the following scenario:
1) rename FOP invoked on file 'x'
2) write is still in progress for file 'x'
3) rename takes a lock on the old parent
4) write-update txn blocks on the old parent to acquire the lock
5) in rename_cbk, the contri xattrs are removed, the contribution is deleted and
the lock is released
6) now the write-update txn gets the lock and updates the wrong parent,
as it was holding the lock on the old parent
So validate the parent once the lock is acquired.
Problem 3)
When a rename operation is performed, a lock is
held on the old parent. This lock is released before
unwinding the rename operation.
This can be a problem if there are in-progress
writes happening during the rename, where the update txn
can take the lock and update the old parent,
as the inode table is not yet updated with the new parent.
Change-Id: Ic3316097c001c33533f98592e8fcf234b1ee2aa2
BUG: 1240991
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/11578
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
|
original file
Change-Id: Id759af8f3ff5fd8bfa9f8121bab25722709d42b7
BUG: 1251824
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/11874
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
|
The tier daemon should always run with a tier volume. If the volume
is stopped and started again, the tier daemon currently has to be
started manually; with this patch the tier process is triggered
automatically along with volume start.
A snapshot-restored volume will not have node_state_info,
so we need to create and store it dynamically.
Change-Id: I659387c914bec7a1b6929ee5cb61f7b406402075
BUG: 1238593
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Signed-off-by: Dan Lambright <dlambrig@redhat.com>
Reviewed-on: http://review.gluster.org/11525
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
|
Self-heal was always using a fixed block size to heal a file. This
was incorrect for dispersed volumes whose number of data bricks is
not a power of 2.
This patch adjusts the block size to a multiple of the stripe size
of the volume. It also propagates errors detected during the data
heal to stop healing the file and not mark it as healed.
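As a rough illustration (numbers are hypothetical): on a dispersed volume with 6 data bricks and a 4KiB fragment size, the stripe size is 6 x 4KiB = 24KiB. A fixed 128KiB heal block is not a multiple of 24KiB, whereas a block rounded to 120KiB (5 full stripes) keeps every heal write stripe-aligned.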
Change-Id: I9ee3fde98a9e5d6116fd096ceef88686fd1d28e2
BUG: 1251446
Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
Reviewed-on: http://review.gluster.org/11862
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
|
Problem:
When executing test cases manually, we sometimes want to terminate the
test case execution midway for various reasons. The existing test case flow
has no mechanism to call cleanup before terminating abnormally, so we end up
with volume setups and mount points left uncleaned.
Solution:
This patch traps such abnormal terminations, calls the 'cleanup'
function as soon as they are caught, and then terminates the test case
with an appropriate status.
$ ./tests/basic/mount-nfs-auth.t
1..87
=========================
TEST 1 (line 8): glusterd
ok 1
RESULT 1: 0
=========================
TEST 2 (line 9): pidof glusterd
ok 2
RESULT 2: 0
=========================
TEST 3 (line 10): gluster -mode=script --wignore volume info
No volumes present
ok 3
RESULT 3: 0
^C
received external signal --INT--, calling 'cleanup' ...
$ glusterd && gluster vol status
No volumes present
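A minimal sketch of the trap-based approach described above (signal list and message are illustrative, not the exact implementation):
# call 'cleanup' when the test run is interrupted or terminated externally
trap 'echo "received external signal, calling cleanup ..."; cleanup; exit 1' INT TERM HUP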
Change-Id: Ia51a850c356e599b8b789cec22b9bb5e87e1548a
BUG: 1252374
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Reviewed-on: http://review.gluster.org/11882
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
|
We run tests in trace mode (set -x) when
re-running failed tests. G_LOG is a util
function and does not need to be executed in
trace mode.
Change-Id: I5490bdcacef6856c5501272c8173828c30aaf373
BUG: 1251592
Signed-off-by: Raghavendra Talur <rtalur@redhat.com>
Reviewed-on: http://review.gluster.org/11865
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
|
The only place where the shard translator was initialising the inode ctx
was the lookup callback. But if inodes are created and linked through
readdirp, the shard_lookup() path _may_ not be exercised before FUSE
winds other fops on them. Since the shard translator does an
inode_ctx_get() first thing in most fops, an uninitialised ctx could
cause it to fail the operation with ENOMEM.
The solution is to also initialise the inode ctx in the readdir(p)
callback if it has not been initialised already.
Change-Id: I3e058cd2a29bc6a69a96aaac89165c3251315625
BUG: 1250855
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/11854
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
|
This is a new, simple regression test suite for
geo-replication, written with the regression run time
in mind. The existing regression test suite is a
rigorous one and could be run nightly; hence the
existing geo-rep tests are being removed as part of
this change.
Also re-enable the geo-rep regression with this patch.
Thanks to Aravinda for the initial template and plan.
Change-Id: If544ac295eaf67ac66e0b071903cc1096e71d437
BUG: 1227624
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/11058
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Aravinda VK <avishwan@redhat.com>
|
In the stub, for fops like readv, writev etc., if the object is bad then the fop
is denied. But to determine whether the object is bad, the inode context has to
be checked; currently, if the inode context is not there, the fop is allowed to
continue. This patch fixes that: the fop is unwound with an error if the inode
context is not found.
Change-Id: I5ea4d4fc1a91387f7f9d13ca8cb43c88429f02b0
BUG: 1243391
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/11449
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
|
The check_counters function contained "grep -o '[0-9*]'", which
was in error. This patch corrects it to "grep -o '[0-9]*'".
This fix was necessary to accommodate double-digit counters.
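The difference is easy to see on a sample line (illustrative):
$ echo "promotions: 12" | grep -o '[0-9*]'    # bracket expression: matches one digit (or '*') at a time
1
2
$ echo "promotions: 12" | grep -o '[0-9]*'    # zero-or-more digits: matches the whole number
12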
Change-Id: Idaa09de4403bf66e741176a7377eba264819ca3b
BUG: 1252121
Signed-off-by: Pamela Ousley <pousley@redhat.com>
Reviewed-on: http://review.gluster.org/11877
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Dan Lambright <dlambrig@redhat.com>
|
This is a test-case for BZ 1251346
Change-Id: Icbde8d17c823a3f2c98056c14a75f0ef5227b848
BUG: 1251346
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/11864
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
|
This test sets the lru limit of the inode table to 1 and checks whether inode forgets
and resolves cause any problems with bit-rot xattrs (especially the bad-file xattr).
Change-Id: I1fa25fa2d31dda8d26e8192562e896e5bddd0381
BUG: 1244613
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/11718
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
|
Volume reset shouldn't remove quota-deem-statfs, unless
explicitly specified, when quota is enabled.
1) glusterd_op_stage_reset_volume ()
'gluster volume set/reset <VOLNAME> features.quota/
features.inode-quota' should not be allowed, as it is deprecated.
Setting and resetting the quota/inode-quota features should be allowed
only through 'gluster volume quota <VOLNAME> enable/disable'.
2) glusterd_enable_default_options ()
The option 'features.quota-deem-statfs' should not be turned off
with 'gluster volume reset <VOLNAME>', since quota features
can be set/reset only with 'gluster volume quota <VOLNAME>
enable/disable'.
But 'features.quota-deem-statfs' can still be turned on/off with
'gluster volume set' when quota is enabled.
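Illustrative CLI behaviour after this change (volume name is a placeholder):
$ gluster volume quota VOLNAME enable                        # only supported way to enable quota
$ gluster volume set VOLNAME features.quota-deem-statfs on   # allowed while quota is enabled
$ gluster volume reset VOLNAME                               # no longer turns quota-deem-statfs off
$ gluster volume set VOLNAME features.quota on               # rejected: deprecated, use 'volume quota'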
Change-Id: Ib5aa00a4d8c82819c08dfc23e2a86f43ebc436c4
BUG: 1250582
Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Reviewed-on: http://review.gluster.org/11839
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
|
Problem: Reset/set commands were not working properly. The reset command returns
success but does not send a notification to the svcs if the corresponding graph is modified.
Fix: Whenever a reset/set command is issued, generate the temp graph, compare it
with the original graph and take the following actions:
1) If both graphs are identical, nothing needs to be done with the svcs.
2) If the graph topology changes, restart/stop the service by calling the
svc manager.
3) If only options change, send a notify signal by calling glusterd_fetchspec_notify.
Change-Id: I852c4602eafed1ae6e6a02424814fe3a83e3d4c7
BUG: 1209329
Signed-off-by: anand <anekkunt@redhat.com>
Reviewed-on: http://review.gluster.org/10850
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
|
The bitmask of good and bad bricks was kept in the context of the
corresponding inode or fd. This was problematic when an external
process (another client or the self-heal process) healed the
bricks but the bitmask held by other clients was never updated.
This patch removes the bitmask stored in the context and calculates
which bricks are healthy after locking them and doing the initial
xattrop. After that, it's updated using the result of each fop.
Change-Id: I225e31cd219a12af4ca58871d8a4bb6f742b223c
BUG: 1236065
Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
Reviewed-on: http://review.gluster.org/11844
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
|
Add a --resolve-gids commandline option to the glusterfs binary. This
option gets set when executing "mount -t glusterfs -o resolve-gids ...".
This option is most useful in combination with the "acl" mount option.
POSIX ACL permission checking is done on the FUSE-client side to improve
performance (in addition to the checking on the bricks).
The fuse-bridge reads /proc/$PID/status by default, and this file
contains at most 32 groups. Any local (client-side) permission checking
that requires more than the first 32 groups will fail.
By enabling the "resolve-gids" option, the fuse-bridge will call
getgrouplist() to retrieve all the groups from the user accessing the
mountpoint. This is comparable to how "nfs.server-aux-gids" works.
Note that when a user belongs to more than ~93 groups, the volume option
server.manage-gids needs to be enabled too. Without this option, the
RPC-layer will need to reduce the number of groups to make them fit in
the RPC-header.
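Example usage (server and volume names are placeholders):
# client side: resolve all groups of the accessing user via getgrouplist()
$ mount -t glusterfs -o acl,resolve-gids server1:/VOLNAME /mnt/VOLNAME
# server side: needed when users belong to more than ~93 groups
$ gluster volume set VOLNAME server.manage-gids on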
Change-Id: I7ede90d0e41bcf55755cced5747fa0fb1699edb2
BUG: 1246275
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/11732
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
|
- Introduce the ssl.dh-param option to specify a file containing DH parameters.
If it is provided, EDH ciphers are available.
- Introduce the ssl.ec-curve option to specify an elliptic curve name. If
unspecified, ECDH ciphers are available using the prime256v1 curve.
- Introduce the ssl.crl-path option to specify the directory where the
CRL hash file can be found. Setting it to NULL disables CRL checking,
just like the default.
- Make all ssl.* options accessible through gluster volume set.
- In the default cipher list, exclude weak ciphers instead of listing
the strong ones.
- Enforce server cipher preference.
- Introduce the RPC_SET_OPT macro to factor repetitive code in glusterd-volgen.c.
- Add the ssl-ciphers.t test to check all the features touched by this change.
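Illustrative examples of the new options (file paths are placeholders; the curve shown is the default described above):
$ gluster volume set VOLNAME ssl.dh-param /etc/ssl/dhparam.pem   # enables EDH ciphers
$ gluster volume set VOLNAME ssl.ec-curve prime256v1             # curve used for ECDH ciphers
$ gluster volume set VOLNAME ssl.crl-path /etc/ssl/crl           # directory containing the CRL hash file
$ gluster volume set VOLNAME ssl.crl-path NULL                   # disables CRL checking (the default)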
Change-Id: I7bfd433df6bbf176f4a58e770e06bcdbe22a101a
BUG: 1247152
Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-on: http://review.gluster.org/11735
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
|
Many thanks to fanghuang.data@yahoo.com for RC and BUG
https://bugzilla.redhat.com/show_bug.cgi?id=1245425#c0
BUG: 1245425
Change-Id: I411384ad2b81db9941ac136f4e584a3a965d53f1
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/11779
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
|
threads
Issue: glusterd was crashing due to a race between the handshake thread and snapshot
remove.
RCA: The snapshot thread refers to volinfo while volinfo is being modified during the handshake;
glusterd was crashing because of this inconsistent volinfo data.
Note: Sending commands without checking the cluster status may lead to a crash.
Fix: Wait for the handshake to complete / the cluster to be ready before proceeding with commands.
Change-Id: Iefd986664bd9dd225f0abf8f85476d6afd206914
BUG: 1246432
Signed-off-by: anand <anekkunt@redhat.com>
Reviewed-on: http://review.gluster.org/11757
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
|
Change-Id: Ia8706ec9b66d78c4e33e7b7faf69f0d113ba68a4
BUG: 1245981
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/11729
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
|
RCA: If a rebalance start is triggered from one node and another node in the cluster goes down simultaneously,
we might end up in a case where the callback uses the txn_id from priv->global_txn_id, which is always zeros;
injecting an event with an incorrect txn_id results in the op-sm getting stuck.
Fix: Set txn_id in frame->cookie during submit_and_request, so that we can get the txn_id in the callback
functions.
Change-Id: I519176c259ea9d37897791a77a7c92eb96d10052
BUG: 1245142
Signed-off-by: anand <anekkunt@redhat.com>
Reviewed-on: http://review.gluster.org/11728
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
|
Test case added to check that NO EMPTY changelogs get
created over the changelog rollover period.
Change-Id: I83323644e1a0c4b920a472e1179606a0fd54d1d9
BUG: 1237000
Signed-off-by: Saravanakumar Arumugam <sarumuga@redhat.com>
Reviewed-on: http://review.gluster.org/11460
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
|
and rename().
Change-Id: I25a02386dc95580c2e76a13fdd8e11a0df234d56
BUG: 1245547
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/11737
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
|
As of now all the daemon services are initialized in the glusterd init path. Since
the socket file path of a per-node daemon demands the UUID of the node, the MY_UUID
macro is invoked as part of the initialization.
This flow breaks use cases where a gluster image is built from a
template (a Dockerfile, Vagrantfile or any kind of virtualization
environment): instances brought up from such an image would all have the same
node UUID, resulting in peer probe failure.
The solution is to lazily initialize the services on demand.
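A sketch of how the problem arises at image-build time (the glusterd.info path follows the default GlusterD configuration; the build tooling itself is irrelevant):
# before this fix, merely starting glusterd during the image build generated the node UUID
$ glusterd
$ grep UUID /var/lib/glusterd/glusterd.info   # UUID=... is now baked into the image
# every instance spawned from this image reports the same UUID, so 'gluster peer probe' fails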
Change-Id: If7caa533026c83e98c7c7678bded67085d0bbc1e
BUG: 1238135
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/11488
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
|
Issue: Rebalance is failing in the cluster framework (any simulated cluster environment on the same node).
RCA:
1. We always pass "localhost" as the volfile server for the rebalance xlator.
2. Rebalance daemons overwrite each other's unix socket and log files
(all rebalance processes create a socket with the same name).
Fix: Set vol_file_server, the unix socket and the log files properly.
Change-Id: I6654461e00c2a164b2f1f1db24a316c4180dd8d5
BUG: 1231437
Signed-off-by: anand <anekkunt@redhat.com>
Reviewed-on: http://review.gluster.org/11210
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
|
The command below is the wrong way of executing
multiple commands with | (pipe):
local cmd="$CLI volume quota $V0 list $QUOTA_PATH | grep $QUOTA_PATH |
awk '{print \$$FIELD}'"
$cmd
This patch fixes the issue.
It also fixes the test case inode-quota.t, which was checking
quota values in the wrong fields.
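One correct way to run such a pipeline stored in a shell variable is to let the shell re-parse it, e.g. with eval (a sketch, not necessarily the exact change made here):
local cmd="$CLI volume quota $V0 list $QUOTA_PATH | grep $QUOTA_PATH | awk '{print \$$FIELD}'"
eval "$cmd"    # eval re-parses the string, so '|' acts as a pipe instead of a literal argument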
Change-Id: If28732e6a76ea4bf75560f6496c8f56670915cf9
BUG: 1229297
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/11673
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
|
On restarting glusterd, the quota daemon is not started when more than one
volume is configured and quota is enabled only on the 2nd volume.
This is because restarting glusterd restarts all the bricks, and during
brick restart it starts the respective daemons by passing the volinfo of
the first volume. Passing a volinfo to glusterd_svc_manager implies that
the daemon managers take action based on that volume's configuration,
which is incorrect for per-node daemons.
The fix is to pass a NULL volinfo while restarting bricks.
Change-Id: I2602002a8ba7762fc1eb08123e79fbcf568ecab4
BUG: 1242875
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/11658
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
|
With multiple hardlinks, check_quota_limit is invoked for each parent,
and each of these check_limit calls can invoke validation;
this can cause frame->local to get corrupted during validation.
The test case tests/bugs/quota/bug-1235182.t fails spuriously because of
this problem.
Change-Id: I53adc54b431fb5f43e67a94248102ddaf0d7978f
BUG: 1238747
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/11510
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
|
Change-Id: I0fdb58e15da15c40c3fc9767f2fe4df0ea9d2350
BUG: 1242609
Signed-off-by: Anuradha Talur <atalur@redhat.com>
Reviewed-on: http://review.gluster.org/11651
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
|
The md5sum fingerprints were not correctly compared after moving
files between the hot and cold tiers.
This version of tier.t uses a new function, "check_counters", to
ensure that the number of promotions/demotions is as expected.
This is intended to avoid spurious timing-related errors that were
seen with the old script.
Change-Id: I4a0ae7315493bfd307a0f68f21fa3ea33c88b08f
BUG: 1231268
Signed-off-by: Pamela Ousley <pousley@redhat.com>
Reviewed-on: http://review.gluster.org/11285
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
|
Problem: In this test case a file is created
which exceeds the quota limit. Once the limit is reached,
that file is deleted. At the same moment we are
testing inode-quota. It can happen that, before the
marker updates the information related to the deletion of
the file, a new file-creation operation comes in and sees that
the quota limit is still exceeded.
Solution: Introduce a check to see whether the marker update
has completed successfully.
Updated all the test cases which have a similar
mechanism, and moved the "usage" function
to a common place, "volume.rc".
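A sketch of the kind of wait this introduces, using the test framework's EXPECT_WITHIN together with the shared "usage" helper (timeout and expected value are illustrative):
# wait until the marker has accounted for the deleted file before creating the next one
EXPECT_WITHIN 30 "0Bytes" usage "/"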
Change-Id: I36ddbc5ebbf1b74c9d326a0d1d5f3b32f20a906a
BUG: 1229297
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/11125
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
|
During the quota-update process, if the inode info is present in the size xattr but
missing in the contri xattrs, then in the function '_mq_get_metadata' we set the
contri-size to zero (on error -2, which means the usage info is present but the inode info is missing).
With this we calculate a wrong delta and update with it.
With this patch we ignore the error if the inode info in the xattrs is missing.
Change-Id: I7940a0e299b8bb425b5b43746b1f13f775c7fb92
BUG: 1241153
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/11583
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
|
Fix spurious failure where snapd takes a while to come up.
Change-Id: I32931afd4ff78f8d930c70f49b26f08976033d42
BUG: 1241071
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/11579
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
Test failed @
http://build.gluster.org/job/rackspace-regression-2GB-triggered/12010/consoleFull
(Reported by Vijaykumar M)
Fix:
s/afr_get_pending_heal_count/get_pending_heal_count
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Change-Id: I69c44919ae68e3ebb9a5bc58a8e45a0a96fad62e
BUG: 1238508
Reviewed-on: http://review.gluster.org/11556
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
|
Provide options to control the number of active background heals and the queue length.
Change-Id: Idc2419219d881f47e7d2e9bbc1dcdd999b372033
BUG: 1237381
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/11473
Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
|
When entry self-heals are performed, the files/directories
that are to be expunged should be removed first and only then
should the impunge be done.
Consider the following scenario:
A volume with 2 bricks: b0 and b1.
1) With the following hierarchy on both bricks:
olddir
|__ oldfile
2) Bring down b1 and do 'mv olddir newdir'.
3) Bring up b1 and self-heal.
4) Without the patch, during self-heal the events occur in the
following order:
a) Creation of newdir on the sink brick. Notice that the
gfid of olddir and newdir is the same. As a result, the
gfid-link file in the .glusterfs directory still points to olddir
and not to newdir.
b) Deletion of olddir on the sink brick. As part of
this deletion, the gfid-link file is also deleted. Now there
is no link file pointing to newdir.
5) Files under newdir will not get listed as part of readdir.
To tackle this kind of scenario, the expunge should be done first
and the impunge later, which is the purpose of this patch.
Change-Id: Idc8546f652adf11a13784ff989077cf79986bbd5
BUG: 1238508
Signed-off-by: Anuradha Talur <atalur@redhat.com>
Reviewed-on: http://review.gluster.org/11498
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
|
Appends all commands being run under the test framework into the logs
with time stamps. It's a hack, but I find it very useful for seeing which
sections of the logs (across all logs) need to be analysed between the
execution of two commands.
Here's a sample output:
[2015-06-26 13:25:15.866764479]:++++++++++ G_LOG:tests/basic/afr/quorum.t: TEST: 46 ! test_write ++++++++++
[2015-06-26 13:25:15.872002] I [afr-common.c:1682:afr_local_discovery_cbk] 0-patchy-replicate-0: selecting local read_child patchy-client-1
[2015-06-26 13:25:15.874559] W [fuse-bridge.c:723:fuse_truncate_cbk] 0-glusterfs-fuse: 81: TRUNCATE() /a => -1 (Read-only file system)
[2015-06-26 13:25:15.880554623]:++++++++++ G_LOG:tests/basic/afr/quorum.t: TEST: 47 abc cat /mnt/glusterfs/0/b ++++++++++
[2015-06-26 13:25:15.897767878]:++++++++++ G_LOG:tests/basic/afr/quorum.t: TEST: 48 gluster --mode=script --wignore volume set patchy cluster.quorum-reads on ++++++++++[2015-06-26 13:25:15.994410] I [glusterfsd-mgmt.c:51:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2015-06-26 13:25:17.098519] I [glusterfsd-mgmt.c:51:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2015-06-26 13:25:17.099241] I [glusterfsd-mgmt.c:51:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2015-06-26 13:25:17.099685] I [glusterfsd-mgmt.c:51:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2015-06-26 13:25:17.100055] I [glusterfsd-mgmt.c:51:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2015-06-26 13:25:17.105896] W [MSGID: 108003] [afr.c:94:fix_quorum_options] 0-patchy-replicate-0: quorum-type auto overriding quorum-count 2
[2015-06-26 13:25:17.105936] W [MSGID: 108001] [afr.c:189:reconfigure] 0-patchy-replicate-0: Client-quorum is not met
[2015-06-26 13:25:17.107438] I [glusterfsd-mgmt.c:1507:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
[2015-06-26 13:25:17.108724] I [glusterfsd-mgmt.c:1507:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
[2015-06-26 13:25:17.110082] I [glusterfsd-mgmt.c:1507:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
[2015-06-26 13:25:17.110599] I [glusterfsd-mgmt.c:1507:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
[2015-06-26 13:25:17.109678070]:++++++++++ G_LOG:tests/basic/afr/quorum.t: TEST: 49 1 mount_get_option_value /mnt/glusterfs/0 patchy-replicate-0 quorum-reads ++++++++++
[2015-06-26 13:25:17.117801] I [afr-common.c:1682:afr_local_discovery_cbk] 0-patchy-replicate-0: selecting local read_child patchy-client-1
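For illustration, a G_LOG-style helper along these lines (simplified sketch; log location and exact format are assumptions) just prepends a timestamp and echoes the test line into the gluster logs:
function G_LOG() {
        local ts=$(date +"%Y-%m-%d %H:%M:%S.%N")
        # append the marker line to every gluster log file
        for lf in /var/log/glusterfs/*.log; do
                echo "[${ts}]:++++++++++ G_LOG:$0: $@ ++++++++++" >> "$lf"
        done
}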
Change-Id: Ib51284a0384508350579babaf1ae69cb372e0baa
BUG: 1233018
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/10667
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
|
When running with the replica-3 volume, the "big_write" test sometimes
becomes unresponsive. This seems to be an issue (bug 1226941) in the
RPC/socket-layer, and not related to the NFS test itself.
BUG: 1163543
Change-Id: I51115e4b68d45f3ef7902b4f7a8535518d09408f
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/11085
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
|
Change-Id: Ic6a23165df1703b330636a059967c3c674dbde57
BUG: 1235231
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/11355
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
|
When quota_deem_statfs is enabled, quota sends aggregated statfs values.
In EC we should not multiply the statfs values by the fragment count.
Change-Id: I7ef8ea1598d84b86ba5c5941a2bbe0a6ab43c101
BUG: 1233162
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/11315
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
|