| Commit message | Author | Age | Files | Lines |
Changed the key from "client.fqdn", which could be wrongly
construed as belonging to protocol/client, to "fqdn".
This is a backport of 8403f9a2d976c33e01fbd9e4a4b04e8f1e936806.
Change-Id: Ib5f4a875a00b99cd903a29da19bafeb70baaab4e
BUG: 950056
Signed-off-by: Rajesh Amaravathi <rajesh@redhat.com>
Reviewed-on: http://review.gluster.org/4536
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Signed-off-by: Rajesh Amaravathi <rajesh@redhat.com>
Reviewed-on: http://review.gluster.org/4965
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Backport of http://review.gluster.org/5058
Change-Id: I7731fd33ca0c925cc52f8d105275b44fc625a1e2
BUG: 948686
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/5071
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Backport of http://review.gluster.org/5047
Removing a task from syncbarrier's waitq after the wake could result in a
subsequent syncbarrier_wake waking up the already running task. This
fix makes the removal from the waitq and the wake 'atomic'.
The root cause and the fix are similar in spirit to what was observed
in synclock's waitq implementation.
Change-Id: I7dd9e6ad5945742bcda20eb5a06a9376bb18528e
BUG: 948686
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/5054
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
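As a minimal, self-contained illustration of the idea (stand-in types and names, not the actual syncop code): the waiter is dequeued and marked woken while the barrier's mutex is held, so a second wake cannot find it on the queue and wake it again.

#include <pthread.h>
#include <stddef.h>

/* Stand-ins for the synctask/waitq types; illustrative only. */
struct waiter {
        struct waiter *next;
        int            woken;
};

struct barrier {
        pthread_mutex_t guard;
        struct waiter  *waitq;    /* sleepers, singly linked */
};

/* Dequeue one waiter and mark it woken inside the same critical
 * section, so a concurrent wake cannot pick the same waiter twice. */
struct waiter *
barrier_wake_one (struct barrier *b)
{
        struct waiter *w = NULL;

        pthread_mutex_lock (&b->guard);
        if (b->waitq) {
                w = b->waitq;
                b->waitq = w->next;   /* removal from the waitq ...      */
                w->next  = NULL;
                w->woken = 1;         /* ... and the wake, done together */
        }
        pthread_mutex_unlock (&b->guard);

        return w;                     /* caller resumes this task */
}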
Backport of http://review.gluster.org/4985
* Earlier, the SYNCOP macro, the only consumer of synctask_yield, would set
task->state to SYNCTASK_SUSPEND. Today, glusterd has its own wrapper
macros which don't set the task's state. There are also the syncbarrier
and synclock frameworks, which also participate in a synctask's
scheduling (and need to keep a task's state up to date). It makes more
sense to leave a synctask's state to the synctask library, since it is
an internal affair.
* Need to 'yawn' before 'yield', setting task->woken appropriately, to
avoid spuriously re-running tasks.
Change-Id: Ic7a59e6ebcc46f03e53223ca237668d45a3cba40
BUG: 948686
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/5053
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Backport of change f75be77 from master
remove-brick start doesn't remove the brick from the volume immediately.
It waits until migration of data to other bricks is complete. Even
when there is no data to be migrated, one can expect a finite delay between
the exit of the remove-brick start command and the removal of the brick(s).
This may cause subsequent checks on brick count to fail in a
non-deterministic manner.
Also, renamed the test file to reflect the bug-id corresponding to the
community release.
BUG: 878004
Change-Id: Ic6e1360ae5a5280d0d7efe8c3e9a0aa57dddb508
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/5052
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
We observed that the number of write requests, and thus inodelks,
increases very rapidly into the thousands when write-behind is not
in the graph.
Change-Id: I901a6a820eb7b21b413d33e1a0a3420c7f4746a8
BUG: 928341
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/4736
Reviewed-by: Anand Avati <avati@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Change-Id: I8aa4f90ba7e1eecf3f978be04f8550049275464f
BUG: 765785
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/5028
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Change-Id: I6b666115a8ce155563078d7fc476d41486bab54f
BUG: 884452
Signed-off-by: Vijay Bellur <vbellur@redhat.com>
Reviewed-on: http://review.gluster.org/4995
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Change-Id: I61815b502c90314ea6924e3046fb9b396ff56e8b
BUG: 927616
Signed-off-by: Vijay Bellur <vbellur@redhat.com>
Reviewed-on: http://review.gluster.org/5051
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
See RHBZ 955283, and http://fedoraproject.org/wiki/Packaging:Guidelines#PIE
The previous change for BZ 851092 in
commit 058a736f9e36238c284ca80e7ed5f62434655019
breaks the ability to enable _hardened_build in release-3.4 and master
with respect to test/bugs/bug-884455.t; it passes on master/HEAD, passes on
my dev box, and passed once with prove -rfvc in run-tests.sh.
BUG: 851092
Change-Id: Ic2afc53bcdf11ede4a543b87aa7c7a3a41ed6f1d
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
Reviewed-on: http://review.gluster.org/4997
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
The restarting of bricks has been deferred until the cluster 'stabilizes'
its view of the volumes. Since glusterd_spawn_daemons is executed every time
a peer 'joins' the cluster, it may inadvertently restart bricks that
were taken offline for, say, maintenance purposes. This fix avoids that.
Change-Id: Ic2a0a9657eb95c82d03cf5eb893322cf55c44eba
BUG: 960190
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/5022
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Following commits were cherry-picked from master,
044f8ce syncop: Remove task from synclock's waitq before 'wake'
cb6aeed glusterd: Give up big lock before performing any RPC
46572fe Revert "glusterd: Fix spurious wakeups in glusterd syncops"
5021e04 synctask: implement barriers around yield, not the other way
4843937 glusterd: Syncop callbks should take big lock too
Change-Id: I5ae71ab98f9a336dc9bbf0e7b2ec50a6ed42b0f5
BUG: 948686
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/4938
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Reviewed-on: http://review.gluster.org/5021
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: I2ae30f08965b26a21db541f87de78772cb17135a
BUG: 962362
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/4996
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: I12a660a7dfbe4a2d0428910d762434043395fe02
BUG: 927616
Signed-off-by: Vijay Bellur <vbellur@redhat.com>
Reviewed-on: http://review.gluster.org/5010
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
BUG: 927648
Change-Id: Ic016e9d1f090372329a8a2e530dac5fc6ed6c5ae
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/4874
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Before anonymous fds were available, afr had to queue up
transactions if the file was not opened on one of its
subvolumes. This queuing happened until the attempt to open the
file either succeeded or failed; such attempts were made
until the file was successfully opened on the subvolume.
Now the client xlator uses anonymous fds to perform the fops
if the fd used for the fop is not 'opened'.
Fops succeed even when the file is not opened, so there is no
need to queue up the transactions in afr anymore.
Open is attempted on the subvolume where the file is not yet
opened, independent of the fop.
Change-Id: I6d59293023e2de41c606395028c8980b83faca3f
BUG: 953887
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/4868
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
If the reserved ports file in proc contains just a newline, then
do not proceed with ports checking and reserving.
Change-Id: I776d0be1c3824dcd982f0685b171f2172b4e11e6
BUG: 762989
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/4821
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
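A small, self-contained sketch of that guard (a hypothetical helper, not glusterd's actual code), assuming the usual proc path for reserved ports:

#include <stdio.h>
#include <string.h>

/* Return 1 only if the reserved-ports file has real content; an empty
 * file or a file containing just a newline is treated as "nothing to
 * reserve", as described above. */
static int
reserved_ports_present (void)
{
        FILE *fp = fopen ("/proc/sys/net/ipv4/ip_local_reserved_ports", "r");
        char  buf[4096] = {0};
        int   present   = 0;

        if (!fp)
                return 0;

        if (fgets (buf, sizeof (buf), fp) != NULL &&
            buf[0] != '\0' && strcmp (buf, "\n") != 0)
                present = 1;

        fclose (fp);
        return present;
}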
posix_fill_readdir() is a multi-step function which performs many
readdir() calls, and expects the directory cursor to have not
"seeked away" elsewhere between two successive iterations. Usually
this is not a problem as each opendir() from an application has its
own backend fd, and there is nobody else to "seek away" the directory
cursor. However in case of NFS's use of anonymous fd, the same fd_t
is shared between all NFS readdir requests, and two readdir loops can
be executing in parallel on the same dir dragging away the cursor in
a chaotic manner.
The fix in this patch is to lock on the fd around the loop. Another
approach could be to reimplement posix_fill_readdir() with a single
getdents() call, but that's for another day.
Change-Id: Ia42e9c7fbcde43af4c0d08c20cc0f7419b98bd3f
BUG: 948086
Signed-off-by: Anand Avati <avati@redhat.com>
Reviewed-on: http://review.gluster.org/4774
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-on: http://review.gluster.org/4963
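A condensed sketch of the fix's shape (plain pthreads and dirent stand in for the fd lock and posix_fill_readdir(); illustrative only): hold a per-fd lock across the whole multi-step loop so a concurrent reader sharing the anonymous fd cannot drag the cursor away between iterations.

#include <dirent.h>
#include <pthread.h>

struct shared_fd {
        DIR            *dir;
        pthread_mutex_t lock;     /* guards the directory cursor */
};

/* Fill up to 'count' entries starting at 'offset', under the fd lock. */
static int
fill_readdir (struct shared_fd *fd, long offset, int count,
              struct dirent *out)
{
        int            n     = 0;
        struct dirent *entry = NULL;

        pthread_mutex_lock (&fd->lock);
        {
                seekdir (fd->dir, offset);
                while (n < count && (entry = readdir (fd->dir)) != NULL)
                        out[n++] = *entry;   /* cursor stays ours */
        }
        pthread_mutex_unlock (&fd->lock);

        return n;
}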
Without a lockfile under /var/lock/subsys, the glusterd service is not
stopped on shutdown or reboot.
Change-Id: Ib2c28821061ed0fd374731681a81f3fd8e989193
BUG: 960476
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/4961
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Historic bug - posix_writev() has been inspecting pfd->flushwrites to decide
whether to perform fsync() after a write, instead of checking @flags for
O_SYNC|O_DSYNC. pfd->flushwrites was never set anywhere and is completely
unused. This is behavior from the time before anonymous FDs, when open()
had the @wbflags param; this is a leftover from that cleanup.
Change-Id: Id9bfe562a60db4eb3bd0a7705bdba91f2df2f3ec
BUG: 916372
Signed-off-by: Anand Avati <avati@redhat.com>
Reviewed-on: http://review.gluster.org/4738
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/4962
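A minimal sketch of the corrected condition (an illustrative wrapper, not the actual posix_writev()): the post-write fsync decision comes from the fop's open flags rather than from the never-set pfd field.

#include <fcntl.h>
#include <unistd.h>
#include <sys/uio.h>

/* Write and honour O_SYNC/O_DSYNC taken from the fop's flags. */
static ssize_t
writev_maybe_sync (int fd, const struct iovec *iov, int iovcnt, int flags)
{
        ssize_t ret = writev (fd, iov, iovcnt);

        if (ret >= 0 && (flags & (O_SYNC | O_DSYNC)))
                if (fsync (fd) != 0)   /* fdatasync() would suffice for O_DSYNC */
                        return -1;

        return ret;
}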
Problem:
With the present implementation, eager-lock is issued for
any fd fop, so eager-lock gets carried over to metadata
transactions. But the lk-owner is set to the local->fd address
only for DATA transactions; for METADATA transactions
it is frame->root. Because of this, unlock on the eager-lock fails
and rebalance hangs.
Fix:
Enable eager-lock only for fd DATA transactions.
This is a backport of change If30df7486a0b2f5e4150d3259d1261f81473ce8a
http://review.gluster.org/#/c/4588/
BUG: 916226
Change-Id: Id41ac17f467c37e7fd8863e0c19932d7b16344f8
Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-on: http://review.gluster.org/4899
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
On readv error, io-cache does not set frame->local to NULL,
so the local is mem_put in STACK_DESTROY as well. This
patch sets frame->local to NULL in all cases.
BUG: 955751
Change-Id: I4a7340189efe02473452986b5870b02fcfa9038e
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/4886
Reviewed-by: Raghavendra G <raghavendra@gluster.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
synctask_new() initializes task->mutex if task->synccbk is NULL.
synctask_done() calls synctask_destroy() if task->synccbk is not NULL.
synctask_destroy() always destroys the mutex.
Fix that by checking for task->synccbk in synctask_destroy().
This is a backport of I50bb53bc6e2738dc0aa830adc4c1ea37b24ee2a0
BUG: 764655
Change-Id: I3d6292f05a986ae3ceee35161791348ce3771c12
Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-on: http://review.gluster.org/4920
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
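A tiny, self-contained sketch of the fix's shape (stand-in types, not the libglusterfs implementation): destroy the mutex only under the same condition that created it.

#include <pthread.h>
#include <stddef.h>

typedef void (*synccbk_t) (int ret, void *opaque);   /* illustrative */

struct task {
        synccbk_t       synccbk;
        pthread_mutex_t mutex;   /* initialized only when synccbk == NULL */
};

/* Mirror the init-time condition: the mutex exists only for
 * callback-less (synchronous) tasks, so only then destroy it. */
static void
task_destroy (struct task *t)
{
        if (t->synccbk == NULL)
                pthread_mutex_destroy (&t->mutex);
        /* ... release the task's other resources ... */
}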
Spurious disconnects were caused by a race condition inside
rpc_transport_ref()/rpc_transport_unref() that allowed the refcount
to drop to zero while the transport was still in use. The race
condition is made possible by an uninitialized mutex
produced when socket_server_event_handler() copies the transport.
This is a backport of I34fe097a0ac21b0dbf58f5eed84880e3fd9814f2
BUG: 764655
Change-Id: Ib6a7c736f28ccc67d05be45629cddc18a642c11f
Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-on: http://review.gluster.org/4908
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
The changes in the .spec file from Fedora have largely been merged into
the glusterfs.spec.in. It seems that some dependencies have been missed,
most importantly some additions to the init-script that are called while
(un)installing or updating RPMs.
These changes come from the downstream Fedora package that carries its
own glusterd.init script. In future, Fedora/EPEL should be able to drop
that file and use the Gluster project version.
BUG: 954149
Change-Id: I7d25622ffa52228451e742b539f1f092eac57b6b
URL: http://lists.nongnu.org/archive/html/gluster-devel/2013-04/msg00077.html
CC: Fedora GlusterFS Packagers <glusterfs-owner@fedoraproject.org>
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/4865
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
BUG: 819130
Change-Id: I96aeb8fbe8b79bbc058ff9a45167d822abb576ed
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
Reviewed-on: http://review.gluster.org/4877
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
There are primarily three lists that are part of glusterd process,
that are concurrently accessed. Namely, priv->volumes, priv->peers
and volinfo->bricks_list.
Big-lock approach
-----------------
WHAT IS IT?
Big lock is a coarse-grained lock which protects all three
lists, mentioned above, from racy access.
HOW DOES IT WORK?
At any given point in time, glusterd's thread(s) are in execution
_iff_ there is a preceding, inbound network event. Of course, the
sigwaiter thread and timer thread are exceptions.
A network event is an external trigger to glusterd, via the epoll
thread, in the form of POLLIN and POLLERR.
As long as we take the big-lock at all such entry points and yield
it when we are done, we are guaranteed that all the network events,
accessing the global lists, are serialised.
This amounts to holding the big lock at
- all the handlers of all the actors in glusterd. (POLLIN)
- all the cbks in glusterd. (POLLIN)
- rpc_notify (DISCONNECT event), if we access/modify
one of the three lists. (POLLERR)
In the case of synctask'ized volume operations, we must remember that,
if we held the big lock for the entire duration of the handler,
we might block other non-synctask rpc actors from executing.
For example, volume-start would block in PMAP SIGNIN if done incorrectly.
To prevent this, we need to yield the big lock, when we yield the
synctask, and reacquire on waking up of the synctask.
BUG: 948686
Change-Id: I429832f1fed67bcac0813403d58346558a403ce9
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/4835
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
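The yield/reacquire rule for synctask'ized operations can be pictured with plain pthreads (an illustrative pattern, not glusterd's actual lock macros or synctask machinery):

#include <pthread.h>

static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;

/* Submit an RPC and wait for its reply without holding the big lock,
 * so other actors (e.g. PMAP SIGNIN handlers) can run meanwhile. */
static void
locked_rpc_call (void (*submit_and_wait) (void))
{
        pthread_mutex_lock (&big_lock);
        /* ... inspect/modify priv->volumes, priv->peers ... */
        pthread_mutex_unlock (&big_lock);   /* give up the big lock ...  */

        submit_and_wait ();                 /* ... while yielded/waiting */

        pthread_mutex_lock (&big_lock);     /* reacquire after waking up */
        /* ... continue with the lists once again protected ... */
        pthread_mutex_unlock (&big_lock);
}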
glusterd syncops perform a barrier_wake whenever rpc_clnt_submit returns -1.
This is based on the wrong assumption that the cbkfn wasn't called.
This would result in one more wakeup than there ought to be.
BUG: 948686
Change-Id: I839fd218a81255fe50c2047d67461d45360e894d
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/4834
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
In the current implementation, when the callers of synctasks perform
a spurious wake() of a sleeping synctask (i.e., an extra wake() soon
after a wake() which already woke up a yielded synctask), there is
a possibility of two sync threads picking up the same synctask.
This can result in a crash. The fix is to change ->slept (0|1) and
the synctask's membership in the runqueue atomically.
Today we dequeue a task from the runqueue in syncenv_task(), but
reset ->slept = 0 much later in synctask_switchto() in an unlocked
manner -- which is safe when there are no spurious wake()s.
However, this opens a race window where, if a second wake() happens
after the dequeue but before setting ->slept = 0, it results in
queueing the same synctask in the runqueue once again, where it gets
picked up by a different thread.
This has been diagnosed as the cause of the crashes in the regression tests
of http://review.gluster.org/4784. However, that patch still has a
spurious wake() [the trigger for this bug] which is yet to be fixed.
BUG: 948686
Change-Id: I51858e887cad2680e46fb973629f8465f4429363
Original-author: Anand Avati <avati@redhat.com>
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/4833
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
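A self-contained sketch of the atomicity being described (illustrative runqueue types, not the real syncenv code): the dequeue and the reset of ->slept happen in one critical section, so a racing wake() cannot queue the task a second time.

#include <pthread.h>
#include <stddef.h>

struct task {
        struct task *next;
        int          slept;    /* 1 while parked on a queue */
};

struct env {
        pthread_mutex_t lock;
        struct task    *runq;
};

/* Dequeue the next runnable task AND clear ->slept inside the same
 * critical section; previously the flag was cleared much later,
 * leaving a window for a second wake() to re-queue the task. */
static struct task *
env_next_task (struct env *env)
{
        struct task *t = NULL;

        pthread_mutex_lock (&env->lock);
        if (env->runq) {
                t = env->runq;
                env->runq = t->next;
                t->next   = NULL;
                t->slept  = 0;
        }
        pthread_mutex_unlock (&env->lock);

        return t;
}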
With the introduction of http://review.gluster.org/4784, there are
delays which break bug-874498.t, which wrongly depends on healing
finishing within 2 seconds.
Fix this by using 'EXPECT_WITHIN 60' instead of sleep 2.
BUG: 874498
Change-Id: I7131699908e63b024d2dd71395b3e94c15fe925c
Original-author: Anand Avati <avati@redhat.com>
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/4832
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
The failure of bug-874498.t seems to be a "bug" in glustershd.
The situation seems to arise when both subvolumes of a replica are
"local" to glustershd; in such cases glustershd is sensitive
to the order in which the subvols come up.
The core of the issue itself is that, without the patch (#4784),
self-heal daemon completes the processing of index and no entries
are left inside the xattrop index after a few seconds of volume
start force. However with the patch, the stale "backing file"
(against which index performs link()) is left. The likely reason
is that an "INDEX" based crawl is not happening against the subvol
when this patch is applied.
Before #4784 patch, the order in which subvols came up was :
[2013-04-09 22:55:35.117679] I [client-handshake.c:1456:client_setvolume_cbk] 0-patchy-client-0: Connected to 10.3.129.13:49156, attached to remote volume '/d/backends/brick1'.
...
[2013-04-09 22:55:35.118399] I [client-handshake.c:1456:client_setvolume_cbk] 0-patchy-client-1: Connected to 10.3.129.13:49157, attached to remote volume '/d/backends/brick2'.
However, with the patch, the order is reversed:
[2013-04-09 22:53:34.945370] I [client-handshake.c:1456:client_setvolume_cbk] 0-patchy-client-1: Connected to 10.3.129.13:49153, attached to remote volume '/d/backends/brick2'.
...
[2013-04-09 22:53:34.950966] I [client-handshake.c:1456:client_setvolume_cbk] 0-patchy-client-0: Connected to 10.3.129.13:49152, attached to remote volume '/d/backends/brick1'.
The index in brick2 has the list of files/gfid to heal. It appears
to be the case that when brick1 is the first subvol to be detected
as coming up, somehow an INDEX based crawl is clearing all the
index entries in brick2, but if brick2 comes up as the first subvol,
then the backing file is left stale.
Also, doing a "gluster volume heal full" seems to leave out stale
backing files too. As the crawl is performed on the namespace and
the backing file is never encountered there to get cleared out.
So the interim (possibly permanent) fix is to have the script issue
a regular self-heal command (and not a "full" one).
The failure of the script itself is non-critical. The data files are
all healed, and it is just the backing file which is left behind. The
stale backing file too gets cleared in the next index based healing,
either triggered manually or after 10mins.
BUG: 874498
Change-Id: I601e9adec46bb7f8ba0b1ba09d53b83bf317ab6a
Original-author: Anand Avati <avati@redhat.com>
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/4831
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
This patch introduces synclocks - co-operative locks for synctasks.
Synctasks yield themselves when a lock cannot be acquired at the time
of the lock call, and the unlocker will wake the yielded locker at
the time of unlock.
The implementation is safe in a multi-threaded syncenv framework.
It is also safe for sharing the lock with non-synctasks, i.e., the
same lock can be used for synchronization between a synctask and
a regular thread. In such a situation, waiting synctasks will yield
themselves while non-synctasks will sleep on a cond variable. The
unlocker (which could be either a synctask or a regular thread) will
wake up any type of lock waiter (synctask or regular).
Usage:

Declaration and Initialization
------------------------------
  synclock_t lock;

  ret = synclock_init (&lock);
  if (ret) {
          /* lock could not be allocated */
  }

Locking and non-blocking lock attempt
-------------------------------------
  ret = synclock_trylock (&lock);
  if (ret && (errno == EBUSY)) {
          /* lock is held by someone else */
          return;
  }

  synclock_lock (&lock);
  {
          /* critical section */
  }
  synclock_unlock (&lock);
BUG: 763820
Change-Id: I23066f7b66b41d3d9fb2311fdaca333e98dd7442
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Original-author: Anand Avati <avati@redhat.com>
Reviewed-on: http://review.gluster.org/4830
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
If for some reason glusterd_get_brick_root() fails,
it frees the gf_strdup'ed *mount_point in its own error path,
and returns -1.
Unfortunately it has already assigned that pointer value
to the output argument, so the caller,
glusterd_add_brick_detail(), sees a non-NULL pointer
and calls free() again: segfault.
Could be fixed with a one-liner (*mount_point = NULL)
in the error path, but I think glusterd_get_brick_root()
should only assign to the output argument once all checks passed,
so I use a local temporary pointer, which increases the patch a bit.
Change-Id: I3f3035f01e80a5e9bdf2da895e4cf7baa3dfbd2f
BUG: 919352
Signed-off-by: Lars Ellenberg <lars@linbit.com>
Reviewed-on: http://review.gluster.org/4646
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-on: http://review.gluster.org/4841
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
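The shape of that fix, as a standalone sketch (hypothetical helper names; the real validation is against the mount table):

#include <stdlib.h>
#include <string.h>

/* Placeholder for the real mount-table validation. */
static int
looks_like_a_mount_point (const char *path)
{
        return path && path[0] == '/';
}

/* Work on a local pointer and publish it to the caller only after every
 * check has passed, so the error path can free the string without
 * leaving the caller with a dangling pointer. */
static int
get_brick_root (const char *brick_path, char **mount_point)
{
        char *tmp = strdup (brick_path);

        if (!tmp)
                return -1;

        if (!looks_like_a_mount_point (tmp)) {
                free (tmp);           /* caller's pointer never touched */
                return -1;
        }

        *mount_point = tmp;           /* assigned once, after all checks */
        return 0;
}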
This is needed to support automated testing of cluster-communication
features such as probing and quorum. In order to use this, you need to
do the following preparatory steps.
* Copy /var/lib/glusterd to another directory for each virtual host
* Ensure that each virtual host has a different UUID in its glusterd.info
Now you can start each copy of glusterd with the following xlator-options.
* management.transport.socket.bind-address=$ip_address
* management.working-directory=$unique_working_directory
You can use 127.x.y.z addresses for binding without needing to assign
them to interfaces explicitly. Note that you must use addresses, not
names, because of some stuff in the socket code that's not worth fixing
just for this usage, but after that you can use names in /etc/hosts
instead.
At this point you can issue CLI commands to a specific glusterd using
the --remote-host option. So far probe, volume create/start/stop,
mount, and basic I/O all seem to work as expected with multiple
instances.
Change-Id: I1beabb44cff8763d2774bc208b2ffcda27c1a550
BUG: 913555
Original-author: Jeff Darcy <jdarcy@redhat.com>
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/4838
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Since http://review.gluster.org/4556 glusterd is capable of running
many instances of itself on a single system. This patch exploits
that feature and enhances the regression test framework to expose
handy primitives so that test cases may be written to test glusterd
in a cluster.
Usage:
1. Include "$(dirname)/../cluster.rc" to get access to the extensions
2. Call launch_cluster $N where $N is the count of virtual servers
Calling launch_cluster, starts $N glusterds which bind to $N different
IPs and dynamically defines these primitives:
- Variables $H1 .. $Hn assigned to hostnames of each "server".
- Variables $CLI_1 .. $CLI_n assigned as commands to run CLI commands
on the corresponding N'th server.
- Variables $B1 .. $Bn assigned to the backend directories on each
"server".
- Function kill_glusterd, which accepts a parameter - index number of
glusterd to be killed.
- Variables $glusterd_1 .. $glusterd_n assigned to the command lines
to restart the corresponding glusterd, if it was previously killed.
The current set of primitives and functions was implemented with the goal
of satisfying ./tests/bugs/bug-913555.t. The API will be made richer as
we add more cluster test cases.
Change-Id: I6e79c58098ed0862cf75a0b56e4ce384ec2e4eb2
BUG: 913555
Original-author: Anand Avati <avati@redhat.com>
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: http://review.gluster.org/4836
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Add checks before trying to delete vol_opt from the list and free it.
Change-Id: I2858f58518394beb8f74fa477be81d7bdd38304f
BUG: 924215
Signed-off-by: Rajesh Amaravathi <rajesh@redhat.com>
Reviewed-on: http://review.gluster.org/4704
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Reviewed-on: http://review.gluster.org/4819
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
* Suppose there is an xlator option which is considered by the xlator
only if the source was built with debug mode enabled (the only example
in the current code base is the run-with-valgrind option for glusterd);
giving that option would make the process crash if the source was not
built with debug mode enabled.
Reason:
In rpc, after getting the options symbol dynamically, it was stored in the
newly allocated volume options structure and the structure's list head was
added to the xlator's volume_options list. But while freeing the structure,
the entry was not deleted from the list. Thus when the list was traversed,
the already freed structure was accessed, leading to a segfault.
Change-Id: I3e9e51dd2099e34b206199eae7ba44d9d88a86ad
BUG: 922877
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/4687
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-on: http://review.gluster.org/4818
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
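The missing step can be pictured with a minimal circular doubly linked list (a stand-in for the list.h node used by volume_options; illustrative only): unlink the entry before freeing it, so later traversals never touch freed memory.

#include <stdlib.h>

/* Node in a circular doubly linked list with a sentinel head. */
struct opt_node {
        struct opt_node *prev;
        struct opt_node *next;
};

/* Unlink first, then free -- the step the original error path skipped. */
static void
opt_node_destroy (struct opt_node *node)
{
        node->prev->next = node->next;
        node->next->prev = node->prev;
        node->next = node->prev = node;   /* like list_del_init() */

        free (node);
}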
cherry-pick from:
refs/changes/16/4816/1; http://review.gluster.org/#/c/4816/
BUG: 951551
Change-Id: I3de5bd86d4238a60a0a85ba2e15d9c131969b210
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
Reviewed-on: http://review.gluster.org/4817
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Change-Id: I2911d3ac80825310f84c5ba6bd7890e65e1ee219
BUG: 950048
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/4643
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Merged (git cherry-pick) from master/HEAD to release-3.4
Change-Id: I24265c12a45eac4cec761748096118c9647440be
BUG: 948041
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
Reviewed-on: http://review.gluster.org/4780
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Problem:
Data self-heal may choose a sink's iatt to set mtimes.
This happens because, after syncing of data is done,
self-heal does one more xattrop/fstat to determine
sources and sinks to set the inode-ctx. Since this is done
after data syncing and the erase of xattrs, the old source and
old sink are now both sources, but their mtimes differ.
The old code just takes the first source from the list and
updates mtimes, which could have been a sink before the self-heal
started.
Fix:
Set the mtime from the 'sources before syncing'.
Change-Id: Id769e1b99aa4f041eaee775f64cbf2c57b799723
BUG: 918437
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/4658
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-on: http://review.gluster.org/4663
Change-Id: Icb60cf7ad3ea7ca0eeb12fd19b95a6b340857bb2
BUG: 920916
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/4685
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Problem:
In the dictionary serialization function, if
(buf + vallen) > (orig_buf + size), then memdup fails.
Fix:
Put a "goto out" whenever this condition is met.
Change-Id: I8c07dd5187364ccd6ad7625e2e3907d8b56447a9
BUG: 947824
Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
Reviewed-on: http://review.gluster.org/4771
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
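A minimal sketch of the guarded copy (an illustrative helper; the condition mirrors the one quoted above):

#include <string.h>

/* Bail out instead of copying past the end of the destination buffer. */
static int
copy_value (char *buf, const char *orig_buf, size_t size,
            const void *val, size_t vallen)
{
        int ret = -1;

        if ((buf + vallen) > (orig_buf + size))
                goto out;            /* would overflow: stop, don't memdup */

        memcpy (buf, val, vallen);
        ret = 0;
out:
        return ret;
}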
See http://review.gluster.org/149
Installed librdmacm-devel RPM on the build server.
cherry pick from http://review.gluster.org/#/c/4804/
BUG: 819130
Change-Id: I30e14ebf7646c19923940f86a72bf42497cac70c
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
Reviewed-on: http://review.gluster.org/4806
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Backporting fix http://review.gluster.org/#/c/4668/
When subvols-per-directory is < available subvols, then there are layouts
which are not populated. This leads to incorrect identification of holes or
overlaps. We need to ignore layouts which have err == 0 and start == stop
(in the current scenario, start == stop == 0).
Additionally, in layout-merge, treat missing xattrs as err = 0. In case of
missing layouts, anomalies will reset them.
For any other valid subvols, err != 0 in case of layouts being zeroed out.
Also reverted dht_selfheal_dir_xattr, which does layout calculation only
on subvols which have errors.
BUG: 921408
Change-Id: I75a8edcb92af5b53b3253c9addd7a812e9242836
Signed-off-by: shishir gowda <sgowda@redhat.com>
Reviewed-on: http://review.gluster.org/4800
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Backporting Avati's fix http://review.gluster.org/4711
The scheme to encode brick d_off and brick id into global d_off has
two approaches. Since both brick d_off and global d_off are both 64-bit
wide, we need to be careful about how the brick id is encoded.
Filesystems like XFS always give a d_off which fits within 32bits. So
we have another 32bits (actually 31, in this scheme, as seen ahead) to
encode the brick id - which is typically plenty.
Filesystems like the recent EXT4 utilize up to 63 low bits in d_off,
as the d_off is calculated based on a hash function value. This leaves
us no "unused" bits to encode the brick id.
However both these filesystems (EXT4 more importantly) are "tolerant" in
terms of the accuracy of the value presented back in seekdir(). i.e, a
seekdir(val) actually seeks to the entry which has the "closest" true
offset.
This "two-prong" scheme exploits this behavior - which seems to be the
best middle ground amongst various approaches and has all the advantages
of the old approach:
- Works against XFS and EXT4, the two most common filesystems out there.
(which wasn't an "advantage" of the old approach as it is broken against
EXT4)
- Probably works against most of the others as well. The ones which would
NOT work are those which return HUGE d_offs _and_ NOT tolerant to
seekdir() to "closest" true offset.
- Nothing to "remember in memory" or evict "old entries".
- Works fine across NFS server reboots and also NFS head failover.
- Tolerant to seekdir() to arbitrary locations.
Algorithm:
Each d_off can be encoded in either of the two schemes. There is no
requirement to encode all d_offs of a directory or a reply-set in
the same scheme.
The topmost bit of the 64 bits is used to specify the "type" of encoding
of this particular d_off. If the topmost bit (bit-63) is 1, it indicates
that the encoding scheme holds a HUGE d_off. If the topmost bit is 0,
it indicates that the "small" d_off encoding scheme is used.
The goal of the "small" d_off encoding is to stay as dense as possible
towards the lower bits even in the global d_off.
The goal of the HUGE d_off encoding is to stay as accurate (close) as
possible to the "true" d_off after a round of encoding and decoding.
If DHT has N subvolumes, we need ROOF(Log2(N)) "bits" to encode the brick
ID (call it "n").
SMALL d_off
===========
Encoding
--------
If the top n + 1 bits are free in a brick offset, then we leave the
top bit as 0 and set the remaining bits based on the old formula:
hi_mask = 0xffffffffffffffff
hi_mask = ~(hi_mask >> (n + 1))
if ((hi_mask & d_off_brick) != 0)
do_large_d_off_encoding ()
d_off_global = (d_off_brick * N) + brick_id
Decoding
--------
If the top bit in the global offset is 0, it indicates that this
is the encoding formula used. So decoding such a global offset will
be like the old formula:
if ((d_off_global & 0x8000000000000000) != 0)
do_large_d_off_decoding()
d_off_brick = (d_off_global / N)
brick_id = d_off_global % N
HUGE d_off
==========
Encoding
--------
If the top n + 1 bits are NOT free in a given brick offset, then we
set the top bit as 1 in the global offset. The low n bits are replaced
by brick_id.
low_mask = 0xffffffffffffffff << n // where n is ROOF(Log2(N))
d_off_global = (0x8000000000000000 | d_off_brick & low_mask) + brick_id
if (d_off_global == 0xffffffffffffffff)
discard_entry();
Decoding
--------
If the top bit in the global offset is set 1, it indicates that
the encoding formula used is above. So decoding would look like:
hi_mask = (0xffffffffffffffff << n)
low_mask = ~(hi_mask)
d_off_brick = (global_d_off & hi_mask & 0x7fffffffffffffff)
brick_id = global_d_off & low_mask
If "losing" the low n bits in this decoding of d_off_brick looks
"scary", we need to realize that till recently EXT4 used to only
return what can now be expressed as (d_off_global >> 32). The extra
31 bits of hash added by EXT4 recently only decrease the probability
of a collision, and do not eliminate it completely, anyway. In a way,
the "lost" n bits are made up by decreasing the probability of
collision by sharding the files into N bricks / EXT directories
-- call it "hash hedging", if you will :-)
Change-Id: I9551c581c3f3d4c9e719764881036d554f60c557
Thanks-to: Zach Brown <zab@redhat.com>
BUG: 838784
Signed-off-by: shishir gowda <sgowda@redhat.com>
Reviewed-on: http://review.gluster.org/4799
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
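For illustration, the two encodings above transcribe almost directly into standalone C (names are mine, not dht-helper.c; n is ROOF(log2(N)) and the 0xffffffffffffffff discard case is omitted):

#include <stdint.h>

#define TOP_BIT 0x8000000000000000ULL

static uint64_t
encode_d_off (uint64_t d_off_brick, int brick_id, int N, int n)
{
        uint64_t hi_mask  = ~(~0ULL >> (n + 1));
        uint64_t low_mask = ~0ULL << n;

        if ((hi_mask & d_off_brick) == 0)
                /* SMALL d_off: top n + 1 bits of the brick offset are free */
                return d_off_brick * (uint64_t) N + (uint64_t) brick_id;

        /* HUGE d_off: keep the high bits, put the brick id in the low n
         * bits, and flag the scheme in the topmost bit. */
        return (TOP_BIT | (d_off_brick & low_mask)) + (uint64_t) brick_id;
}

static void
decode_d_off (uint64_t d_off_global, int N, int n,
              uint64_t *d_off_brick, int *brick_id)
{
        if ((d_off_global & TOP_BIT) == 0) {          /* SMALL */
                *d_off_brick = d_off_global / (uint64_t) N;
                *brick_id    = (int) (d_off_global % (uint64_t) N);
        } else {                                      /* HUGE */
                uint64_t hi_mask  = ~0ULL << n;
                uint64_t low_mask = ~hi_mask;

                *d_off_brick = d_off_global & hi_mask & ~TOP_BIT;
                *brick_id    = (int) (d_off_global & low_mask);
        }
}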
add --without ufo
cherry-pick from refs/changes/42/4742/1
Change-Id: If1b77003ded537f9664fa6ad677d48d118516c64
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
BUG: 819130
Reviewed-on: http://review.gluster.org/4743
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: Luis Pabon <lpabon@redhat.com>
Reviewed-by: Luis Pabon <lpabon@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
- missing "pairs = next" caused infinite loop
Change-Id: I3edc4f50473f7498815c73e1066167392718fddf
BUG: 905871
Signed-off-by: Vijaykumar Koppad <vkoppad@redhat.com>
Reviewed-on: http://review.gluster.org/4728
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
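The corrected traversal, as a trivial standalone sketch (illustrative types, not the dict code):

#include <stddef.h>

struct pair {
        struct pair *next;
        /* key/value omitted for brevity */
};

/* Without the final "pairs = next" the loop revisits the same element
 * forever, which is the hang described above. */
static size_t
count_pairs (struct pair *pairs)
{
        size_t n = 0;

        while (pairs) {
                struct pair *next = pairs->next;

                n++;
                pairs = next;    /* the line whose absence caused the hang */
        }

        return n;
}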
cherry-pick from master, including commits:
5d3b478e76f1015b11bfd7d48465ab12a4f0737e
fd407a4f5cdb869dc52efe8fc9e1d284f60f5992
6f6789884227b8260f140c39c063d77b0516af97
84f5e4b354526fbb7f0665345816e81c81245c8f
2398e1e0da61f4ec5f209c704e037b54b5c249e1
Resync with Fedora's glusterfs.spec
To build a set of RPMs:
% ./autogen.sh
% ./configure --enable-fusermount
% make dist
% cd extras/LinuxRPM && make glusterrpms
Updated rpm.t
BUG: 819130
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
Change-Id: Ib73be0fbb7ee16a5c41b4f7c7a3f66d0224bfe6c
Reviewed-on: http://review.gluster.org/4725
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Change-Id: I00e0ebc4e36cedd771a46b6bd1f3267439ab9474
BUG: 922765
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/4673
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>