<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git, branch v3.4.0beta1</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>posix: fix dangerous "sharing" of fd in readdir between two requests</title>
<updated>2013-05-07T18:54:33+00:00</updated>
<author>
<name>Anand Avati</name>
<email>avati@redhat.com</email>
</author>
<published>2013-04-03T23:31:07+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=5ac55756cd923e4bb1e5b5df50aeaf198d5531b7'/>
<id>5ac55756cd923e4bb1e5b5df50aeaf198d5531b7</id>
<content type='text'>
posix_fill_readdir() is a multi-step function which performs many
readdir() calls, and expects the directory cursor not to have been
"seeked away" elsewhere between two successive iterations. Usually
this is not a problem, as each opendir() from an application has its
own backend fd and there is nobody else to "seek away" the directory
cursor. However, in the case of NFS's use of anonymous fds, the same
fd_t is shared between all NFS readdir requests, and two readdir
loops can execute in parallel on the same directory, dragging the
cursor away in a chaotic manner.

The fix in this patch is to lock on the fd around the loop. Another
approach could be to reimplement posix_fill_readdir() with a single
getdents() call, but that's for another day.

Change-Id: Ia42e9c7fbcde43af4c0d08c20cc0f7419b98bd3f
BUG: 948086
Signed-off-by: Anand Avati &lt;avati@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4774
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-on: http://review.gluster.org/4963
</content>
</entry>
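<!-- Editor's note: a minimal C sketch, not the actual GlusterFS code, of the
locking pattern the entry above describes: hold a per-fd lock across the whole
readdir loop so a concurrent request on the same shared fd cannot move the
directory cursor mid-loop. All names (shared_fd, fill_readdir) are
hypothetical.

#include <dirent.h>
#include <pthread.h>
#include <stdio.h>

struct shared_fd {
    DIR            *dir;   /* backend directory stream */
    pthread_mutex_t lock;  /* serialises cursor movement */
};

/* Fill up to 'max' names, holding the fd lock for the whole loop so
 * another request cannot drag the shared cursor away between two
 * successive readdir() calls. */
static int fill_readdir(struct shared_fd *sfd, long off,
                        char names[][256], int max)
{
    int n = 0;
    struct dirent *de;

    pthread_mutex_lock(&sfd->lock);
    seekdir(sfd->dir, off);              /* position the shared cursor */
    while (n < max && (de = readdir(sfd->dir)) != NULL) {
        snprintf(names[n], sizeof(names[n]), "%s", de->d_name);
        n++;
    }
    pthread_mutex_unlock(&sfd->lock);
    return n;                            /* entries filled */
}
-->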
<entry>
<title>extras: /etc/init.d/glusterd should create a lockfile under /var/lock/subsys</title>
<updated>2013-05-07T18:48:25+00:00</updated>
<author>
<name>Niels de Vos</name>
<email>ndevos@redhat.com</email>
</author>
<published>2013-05-07T09:10:26+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=f00720f3ed760f018d8c847476563a5eb1b111a3'/>
<id>f00720f3ed760f018d8c847476563a5eb1b111a3</id>
<content type='text'>
Without a lockfile under /var/lock/subsys, the glusterd service is not
stopped on shutdown or reboot.

Change-Id: Ib2c28821061ed0fd374731681a81f3fd8e989193
BUG: 960476
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4961
Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
</entry>
<entry>
<title>storage/posix: honor O_SYNC and O_DSYNC sent in @flags of writev()</title>
<updated>2013-05-07T18:02:10+00:00</updated>
<author>
<name>Anand Avati</name>
<email>avati@redhat.com</email>
</author>
<published>2013-03-25T19:18:13+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=cce370f6d2c1d3bfaf1d772ebe5d6a01f761016f'/>
<id>cce370f6d2c1d3bfaf1d772ebe5d6a01f761016f</id>
<content type='text'>
Historic bug - posix_writev() has been inspecting pfd-&gt;flushwrites
to decide whether to fsync() after a write, instead of checking
@flags for O_SYNC|O_DSYNC.

pfd-&gt;flushwrites was never set anywhere and is completely unused.
This behavior dates from the time before anonymous FDs, when open()
had a @wbflags param; it is a leftover from that cleanup.

Change-Id: Id9bfe562a60db4eb3bd0a7705bdba91f2df2f3ec
BUG: 916372
Signed-off-by: Anand Avati &lt;avati@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4738
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4962
</content>
</entry>
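<!-- Editor's note: a hedged C sketch of the behavior the fix restores,
deciding on fsync()/fdatasync() from the flags passed with the write rather
than from stale per-fd state. This is illustrative, not the real
posix_writev(); O_DSYNC availability depends on the platform.

#include <fcntl.h>
#include <sys/uio.h>
#include <unistd.h>

/* Perform the write, then honor O_SYNC/O_DSYNC taken from the
 * request's own flags. */
static ssize_t write_with_sync(int fd, const struct iovec *iov,
                               int iovcnt, int flags)
{
    ssize_t ret = writev(fd, iov, iovcnt);
    if (ret < 0)
        return ret;

    if (flags & O_SYNC) {
        if (fsync(fd) != 0)          /* flush data and metadata */
            return -1;
    } else if (flags & O_DSYNC) {
        if (fdatasync(fd) != 0)      /* flush data only */
            return -1;
    }
    return ret;
}
-->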
<entry>
<title>cluster/afr: Turn on eager-lock for fd DATA transactions</title>
<updated>2013-05-07T12:00:05+00:00</updated>
<author>
<name>Emmanuel Dreyfus</name>
<email>manu@netbsd.org</email>
</author>
<published>2013-04-29T15:15:56+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=eaa3cdcb80befe3fe7c6b181672bface9d4ff539'/>
<id>eaa3cdcb80befe3fe7c6b181672bface9d4ff539</id>
<content type='text'>
Problem:
With the present implementation, eager-lock is issued for any fd
fop and gets carried over to metadata transactions as well. However,
the lk-owner is set to the local-&gt;fd address only for DATA
transactions; for METADATA transactions it is frame-&gt;root. Because
of this, unlock on the eager-lock fails and rebalance hangs.

Fix:
Enable eager-lock for fd DATA transactions

This is a backport of change If30df7486a0b2f5e4150d3259d1261f81473ce8a
http://review.gluster.org/#/c/4588/

BUG: 916226
Change-Id: Id41ac17f467c37e7fd8863e0c19932d7b16344f8
Signed-off-by: Emmanuel Dreyfus &lt;manu@netbsd.org&gt;
Reviewed-on: http://review.gluster.org/4899
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
<entry>
<title>performance/io-cache: Avoid double mem_put in ioc_readv</title>
<updated>2013-05-07T11:39:44+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2013-04-24T12:35:13+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=5925d8a2512b8d4452b0b0efbafd9c7536ed3a51'/>
<id>5925d8a2512b8d4452b0b0efbafd9c7536ed3a51</id>
<content type='text'>
On readv error, io-cache does not set frame-&gt;local to NULL, so
local is mem_put in STACK_DESTROY as well, resulting in a double
mem_put. This patch sets frame-&gt;local to NULL in all cases.

BUG: 955751
Change-Id: I4a7340189efe02473452986b5870b02fcfa9038e
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4886
Reviewed-by: Raghavendra G &lt;raghavendra@gluster.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
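<!-- Editor's note: a minimal C sketch of the double-release pattern fixed
above: clear frame->local before releasing it so a generic destroy path
cannot release it again. mem_put(), frame_destroy() and readv_error() are
simplified stand-ins, not the io-cache code.

#include <stdlib.h>

struct frame { void *local; };

static void mem_put(void *ptr) { free(ptr); }  /* stand-in for pool return */

static void frame_destroy(struct frame *frame)
{
    if (frame->local)            /* generic cleanup also releases local */
        mem_put(frame->local);
    free(frame);
}

/* Error path: release local exactly once, and clear the pointer so
 * frame_destroy() cannot mem_put it a second time. */
static void readv_error(struct frame *frame)
{
    void *local = frame->local;
    frame->local = NULL;         /* the pattern the patch applies */
    mem_put(local);
    frame_destroy(frame);
}
-->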
<entry>
<title>Fix uninitialized mutex usage in synctask_destroy</title>
<updated>2013-05-04T07:32:22+00:00</updated>
<author>
<name>Emmanuel Dreyfus</name>
<email>manu@netbsd.org</email>
</author>
<published>2013-05-01T04:23:57+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=dfa76943df9c36c3c7f5b31cf153b3c4bbc2ac2e'/>
<id>dfa76943df9c36c3c7f5b31cf153b3c4bbc2ac2e</id>
<content type='text'>
synctask_new() initializes task-&gt;mutex only if task-&gt;synccbk is NULL.
synctask_done() calls synctask_destroy() if task-&gt;synccbk is not NULL.
synctask_destroy() always destroys the mutex, even when it was never
initialized.

Fix that by checking for task-&gt;synccbk in synctask_destroy().

This is a backport of I50bb53bc6e2738dc0aa830adc4c1ea37b24ee2a0

BUG: 764655
Change-Id: I3d6292f05a986ae3ceee35161791348ce3771c12
Signed-off-by: Emmanuel Dreyfus &lt;manu@netbsd.org&gt;
Reviewed-on: http://review.gluster.org/4920
Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
</entry>
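<!-- Editor's note: a minimal C sketch of the bug class fixed above: a mutex
that is initialised only under some condition must be destroyed under the
same condition. task_new()/task_destroy() are hypothetical simplifications
of synctask_new()/synctask_destroy().

#include <pthread.h>
#include <stdlib.h>

typedef void (*synccbk_t)(int ret, void *data);

struct task {
    synccbk_t       synccbk;
    pthread_mutex_t mutex;   /* valid only when synccbk == NULL */
};

static struct task *task_new(synccbk_t cbk)
{
    struct task *t = calloc(1, sizeof(*t));
    if (!t)
        return NULL;
    t->synccbk = cbk;
    if (!t->synccbk)                         /* init only for sync waiters */
        pthread_mutex_init(&t->mutex, NULL);
    return t;
}

static void task_destroy(struct task *t)
{
    if (!t->synccbk)                         /* mirror the init condition */
        pthread_mutex_destroy(&t->mutex);
    free(t);
}
-->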
<entry>
<title>Fix spurious brick disconnects</title>
<updated>2013-05-01T03:12:52+00:00</updated>
<author>
<name>Emmanuel Dreyfus</name>
<email>manu@netbsd.org</email>
</author>
<published>2013-04-30T00:41:09+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=bae32a5affd514e5a78ba3af6cc644cd5cd6814a'/>
<id>bae32a5affd514e5a78ba3af6cc644cd5cd6814a</id>
<content type='text'>
Spurious disconnects were caused by a race condition inside
rpc_transport_ref()/rpc_transport_unref() that allowed the refcount
to drop to zero while the transport was still in use. The race
condition is made possible by an uninitialized mutex produced when
socket_server_event_handler() copies the transport.

This is a backport of I34fe097a0ac21b0dbf58f5eed84880e3fd9814f2

BUG: 764655
Change-Id: Ib6a7c736f28ccc67d05be45629cddc18a642c11f
Signed-off-by: Emmanuel Dreyfus &lt;manu@netbsd.org&gt;
Reviewed-on: http://review.gluster.org/4908
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
</entry>
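<!-- Editor's note: a hedged C sketch of the uninitialised-mutex hazard the
entry above describes: copying a struct that embeds a mutex duplicates the
mutex's internal state, so the copy must initialise its own. The transport
struct and transport_copy() are hypothetical, not the rpc-transport code.

#include <pthread.h>
#include <string.h>

struct transport {
    int             refcount;
    pthread_mutex_t lock;      /* guards refcount */
};

static void transport_copy(struct transport *dst,
                           const struct transport *src)
{
    memcpy(dst, src, sizeof(*dst));
    dst->refcount = 1;                     /* the copy starts with its own ref */
    pthread_mutex_init(&dst->lock, NULL);  /* fresh, valid mutex for ref/unref */
}
-->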
<entry>
<title>extras: include Fedora changes in init.d/glusterd</title>
<updated>2013-04-30T00:07:46+00:00</updated>
<author>
<name>Niels de Vos</name>
<email>ndevos@redhat.com</email>
</author>
<published>2013-04-21T09:10:06+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=6b67229526d41d1158f0617cbb41297b12be727d'/>
<id>6b67229526d41d1158f0617cbb41297b12be727d</id>
<content type='text'>
The changes in the .spec file from Fedora have largely been merged into
glusterfs.spec.in. It seems that some dependencies have been missed,
most importantly some additions to the init-script that are called
while (un)installing or updating RPMs.

These changes come from the downstream Fedora package that carries its
own glusterd.init script. In the future, Fedora/EPEL should be able to
drop that file and use the Gluster project version.

BUG: 954149
Change-Id: I7d25622ffa52228451e742b539f1f092eac57b6b
URL: http://lists.nongnu.org/archive/html/gluster-devel/2013-04/msg00077.html
CC: Fedora GlusterFS Packagers &lt;glusterfs-owner@fedoraproject.org&gt;
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4865
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
</entry>
<entry>
<title>build: sync glusterfs.spec.in with Fedora glusterfs.spec</title>
<updated>2013-04-26T15:17:02+00:00</updated>
<author>
<name>Kaleb S. KEITHLEY</name>
<email>kkeithle@redhat.com</email>
</author>
<published>2013-04-23T16:57:40+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=7f162316f074f19cfecca9197060e4e687658345'/>
<id>7f162316f074f19cfecca9197060e4e687658345</id>
<content type='text'>
BUG: 819130
Change-Id: I96aeb8fbe8b79bbc058ff9a45167d822abb576ed
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4877
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: big lock - a coarse-grained locking to prevent races</title>
<updated>2013-04-17T12:48:50+00:00</updated>
<author>
<name>Krishnan Parthasarathi</name>
<email>kparthas@redhat.com</email>
</author>
<published>2013-04-15T10:26:57+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=92729add67e2e7b8c7589c2dfab0bde071a7faf2'/>
<id>92729add67e2e7b8c7589c2dfab0bde071a7faf2</id>
<content type='text'>
There are primarily three lists in the glusterd process that are
accessed concurrently: priv-&gt;volumes, priv-&gt;peers and
volinfo-&gt;bricks_list.

Big-lock approach
-----------------
WHAT IS IT?
Big lock is a coarse-grained lock which protects all three
lists, mentioned above, from racy access.

HOW DOES IT WORK?
At any given point in time, glusterd's thread(s) are in execution
_iff_ there is a preceding, inbound network event. Of course, the
sigwaiter thread and timer thread are exceptions.
A network event is an external trigger to glusterd, via the epoll
thread, in the form of POLLIN and POLLERR.
As long as we take the big-lock at all such entry points and yield
it when we are done, we are guaranteed that all the network events
accessing the global lists are serialised.

This amounts to holding the big lock at
- all the handlers of all the actors in glusterd. (POLLIN)
- all the cbks in glusterd. (POLLIN)
- rpc_notify (DISCONNECT event), if we access/modify
  one of the three lists. (POLLERR)

In the case of synctask'ized volume operations, we must remember that,
if we held the big lock for the entire duration of the handler,
we would block other non-synctask rpc actors from executing.
For example, volume-start would block in PMAP SIGNIN if done
incorrectly. To prevent this, we need to yield the big lock when we
yield the synctask, and reacquire it when the synctask wakes up.

BUG: 948686
Change-Id: I429832f1fed67bcac0813403d58346558a403ce9
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4835
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
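<!-- Editor's note: a minimal C sketch of the lock-yield pattern from the
last paragraph above: drop the big lock before yielding a synctask and
reacquire it on wakeup, so other rpc actors (e.g. PMAP SIGNIN during
volume-start) can run in between. big_lock_yield() and task_yield() are
hypothetical; the real synctask API differs.

#include <pthread.h>

static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for the synctask framework's coroutine yield. */
extern void task_yield(void);

static void big_lock_yield(void)
{
    pthread_mutex_unlock(&big_lock);   /* let other actors run */
    task_yield();                      /* sleep until woken */
    pthread_mutex_lock(&big_lock);     /* reacquire before resuming */
}
-->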
</feed>
