author | Krishnan Parthasarathi <kp@gluster.com> | 2012-01-11 15:39:38 +0530
committer | Vijay Bellur <vijay@gluster.com> | 2012-02-03 07:40:46 -0800
commit | 3ec7680a70bcace6b195ae412362b7e1b072eaeb (patch)
tree | 3b67cb58722f5a9ed3abca98c011e654c649b359 /xlators/mgmt/glusterd
parent | 2313600f0749094f1246e663a0db15da3c2812db (diff)
glusterd: Changed op_sm_queue locking mechanism to accommodate nested calls to op_sm
Today, an rpc call made inside an op_sm handler can fail due to a
disconnected peer, resulting in the rpc callback being invoked in the
same stack with the appropriate status set. All glusterd rpc cbks move
the state machine based on the status returned by the rpc layer, which
results in a nested call to op_sm. With the current locking scheme,
glusterd would end up in a deadlock.
The new scheme fails the nested glusterd_op_sm () call instead, which
prevents the deadlock. Overall behaviour is unchanged, because the
op_sm () call already in execution does not return until all events in
the queue have been processed.
Change-Id: I6a7ba16d3810b699bcd06dc28a5ff3205a25476f
BUG: 772142
Signed-off-by: Krishnan Parthasarathi <kp@gluster.com>
Reviewed-on: http://review.gluster.com/2625
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amar@gluster.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>
Diffstat (limited to 'xlators/mgmt/glusterd')
-rw-r--r-- | xlators/mgmt/glusterd/src/glusterd-op-sm.c | 9 |
1 files changed, 8 insertions, 1 deletions
diff --git a/xlators/mgmt/glusterd/src/glusterd-op-sm.c b/xlators/mgmt/glusterd/src/glusterd-op-sm.c
index f24cb9b1332..d0d280a0923 100644
--- a/xlators/mgmt/glusterd/src/glusterd-op-sm.c
+++ b/xlators/mgmt/glusterd/src/glusterd-op-sm.c
@@ -3570,11 +3570,16 @@ glusterd_op_sm ()
         glusterd_op_sm_event_t          *event = NULL;
         glusterd_op_sm_event_t          *tmp = NULL;
         int                              ret = -1;
+        int                              lock_err = 0;
         glusterd_op_sm_ac_fn             handler = NULL;
         glusterd_op_sm_t                *state = NULL;
         glusterd_op_sm_event_type_t      event_type = GD_OP_EVENT_NONE;
 
-        (void ) pthread_mutex_lock (&gd_op_sm_lock);
+        if ((lock_err = pthread_mutex_trylock (&gd_op_sm_lock))) {
+                gf_log (THIS->name, GF_LOG_DEBUG, "lock failed due to %s",
+                        strerror (lock_err));
+                goto lock_failed;
+        }
 
         while (!list_empty (&gd_op_sm_queue)) {
 
@@ -3624,6 +3629,8 @@ glusterd_op_sm ()
         (void ) pthread_mutex_unlock (&gd_op_sm_lock);
         ret = 0;
 
+lock_failed:
+        return ret;
 }