<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/features/locks/src, branch v3.5.3beta1</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>features/locks: Clean up logging of cleanup in DISCONNECT codepath</title>
<updated>2014-06-23T09:38:31+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2014-06-05T03:52:34+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=5888a89fa8950be38ed3c5b000a37013f6656031'/>
<id>5888a89fa8950be38ed3c5b000a37013f6656031</id>
<content type='text'>
        Backport of http://review.gluster.org/7981

Now, the gfid is printed instead of the path in cleanup messages.

Also, the refkeeper update is eliminated in inodelk and entrylk.
Instead, the patch ensures the inode and pl_inode are kept alive as
long as there is at least one lock (granted/blocked) on the inode.

Also, every inode is unref'd appropriately on a DISCONNECT from the
lock-owning client.

Change-Id: I234db688ad0d314f4936a16cc5af70a3bd071970
BUG: 1104915
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/8042
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
        Backport of http://review.gluster.org/7981

Now, the gfid is printed instead of the path in cleanup messages.

Also, the refkeeper update is eliminated in inodelk and entrylk.
Instead, the patch ensures the inode and pl_inode are kept alive as
long as there is at least one lock (granted/blocked) on the inode.

Also, every inode is unref'd appropriately on a DISCONNECT from the
lock-owning client.

Change-Id: I234db688ad0d314f4936a16cc5af70a3bd071970
BUG: 1104915
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/8042
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>features/locks: Remove stale entrylk objects from 'blocked_locks' list</title>
<updated>2014-05-08T13:00:56+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2014-04-24T11:07:05+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=d8014f53c22a7da2d7b38e7d3215aa83e3e51d0d'/>
<id>d8014f53c22a7da2d7b38e7d3215aa83e3e51d0d</id>
<content type='text'>
        Backport of http://review.gluster.org/7560

* In the event of a DISCONNECT from a client, as part of cleanup,
  entrylk objects are not removed from the blocked_locks list before
  being unref'd and freed, causing the brick process to crash at
  some point when the (now) stale object is accessed again in the list.

* Also during cleanup, it is pointless to try to grant a lock to a
  previously blocked entrylk (say L1) as part of releasing another
  conflicting lock (L2) (a side effect of L1 not being deleted from
  the blocked_locks list before grant_blocked_entry_locks() runs
  during cleanup) if L1 is also associated with the DISCONNECTing
  client. This patch fixes the problem.

Change-Id: Ie077f8eeb61c5505f047a8fdaac67db32e5d4270
BUG: 1089470
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7576
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
        Backport of http://review.gluster.org/7560

* In the event of a DISCONNECT from a client, as part of cleanup,
  entrylk objects are not removed from the blocked_locks list before
  being unref'd and freed, causing the brick process to crash at
  some point when the (now) stale object is accessed again in the list.

* Also during cleanup, it is pointless to try to grant a lock to a
  previously blocked entrylk (say L1) as part of releasing another
  conflicting lock (L2) (a side effect of L1 not being deleted from
  the blocked_locks list before grant_blocked_entry_locks() runs
  during cleanup) if L1 is also associated with the DISCONNECTing
  client. This patch fixes the problem.

Change-Id: Ie077f8eeb61c5505f047a8fdaac67db32e5d4270
BUG: 1089470
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7576
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>features/locks: Remove stale inodelk objects from 'blocked_locks' list</title>
<updated>2014-04-28T16:54:11+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2014-04-19T14:33:38+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=82c244a390679e03ea25830abbb90b0dcc7a69cc'/>
<id>82c244a390679e03ea25830abbb90b0dcc7a69cc</id>
<content type='text'>
        Backport of http://review.gluster.org/7512

* In the event of a DISCONNECT from a client, as part of cleanup,
  inodelk objects are not removed from the blocked_locks list before
  being unref'd and freed, causing the brick process to crash at
  some point when the (now) stale object is accessed again in the list.

* Also during cleanup, it is pointless to try to grant a lock to a
  previously blocked inodelk (say L1) as part of releasing another
  conflicting lock (L2) (a side effect of L1 not being deleted from
  the blocked_locks list before grant_blocked_inode_locks() runs
  during cleanup) if L1 is also associated with the DISCONNECTing
  client. This patch fixes the problem.

Change-Id: I84d884e203761d0b071183860ffe8cae1f212ea5
BUG: 1089470
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7575
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
        Backport of http://review.gluster.org/7512

* In the event of a DISCONNECT from a client, as part of cleanup,
  inodelk objects are not removed from the blocked_locks list before
  being unref'd and freed, causing the brick process to crash at
  some point when the (now) stale object is accessed again in the list.

* Also during cleanup, it is pointless to try to grant a lock to a
  previously blocked inodelk (say L1) as part of releasing another
  conflicting lock (L2) (a side effect of L1 not being deleted from
  the blocked_locks list before grant_blocked_inode_locks() runs
  during cleanup) if L1 is also associated with the DISCONNECTing
  client. This patch fixes the problem.

Change-Id: I84d884e203761d0b071183860ffe8cae1f212ea5
BUG: 1089470
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7575
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>features/locks: Fix a missing assignment in new_entrylk_lock()</title>
<updated>2014-04-10T10:29:59+00:00</updated>
<author>
<name>Vijay Bellur</name>
<email>vbellur@redhat.com</email>
</author>
<published>2014-04-08T06:58:04+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=bf8c9b331b2fb55882204fa31ede3997165e6748'/>
<id>bf8c9b331b2fb55882204fa31ede3997165e6748</id>
<content type='text'>
Change-Id: If5c03456d61ec930d588b57781fb545eed18e4a2
BUG: 1085220
Signed-off-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7414
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Raghavendra Bhat &lt;raghavendra@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Change-Id: If5c03456d61ec930d588b57781fb545eed18e4a2
BUG: 1085220
Signed-off-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7414
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Raghavendra Bhat &lt;raghavendra@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>locks: fix unconditional op_ret success of entrylk</title>
<updated>2014-04-04T13:38:13+00:00</updated>
<author>
<name>Anand Avati</name>
<email>avati@redhat.com</email>
</author>
<published>2014-03-08T20:50:47+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=18ad6f2dc7b8254ef718de6311f7344b28e99797'/>
<id>18ad6f2dc7b8254ef718de6311f7344b28e99797</id>
<content type='text'>
Bug introduced in recent refactoring. op_ret of entrylk() was always
getting set to 0 even though the second locker would not have been
granted the lock. This resulted in multiple contenders being granted
locks at the same time.

BUG: 849630
Change-Id: I130d521407afee15de59f270b59687d41982fb29
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7232
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Bug introduced in recent refactoring. op_ret of entrylk() was always
getting set to 0 even though the second locker would not have been
granted the lock. This resulted in multiple contenders being granted
locks at the same time.

BUG: 849630
Change-Id: I130d521407afee15de59f270b59687d41982fb29
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7232
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>locks: set @lock-&gt;frame = NULL when lock is granted</title>
<updated>2014-01-23T07:55:29+00:00</updated>
<author>
<name>Anand Avati</name>
<email>avati@redhat.com</email>
</author>
<published>2014-01-20T03:44:06+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=cd65bab3a1586e03178e8b683afea26bd324a71b'/>
<id>cd65bab3a1586e03178e8b683afea26bd324a71b</id>
<content type='text'>
This way, the disconnect cleanup code can differentiate between
granted and blocked locks.

Change-Id: I2a835c6865b6c804231d852953ea84eeccef35a3
BUG: 849630
Signed-off-by: Anand Avati &lt;avati@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6762
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
This way, the disconnect cleanup code can differentiate between
granted and blocked locks.

Change-Id: I2a835c6865b6c804231d852953ea84eeccef35a3
BUG: 849630
Signed-off-by: Anand Avati &lt;avati@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6762
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>locks: various fixes</title>
<updated>2014-01-14T12:37:56+00:00</updated>
<author>
<name>Anand Avati</name>
<email>avati@redhat.com</email>
</author>
<published>2013-12-04T00:30:45+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=8ee3846e75327bb81001607d9023fce4910fe405'/>
<id>8ee3846e75327bb81001607d9023fce4910fe405</id>
<content type='text'>
- implement ref/unref of entry locks (and fix bad pointer deref crashes)
- clean up code and delete various data types
- fix improper read/write lock conflict detection in entrylk
- fix indefinite hang of blocked locks on disconnect
- register locks in client_t synchronously, fix crashes in disconnect path

Change-Id: Id273690c9111b8052139d1847060d1fb5a711924
BUG: 849630
Signed-off-by: Anand Avati &lt;avati@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6695
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
- implement ref/unref of entry locks (and fix bad pointer deref crashes)
- clean up code and delete various data types
- fix improper read/write lock conflict detection in entrylk
- fix indefinite hang of blocked locks on disconnect
- register locks in client_t synchronously, fix crashes in disconnect path

Change-Id: Id273690c9111b8052139d1847060d1fb5a711924
BUG: 849630
Signed-off-by: Anand Avati &lt;avati@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6695
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>client_t: phase 2, refactor server_ctx and locks_ctx out</title>
<updated>2013-10-31T16:32:50+00:00</updated>
<author>
<name>Kaleb S. KEITHLEY</name>
<email>kkeithle@redhat.com</email>
</author>
<published>2013-08-21T18:11:38+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=3108d4529d57690f58027da61ac5e56a0987ed57'/>
<id>3108d4529d57690f58027da61ac5e56a0987ed57</id>
<content type='text'>
remove server_ctx and locks_ctx from client_ctx and store them as
discrete entities in the scratch_ctx

hooking up dump will be done in phase 3

BUG: 849630
Change-Id: I94cea328326db236cdfdf306cb381e4d58f58d4c
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-on: http://review.gluster.org/5678
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
remove server_ctx and locks_ctx from client_ctx and store them as
discrete entities in the scratch_ctx

hooking up dump will be done in phase 3

BUG: 849630
Change-Id: I94cea328326db236cdfdf306cb381e4d58f58d4c
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-on: http://review.gluster.org/5678
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>features/locks: Improves debuggability of inode/entry locks.</title>
<updated>2013-09-03T13:50:40+00:00</updated>
<author>
<name>Anuradha Talur</name>
<email>atalur@redhat.com</email>
</author>
<published>2013-08-28T08:59:50+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=7dbfbfd3694e02b90e8f3ce509f5279da1523a02'/>
<id>7dbfbfd3694e02b90e8f3ce509f5279da1523a02</id>
<content type='text'>
Prints, in the statedump, information about the mount that
performed the inode/entry lk.

For the entrylks that are granted after a blocked state, the
blocked time is not printed. A patch for that will be sent
later.

Change-Id: Ib0c1ed21fa9328b435f96b590dd343f59814a08d
BUG: 915629
Signed-off-by: Anuradha Talur &lt;atalur@redhat.com&gt;
Reviewed-on: http://review.gluster.org/5712
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Prints, in the statedump, information about the mount that
performed the inode/entry lk.

For the entrylks that are granted after a blocked state, the
blocked time is not printed. A patch for that will be sent
later.

Change-Id: Ib0c1ed21fa9328b435f96b590dd343f59814a08d
BUG: 915629
Signed-off-by: Anuradha Talur &lt;atalur@redhat.com&gt;
Reviewed-on: http://review.gluster.org/5712
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>features/locks: Convert old style metadata locks to new-style</title>
<updated>2013-08-07T10:33:22+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2013-08-06T12:10:05+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=c6a555d1268c667b72728ffa58600fc0632465e4'/>
<id>c6a555d1268c667b72728ffa58600fc0632465e4</id>
<content type='text'>
Problem:
In 3.3, inode locks for both metadata and data compete in the same
domain, called the data domain (old style). This, coupled with
eager-lock and delayed post-ops, introduces delays for metadata
operations like chmod, chown etc. To avoid this problem, inode locks
for metadata ops are moved to a different domain, called the metadata
domain, in 3.4 (new style). But when both 3.3 and 3.4 clients are
present, 3.4 clients still need to take locks in "old style" for
metadata operations so that proper synchronization happens across 3.3
and 3.4 clients. Only when all clients are &gt;= 3.4 will locks be
taken in "new style" for metadata locks. Because of this behavior, as
long as at least one 3.3 client is present, delays will be perceived
for metadata operations on all 3.4 clients while data operations are
in progress (e.g. untar will extract one file per second).

Fix:
Make the locks xlator translate old-style metadata locks to new-style
metadata locks. Since the upgrade process suggests upgrading servers
first and then clients, this approach gives good results.

Tests:
1) Tested, using gdb, that old-style metadata locks are converted to
   new style by the locks xlator.
2) Tested, using gdb and statedumps, that disconnects purge locks in
   the metadata domain as well.
3) Tested that untar performance is not hampered by metadata and data
   operations.
4) Had two mounts, one with orthogonal-meta-data on and the other
   with orthogonal-meta-data off, and ran chmod 777 &lt;file&gt; on one
   mount and chmod 555 &lt;file&gt; on the other in while loops. When I
   took statedumps, I saw that both transports take the lock in the
   same domain with the same range.

   18:49:30 :) ⚡ sudo grep -B1 "ACTIVE" /usr/local/var/run/gluster/home-gfs-r2_0.324.dump.*
   home-gfs-r2_0.324.dump.1375794971-lock-dump.domain.domain=r2-replicate-0:metadata
   home-gfs-r2_0.324.dump.1375794971:inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=9223372036854775806, len=0, pid = 7525, owner=78f9e652497f0000, transport=0x15ac9e0, , granted at Tue Aug  6 18:46:11 2013

   home-gfs-r2_0.324.dump.1375795051-lock-dump.domain.domain=r2-replicate-0:metadata
   home-gfs-r2_0.324.dump.1375795051:inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=9223372036854775806, len=0, pid = 8879, owner=0019cc3cad7f0000, transport=0x158f580, , granted at Tue Aug  6 18:47:31 2013

Change-Id: I268df4efd93a377a0c73fbc59b739ef12a7a8bb6
BUG: 993981
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
Reviewed-on: http://review.gluster.org/5503
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem:
In 3.3, inode locks for both metadata and data compete in the same
domain, called the data domain (old style). This, coupled with
eager-lock and delayed post-ops, introduces delays for metadata
operations like chmod, chown etc. To avoid this problem, inode locks
for metadata ops are moved to a different domain, called the metadata
domain, in 3.4 (new style). But when both 3.3 and 3.4 clients are
present, 3.4 clients still need to take locks in "old style" for
metadata operations so that proper synchronization happens across 3.3
and 3.4 clients. Only when all clients are &gt;= 3.4 will locks be
taken in "new style" for metadata locks. Because of this behavior, as
long as at least one 3.3 client is present, delays will be perceived
for metadata operations on all 3.4 clients while data operations are
in progress (e.g. untar will extract one file per second).

Fix:
Make the locks xlator translate old-style metadata locks to new-style
metadata locks. Since the upgrade process suggests upgrading servers
first and then clients, this approach gives good results.

Tests:
1) Tested, using gdb, that old-style metadata locks are converted to
   new style by the locks xlator.
2) Tested, using gdb and statedumps, that disconnects purge locks in
   the metadata domain as well.
3) Tested that untar performance is not hampered by metadata and data
   operations.
4) Had two mounts, one with orthogonal-meta-data on and the other
   with orthogonal-meta-data off, and ran chmod 777 &lt;file&gt; on one
   mount and chmod 555 &lt;file&gt; on the other in while loops. When I
   took statedumps, I saw that both transports take the lock in the
   same domain with the same range.

   18:49:30 :) ⚡ sudo grep -B1 "ACTIVE" /usr/local/var/run/gluster/home-gfs-r2_0.324.dump.*
   home-gfs-r2_0.324.dump.1375794971-lock-dump.domain.domain=r2-replicate-0:metadata
   home-gfs-r2_0.324.dump.1375794971:inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=9223372036854775806, len=0, pid = 7525, owner=78f9e652497f0000, transport=0x15ac9e0, , granted at Tue Aug  6 18:46:11 2013

   home-gfs-r2_0.324.dump.1375795051-lock-dump.domain.domain=r2-replicate-0:metadata
   home-gfs-r2_0.324.dump.1375795051:inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=9223372036854775806, len=0, pid = 8879, owner=0019cc3cad7f0000, transport=0x158f580, , granted at Tue Aug  6 18:47:31 2013

Change-Id: I268df4efd93a377a0c73fbc59b739ef12a7a8bb6
BUG: 993981
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
Reviewed-on: http://review.gluster.org/5503
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</pre>
</div>
</content>
</entry>
</feed>
