<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/tests, branch v3.4.0alpha3</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>glusterd: big lock - a coarse-grained locking to prevent races</title>
<updated>2013-04-17T12:48:50+00:00</updated>
<author>
<name>Krishnan Parthasarathi</name>
<email>kparthas@redhat.com</email>
</author>
<published>2013-04-15T10:26:57+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=92729add67e2e7b8c7589c2dfab0bde071a7faf2'/>
<id>92729add67e2e7b8c7589c2dfab0bde071a7faf2</id>
<content type='text'>
There are primarily three lists in the glusterd process that are
accessed concurrently: priv-&gt;volumes, priv-&gt;peers and
volinfo-&gt;bricks_list.

Big-lock approach
-----------------
WHAT IS IT?
The big lock is a coarse-grained lock that protects all three
lists mentioned above from racy access.

HOW DOES IT WORK?
At any given point in time, glusterd's thread(s) are in execution
_iff_ there is a preceding, inbound network event; the sigwaiter
thread and the timer thread are exceptions.
A network event is an external trigger to glusterd, delivered via
the epoll thread in the form of POLLIN or POLLERR.
As long as we take the big lock at all such entry points and yield
it when we are done, all network events that access the global
lists are guaranteed to be serialised.

This amounts to holding the big lock at
- all the handlers of all the actors in glusterd. (POLLIN)
- all the cbks in glusterd. (POLLIN)
- rpc_notify (DISCONNECT event), if we access/modify
  one of the three lists. (POLLERR)

In the case of synctask'ized volume operations, we must remember that
holding the big lock for the entire duration of the handler may block
other non-synctask rpc actors from executing.
For example, volume-start would block in PMAP SIGNIN if done
incorrectly. To prevent this, we need to yield the big lock when we
yield the synctask, and reacquire it when the synctask wakes up.

BUG: 948686
Change-Id: I429832f1fed67bcac0813403d58346558a403ce9
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4835
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
There are primarily three lists in the glusterd process that are
accessed concurrently: priv-&gt;volumes, priv-&gt;peers and
volinfo-&gt;bricks_list.

Big-lock approach
-----------------
WHAT IS IT?
The big lock is a coarse-grained lock that protects all three
lists mentioned above from racy access.

HOW DOES IT WORK?
At any given point in time, glusterd's thread(s) are in execution
_iff_ there is a preceding, inbound network event; the sigwaiter
thread and the timer thread are exceptions.
A network event is an external trigger to glusterd, delivered via
the epoll thread in the form of POLLIN or POLLERR.
As long as we take the big lock at all such entry points and yield
it when we are done, all network events that access the global
lists are guaranteed to be serialised.

This amounts to holding the big lock at
- all the handlers of all the actors in glusterd. (POLLIN)
- all the cbks in glusterd. (POLLIN)
- rpc_notify (DISCONNECT event), if we access/modify
  one of the three lists. (POLLERR)

In the case of synctask'ized volume operations, we must remember that
holding the big lock for the entire duration of the handler may block
other non-synctask rpc actors from executing.
For example, volume-start would block in PMAP SIGNIN if done
incorrectly. To prevent this, we need to yield the big lock when we
yield the synctask, and reacquire it when the synctask wakes up.

BUG: 948686
Change-Id: I429832f1fed67bcac0813403d58346558a403ce9
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4835
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>tests: fix dependency on sleep in bug-874498.t</title>
<updated>2013-04-17T11:36:27+00:00</updated>
<author>
<name>Krishnan Parthasarathi</name>
<email>kparthas@redhat.com</email>
</author>
<published>2013-04-15T10:22:53+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=63098d9ff8dcfc08fd2ed83c62c4ffb63fc2126f'/>
<id>63098d9ff8dcfc08fd2ed83c62c4ffb63fc2126f</id>
<content type='text'>
With the introduction of http://review.gluster.org/4784, there are
delays that break bug-874498.t, which wrongly depends on healing
finishing within 2 seconds.

Fix this by using 'EXPECT_WITHIN 60' instead of sleep 2.
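
For illustration only (not the actual test change), a minimal sketch
of the pattern, assuming the usual include.rc variables ($CLI, $V0)
and a hypothetical heal_pending_count helper whose parsing of the
'gluster volume heal ... info' output is an assumption:

  #!/bin/bash
  . $(dirname $0)/../include.rc

  # Hypothetical helper: count replica sets that still report a
  # non-zero 'Number of entries' in heal info (output format assumed).
  function heal_pending_count {
          $CLI volume heal $V0 info | grep 'Number of entries' | grep -c -v ': 0'
  }

  # Instead of 'sleep 2', poll until healing finishes, up to 60 seconds.
  EXPECT_WITHIN 60 "0" heal_pending_count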

BUG: 874498
Change-Id: I7131699908e63b024d2dd71395b3e94c15fe925c
Original-author: Anand Avati &lt;avati@redhat.com&gt;
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4832
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
With the introduction of http://review.gluster.org/4784, there are
delays that break bug-874498.t, which wrongly depends on healing
finishing within 2 seconds.

Fix this by using 'EXPECT_WITHIN 60' instead of sleep 2.
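
For illustration only (not the actual test change), a minimal sketch
of the pattern, assuming the usual include.rc variables ($CLI, $V0)
and a hypothetical heal_pending_count helper whose parsing of the
'gluster volume heal ... info' output is an assumption:

  #!/bin/bash
  . $(dirname $0)/../include.rc

  # Hypothetical helper: count replica sets that still report a
  # non-zero 'Number of entries' in heal info (output format assumed).
  function heal_pending_count {
          $CLI volume heal $V0 info | grep 'Number of entries' | grep -c -v ': 0'
  }

  # Instead of 'sleep 2', poll until healing finishes, up to 60 seconds.
  EXPECT_WITHIN 60 "0" heal_pending_count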

BUG: 874498
Change-Id: I7131699908e63b024d2dd71395b3e94c15fe925c
Original-author: Anand Avati &lt;avati@redhat.com&gt;
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4832
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>tests: fix further issues with bug-874498.t</title>
<updated>2013-04-17T08:53:43+00:00</updated>
<author>
<name>Krishnan Parthasarathi</name>
<email>kparthas@redhat.com</email>
</author>
<published>2013-04-15T10:21:14+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=28da431e5bedba64380cc3886cbab03c0d7a3cfd'/>
<id>28da431e5bedba64380cc3886cbab03c0d7a3cfd</id>
<content type='text'>
The failure of bug-874498.t seems to be caused by a "bug" in
glustershd. The situation seems to arise when both subvolumes of a
replica are "local" to glustershd; in such cases glustershd is
sensitive to the order in which the subvols come up.

The core of the issue is that, without the patch (#4784), the
self-heal daemon completes processing of the index and no entries
are left inside the xattrop index within a few seconds of 'volume
start force'. However, with the patch, a stale "backing file"
(against which the index performs link()) is left behind. The likely
reason is that an "INDEX"-based crawl does not happen against the
subvol when this patch is applied.

Before the #4784 patch, the order in which the subvols came up was:

  [2013-04-09 22:55:35.117679] I [client-handshake.c:1456:client_setvolume_cbk] 0-patchy-client-0: Connected to 10.3.129.13:49156, attached to remote volume '/d/backends/brick1'.
  ...
  [2013-04-09 22:55:35.118399] I [client-handshake.c:1456:client_setvolume_cbk] 0-patchy-client-1: Connected to 10.3.129.13:49157, attached to remote volume '/d/backends/brick2'.

However, with the patch, the order is reversed:

  [2013-04-09 22:53:34.945370] I [client-handshake.c:1456:client_setvolume_cbk] 0-patchy-client-1: Connected to 10.3.129.13:49153, attached to remote volume '/d/backends/brick2'.
  ...
  [2013-04-09 22:53:34.950966] I [client-handshake.c:1456:client_setvolume_cbk] 0-patchy-client-0: Connected to 10.3.129.13:49152, attached to remote volume '/d/backends/brick1'.

The index in brick2 has the list of files/gfids to heal. It appears
that when brick1 is the first subvol detected as coming up, an
INDEX-based crawl somehow clears all the index entries in brick2,
but if brick2 comes up as the first subvol, then the backing file
is left stale.

Also, doing a "gluster volume heal full" seems to leave stale
backing files behind too, since the crawl is performed on the
namespace and the backing file is never encountered there to get
cleared out.

So the interim (possibly permanent) fix is to have the script issue
a regular self-heal command (and not a "full" one).
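
In test-script terms the switch looks roughly like this (a sketch,
not the exact diff; $CLI and $V0 as defined by the tests framework):

  # Before: a full crawl walks only the namespace and misses the
  # stale backing file in the xattrop index.
  TEST $CLI volume heal $V0 full

  # After: a regular (index-based) heal processes the xattrop index
  # and clears its entries.
  TEST $CLI volume heal $V0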

The failure of the script itself is non-critical. The data files are
all healed; it is just the backing file that is left behind. The
stale backing file also gets cleared by the next index-based heal,
whether triggered manually or run automatically after 10 minutes.

BUG: 874498
Change-Id: I601e9adec46bb7f8ba0b1ba09d53b83bf317ab6a
Original-author: Anand Avati &lt;avati@redhat.com&gt;
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4831
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The failure of bug-874498.t seems to be caused by a "bug" in
glustershd. The situation seems to arise when both subvolumes of a
replica are "local" to glustershd; in such cases glustershd is
sensitive to the order in which the subvols come up.

The core of the issue is that, without the patch (#4784), the
self-heal daemon completes processing of the index and no entries
are left inside the xattrop index within a few seconds of 'volume
start force'. However, with the patch, a stale "backing file"
(against which the index performs link()) is left behind. The likely
reason is that an "INDEX"-based crawl does not happen against the
subvol when this patch is applied.

Before the #4784 patch, the order in which the subvols came up was:

  [2013-04-09 22:55:35.117679] I [client-handshake.c:1456:client_setvolume_cbk] 0-patchy-client-0: Connected to 10.3.129.13:49156, attached to remote volume '/d/backends/brick1'.
  ...
  [2013-04-09 22:55:35.118399] I [client-handshake.c:1456:client_setvolume_cbk] 0-patchy-client-1: Connected to 10.3.129.13:49157, attached to remote volume '/d/backends/brick2'.

However, with the patch, the order is reversed:

  [2013-04-09 22:53:34.945370] I [client-handshake.c:1456:client_setvolume_cbk] 0-patchy-client-1: Connected to 10.3.129.13:49153, attached to remote volume '/d/backends/brick2'.
  ...
  [2013-04-09 22:53:34.950966] I [client-handshake.c:1456:client_setvolume_cbk] 0-patchy-client-0: Connected to 10.3.129.13:49152, attached to remote volume '/d/backends/brick1'.

The index in brick2 has the list of files/gfids to heal. It appears
that when brick1 is the first subvol detected as coming up, an
INDEX-based crawl somehow clears all the index entries in brick2,
but if brick2 comes up as the first subvol, then the backing file
is left stale.

Also, doing a "gluster volume heal full" seems to leave stale
backing files behind too, since the crawl is performed on the
namespace and the backing file is never encountered there to get
cleared out.

So the interim (possibly permanent) fix is to have the script issue
a regular self-heal command (and not a "full" one).
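
In test-script terms the switch looks roughly like this (a sketch,
not the exact diff; $CLI and $V0 as defined by the tests framework):

  # Before: a full crawl walks only the namespace and misses the
  # stale backing file in the xattrop index.
  TEST $CLI volume heal $V0 full

  # After: a regular (index-based) heal processes the xattrop index
  # and clears its entries.
  TEST $CLI volume heal $V0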

The failure of the script itself is non-critical. The data files are
all healed; it is just the backing file that is left behind. The
stale backing file also gets cleared by the next index-based heal,
whether triggered manually or run automatically after 10 minutes.

BUG: 874498
Change-Id: I601e9adec46bb7f8ba0b1ba09d53b83bf317ab6a
Original-author: Anand Avati &lt;avati@redhat.com&gt;
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4831
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: allow multiple instances of glusterd on one machine</title>
<updated>2013-04-16T11:48:58+00:00</updated>
<author>
<name>Krishnan Parthasarathi</name>
<email>kparthas@redhat.com</email>
</author>
<published>2013-04-16T05:06:31+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=29d7563416c0d94cf36d7e05493332aacebfa0e0'/>
<id>29d7563416c0d94cf36d7e05493332aacebfa0e0</id>
<content type='text'>
This is needed to support automated testing of cluster-communication
features such as probing and quorum.  In order to use this, you need to
do the following preparatory steps.

* Copy /var/lib/glusterd to another directory for each virtual host

* Ensure that each virtual host has a different UUID in its glusterd.info

Now you can start each copy of glusterd with the following xlator-options.

* management.transport.socket.bind-address=$ip_address

* management.working-directory=$unique_working_directory

You can use 127.x.y.z addresses for binding without needing to assign
them to interfaces explicitly.  Note that you must use addresses, not
names, because of some stuff in the socket code that's not worth fixing
just for this usage, but after that you can use names in /etc/hosts
instead.

At this point you can issue CLI commands to a specific glusterd using
the --remote-host option.  So far probe, volume create/start/stop,
mount, and basic I/O all seem to work as expected with multiple
instances.
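
Putting the steps together, a rough sketch (the paths, UUID and
addresses are illustrative, and the --xlator-option spelling is an
assumption based on glusterd's usual option syntax):

  # Per-instance working directory with its own UUID in glusterd.info
  cp -a /var/lib/glusterd /var/lib/glusterd-2
  sed -i 's/^UUID=.*/UUID=11111111-2222-3333-4444-555555555555/' \
      /var/lib/glusterd-2/glusterd.info

  # Second instance bound to a loopback alias, using its own workdir
  glusterd \
    --xlator-option management.transport.socket.bind-address=127.0.0.2 \
    --xlator-option management.working-directory=/var/lib/glusterd-2

  # Talk to that instance explicitly
  gluster --remote-host=127.0.0.2 peer status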

Change-Id: I1beabb44cff8763d2774bc208b2ffcda27c1a550
BUG: 913555
Original-author: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4838
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
This is needed to support automated testing of cluster-communication
features such as probing and quorum.  In order to use this, you need to
do the following preparatory steps.

* Copy /var/lib/glusterd to another directory for each virtual host

* Ensure that each virtual host has a different UUID in its glusterd.info

Now you can start each copy of glusterd with the following xlator-options.

* management.transport.socket.bind-address=$ip_address

* management.working-directory=$unique_working_directory

You can use 127.x.y.z addresses for binding without needing to assign
them to interfaces explicitly.  Note that you must use addresses, not
names, because of some stuff in the socket code that's not worth fixing
just for this usage, but after that you can use names in /etc/hosts
instead.

At this point you can issue CLI commands to a specific glusterd using
the --remote-host option.  So far probe, volume create/start/stop,
mount, and basic I/O all seem to work as expected with multiple
instances.
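
Putting the steps together, a rough sketch (the paths, UUID and
addresses are illustrative, and the --xlator-option spelling is an
assumption based on glusterd's usual option syntax):

  # Per-instance working directory with its own UUID in glusterd.info
  cp -a /var/lib/glusterd /var/lib/glusterd-2
  sed -i 's/^UUID=.*/UUID=11111111-2222-3333-4444-555555555555/' \
      /var/lib/glusterd-2/glusterd.info

  # Second instance bound to a loopback alias, using its own workdir
  glusterd \
    --xlator-option management.transport.socket.bind-address=127.0.0.2 \
    --xlator-option management.working-directory=/var/lib/glusterd-2

  # Talk to that instance explicitly
  gluster --remote-host=127.0.0.2 peer status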

Change-Id: I1beabb44cff8763d2774bc208b2ffcda27c1a550
BUG: 913555
Original-author: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4838
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>tests/cluster.rc: support for virtual multi-server glusterd tests</title>
<updated>2013-04-16T03:48:26+00:00</updated>
<author>
<name>Jeff Darcy</name>
<email>jdarcy@redhat.com</email>
</author>
<published>2013-04-15T14:32:56+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=695d173b13f583de846720b66fc201bd84969330'/>
<id>695d173b13f583de846720b66fc201bd84969330</id>
<content type='text'>
Since http://review.gluster.org/4556 glusterd is capable of running
many instances of itself on a single system. This patch exploits
that feature and enhances the regression test framework to expose
handy primitives so that test cases may be written to test glusterd
in a cluster.

Usage:

1. Include "$(dirname)/../cluster.rc" to get access to the extensions

2. Call launch_cluster $N where $N is the count of virtual servers

Calling launch_cluster starts $N glusterds, which bind to $N different
IPs, and dynamically defines these primitives:

 - Variables $H1 .. $Hn assigned to hostnames of each "server".

 - Variables $CLI_1 .. $CLI_n assigned as commands to run CLI commands
   on the corresponding N'th server.

 - Variables $B1 .. $Bn assigned to the backend directories on each
   "server".

 - Function kill_glusterd, which accepts a parameter - index number of
   glusterd to be killed.

 - Variables $glusterd_1 .. $glusterd_n assigned to the command lines
   to restart the corresponding glusterd, if it was previously killed.

The current set of primitives and functions was implemented with the goal
of satisfying ./tests/bugs/bug-913555.t. The API will be made richer as
we add more cluster test cases.

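A minimal hypothetical test using these primitives might look like
this (volume name and brick paths are illustrative):

  #!/bin/bash
  . $(dirname $0)/../include.rc
  . $(dirname $0)/../cluster.rc

  TEST launch_cluster 2

  # Probe the second virtual server from the first and build a volume
  # with one brick on each of them.
  TEST $CLI_1 peer probe $H2
  TEST $CLI_1 volume create $V0 $H1:$B1/brick $H2:$B2/brick
  TEST $CLI_1 volume start $V0

  # Kill the second glusterd, then restart it from its saved command line.
  TEST kill_glusterd 2
  TEST $glusterd_2

  cleanup
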
Change-Id: I6e79c58098ed0862cf75a0b56e4ce384ec2e4eb2
BUG: 913555
Original-author: Anand Avati &lt;avati@redhat.com&gt;
Signed-off-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4836
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Since http://review.gluster.org/4556 glusterd is capable of running
many instances of itself on a single system. This patch exploits
that feature and enhances the regression test framework to expose
handy primitives so that test cases may be written to test glusterd
in a cluster.

Usage:

1. Include "$(dirname)/../cluster.rc" to get access to the extensions

2. Call launch_cluster $N where $N is the count of virtual servers

Calling launch_cluster starts $N glusterds, which bind to $N different
IPs, and dynamically defines these primitives:

 - Variables $H1 .. $Hn assigned to hostnames of each "server".

 - Variables $CLI_1 .. $CLI_n assigned as commands to run CLI commands
   on the corresponding N'th server.

 - Variables $B1 .. $Bn assigned to the backend directories on each
   "server".

 - Function kill_glusterd, which accepts a parameter - index number of
   glusterd to be killed.

 - Variables $glusterd_1 .. $glusterd_n assigned to the command lines
   to restart the corresponding glusterd, if it was previously killed.

The current set of primitives and functions was implemented with the goal
of satisfying ./tests/bugs/bug-913555.t. The API will be made richer as
we add more cluster test cases.

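A minimal hypothetical test using these primitives might look like
this (volume name and brick paths are illustrative):

  #!/bin/bash
  . $(dirname $0)/../include.rc
  . $(dirname $0)/../cluster.rc

  TEST launch_cluster 2

  # Probe the second virtual server from the first and build a volume
  # with one brick on each of them.
  TEST $CLI_1 peer probe $H2
  TEST $CLI_1 volume create $V0 $H1:$B1/brick $H2:$B2/brick
  TEST $CLI_1 volume start $V0

  # Kill the second glusterd, then restart it from its saved command line.
  TEST kill_glusterd 2
  TEST $glusterd_2

  cleanup
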
Change-Id: I6e79c58098ed0862cf75a0b56e4ce384ec2e4eb2
BUG: 913555
Original-author: Anand Avati &lt;avati@redhat.com&gt;
Signed-off-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4836
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>cluster/afr: Preserve mtime in self-heal</title>
<updated>2013-04-12T07:22:01+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2013-03-08T09:50:22+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=38c53e13e0eab7c8e5b10b71b2be20d4ef4ebc5a'/>
<id>38c53e13e0eab7c8e5b10b71b2be20d4ef4ebc5a</id>
<content type='text'>
Problem:
Data self-heal may choose a sink's iatt to set mtimes.
This happens because, after data syncing is done,
self-heal does one more xattrop/fstat to determine
sources and sinks when setting the inode-ctx. Since this
is done after data syncing and the erasing of xattrs, the
old source and the old sink are now both sources, but
their mtimes differ. The old code just takes the first
source from the list and updates mtimes, and that source
could have been a sink before the self-heal started.

Fix:
Set mtime from 'sources before syncing'.

Change-Id: Id769e1b99aa4f041eaee775f64cbf2c57b799723
BUG: 918437
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4658
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-on: http://review.gluster.org/4663
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem:
Data self-heal may choose a sink's iatt to set mtimes.
This happens because, after data syncing is done,
self-heal does one more xattrop/fstat to determine
sources and sinks when setting the inode-ctx. Since this
is done after data syncing and the erasing of xattrs, the
old source and the old sink are now both sources, but
their mtimes differ. The old code just takes the first
source from the list and updates mtimes, and that source
could have been a sink before the self-heal started.

Fix:
Set mtime from 'sources before syncing'.

Change-Id: Id769e1b99aa4f041eaee775f64cbf2c57b799723
BUG: 918437
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4658
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-on: http://review.gluster.org/4663
</pre>
</div>
</content>
</entry>
<entry>
<title>cluster/distribute: Ignore non-participating subvols for layout checks</title>
<updated>2013-04-11T17:41:01+00:00</updated>
<author>
<name>shishir gowda</name>
<email>sgowda@redhat.com</email>
</author>
<published>2013-04-04T05:53:08+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=a8b6cf1b64de7e03652c05ecd8d63b73bbd2523e'/>
<id>a8b6cf1b64de7e03652c05ecd8d63b73bbd2523e</id>
<content type='text'>
Backporting fix http://review.gluster.org/#/c/4668/

When subvols-per-directory is &lt; the number of available subvols, there are
layouts which are not populated. This leads to incorrect identification of
holes or overlaps. We need to ignore layouts which have err == 0 and
start == stop (in the current scenario, start == stop == 0).

Additionally, in layout-merge, treat missing xattrs as err = 0. In case of
missing layouts, anomalies will reset them.

For any other valid subvols, err != 0 in case of layouts being zeroed out.

Also reverted dht_selfheal_dir_xattr, which does layout calculation only
on subvols that have errors.

BUG: 921408
Change-Id: I75a8edcb92af5b53b3253c9addd7a812e9242836
Signed-off-by: shishir gowda &lt;sgowda@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4800
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Backporting fix http://review.gluster.org/#/c/4668/

When subvols-per-directory is &lt; the number of available subvols, there are
layouts which are not populated. This leads to incorrect identification of
holes or overlaps. We need to ignore layouts which have err == 0 and
start == stop (in the current scenario, start == stop == 0).

Additionally, in layout-merge, treat missing xattrs as err = 0. In case of
missing layouts, anomalies will reset them.

For any other valid subvols, err != 0 in case of layouts being zeroed out.

Also reverted dht_selfheal_dir_xattr, which does layout calculation only
on subvols that have errors.

BUG: 921408
Change-Id: I75a8edcb92af5b53b3253c9addd7a812e9242836
Signed-off-by: shishir gowda &lt;sgowda@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4800
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterfs.spec.in: resync with Fedora glusterfs.spec</title>
<updated>2013-03-27T01:28:54+00:00</updated>
<author>
<name>Kaleb S. KEITHLEY</name>
<email>kkeithle@redhat.com</email>
</author>
<published>2013-03-26T19:30:48+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=20b1be77f6cd7ddebedb0cc64d419b5686e5d56e'/>
<id>20b1be77f6cd7ddebedb0cc64d419b5686e5d56e</id>
<content type='text'>
cherry-pick from master, including commits:
 5d3b478e76f1015b11bfd7d48465ab12a4f0737e
 fd407a4f5cdb869dc52efe8fc9e1d284f60f5992
 6f6789884227b8260f140c39c063d77b0516af97
 84f5e4b354526fbb7f0665345816e81c81245c8f
 2398e1e0da61f4ec5f209c704e037b54b5c249e1

Resync with Fedora's glusterfs.spec

To build a set of RPMs:
 % ./autogen.sh
 % ./configure --enable-fusermount
 % make dist
 % cd extras/LinuxRPM &amp;&amp; make glusterrpms

Updated rpm.t

BUG: 819130
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
Change-Id: Ib73be0fbb7ee16a5c41b4f7c7a3f66d0224bfe6c
Reviewed-on: http://review.gluster.org/4725
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
cherry-pick from master, including commits:
 5d3b478e76f1015b11bfd7d48465ab12a4f0737e
 fd407a4f5cdb869dc52efe8fc9e1d284f60f5992
 6f6789884227b8260f140c39c063d77b0516af97
 84f5e4b354526fbb7f0665345816e81c81245c8f
 2398e1e0da61f4ec5f209c704e037b54b5c249e1

Resync with Fedora's glusterfs.spec

To build a set of RPMs:
 % ./autogen.sh
 % ./configure --enable-fusermount
 % make dist
 % cd extras/LinuxRPM &amp;&amp; make glusterrpms

Updated rpm.t

BUG: 819130
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
Change-Id: Ib73be0fbb7ee16a5c41b4f7c7a3f66d0224bfe6c
Reviewed-on: http://review.gluster.org/4725
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>mgmt/glusterd: Start fs-crawl in separate thread so as not to block epoll</title>
<updated>2013-03-19T17:29:59+00:00</updated>
<author>
<name>Varun Shastry</name>
<email>vshastry@redhat.com</email>
</author>
<published>2013-03-14T13:26:18+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=e43c8fe9ee9e782ebed8aafdb15310edde587ba0'/>
<id>e43c8fe9ee9e782ebed8aafdb15310edde587ba0</id>
<content type='text'>
tests/basic/quota.t covers the test case for this.

This patch is only for the 3.4 branch; http://review.gluster.org/4495 fixes
the issue in master.

Change-Id: I92674f5413441cc896245d5b3d0925f44ce8b2d3
BUG: 919998
Signed-off-by: Varun Shastry &lt;vshastry@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4680
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
tests/basic/quota.t covers the test case for this.

This patch is only for the 3.4 branch; http://review.gluster.org/4495 fixes
the issue in master.

Change-Id: I92674f5413441cc896245d5b3d0925f44ce8b2d3
BUG: 919998
Signed-off-by: Varun Shastry &lt;vshastry@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4680
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>cluster/distribute: Fix layout overlaps due to spread-count in selfheal path</title>
<updated>2013-03-09T15:36:34+00:00</updated>
<author>
<name>shishir gowda</name>
<email>sgowda@redhat.com</email>
</author>
<published>2013-03-07T14:11:33+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=881d48d3278f4fe04eea899f2f45c0f45d6ee56e'/>
<id>881d48d3278f4fe04eea899f2f45c0f45d6ee56e</id>
<content type='text'>
We need to zero out the layout range before we re-calculate it.
When spread-count is set, we would otherwise end up with stale ranges in
the layout.

Replaced dht_selfheal_dir_xattr with dht_fix_dir_xattr, which correctly resets
the unused (after re-calculation) layouts.

Change-Id: I1a900d15df07335f59356bd23182ccec34381ab2
BUG: 884455
Signed-off-by: shishir gowda &lt;sgowda@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4648
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
We need to zero out the layout range before we re-calculate it.
When spread-count is set, we would otherwise end up with stale ranges in
the layout.

Replaced dht_selfheal_dir_xattr with dht_fix_dir_xattr, which correctly resets
the unused (after re-calculation) layouts.

Change-Id: I1a900d15df07335f59356bd23182ccec34381ab2
BUG: 884455
Signed-off-by: shishir gowda &lt;sgowda@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4648
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</pre>
</div>
</content>
</entry>
</feed>
