<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/cluster/dht/src/dht-helper.c, branch v3.7.2</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>cluster/dht: Prevent use after free bug</title>
<updated>2015-06-19T05:16:25+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2015-06-13T12:03:14+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=c5c1ad0fae6701cc9350bda4e5dc6d33cdf28eca'/>
<id>c5c1ad0fae6701cc9350bda4e5dc6d33cdf28eca</id>
<content type='text'>
        Backport of http://review.gluster.org/11209

BUG: 1233042
Change-Id: If3685c9ed84a6720d8696d11773005e9786b503f
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11305
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/dht: fix incorrect dst subvol info in inode_ctx</title>
<updated>2015-06-04T16:44:24+00:00</updated>
<author>
<name>Nithya Balachandran</name>
<email>nbalacha@redhat.com</email>
</author>
<published>2015-05-19T17:57:35+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=7aa52673cc70011750ff019cefad8e0229f892ad'/>
<id>7aa52673cc70011750ff019cefad8e0229f892ad</id>
<content type='text'>
Stash additional information in the inode_ctx to help decide whether
the migration information is stale. Stale information can occur when
a file is migrated several times but FOPs detect only the P1
migration phase; if no FOP detects the P2 phase, the inode ctx1 is
never reset.
We now save the src subvol as well as the dst subvol in the
inode ctx. The src subvol is the subvol on which the FOP was sent
when the mig info was set in the inode ctx. This information is
considered stale if either:
1. The subvol on which the current FOP is sent is the same as
the dst subvol in the ctx, or
2. The subvol on which the current FOP is sent is not the same
as the src subvol in the ctx.

This does not handle the case where the same file might have been
renamed such that the src subvol is the same but the dst subvol
is different. However, that is unlikely to happen very often.
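
As a minimal sketch (the helper and argument names are assumed here
for illustration), the staleness test described above reduces to:

    /* returns true if the saved migration info cannot be trusted */
    static gf_boolean_t
    dht_mig_info_is_invalid (xlator_t *current, xlator_t *src_subvol,
                             xlator_t *dst_subvol)
    {
            /* stale if we already sit on the recorded destination,
               or if we are no longer on the subvol the info was
               set on */
            return (current == dst_subvol) || (current != src_subvol);
    }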

Change-Id: I05a2e9b107ee64750c7ca629aee03b03a02ef75f
BUG: 1225809
Signed-off-by: Nithya Balachandran &lt;nbalacha@redhat.com&gt;
Reviewed-on: http://review.gluster.org/10967
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Tested-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>cluster/dht: pass a destination subvol to fop2 variants to avoid races.</title>
<updated>2015-06-04T08:34:27+00:00</updated>
<author>
<name>Raghavendra G</name>
<email>rgowdapp@redhat.com</email>
</author>
<published>2015-05-27T11:49:30+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=d50f0d3b7c15f25ff2d0ab29e6b7cd18bcf44cd0'/>
<id>d50f0d3b7c15f25ff2d0ab29e6b7cd18bcf44cd0</id>
<content type='text'>
The destination subvol used in the fop2 variants is stored either in
inode-ctx1 or in local-&gt;cached_subvol. However, a value stored in
these locations before a fop2 is invoked is not guaranteed to still
be there when the fop2 runs, as these locations are shared among
different concurrent operations. So, to preserve the atomicity of
"check dst-subvol and invoke the fop2 variant if dst-subvol is
found", we pass the dst-subvol down to the fop2 variant.
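
A sketch of the shape of the change, using writev as the example
(signatures illustrative, not the literal patch):

    /* Before: the fop2 variant re-read the shared location itself,
       racing with concurrent updates:
           static int dht_writev2 (call_frame_t *frame, int op_ret);
       After: the caller resolves the dst-subvol once and passes it
       down, so the check and the call see the same value: */
    static int
    dht_writev2 (xlator_t *this, xlator_t *subvol,
                 call_frame_t *frame, int op_ret);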

This patch also fixes error handling in some fop2 variants.

Change-Id: Icc226228a246d3f223e3463519736c4495b364d2
BUG: 1225809
Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Reviewed-on: http://review.gluster.org/10966
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
</entry>
<entry>
<title>cluster/dht: Don't rely on linkto xattr to find destination subvol</title>
<updated>2015-06-04T05:19:11+00:00</updated>
<author>
<name>Raghavendra G</name>
<email>rgowdapp@redhat.com</email>
</author>
<published>2015-05-13T14:26:47+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=271ee96e5c8636ee512a06e0478982874c160a1a'/>
<id>271ee96e5c8636ee512a06e0478982874c160a1a</id>
<content type='text'>
Don't rely on the linkto xattr to find the destination subvol during
phase 2 of migration.

The linkto xattr on the source file cannot be relied on to find where
the data file currently resides, as there may have been multiple
migrations before a client detects phase 2. For example:

* migration (M1, node1, node2) starts.
* application writes some data. DHT correctly stores the state in
  inode context that phase-1 of migration is in progress
* migration M1 completes
* migration (M2, node2, node3) is triggered and completed
* application resumes writes to the file. DHT identifies it as phase-2
  of migration. However, the linkto xattr on node1 points to node2,
  while the file is on node3. A lookup correctly identifies node3 as
  the cached subvol.
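
Given such a sequence, the idea is roughly the following (the helper
names below are assumed for illustration, not the literal patch):
treat the linkto xattr as a hint only, and confirm with a fresh
lookup before redirecting the fop.

    static xlator_t *
    dht_phase2_dst (call_frame_t *frame, xlator_t *this, dict_t *xattr)
    {
            xlator_t *hint = dht_linkto_subvol (this, xattr); /* node2 */
            xlator_t *real = dht_lookup_cached_subvol (frame, this);

            /* trust the fresh lookup (node3), not the stale hint */
            return real ? real : hint;
    }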

TBD:
   When we identify phase-2 of a previous migration (say M1), there
   might be a migration in progress - say (M3, node3, node4). In this
   case we need to send writes to both (node3, node4), not just
   node3. Also, the inode state needs to correctly indicate that it's
   in phase-1 of migration. I'll send this as a different patch.

Change-Id: I1a861f766258170af2f6c0935468edb6be687b95
BUG: 1225809
Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Reviewed-on: http://review.gluster.org/10805
Reviewed-on: http://review.gluster.org/10965
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
</entry>
<entry>
<title>cluster/dht: tiered volumes may not allow access to files undergoing migration</title>
<updated>2015-05-08T08:55:41+00:00</updated>
<author>
<name>Dan Lambright</name>
<email>dlambrig@redhat.com</email>
</author>
<published>2015-05-07T18:59:51+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=eee703473f17e3784e15dba291cc3df00fde9f96'/>
<id>eee703473f17e3784e15dba291cc3df00fde9f96</id>
<content type='text'>
This is a backport of fix 10324 to Gluster 3.7.

If a read IO occurs against a file that has reached rebalance
phase 2, we redirect the IO to the destination. For tiered
volumes, when we try to reopen the file (on the destination),
the lower-level DHT receives the open call and fails; it does
not have a "cached subvol". The fix is to "teach" the lower-level
DHT the new location by sending a lookup before the open.
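
A rough sketch of the resulting sequence in the migration-check path
(the wrapper name is assumed; syncop prototypes as on this branch):

    static int
    dht_reopen_on_dst (xlator_t *dst_node, loc_t *tmp_loc, fd_t *fd)
    {
            struct iatt iatt = {0, };
            int ret = 0;

            /* prime the lower-level DHT with the file's current
               location, so that the reopen on the destination can
               find a cached subvol */
            ret = syncop_lookup (dst_node, tmp_loc, &amp;iatt,
                                 NULL, NULL, NULL);
            if (ret == 0)
                    ret = syncop_open (dst_node, tmp_loc,
                                       fd-&gt;flags, fd, NULL, NULL);
            return ret;
    }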

&gt; http://review.gluster.org/#/c/10324/
&gt; Change-Id: Ia4acb0035ff1da15f6a8f9ed54f43c76e8b98f5f
&gt; BUG: 1214048
&gt; Signed-off-by: Dan Lambright &lt;dlambrig@redhat.com&gt;
&gt; Signed-off-by: root &lt;root@gprfs018.sbu.lab.eng.bos.redhat.com&gt;
&gt; Signed-off-by: Dan Lambright &lt;dlambrig@redhat.com&gt;
&gt; Reviewed-on: http://review.gluster.org/10324
&gt; Tested-by: NetBSD Build System
&gt; Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
&gt; Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
&gt; Tested-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
&gt; Signed-off-by: Dan Lambright &lt;dlambrig@redhat.com&gt;

Change-Id: Ia4acb0035ff1da15f6a8f9ed54f43c76e8b98f5f
BUG: 1219608
Signed-off-by: Dan Lambright &lt;dlambrig@redhat.com&gt;
Reviewed-on: http://review.gluster.org/10654
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Tested-by: NetBSD Build System
Reviewed-by: Joseph Fernandes
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
<entry>
<title>rebalance: Introducing local crawl and parallel migration</title>
<updated>2015-05-07T09:37:02+00:00</updated>
<author>
<name>Susant Palai</name>
<email>spalai@redhat.com</email>
</author>
<published>2015-04-12T10:25:02+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=579186aeba940e3ec73093c48e17b5f6f94910d0'/>
<id>579186aeba940e3ec73093c48e17b5f6f94910d0</id>
<content type='text'>
This patch addresses two parts of the proposed design:
1. Rebalance multiple files in parallel
2. Crawl only bricks that belong to the current node

A brief design explanation for the above two points follows.

1. Rebalance multiple files in parallel:
   -------------------------------------
The existing rebalance engine is single-threaded. Hence, we introduce
multiple threads that run in parallel with the crawler, converting
rebalance migration to a "producer-consumer" framework,

where Producer is : Crawler
      Consumer is : Migration threads

Crawler: The crawler is the main thread. Its job is now limited to
fix-layout of each directory and to adding the files eligible for
migration to a global queue in a round-robin manner, so that all
disk resources are used efficiently. Hence, the crawler is not
"blocked" by the migration process.

Consumer: The migration threads monitor the global queue. Whenever a
file is added to the queue, a thread dequeues the entry and migrates
the file. Currently 20 migration threads are spawned at the beginning
of the rebalance process, so multiple files are migrated in parallel.
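
A minimal sketch of that queue (names, types and locking here are
illustrative of the description above, not the literal rebalance
code):

    #include &lt;pthread.h&gt;
    #include "list.h"                /* kernel-style list helpers */

    struct migrate_entry {
            struct list_head list;
            /* file details elided */
    };

    extern void migrate_file (struct migrate_entry *e);  /* assumed */

    static pthread_mutex_t  q_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t   q_cond = PTHREAD_COND_INITIALIZER;
    static struct list_head q_head; /* INIT_LIST_HEAD() at startup */

    /* crawler (producer): enqueue an eligible file, wake a worker */
    static void
    enqueue_entry (struct migrate_entry *e)
    {
            pthread_mutex_lock (&amp;q_lock);
            list_add_tail (&amp;e-&gt;list, &amp;q_head);
            pthread_cond_signal (&amp;q_cond);
            pthread_mutex_unlock (&amp;q_lock);
    }

    /* migration thread (consumer): dequeue and migrate, forever */
    static void *
    migrate_worker (void *arg)
    {
            for (;;) {
                    struct migrate_entry *e = NULL;

                    pthread_mutex_lock (&amp;q_lock);
                    while (list_empty (&amp;q_head))
                            pthread_cond_wait (&amp;q_cond, &amp;q_lock);
                    e = list_entry (q_head.next,
                                    struct migrate_entry, list);
                    list_del (&amp;e-&gt;list);
                    pthread_mutex_unlock (&amp;q_lock);

                    migrate_file (e);
            }
            return NULL;
    }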

2. Crawl only bricks that belong to the current node:
   --------------------------------------------------
As a rebalance process is spawned per node, it migrates only the
files that belong to its own node, for the sake of load balancing.
But it also reads entries from the whole cluster, which is not
necessary, as readdir then hits other nodes.

New Design:
        As part of the new design, the rebalancer determines the
subvols that are local to its node by checking the node-uuid of the
root directory before the crawler starts. Hence, readdir won't hit
the whole cluster, as the rebalancer already has the context of its
local subvols, and a node-uuid request for each file can be avoided.
This makes the rebalance process "more scalable".

Change-Id: I6f1b44086a09df8ca23935fd213509c70cc0c050
BUG: 1217381
Signed-off-by: Susant Palai &lt;spalai@redhat.com&gt;
Reviewed-on: http://review.gluster.org/10466
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Tested-by: NetBSD Build System
Reviewed-by: N Balachandran &lt;nbalacha@redhat.com&gt;
</content>
</entry>
<entry>
<title>libglusterfs/syncop: Add xdata to all syncop calls</title>
<updated>2015-04-08T15:14:59+00:00</updated>
<author>
<name>Raghavendra Talur</name>
<email>rtalur@redhat.com</email>
</author>
<published>2015-03-11T13:06:01+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=346e64e578573296028efa516cd93cfaf2b17b8f'/>
<id>346e64e578573296028efa516cd93cfaf2b17b8f</id>
<content type='text'>
This patch adds support for xdata in both the
request and response paths of syncops.

A few calls, like lookup, already had this support;
variables have been renamed in a few places to
maintain uniformity.

xdata passed downwards is known as xdata_in
and xdata passed upwards is known as xdata_out.
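
For example, a syncop prototype now carries both directions (a
sketch; exact prototypes vary per call):

    int syncop_setattr (xlator_t *subvol, loc_t *loc,
                        struct iatt *iatt, int32_t valid,
                        struct iatt *preop, struct iatt *postop,
                        dict_t *xdata_in, dict_t **xdata_out);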

There is an old patch by Jeff Darcy at
http://review.gluster.org/#/c/8769/3 which does the
same for some selected calls. It also brings in
xdata support at the gfapi level.

xdata support at the gfapi level will be introduced
in subsequent patches.

Change-Id: I340e94ebaf2a38e160e65bc30732e8fe1c532dcc
BUG: 1158621
Signed-off-by: Raghavendra Talur &lt;rtalur@redhat.com&gt;
Reviewed-on: http://review.gluster.org/9859
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
<entry>
<title>Avoid conflict between contrib/uuid and system uuid</title>
<updated>2015-04-04T17:48:35+00:00</updated>
<author>
<name>Emmanuel Dreyfus</name>
<email>manu@netbsd.org</email>
</author>
<published>2015-04-02T13:51:30+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=28397cae4102ac3f08576ebaf071ad92683097e8'/>
<id>28397cae4102ac3f08576ebaf071ad92683097e8</id>
<content type='text'>
glusterfs relies on the Linux uuid implementation, whose
API is incompatible with most other systems' uuid
implementations. As a result, libglusterfs has to embed
contrib/uuid, which is the Linux implementation, on
non-Linux systems. This embedded implementation is
incompatible with the system's built-in one, but the
symbols have the same names.

Usually this is not a problem, because when we link
with -lglusterfs, libc's symbols are trumped. However,
there is a problem when a program not linked with
-lglusterfs dlopen()s a glusterfs component. In
such a case, libc's uuid implementation is already
loaded in the calling program, and it will be used
instead of libglusterfs's implementation, causing
crashes.

A possible workaround is to pre-load libglusterfs
in the calling program (using LD_PRELOAD on NetBSD for
instance), but such a mechanism is neither portable nor
flexible. A much better approach is to rename
libglusterfs's uuid_* functions to gf_uuid_* to avoid
any possible conflict. This is what this change does.
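
On Linux, the renamed functions can simply forward to the system
implementation; a minimal sketch of such a wrapper (illustrative; on
non-Linux builds the embedded contrib/uuid symbols are renamed the
same way):

    #include &lt;uuid/uuid.h&gt;

    static inline void
    gf_uuid_generate (uuid_t out)
    {
            uuid_generate (out);
    }

    static inline int
    gf_uuid_is_null (uuid_t uuid)
    {
            return uuid_is_null (uuid);
    }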

BUG: 1206587
Change-Id: I9ccd3e13afed1c7fc18508e92c7beb0f5d49f31a
Signed-off-by: Emmanuel Dreyfus &lt;manu@netbsd.org&gt;
Reviewed-on: http://review.gluster.org/10017
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/dht: Add tier translator.</title>
<updated>2015-03-21T16:50:29+00:00</updated>
<author>
<name>Dan Lambright</name>
<email>dlambrig@redhat.com</email>
</author>
<published>2015-02-22T16:05:58+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=20355992e8eed7d3ed78a23bc7e922d6ae94860d'/>
<id>20355992e8eed7d3ed78a23bc7e922d6ae94860d</id>
<content type='text'>
The tier translator shares most of DHT's code. It differs in how
subvolumes are chosen for I/Os, and in how file migration (cache
promotion and demotion) is managed. The differing functionality is
routed to either DHT or tier logic through the "tier_methods"
structure.
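
A sketch of that dispatch (member names assumed for illustration):
each DHT instance carries a methods table and calls through it, so
tier overrides only the pieces that differ:

    typedef struct dht_methods {
            int       (*migration_needed) (xlator_t *this);
            xlator_t *(*migration_get_dst_subvol) (xlator_t *this,
                                                   dht_local_t *local);
            xlator_t *(*layout_search) (xlator_t *this,
                                        dht_layout_t *layout,
                                        const char *name);
    } dht_methods_t;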

A cache promotion and demotion thread is created in a manner
similar to the rebalance daemon. The thread operates a timing
wheel which periodically checks for promotion and demotion candidates
(files). Candidates are queued and then migrated. Candidates must exist on
the same node as the daemon and meet other criteria per the caching policies.

This patch has two authors (Dan Lambright and Joseph Fernandes). Dan
did the DHT changes and Joe wrote the cache policies. The fix depends on
DHT readdir changes and the database library, which have been
submitted separately. Header files in libglusterfs/src/gfdb should be
reviewed in patch 9683.

For more background and design see the feature page [1].

[1]
http://www.gluster.org/community/documentation/index.php/Features/data-classification

Change-Id: Icc26c517ccecf5c42aef039f5b9c6f7afe83e46c
BUG: 1194753
Signed-off-by: Dan Lambright &lt;dlambrig@redhat.com&gt;
Reviewed-on: http://review.gluster.org/9724
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
Tested-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/dht: Change the subvolume encoding in d_off to be a "global"</title>
<updated>2015-03-18T11:47:41+00:00</updated>
<author>
<name>Dan Lambright</name>
<email>dlambrig@redhat.com</email>
</author>
<published>2015-02-18T19:49:50+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=a216745e5db3fdb4fa8d625c971e70f8d0e34d23'/>
<id>a216745e5db3fdb4fa8d625c971e70f8d0e34d23</id>
<content type='text'>
The subvolume encoding in d_off is now a "global" position in the
graph rather than relative (local) to a particular translator.

Encoding the subvolume in this way allows a single translator to manage
which brick is currently being scanned for directory entries. Using a
single translator minimizes allocated bits in the d_off. It also allows
multiple DHT translators in the same graph to have a common frame of
reference (the graph position) for which brick is being read. Multiple
DHT translators are needed for the Tiering feature.

The fix builds off a previous change (9332) which removed subvolume
encoding from AFR. The fix makes an equivalent change to the EC
translator.

More background can be found in fix 9332 and gluster-dev discussions [1].

DHT and AFR/EC are responsible (as before) for choosing which brick to
enumerate directory entries in over the readdir lifecycle.

The client translator receiving the readdir fop encodes the d_off. It
is referred to as the "leaf node" in the graph and corresponds to the
brick being scanned.

When DHT decodes the d_off, it translates the leaf node to a local
subvolume, which represents the next node in the graph leading to
the brick.
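
As a hedged sketch of that encoding (the bit width here is assumed
for illustration): the top bits of the d_off carry the graph-global
leaf-node id, and the low bits carry the brick's native offset:

    #include &lt;stdint.h&gt;

    #define LEAF_BITS 8           /* assumed width for this example */

    static inline uint64_t
    doff_encode (uint64_t brick_off, int leaf_id)
    {
            return ((uint64_t)leaf_id &lt;&lt; (64 - LEAF_BITS)) |
                   (brick_off &amp; ((1ULL &lt;&lt; (64 - LEAF_BITS)) - 1));
    }

    static inline int
    doff_leaf (uint64_t d_off)
    {
            return (int)(d_off &gt;&gt; (64 - LEAF_BITS));
    }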

Tracking of leaf nodes is done in common utility functions. Leaf node
counts and positional information are updated on a graph switch.

[1] www.gluster.org/pipermail/gluster-devel/2015-January/043592.html

Change-Id: Iaf0ea86d7046b1ceadbad69d88707b243077ebc8
BUG: 1190734
Signed-off-by: Dan Lambright &lt;dlambrig@redhat.com&gt;
Reviewed-on: http://review.gluster.org/9688
Reviewed-by: Xavier Hernandez &lt;xhernandez@datalab.es&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
Tested-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
</feed>
