<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/protocol/server/src/server.h, branch v3.7.16</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>server: send lookup on root inode when itable is created</title>
<updated>2016-03-31T12:15:32+00:00</updated>
<author>
<name>vmallika</name>
<email>vmallika@redhat.com</email>
</author>
<published>2016-03-31T02:05:35+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=4f2b417f0a6fea20c8a96b6f66732c709234d637'/>
<id>4f2b417f0a6fea20c8a96b6f66732c709234d637</id>
<content type='text'>
This is a backport of http://review.gluster.org/#/c/13837

 * xlators like quota, marker, and posix_acl can cause problems
   if their inode-ctx is not created.
   Sometimes these xlators may not get a lookup on the root inode,
   for example in the following cases:
   1) the client may not send a lookup on the root inode (like an NSR leader)
   2) if the xlators on one of the bricks are not up
      and the client sends a lookup during this time, the brick
      can miss the lookup
   It is always better to make sure that there is one lookup
   on root, so send a first lookup when the inode table is created.

 * When sending a lookup on root, a new inode would get created; we need to
   use itable-&gt;root instead (see the sketch below).
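
   A rough, hypothetical sketch of the idea (not the exact patch; loc_t,
   inode_ref(), gf_strdup() and loc_wipe() are the usual libglusterfs
   helpers): build a loc for "/" around the table's own root inode and wind
   a first LOOKUP down the brick graph, so that quota/marker/posix-acl get a
   chance to set up their root inode-ctx before regular fops arrive.

       loc_t loc = {0, };

       loc.path  = gf_strdup ("/");
       loc.inode = inode_ref (itable-&gt;root); /* reuse itable-&gt;root, not a fresh inode */
       loc.gfid[15] = 1;                     /* the root gfid is 00..01 */
       /* ... wind GF_FOP_LOOKUP with this loc, then loc_wipe (&amp;loc) ... */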

&gt; Change-Id: Iff2eeaa1a89795328833a7761789ef588f11218f
&gt; BUG: 1320818
&gt; Signed-off-by: vmallika &lt;vmallika@redhat.com&gt;
&gt; Reviewed-on: http://review.gluster.org/13837
&gt; Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
&gt; CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
&gt; NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;

Change-Id: I0abf45444c21b3bc77b5a75ab9a2049a411048d3
BUG: 1320892
Signed-off-by: vmallika &lt;vmallika@redhat.com&gt;
Reviewed-on: http://review.gluster.org/13862
Smoke: Gluster Build System &lt;jenkins@build.gluster.com&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</content>
</entry>
<entry>
<title>server/protocol: option for dynamic authorization of client permissions</title>
<updated>2015-10-13T16:05:37+00:00</updated>
<author>
<name>Prasanna Kumar Kalever</name>
<email>prasanna.kalever@redhat.com</email>
</author>
<published>2015-08-20T18:38:23+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=b8ba012da0cf276329025e30b36f43624548f7f1'/>
<id>b8ba012da0cf276329025e30b36f43624548f7f1</id>
<content type='text'>
problem:
Assume a gluster volume is already mounted (for gfapi: say the client transport
connection has already been established). If somebody now changes the volume
permissions, say *.allow | *.reject, for a client, gluster should allow or
terminate the client connection based on the fresh set of volume options
immediately. In the existing scenario, however, there is neither an option to
set this behaviour nor any action taken until the volume is remounted manually.

solution:
Introduce a 'dynamic-auth' option (default: on).
If 'dynamic-auth' is 'on', gluster will perform dynamic authentication to
allow/terminate a client transport connection immediately in response to
*.allow | *.reject volume set options. Thus, if the volume permissions have
changed for a particular client (say the client is added to the auth.reject
list), its transport connection to the gluster volume will be terminated
immediately.
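
A minimal usage sketch (assuming the option is exposed to 'volume set' as
server.dynamic-auth; the exact key is not spelled out in this message):

  gluster volume set VOLNAME server.dynamic-auth on
  gluster volume set VOLNAME auth.reject 192.0.2.10
  (an already-connected client at 192.0.2.10 is now disconnected immediately,
   instead of keeping its connection until the volume is remounted)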

Backport of:
&gt; Change-Id: I6243a6db41bf1e0babbf050a8e4f8620732e00d8
&gt; BUG: 1245380
&gt; Signed-off-by: Prasanna Kumar Kalever &lt;prasanna.kalever@redhat.com&gt;
&gt; Reviewed-on: http://review.gluster.org/12229
&gt; Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
&gt; (cherry picked from commit 84e90b756566bc211535a8627ed16d4231110ade)

Change-Id: If7e5c9be912412ea388391ef406ee2c8bedb26b8
BUG: 1271065 
Signed-off-by: Prasanna Kumar Kalever &lt;prasanna.kalever@redhat.com&gt;
Reviewed-on: http://review.gluster.org/12343
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
</content>
</entry>
<entry>
<title>protocol/server: forget the inodes which got ENOENT in lookup</title>
<updated>2015-08-21T11:48:48+00:00</updated>
<author>
<name>Raghavendra Bhat</name>
<email>raghavendra@redhat.com</email>
</author>
<published>2015-07-01T10:26:58+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=cb879d6adbb9194b488f2ad7a97cf7fc7f5a5ef5'/>
<id>cb879d6adbb9194b488f2ad7a97cf7fc7f5a5ef5</id>
<content type='text'>
Backport of http://review.gluster.org/11489

If a looked-up object is removed from the backend, then a revalidate lookup on
that object returns an ENOENT error. The protocol/server xlator handles this by
removing the dentry for which ENOENT is received. But the inode associated with
it still remains in the inode table, so whoever does a nameless lookup on the
gfid of that object will still succeed even though the object is no longer
present.

To handle this issue, upon getting ENOENT on a looked-up entry in a revalidate
lookup, protocol/server should forget the inode as well (see the sketch below).

Though removing files directly from the backend is not normally allowed, objects
corrupted due to bitrot and marked as bad by the scrubber are removed directly
from the backend in the case of replicate volumes, so that the object is healed
from the good copy. To handle this, the inode of the bad object removed from the
backend should be forgotten. Otherwise the inode, which knows that the object it
represents is bad, does not allow the read/write operations that happen as part
of self-heal.
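
A hypothetical sketch of the revalidate-failure path (not the exact patch;
inode_unlink()/inode_forget() are the usual libglusterfs inode helpers, and
the state-&gt;loc field names here are assumptions):

    if ((op_ret == -1) &amp;&amp; (op_errno == ENOENT) &amp;&amp; state-&gt;loc.parent) {
            /* drop the stale dentry, as before ... */
            inode_unlink (state-&gt;loc.inode, state-&gt;loc.parent,
                          state-&gt;loc.name);
            /* ... and also forget the inode, so later nameless (gfid)
             * lookups fail instead of succeeding from the inode table */
            inode_forget (state-&gt;loc.inode, 0);
    }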

Change-Id: I268eeaf37969458687425187be6622347a6cc1f1
BUG: 1255604
Signed-off-by: Raghavendra Bhat &lt;raghavendra@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11973
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
</content>
</entry>
<entry>
<title>protocol/server: fail setvolume if any of the xlators is not initialized yet</title>
<updated>2015-07-02T11:14:09+00:00</updated>
<author>
<name>Raghavendra G</name>
<email>rgowdapp@redhat.com</email>
</author>
<published>2015-07-01T11:24:55+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=12c854b7a28a8d764f0446d2e0133c447c2537c2'/>
<id>12c854b7a28a8d764f0446d2e0133c447c2537c2</id>
<content type='text'>
We can start receiving fops only when all xlators in the graph are
initialized.

Change-Id: Id79100bab5878bb2518ed133c1118554fbb35229
BUG: 1214169
Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11504
Tested-by: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
</entry>
<entry>
<title>epoll: Adding the ability to configure epoll threads</title>
<updated>2015-02-07T21:23:03+00:00</updated>
<author>
<name>Shyam</name>
<email>srangana@redhat.com</email>
</author>
<published>2015-01-26T19:20:31+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=a7f5893c9243c8c563db215352fa7e47f6968e8b'/>
<id>a7f5893c9243c8c563db215352fa7e47f6968e8b</id>
<content type='text'>
Add the ability to configure the number of event threads
for various gluster services.

Currently, with the multi-threaded epoll patch, it is possible
to have more than one thread waiting on socket activity and
processing it. This thread count has been static so far,
which this commit makes configurable.

The current services which use the IO path, i.e. brick processes and
any client process (nfs, FUSE, gfapi, heal,
rebalance, etc.), gain two settable parameters to control the number
of threads that process events. These settings are:
  - client.event-threads &lt;n&gt;
  - server.event-threads &lt;n&gt;

The client setting affects the client graph consumers, and the
server setting affects the brick processes. These are processed
and initialized/reconfigured by the client/server protocol xlators.
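
Example usage (the thread counts here are purely illustrative):

  gluster volume set VOLNAME client.event-threads 4
  gluster volume set VOLNAME server.event-threads 4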

Other services (say glusterd) would need to add similar configuration
settings to take advantage of multi-threaded event processing.

At present glusterd is not enabled with this commit, as it does not
stand to gain from this multi-threading (as I understand it).

Change-Id: Id8422fc57a9f95a135158eb6477ccf9d3c9ea4d9
BUG: 1104462
Signed-off-by: Shyam &lt;srangana@redhat.com&gt;
Reviewed-on: http://review.gluster.org/9488
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
<entry>
<title>rpc: implement server.manage-gids for group resolving on the bricks</title>
<updated>2014-05-09T19:22:39+00:00</updated>
<author>
<name>Niels de Vos</name>
<email>ndevos@redhat.com</email>
</author>
<published>2014-04-17T16:32:07+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=2fd499d148fc8865c77de8b2c73fe0b7e1737882'/>
<id>2fd499d148fc8865c77de8b2c73fe0b7e1737882</id>
<content type='text'>
The new volume option 'server.manage-gids' can be enabled in
environments where a user belongs to more than the current absolute
maximum of 93 groups. This option triggers the following behavior:

1. The AUTH_GLUSTERFS structure sent by GlusterFS clients (fuse, nfs or
   libgfapi) will contain only one (1) auxiliary group, instead of
   a full list. This reduces network usage and prevents problems in
   encoding the AUTH_GLUSTERFS structure which should fit in 400 bytes.
2. The single group in the RPC calls received by the server is replaced
   by resolving the groups server-side. Permission checks and similar checks
   in lower xlators are applied against the full list of groups the user
   belongs to, and not just the single auxiliary group that the client
   sent.
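
Example usage (illustrative):

  gluster volume set VOLNAME server.manage-gids on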

Change-Id: I9e540de13e3022f8b63ff893ecba511129a47b91
BUG: 1053579
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7501
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Santosh Pradhan &lt;spradhan@redhat.com&gt;
Reviewed-by: Harshavardhana &lt;harsha@harshavardhana.net&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd/snapshot : Clean up of old barrier code.</title>
<updated>2014-04-30T09:13:26+00:00</updated>
<author>
<name>Sachin Pandit</name>
<email>spandit@redhat.com</email>
</author>
<published>2014-04-28T00:28:41+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=d09b327a2796152eb80074169e17359394ae7cf8'/>
<id>d09b327a2796152eb80074169e17359394ae7cf8</id>
<content type='text'>
As a new barrier translator has been introduced, we don't require
the old barrier code. Hence cleaning that up.

Change-Id: Ieedca6f33a746898f0d2332fda1f1d4c86fff98f
BUG: 1061685
Signed-off-by: Sachin Pandit &lt;spandit@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7577
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
Reviewed-by: Vijaikumar Mallikarjuna &lt;vmallika@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: Ping timer implementation</title>
<updated>2014-04-29T21:23:51+00:00</updated>
<author>
<name>Krishnan Parthasarathi</name>
<email>kparthas@redhat.com</email>
</author>
<published>2014-04-07T05:27:45+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=c61bc1f9e5874cb8380ec6398680fc71aea233b4'/>
<id>c61bc1f9e5874cb8380ec6398680fc71aea233b4</id>
<content type='text'>
This patch refactors the existing client ping timer implementation and makes
use of common code to implement both the client ping timer and the
glusterd ping timer.

A new gluster rpc program for ping is introduced. The ping timer is only
started for peers that have this new program. The default glusterd ping
timeout is 30 seconds. It is configurable by setting the option
'ping-timeout' in glusterd.vol, as shown below.
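
For example, in /etc/glusterfs/glusterd.vol (only the ping-timeout line is
relevant here; the rest is the stock file layout):

  volume management
      type mgmt/glusterd
      option ping-timeout 30
  end-volume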

Also, this patch introduces changes in the glusterd-handshake path. The client
programs for a peer are now set in the callback of dump_versions, for both
the older handshake and the newer op-version handshake. This is the only place
in the handshake process where we know what programs a peer supports.

Change-Id: I035815ac13449ca47080ecc3253c0a9afbe9016a
BUG: 1038261
Signed-off-by: Vijaikumar M &lt;vmallika@redhat.com&gt;
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/5202
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
</entry>
<entry>
<title>gluster: GlusterFS Volume Snapshot Feature</title>
<updated>2014-04-11T23:29:17+00:00</updated>
<author>
<name>Avra Sengupta</name>
<email>asengupt@redhat.com</email>
</author>
<published>2014-02-19T11:00:11+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=29bccc2ed18eedc40e83d2f0d35327037a322384'/>
<id>29bccc2ed18eedc40e83d2f0d35327037a322384</id>
<content type='text'>
This is the initial patch for the Snapshot feature. The current patch
includes the following features (example commands follow the list):
* Snapshot create
* Snapshot delete
* Snapshot restore
* Snapshot list
* Snapshot info
* Snapshot status
* Snapshot config
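
Illustrative command forms (snapshot and volume names are placeholders):

  gluster snapshot create snap1 VOLNAME
  gluster snapshot list
  gluster snapshot info snap1
  gluster snapshot status snap1
  gluster snapshot restore snap1
  gluster snapshot delete snap1
  gluster snapshot config VOLNAME snap-max-hard-limit 10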

Change-Id: I2f46920c0d61c515f6a60e0f8b46fff886d9f6a9
BUG: 1061685
Signed-off-by: shishir gowda &lt;sgowda@redhat.com&gt;
Signed-off-by: Sachin Pandit &lt;spandit@redhat.com&gt;
Signed-off-by: Vijaikumar M &lt;vmallika@redhat.com&gt;
Signed-off-by: Raghavendra Bhat &lt;raghavendra@redhat.com&gt;
Signed-off-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Signed-off-by: Joseph Fernandes &lt;josferna@redhat.com&gt;
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7128
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
<entry>
<title>client_t: phase 2, refactor server_ctx and locks_ctx out</title>
<updated>2013-10-31T16:32:50+00:00</updated>
<author>
<name>Kaleb S. KEITHLEY</name>
<email>kkeithle@redhat.com</email>
</author>
<published>2013-08-21T18:11:38+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=3108d4529d57690f58027da61ac5e56a0987ed57'/>
<id>3108d4529d57690f58027da61ac5e56a0987ed57</id>
<content type='text'>
server_ctx and locks_ctx are no longer stored directly in client_ctx; they are
stored as discrete entities in the scratch_ctx instead.

Hooking up dump will be done in phase 3.

BUG: 849630
Change-Id: I94cea328326db236cdfdf306cb381e4d58f58d4c
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
Reviewed-on: http://review.gluster.org/5678
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
</entry>
</feed>
