<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/rpc/rpc-lib/src/protocol-common.h, branch v3.5.3beta1</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>libgfapi: Added support to fetch volume info from glusterd and store in glfs object.</title>
<updated>2014-05-12T15:14:52+00:00</updated>
<author>
<name>Soumya Koduri</name>
<email>skoduri@redhat.com</email>
</author>
<published>2014-05-12T10:16:57+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=0c87b67ba9659a2d029d8136835331301b7b6ceb'/>
<id>0c87b67ba9659a2d029d8136835331301b7b6ceb</id>
<content type='text'>
Defined new APIs in the libgfapi module which, given a glfs object,
 * send a handshake RPC call to the glusterd process to fetch the UUID of the volume,
 * store it in the glusterfs_context linked to the glfs object, and
 * parse the UUID from its canonical string format into a 16-byte array
   before handing it to the libgfapi users.

Defined an RPC call in glusterd which other processes can use to query
volume-related information via 'clnt_handshake_procs'.

Note - Currently this RPC call to the glusterd process is used only to fetch the
UUID, but it can be extended to return other volume-related structures as well.

In addition to the above, defined a new variable to keep track of such handshake
RPCs still in progress, to make sure all the corresponding RPC callbacks have been
processed before libgfapi returns the initialized glfs object.

Also bumped up the GFAPI current version number, since a new API,
"glfs_get_volume_id", is defined and exposed by libgfapi as part of these changes.

Cherry picked from commit 5adb10b9ac1c634334f29732e062b12d747ae8c5:
&gt; Change-Id: I303f76d7177d32d25bdb301b1dbcf5cd73f42807
&gt; BUG: 1095775
&gt; Signed-off-by: Soumya Koduri &lt;skoduri@redhat.com&gt;
&gt; Reviewed-on: http://review.gluster.org/7218
&gt; Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
&gt; Reviewed-by: Harshavardhana &lt;harsha@harshavardhana.net&gt;
&gt; Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
&gt; Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;

This change differs slightly from the patch in the master branch:
- libgfapi in 3.5 does not have glfs_get_volfile(), so some merge
  conflicts had to be resolved,
- libgfapi in 3.5 is not versioned, so the configure.ac changes related to
  the versioning have been skipped,
- the master branch only keeps the XDR .x files, whereas release-3.5
  requires a manual re-generation of the related .h and .c files.

Change-Id: I52c32d0e69a52a7f4285f74164bca6fd83c4f3b3
BUG: 1095775
Signed-off-by: Soumya Koduri &lt;skoduri@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7741
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Tested-by: Niels de Vos &lt;ndevos@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Defined new APIs in the libgfapi module which, given a glfs object,
 * send a handshake RPC call to the glusterd process to fetch the UUID of the volume,
 * store it in the glusterfs_context linked to the glfs object, and
 * parse the UUID from its canonical string format into a 16-byte array
   before handing it to the libgfapi users.

Defined an RPC call in glusterd which other processes can use to query
volume-related information via 'clnt_handshake_procs'.

Note - Currently this RPC call to the glusterd process is used only to fetch the
UUID, but it can be extended to return other volume-related structures as well.

In addition to the above, defined a new variable to keep track of such handshake
RPCs still in progress, to make sure all the corresponding RPC callbacks have been
processed before libgfapi returns the initialized glfs object.

Also bumped up the GFAPI current version number, since a new API,
"glfs_get_volume_id", is defined and exposed by libgfapi as part of these changes.

Cherry picked from commit 5adb10b9ac1c634334f29732e062b12d747ae8c5:
&gt; Change-Id: I303f76d7177d32d25bdb301b1dbcf5cd73f42807
&gt; BUG: 1095775
&gt; Signed-off-by: Soumya Koduri &lt;skoduri@redhat.com&gt;
&gt; Reviewed-on: http://review.gluster.org/7218
&gt; Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
&gt; Reviewed-by: Harshavardhana &lt;harsha@harshavardhana.net&gt;
&gt; Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
&gt; Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;

This change differs slightly from the patch in the master branch:
- libgfapi in 3.5 does not have glfs_get_volfile(), so some merge
  conflicts had to be resolved,
- libgfapi in 3.5 is not versioned, so the configure.ac changes related to
  the versioning have been skipped,
- the master branch only keeps the XDR .x files, whereas release-3.5
  requires a manual re-generation of the related .h and .c files.

Change-Id: I52c32d0e69a52a7f4285f74164bca6fd83c4f3b3
BUG: 1095775
Signed-off-by: Soumya Koduri &lt;skoduri@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7741
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Tested-by: Niels de Vos &lt;ndevos@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd/geo-rep: more glusterd and cli fixes for geo-rep.</title>
<updated>2014-01-27T17:44:33+00:00</updated>
<author>
<name>Ajeet Jha</name>
<email>ajha@redhat.com</email>
</author>
<published>2013-12-02T07:25:18+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=f33d09f23b089bd07437eb714f8dffa43460d6b5'/>
<id>f33d09f23b089bd07437eb714f8dffa43460d6b5</id>
<content type='text'>
    -&gt; Handle option validation cases in the reset case.
    -&gt; Create a valid conf path when glusterd restarts.
    -&gt; Read the gsyncd worker thread status and display it.
    -&gt; Display status-detail per worker.
    -&gt; Fetch checkpoint info in geo-rep status.
    -&gt; Validate the use-tarssh value.

misc: miscellaneous geo-rep fixes covering the cluster xlators, logrotate, etc.:
    -&gt; cluster/dht: fix 'stime' getxattr getting overwritten.
    -&gt; cluster/afr: return the max of the 'stime' values in a subvol.
    -&gt; geo-rep-logrotate: send SIGHUP to the geo-rep auxiliary.
    -&gt; cluster/dht: fix convoluted logic while aggregating.
    -&gt; cluster/*: fix the 'stime' min/max fetch logic.

Change-Id: I811acea0bbd6194797a3e55d89295d1ea021ac85
BUG: 1036552
Signed-off-by: Ajeet Jha &lt;ajha@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6405
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Amar Tumballi &lt;amarts@gmail.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6810
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
    -&gt; Handle option validation cases in the reset case.
    -&gt; Create a valid conf path when glusterd restarts.
    -&gt; Read the gsyncd worker thread status and display it.
    -&gt; Display status-detail per worker.
    -&gt; Fetch checkpoint info in geo-rep status.
    -&gt; Validate the use-tarssh value.

misc: miscellaneous geo-rep fixes covering the cluster xlators, logrotate, etc.:
    -&gt; cluster/dht: fix 'stime' getxattr getting overwritten.
    -&gt; cluster/afr: return the max of the 'stime' values in a subvol.
    -&gt; geo-rep-logrotate: send SIGHUP to the geo-rep auxiliary.
    -&gt; cluster/dht: fix convoluted logic while aggregating.
    -&gt; cluster/*: fix the 'stime' min/max fetch logic.

Change-Id: I811acea0bbd6194797a3e55d89295d1ea021ac85
BUG: 1036552
Signed-off-by: Ajeet Jha &lt;ajha@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6405
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Amar Tumballi &lt;amarts@gmail.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6810
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>features/quota: Improvements to quota</title>
<updated>2013-11-26T18:24:02+00:00</updated>
<author>
<name>Raghavendra G</name>
<email>rgowdapp@redhat.com</email>
</author>
<published>2013-09-16T12:16:50+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=ab3ab1978a4768e9eed8e23b47e72b25046e607a'/>
<id>ab3ab1978a4768e9eed8e23b47e72b25046e607a</id>
<content type='text'>
* Quota enforcement is done in two stages:

  Soft and hard. Upon reaching the soft quota limit on a directory,
  an alert is logged in the quota daemon log (i.e. DEFAULT_LOG_DIR/quotad.log),
  and no more writes are allowed after the hard quota limit is reached.
  After reaching the soft limit, the daemon alerts the user/admin
  repeatedly every 'alert-time', which is configurable.

* The quota enforcer is moved to the server side.

  It takes care of enforcing quota. Since the enforcer doesn't have the
  cluster view, it relies on another service called the
  quota-aggregator. The aggregator, when queried, can return the size of a
  directory based on the cluster view.

  The enforcer is always loaded in the server graph and is bypassed if
  the feature is not enabled.

  Options specific to the enforcer:

  server-quota - Specifies whether the feature is on/off. It is used
  to bypass the quota if turned off.

  deem-statfs - If set to on, quota limits are taken into consideration
  while estimating the fs size (df command). The algorithm followed is
  (a simplified sketch follows this list):
  i.   Adjust statvfs based on the limit configured on the root.
  ii.  If a limit is set on the inode passed, use the size/limits on that inode to
       populate statvfs. Otherwise, use the size/limits configured on the root.
  iii. Upon statvfs, update the ctx-&gt;size on the inode.
  iv.  Don't let DHT aggregate; instead take the maximum of the usages from the
       subvols of DHT, since each of them contains the complete information.
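
  A simplified, hypothetical sketch of that adjustment (illustrative only;
  the function name and signature are not the actual quota xlator code):

      #include &lt;stdint.h&gt;
      #include &lt;sys/statvfs.h&gt;

      /* Cap the statvfs reply so 'df' reflects the configured limit. */
      static void
      quota_deem_statfs (struct statvfs *buf, uint64_t limit, uint64_t used)
      {
              uint64_t blocks = limit / buf-&gt;f_bsize;
              uint64_t avail  = (used &lt; limit) ?
                                (limit - used) / buf-&gt;f_bsize : 0;

              if (blocks &lt; buf-&gt;f_blocks)
                      buf-&gt;f_blocks = blocks;
              if (avail &lt; buf-&gt;f_bfree)
                      buf-&gt;f_bfree = avail;
              if (avail &lt; buf-&gt;f_bavail)
                      buf-&gt;f_bavail = avail;
      }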

  The enforcer also makes use of the gfid-to-path conversion functionality to
  work correctly when a client like NFS predominantly relies on
  nameless lookups.

* The quota aggregator acts as a thin client to provide the cluster view.

  It is a lightweight *gluster client* process with no mount point,
  started upon enabling quota or restarting the volume. This is a
  single process run on each brick, which can answer queries on all
  volumes in the cluster. Its volfile is stored in
  GLUSTERD_DEFAULT_WORKING_DIR/quotad/quotad.vol.

Credits:
Raghavendra Bhat        &lt;rabhat@redhat.com&gt;
Varun Shastry           &lt;vshastry@redhat.com&gt;
Shishir Gowda           &lt;sgowda@redhat.com&gt;
Kruthika Dhananjay      &lt;kdhananj@redhat.com&gt;
Brian Foster            &lt;bfoster@redhat.com&gt;
Krishnan Parthasarathi  &lt;kparthas@redhat.com&gt;

Change-Id: Id1cb25b414951da34c665a55f77385d482e0f9de
BUG: 969461
Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Reviewed-on: http://review.gluster.org/5952
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
* Quota enforcement is done in two stages:

  Soft and hard. Upon reaching the soft quota limit on a directory,
  an alert is logged in the quota daemon log (i.e. DEFAULT_LOG_DIR/quotad.log),
  and no more writes are allowed after the hard quota limit is reached.
  After reaching the soft limit, the daemon alerts the user/admin
  repeatedly every 'alert-time', which is configurable.

* The quota enforcer is moved to the server side.

  It takes care of enforcing quota. Since the enforcer doesn't have the
  cluster view, it relies on another service called the
  quota-aggregator. The aggregator, when queried, can return the size of a
  directory based on the cluster view.

  The enforcer is always loaded in the server graph and is bypassed if
  the feature is not enabled.

  Options specific to the enforcer:

  server-quota - Specifies whether the feature is on/off. It is used
  to bypass the quota if turned off.

  deem-statfs - If set to on, quota limits are taken into consideration
  while estimating the fs size (df command). The algorithm followed is
  (a simplified sketch follows this list):
  i.   Adjust statvfs based on the limit configured on the root.
  ii.  If a limit is set on the inode passed, use the size/limits on that inode to
       populate statvfs. Otherwise, use the size/limits configured on the root.
  iii. Upon statvfs, update the ctx-&gt;size on the inode.
  iv.  Don't let DHT aggregate; instead take the maximum of the usages from the
       subvols of DHT, since each of them contains the complete information.
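
  A simplified, hypothetical sketch of that adjustment (illustrative only;
  the function name and signature are not the actual quota xlator code):

      #include &lt;stdint.h&gt;
      #include &lt;sys/statvfs.h&gt;

      /* Cap the statvfs reply so 'df' reflects the configured limit. */
      static void
      quota_deem_statfs (struct statvfs *buf, uint64_t limit, uint64_t used)
      {
              uint64_t blocks = limit / buf-&gt;f_bsize;
              uint64_t avail  = (used &lt; limit) ?
                                (limit - used) / buf-&gt;f_bsize : 0;

              if (blocks &lt; buf-&gt;f_blocks)
                      buf-&gt;f_blocks = blocks;
              if (avail &lt; buf-&gt;f_bfree)
                      buf-&gt;f_bfree = avail;
              if (avail &lt; buf-&gt;f_bavail)
                      buf-&gt;f_bavail = avail;
      }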

  The enforcer also makes use of the gfid-to-path conversion functionality to
  work correctly when a client like NFS predominantly relies on
  nameless lookups.

* The quota aggregator acts as a thin client to provide the cluster view.

  It is a lightweight *gluster client* process with no mount point,
  started upon enabling quota or restarting the volume. This is a
  single process run on each brick, which can answer queries on all
  volumes in the cluster. Its volfile is stored in
  GLUSTERD_DEFAULT_WORKING_DIR/quotad/quotad.vol.

Credits:
Raghavendra Bhat        &lt;rabhat@redhat.com&gt;
Varun Shastry           &lt;vshastry@redhat.com&gt;
Shishir Gowda           &lt;sgowda@redhat.com&gt;
Kruthika Dhananjay      &lt;kdhananj@redhat.com&gt;
Brian Foster            &lt;bfoster@redhat.com&gt;
Krishnan Parthasarathi  &lt;kparthas@redhat.com&gt;

Change-Id: Id1cb25b414951da34c665a55f77385d482e0f9de
BUG: 969461
Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Reviewed-on: http://review.gluster.org/5952
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>bd_map: Remove bd_map xlator</title>
<updated>2013-11-13T19:38:28+00:00</updated>
<author>
<name>M. Mohan Kumar</name>
<email>mohan@in.ibm.com</email>
</author>
<published>2013-11-13T17:14:42+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=15a8ecd9b3eedf80881bd3dba81f16b7d2cb7c97'/>
<id>15a8ecd9b3eedf80881bd3dba81f16b7d2cb7c97</id>
<content type='text'>
Remove the bd_map xlator and the related CLI changes.

Change-Id: If7086205df1907127c1a1fa4ba603f1c48421d09
BUG: 1028672
Signed-off-by: M. Mohan Kumar &lt;mohan@in.ibm.com&gt;
Reviewed-on: http://review.gluster.org/5747
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Remove the bd_map xlator and the related CLI changes.

Change-Id: If7086205df1907127c1a1fa4ba603f1c48421d09
BUG: 1028672
Signed-off-by: M. Mohan Kumar &lt;mohan@in.ibm.com&gt;
Reviewed-on: http://review.gluster.org/5747
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterfs: zerofill support</title>
<updated>2013-11-11T05:25:49+00:00</updated>
<author>
<name>M. Mohan Kumar</name>
<email>mohan@in.ibm.com</email>
</author>
<published>2013-11-09T09:21:53+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=c8fef37c5d566c906728b5f6f27baaa9a8d2a20d'/>
<id>c8fef37c5d566c906728b5f6f27baaa9a8d2a20d</id>
<content type='text'>
Add support for a new ZEROFILL fop. Zerofill writes zeroes to a file in
the specified range. This fop is useful when a whole file needs to be
initialized with zeroes (for example, during zero-filled VM disk image
provisioning or scrubbing of VM disk images).

A client/application can issue this FOP for zeroing out, and the Gluster
server will zero out the required range of bytes, i.e. server-offloaded
zeroing. In the absence of this fop, the client/application has to
repeatedly issue write (zero) fops to the server, which is a very
inefficient method because of the overheads involved in RPC calls and
acknowledgements.

WRITESAME is a SCSI T10 command that takes a block of data as input and
writes the same data to other blocks; this write is handled completely
within the storage and hence is known as an offload. Linux now has
support for the SCSI WRITESAME command, which is exposed to the user in
the form of the BLKZEROOUT ioctl. The BD xlator can exploit the BLKZEROOUT
ioctl to implement this fop. Thus zeroing-out operations can be completely
offloaded to the storage device, making them highly efficient.

The fop takes two arguments, offset and size. It zeroes out 'size' bytes
in an opened file starting from the 'offset' position.

This patch adds zerofill support to the following areas:
	- libglusterfs
	- io-stats
	- performance/md-cache,open-behind
	- quota
	- cluster/afr,dht,stripe
	- rpc/xdr
	- protocol/client,server
	- io-threads
	- marker
	- storage/posix
	- libgfapi

Client applications can exploit this fop by using glfs_zerofill, introduced
in libgfapi. FUSE support for this fop has not been added as there is no
system call for it.
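
For illustration only (the path "/vm-disk.img" and the 20 GB size are
placeholders, and a glfs_t *fs already set up via glfs_new/glfs_init is
assumed), a caller could zero out a VM image with a single offloaded call
instead of a loop of writes:

    #include &lt;fcntl.h&gt;
    #include &lt;glusterfs/api/glfs.h&gt;

    static int
    zero_image (glfs_t *fs)
    {
            off_t      size = 20LL * 1024 * 1024 * 1024;   /* 20 GB */
            glfs_fd_t *fd   = glfs_creat (fs, "/vm-disk.img", O_RDWR, 0644);

            if (!fd)
                    return -1;

            /* one server-offloaded zerofill instead of repeated write(zero) fops */
            if (glfs_zerofill (fd, 0, size))
                    return -1;

            return glfs_close (fd);
    }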

Changes from previous version 3:
* Removed redundant memory failure log messages

Changes from previous version 2:
* Rebased and fixed build error

Changes from previous version 1:
* Rebased for latest master

TODO:
     * Add zerofill support to trace xlator
     * Expose zerofill capability as part of gluster volume info

Here is a performance comparison of server-offloaded zerofill vs zeroing
out using repeated writes.

[root@llmvm02 remote]# time ./offloaded aakash-test log 20

real	3m34.155s
user	0m0.018s
sys	0m0.040s
[root@llmvm02 remote]# time ./manually aakash-test log 20

real	4m23.043s
user	0m2.197s
sys	0m14.457s
[root@llmvm02 remote]# time ./offloaded aakash-test log 25;

real	4m28.363s
user	0m0.021s
sys	0m0.025s
[root@llmvm02 remote]# time ./manually aakash-test log 25

real	5m34.278s
user	0m2.957s
sys	0m18.808s

The argument 'log' is a file which we want to use for logging purposes, and
the third argument is the size in GB.

As we can see, there is a performance improvement of around 20% with this
fop.

Change-Id: I081159f5f7edde0ddb78169fb4c21c776ec91a18
BUG: 1028673
Signed-off-by: Aakash Lal Das &lt;aakash@linux.vnet.ibm.com&gt;
Signed-off-by: M. Mohan Kumar &lt;mohan@in.ibm.com&gt;
Reviewed-on: http://review.gluster.org/5327
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Add support for a new ZEROFILL fop. Zerofill writes zeroes to a file in
the specified range. This fop is useful when a whole file needs to be
initialized with zeroes (for example, during zero-filled VM disk image
provisioning or scrubbing of VM disk images).

A client/application can issue this FOP for zeroing out, and the Gluster
server will zero out the required range of bytes, i.e. server-offloaded
zeroing. In the absence of this fop, the client/application has to
repeatedly issue write (zero) fops to the server, which is a very
inefficient method because of the overheads involved in RPC calls and
acknowledgements.

WRITESAME is a SCSI T10 command that takes a block of data as input and
writes the same data to other blocks; this write is handled completely
within the storage and hence is known as an offload. Linux now has
support for the SCSI WRITESAME command, which is exposed to the user in
the form of the BLKZEROOUT ioctl. The BD xlator can exploit the BLKZEROOUT
ioctl to implement this fop. Thus zeroing-out operations can be completely
offloaded to the storage device, making them highly efficient.

The fop takes two arguments, offset and size. It zeroes out 'size' bytes
in an opened file starting from the 'offset' position.

This patch adds zerofill support to the following areas:
	- libglusterfs
	- io-stats
	- performance/md-cache,open-behind
	- quota
	- cluster/afr,dht,stripe
	- rpc/xdr
	- protocol/client,server
	- io-threads
	- marker
	- storage/posix
	- libgfapi

Client applications can exploit this fop by using glfs_zerofill, introduced
in libgfapi. FUSE support for this fop has not been added as there is no
system call for it.
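
For illustration only (the path "/vm-disk.img" and the 20 GB size are
placeholders, and a glfs_t *fs already set up via glfs_new/glfs_init is
assumed), a caller could zero out a VM image with a single offloaded call
instead of a loop of writes:

    #include &lt;fcntl.h&gt;
    #include &lt;glusterfs/api/glfs.h&gt;

    static int
    zero_image (glfs_t *fs)
    {
            off_t      size = 20LL * 1024 * 1024 * 1024;   /* 20 GB */
            glfs_fd_t *fd   = glfs_creat (fs, "/vm-disk.img", O_RDWR, 0644);

            if (!fd)
                    return -1;

            /* one server-offloaded zerofill instead of repeated write(zero) fops */
            if (glfs_zerofill (fd, 0, size))
                    return -1;

            return glfs_close (fd);
    }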

Changes from previous version 3:
* Removed redundant memory failure log messages

Changes from previous version 2:
* Rebased and fixed build error

Changes from previous version 1:
* Rebased for latest master

TODO:
     * Add zerofill support to trace xlator
     * Expose zerofill capability as part of gluster volume info

Here is a performance comparison of server-offloaded zerofill vs zeroing
out using repeated writes.

[root@llmvm02 remote]# time ./offloaded aakash-test log 20

real	3m34.155s
user	0m0.018s
sys	0m0.040s
[root@llmvm02 remote]# time ./manually aakash-test log 20

real	4m23.043s
user	0m2.197s
sys	0m14.457s
[root@llmvm02 remote]# time ./offloaded aakash-test log 25;

real	4m28.363s
user	0m0.021s
sys	0m0.025s
[root@llmvm02 remote]# time ./manually aakash-test log 25

real	5m34.278s
user	0m2.957s
sys	0m18.808s

The argument 'log' is a file which we want to use for logging purposes, and
the third argument is the size in GB.

As we can see, there is a performance improvement of around 20% with this
fop.

Change-Id: I081159f5f7edde0ddb78169fb4c21c776ec91a18
BUG: 1028673
Signed-off-by: Aakash Lal Das &lt;aakash@linux.vnet.ibm.com&gt;
Signed-off-by: M. Mohan Kumar &lt;mohan@in.ibm.com&gt;
Reviewed-on: http://review.gluster.org/5327
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>cluster/afr: [Feature] Command implementation to get heal-count</title>
<updated>2013-10-14T21:41:54+00:00</updated>
<author>
<name>Venkatesh Somyajulu</name>
<email>vsomyaju@redhat.com</email>
</author>
<published>2013-10-07T08:17:47+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=75caba63714c7f7f9ab810937dae69a1a28ece53'/>
<id>75caba63714c7f7f9ab810937dae69a1a28ece53</id>
<content type='text'>
Currently, to know the number of files to be healed, the user either has
to go to the backend and check the number of entries present in the
indices/xattrop directory, or run the 'gluster volume heal vol-name info'
command. If a volume consists of a large number of bricks, going to each
backend and counting the entries is a time-consuming task, and with the
latter approach, if the number of entries in the indices/xattrop directory
is very large, it also consumes a lot of time.

So, as a feature, a new command is implemented.

Command 1: gluster volume heal vn statistics heal-count
This command gets the number of entries present in every brick of a
volume. The output displays only the entry count.

Command 2: gluster volume heal vn statistics heal-count
           replica 192.168.122.1:/home/user/brickname

This form is used when we are concerned with just one replica: providing
any one brick of a replica will get the number of entries to be healed
for that replica only.

Example:
Replicate volume with replica count 2.

Backend status:
--------------
[root@dhcp-0-17 xattrop]# ls -lia | wc -l
1918

NOTE: Out of 1918, 2 entries are &lt;xattrop-gfid&gt; dummy
entries, so the actual no. of entries to be healed is
1916.

[root@dhcp-0-17 xattrop]# pwd
/home/user/2ty/.glusterfs/indices/xattrop

Command output:
--------------
Gathering count of entries to be healed on volume volume3 has been successful

Brick 192.168.122.1:/home/user/22iu
Status: Brick is Not connected
Entries count is not available

Brick 192.168.122.1:/home/user/2ty
Number of entries: 1916

Change-Id: I72452f3de50502dc898076ec74d434d9e77fd290
BUG: 1015990
Signed-off-by: Venkatesh Somyajulu &lt;vsomyaju@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6044
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Currently, to know the number of files to be healed, the user either has
to go to the backend and check the number of entries present in the
indices/xattrop directory, or run the 'gluster volume heal vol-name info'
command. If a volume consists of a large number of bricks, going to each
backend and counting the entries is a time-consuming task, and with the
latter approach, if the number of entries in the indices/xattrop directory
is very large, it also consumes a lot of time.

So, as a feature, a new command is implemented.

Command 1: gluster volume heal vn statistics heal-count
This command gets the number of entries present in every brick of a
volume. The output displays only the entry count.

Command 2: gluster volume heal vn statistics heal-count
           replica 192.168.122.1:/home/user/brickname

This form is used when we are concerned with just one replica: providing
any one brick of a replica will get the number of entries to be healed
for that replica only.

Example:
Replicate volume with replica count 2.

Backend status:
--------------
[root@dhcp-0-17 xattrop]# ls -lia | wc -l
1918

NOTE: Out of 1918, 2 entries are &lt;xattrop-gfid&gt; dummy
entries, so the actual no. of entries to be healed is
1916.

[root@dhcp-0-17 xattrop]# pwd
/home/user/2ty/.glusterfs/indices/xattrop

Command output:
--------------
Gathering count of entries to be healed on volume volume3 has been successful

Brick 192.168.122.1:/home/user/22iu
Status: Brick is Not connected
Entries count is not available

Brick 192.168.122.1:/home/user/2ty
Number of entries: 1916

Change-Id: I72452f3de50502dc898076ec74d434d9e77fd290
BUG: 1015990
Signed-off-by: Venkatesh Somyajulu &lt;vsomyaju@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6044
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>cluster/afr : Implementation of command "gluster volume heal vn statistics"</title>
<updated>2013-10-14T21:41:44+00:00</updated>
<author>
<name>Venkatesh Somyajulu</name>
<email>vsomyaju@redhat.com</email>
</author>
<published>2013-09-12T07:07:37+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=047882750e0e97f5eed21ebe3445cdb216b15a9d'/>
<id>047882750e0e97f5eed21ebe3445cdb216b15a9d</id>
<content type='text'>
"gluster volume heal volumename statistics" command gives the summary
of the afr crawl done based on the entries present in the xattrop
directory. Whenever afr crawls are attempted, the beginning time of
crawl, end time of crawl, no of files healed, heal-failed count and
number of files in split brain are shown along with the type of the
crawl. If crawl is already in progress then it will give the number
of files healed, heal failed count and number of files in split-brain
from the beginning of the crawl and instead of telling the end time of
the crawl, "CRAWL IN PROGRESS" message will be shown.

Output format:
command: "gluster volume heal volume-name statistics"
Output:
Gathering afr crawl statistics on volume volume-name
has been successful
------------------------------------------------

Crawl statistics for brick no 0
Hostname of brick 192.168.122.248

Starting time of crawl: Wed Jul 10 15:52:38 2013

Ending time of crawl: Wed Jul 10 15:52:38 2013

Type of crawl: INDEX
No. of entries healed: 0
No. of entries in split-brain: 0
No. of heal failed entries: 0

Starting time of crawl: Wed Jul 10 15:52:38 2013

Ending time of crawl: Wed Jul 10 15:52:38 2013

Type of crawl: INDEX
No. of entries healed: 0
No. of entries in split-brain: 0
No. of heal failed entries: 0

------------------------------------------------

Crawl statistics for brick no 1
Hostname of brick 192.168.122.1

Starting time of crawl: Wed Jul 10 15:52:42 2013

Ending time of crawl: Wed Jul 10 15:52:42 2013

Type of crawl: INDEX
No. of entries healed: 0
No. of entries in split-brain: 0
No. of heal failed entries: 0

Starting time of crawl: Wed Jul 10 15:52:42 2013

Ending time of crawl: Wed Jul 10 15:52:42 2013

Type of crawl: INDEX
No. of entries healed: 0
No. of entries in split-brain: 0
No. of heal failed entries: 0

--------------------------------------------------

Change-Id: I10bf9d10b005741db9973fb1352e0dd59ed99aa9
BUG: 949400
Signed-off-by: Venkatesh Somyajulu &lt;vsomyaju@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4790
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
"gluster volume heal volumename statistics" command gives the summary
of the afr crawl done based on the entries present in the xattrop
directory. Whenever afr crawls are attempted, the beginning time of
crawl, end time of crawl, no of files healed, heal-failed count and
number of files in split brain are shown along with the type of the
crawl. If crawl is already in progress then it will give the number
of files healed, heal failed count and number of files in split-brain
from the beginning of the crawl and instead of telling the end time of
the crawl, "CRAWL IN PROGRESS" message will be shown.

Output format:
command: "gluster volume heal volume-name statistics"
Output:
Gathering afr crawl statistics on volume volume-name
has been successful
------------------------------------------------

Crawl statistics for brick no 0
Hostname of brick 192.168.122.248

Starting time of crawl: Wed Jul 10 15:52:38 2013

Ending time of crawl: Wed Jul 10 15:52:38 2013

Type of crawl: INDEX
No. of entries healed: 0
No. of entries in split-brain: 0
No. of heal failed entries: 0

Starting time of crawl: Wed Jul 10 15:52:38 2013

Ending time of crawl: Wed Jul 10 15:52:38 2013

Type of crawl: INDEX
No. of entries healed: 0
No. of entries in split-brain: 0
No. of heal failed entries: 0

------------------------------------------------

Crawl statistics for brick no 1
Hostname of brick 192.168.122.1

Starting time of crawl: Wed Jul 10 15:52:42 2013

Ending time of crawl: Wed Jul 10 15:52:42 2013

Type of crawl: INDEX
No. of entries healed: 0
No. of entries in split-brain: 0
No. of heal failed entries: 0

Starting time of crawl: Wed Jul 10 15:52:42 2013

Ending time of crawl: Wed Jul 10 15:52:42 2013

Type of crawl: INDEX
No. of entries healed: 0
No. of entries in split-brain: 0
No. of heal failed entries: 0

--------------------------------------------------

Change-Id: I10bf9d10b005741db9973fb1352e0dd59ed99aa9
BUG: 949400
Signed-off-by: Venkatesh Somyajulu &lt;vsomyaju@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4790
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd/cli changes for distributed geo-rep</title>
<updated>2013-07-26T20:19:18+00:00</updated>
<author>
<name>Avra Sengupta</name>
<email>asengupt@redhat.com</email>
</author>
<published>2013-07-10T12:02:41+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=5757ed2727990fd2c3aaff420003638f1eec6b92'/>
<id>5757ed2727990fd2c3aaff420003638f1eec6b92</id>
<content type='text'>
Commands:
gluster system:: execute gsec_create
gluster volume geo-rep &lt;master&gt; &lt;slave-url&gt; create [push-pem] [force]
gluster volume geo-rep &lt;master&gt; &lt;slave-url&gt; start [force]
gluster volume geo-rep &lt;master&gt; &lt;slave-url&gt; stop [force]
gluster volume geo-rep &lt;master&gt; &lt;slave-url&gt; delete
gluster volume geo-rep &lt;master&gt; &lt;slave-url&gt; config
gluster volume geo-rep &lt;master&gt; &lt;slave-url&gt; status

The geo-replication is distributed. The session will be created, and
gsyncd will be spawned on all relevant nodes, instead of only one
node.

geo-rep: Collecting status detail related data

Added a persistent store for saving information about
TotalFilesSynced, TotalSyncTime and TotalBytesSynced.

Changes in the status information in the socket:
Existing(Ex):
FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;

New(Ex):
FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;SyncTime=0.69978;
TotalSyncTime=2.890044;TotalFilesSynced=6;TotalBytesSynced=143640;

Persistent details are stored in
/var/lib/glusterd/geo-replication/${mastervol}/${eSlave}-detail.status

Change-Id: I1db7fc13ffca2e415c05200b0109b1254067f111
BUG: 847839
Original Author: Avra Sengupta &lt;asengupt@redhat.com&gt;
Original Author: Venky Shankar &lt;vshankar@redhat.com&gt;
Original Author: Aravinda VK &lt;avishwan@redhat.com&gt;
Original Author: Amar Tumballi &lt;amarts@redhat.com&gt;
Original Author: Csaba Henk &lt;csaba@redhat.com&gt;
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/5132
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
Tested-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Commands:
gluster system:: execute gsec_create
gluster volume geo-rep &lt;master&gt; &lt;slave-url&gt; create [push-pem] [force]
gluster volume geo-rep &lt;master&gt; &lt;slave-url&gt; start [force]
gluster volume geo-rep &lt;master&gt; &lt;slave-url&gt; stop [force]
gluster volume geo-rep &lt;master&gt; &lt;slave-url&gt; delete
gluster volume geo-rep &lt;master&gt; &lt;slave-url&gt; config
gluster volume geo-rep &lt;master&gt; &lt;slave-url&gt; status

The geo-replication is distributed. The session will be created, and
gsyncd will be spawned on all relevant nodes, instead of only one
node.

geo-rep: Collecting status detail related data

Added a persistent store for saving information about
TotalFilesSynced, TotalSyncTime and TotalBytesSynced.

Changes in the status information in the socket:
Existing(Ex):
FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;

New(Ex):
FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;SyncTime=0.69978;
TotalSyncTime=2.890044;TotalFilesSynced=6;TotalBytesSynced=143640;

Persistent details are stored in
/var/lib/glusterd/geo-replication/${mastervol}/${eSlave}-detail.status

Change-Id: I1db7fc13ffca2e415c05200b0109b1254067f111
BUG: 847839
Original Author: Avra Sengupta &lt;asengupt@redhat.com&gt;
Original Author: Venky Shankar &lt;vshankar@redhat.com&gt;
Original Author: Aravinda VK &lt;avishwan@redhat.com&gt;
Original Author: Amar Tumballi &lt;amarts@redhat.com&gt;
Original Author: Csaba Henk &lt;csaba@redhat.com&gt;
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/5132
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
Tested-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>protocol/rpc: move latest added procedures to the end of the array</title>
<updated>2013-06-17T16:34:56+00:00</updated>
<author>
<name>Niels de Vos</name>
<email>ndevos@redhat.com</email>
</author>
<published>2013-06-17T08:56:55+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=eb6b81e1fc182daba4d202d15362250eee1c86d6'/>
<id>eb6b81e1fc182daba4d202d15362250eee1c86d6</id>
<content type='text'>
While looking at the newly introduced procedures FALLOCATE and DISCARD,
it seems that these were added with already existing procedure numbers.
This makes the protocol incompatible with existing roll-outs.

It is very confusing when new procedures are added somewhere in the
middle of the array, because that changes the numbers of the existing
procedures that follow. It is much preferred to add new procedures at the
end of the array. This change not only corrects the enum that generates
the procedure numbers, but also the ordering in the client and server
fops-arrays, for clarity.
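
To illustrate with a hypothetical enum (not the real one from
protocol-common.h): appending a new procedure keeps all existing wire
numbers stable, whereas inserting it in the middle renumbers everything
after it and breaks older clients:

    enum proc_num {
            PROC_NULL,      /* 0 */
            PROC_READ,      /* 1 */
            PROC_WRITE,     /* 2 */
            /* New procedures belong here, at the end: PROC_NEWFOP gets the
               next free number (3) and READ/WRITE keep their values. */
            PROC_NEWFOP,    /* 3 */
            PROC_MAXVALUE
    };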

Correcting this greatly simplifies adding support for these new
procedures in Wireshark and will prevent confusion for people reading
network traces (with or without Wireshark).

Change-Id: Ib9e7978531d016c7230d756b855cb94cb0793b0f
BUG: 974976
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: http://review.gluster.org/5215
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Brian Foster &lt;bfoster@redhat.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
While looking at the newly introduced procedures FALLOCATE and DISCARD,
it seems that these were added with already existing procedure numbers.
This makes the protocol incompatible with existing roll-outs.

It is very confusing when new procedures are added somewhere in the
middle of the array, because that changes the numbers of the existing
procedures that follow. It is much preferred to add new procedures at the
end of the array. This change not only corrects the enum that generates
the procedure numbers, but also the ordering in the client and server
fops-arrays, for clarity.
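
To illustrate with a hypothetical enum (not the real one from
protocol-common.h): appending a new procedure keeps all existing wire
numbers stable, whereas inserting it in the middle renumbers everything
after it and breaks older clients:

    enum proc_num {
            PROC_NULL,      /* 0 */
            PROC_READ,      /* 1 */
            PROC_WRITE,     /* 2 */
            /* New procedures belong here, at the end: PROC_NEWFOP gets the
               next free number (3) and READ/WRITE keep their values. */
            PROC_NEWFOP,    /* 3 */
            PROC_MAXVALUE
    };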

Correcting this greatly simplifies adding support for these new
procedures in Wireshark and will prevent confusion for people reading
network traces (with or without Wireshark).

Change-Id: Ib9e7978531d016c7230d756b855cb94cb0793b0f
BUG: 974976
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: http://review.gluster.org/5215
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Brian Foster &lt;bfoster@redhat.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterfs: discard (hole punch) support</title>
<updated>2013-06-13T21:37:43+00:00</updated>
<author>
<name>Brian Foster</name>
<email>bfoster@redhat.com</email>
</author>
<published>2013-05-23T16:19:42+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=17f287172413dc04244781aa5302a0e4f10e2777'/>
<id>17f287172413dc04244781aa5302a0e4f10e2777</id>
<content type='text'>
Add support for the DISCARD file operation. Discard punches a hole
in a file in the provided range. Block de-allocation is implemented
via fallocate() (as requested via fuse and passed on to the brick
fs) but a separate fop is created within gluster to emphasize the
fact that discard changes file data (the discarded region is
replaced with zeroes) and must invalidate caches where appropriate.
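
For illustration only (not code from this patch), the brick-side hole punch
essentially boils down to an fallocate(2) call of the following form on the
backing filesystem:

    #define _GNU_SOURCE
    #include &lt;fcntl.h&gt;

    /* Deallocate 'len' bytes at 'offset'; the file size is kept and the
       punched range reads back as zeroes afterwards. */
    static int
    punch_hole (int fd, off_t offset, off_t len)
    {
            return fallocate (fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                              offset, len);
    }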

BUG: 963678
Change-Id: I34633a0bfff2187afeab4292a15f3cc9adf261af
Signed-off-by: Brian Foster &lt;bfoster@redhat.com&gt;
Reviewed-on: http://review.gluster.org/5090
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Add support for the DISCARD file operation. Discard punches a hole
in a file in the provided range. Block de-allocation is implemented
via fallocate() (as requested via fuse and passed on to the brick
fs) but a separate fop is created within gluster to emphasize the
fact that discard changes file data (the discarded region is
replaced with zeroes) and must invalidate caches where appropriate.
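
For illustration only (not code from this patch), the brick-side hole punch
essentially boils down to an fallocate(2) call of the following form on the
backing filesystem:

    #define _GNU_SOURCE
    #include &lt;fcntl.h&gt;

    /* Deallocate 'len' bytes at 'offset'; the file size is kept and the
       punched range reads back as zeroes afterwards. */
    static int
    punch_hole (int fd, off_t offset, off_t len)
    {
            return fallocate (fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                              offset, len);
    }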

BUG: 963678
Change-Id: I34633a0bfff2187afeab4292a15f3cc9adf261af
Signed-off-by: Brian Foster &lt;bfoster@redhat.com&gt;
Reviewed-on: http://review.gluster.org/5090
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</pre>
</div>
</content>
</entry>
</feed>
