<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs-afrv1.git/cli/src/cli-rpc-ops.c, branch v3.3.0qa13</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs-afrv1.git/'/>
<entry>
<title>cli : new volume statedump command</title>
<updated>2011-09-27T13:45:10+00:00</updated>
<author>
<name>Kaushal M</name>
<email>kaushal@gluster.com</email>
</author>
<published>2011-09-05T09:03:43+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs-afrv1.git/commit/?id=45172a5415abc6b2f17eea74d51805ac85cc0072'/>
<id>45172a5415abc6b2f17eea74d51805ac85cc0072</id>
<content type='text'>
Changes:
        1. Add a new 'volume statedump' command that performs statedumps of
        all the bricks in the volume and saves them in a specified location.
        2. Add a new server option, 'server.statedump-path'.
        3. Remove multiple function definitions in glusterd.h

Statedump Information:

The 'volume statedump' command performs statedumps on all the bricks in
a given volume. The syntax of the command is,
        gluster volume statedump &lt;VOLNAME&gt; [type]...

Types include,
        * all
        * mem
        * iobuf
        * callpool
        * priv
        * fd
        * inode
Defaults to 'all' when no type is specified.

The statedump files are created by default in the /tmp directory of
the server on which the bricks are present. This path can be changed
by setting the 'server.statedump-path' option.
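For example (a sketch with an illustrative volume name and path; any
directory writable by the brick processes should work),
        gluster volume set test-vol server.statedump-path /var/log/dumps
        gluster volume statedump test-vol mem iobuf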

The statedump files are named,
        &lt;brick-name&gt;.&lt;pid of brick process&gt;.dump
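For instance, a brick at /exports/brick1 dumped from a process with
pid 4242 might produce a file named exports-brick1.4242.dump
(illustrative; the exact mangling of the brick path may differ).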

Change-Id: I01c0e1a8aad490da818e086d89f292bd2ed06fd4
BUG: 1964
Reviewed-on: http://review.gluster.com/321
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Amar Tumballi &lt;amar@gluster.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Changes:
        1. Add a new 'volume statedump' command, that performs statedumps of
        all the bricks in the volume and saves them in a specified location.
        2. Add new server option 'server.statedump-path'.
        3. Remove multiple function definitions in glusterd.h

Statedump Information:

The 'volume statedump' command performs statedumps on all the bricks in
a given volume. The syntax of the command is,
        gluster volume statedump &lt;VOLNAME&gt; [type]......

Types include,
        * all
        * mem
        * iobuf
        * callpool
        * priv
        * fd
        * inode
Defaults to 'all' when no type is specified.

The statedump files are created by default in /tmp directory of the
server on which the bricks are present.
This path can be changed by setting the 'server.statedump-path' option.

The statedump files will be named as,
        &lt;brick-name&gt;.&lt;pid of brick process&gt;.dump

Change-Id: I01c0e1a8aad490da818e086d89f292bd2ed06fd4
BUG: 1964
Reviewed-on: http://review.gluster.com/321
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Amar Tumballi &lt;amar@gluster.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: cleanup of volinfo '*_count' definitions</title>
<updated>2011-09-23T13:48:32+00:00</updated>
<author>
<name>Amar Tumballi</name>
<email>amar@gluster.com</email>
</author>
<published>2011-09-15T07:27:44+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs-afrv1.git/commit/?id=76580479033087f6dde080c27618baf19b18b658'/>
<id>76580479033087f6dde080c27618baf19b18b658</id>
<content type='text'>
Earlier, 'sub_count' had a different meaning depending on the
volume type.

Now, for the replica and stripe counts, one can directly access
'replica_count' or 'stripe_count' to get the corresponding value from
the volume info. 'sub_count' is preserved as-is for backward
compatibility. There is a new variable, 'dist_leaf_count', which gives
the number of bricks present in one distribute subvolume.
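For example (illustrative), a volume created with 'stripe 2 replica 2'
across 8 bricks has stripe_count 2, replica_count 2, and
dist_leaf_count 4 (stripe_count * replica_count), i.e., two distribute
subvolumes of 4 bricks each.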

Change-Id: I5ea1c8f9ae08f584cca63b91ba69035c7e4350ca
BUG: 3158
Reviewed-on: http://review.gluster.com/435
Reviewed-by: Krishnan Parthasarathi &lt;kp@gluster.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
earlier, sub_count was having different meaning depending on the
volume type.

now, for replica and stripe count, one can directly access the
'replica_count' or 'stripe_count' to get the corresponding
value from the volume info. 'sub_count' is preserved as is for backward
compatibility. there is a new variable 'dist_leaf_count' to get
info about how many bricks are present in one distribute sub volume.

Change-Id: I5ea1c8f9ae08f584cca63b91ba69035c7e4350ca
BUG: 3158
Reviewed-on: http://review.gluster.com/435
Reviewed-by: Krishnan Parthasarathi &lt;kp@gluster.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: Implemented cmd to trigger self-heal on a replicate volume.</title>
<updated>2011-09-22T16:43:25+00:00</updated>
<author>
<name>Krishnan Parthasarathi</name>
<email>kp@gluster.com</email>
</author>
<published>2011-09-16T05:10:32+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs-afrv1.git/commit/?id=4765dd1a1c51c67ab86687fbd871c89156680c34'/>
<id>4765dd1a1c51c67ab86687fbd871c89156680c34</id>
<content type='text'>
This command is used in the context of proactive self-heal for
replicated volumes. A user invokes the following command on suspecting
that self-heal needs to be done on a particular volume,
        gluster volume heal &lt;VOLNAME&gt;

Change-Id: I3954353b53488c28b70406e261808239b44997f3
BUG: 3602
Reviewed-on: http://review.gluster.com/454
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vijay@gluster.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
This cmd is used in the context of proactive self-heal for replicated
volumes. User invokes the following cmd when (s)he suspects that self-heal
needs to be done on a particular volume,
        gluster volume heal &lt;VOLNAME&gt;.

Change-Id: I3954353b53488c28b70406e261808239b44997f3
BUG: 3602
Reviewed-on: http://review.gluster.com/454
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vijay@gluster.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>s@GFS_PREFIX"/sbin@SBIN_DIR@</title>
<updated>2011-09-20T04:50:11+00:00</updated>
<author>
<name>Csaba Henk</name>
<email>csaba@gluster.com</email>
</author>
<published>2011-08-31T14:03:28+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs-afrv1.git/commit/?id=e163bc5b3ab062e3cb22b0386dbe056ab4a54952'/>
<id>e163bc5b3ab062e3cb22b0386dbe056ab4a54952</id>
<content type='text'>
$sbindir is the install path for the gluster* binaries,
so it is what should be used when invoking them.
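
For instance (an illustrative call site, not an actual diff hunk),
a path built as
        GFS_PREFIX"/sbin/glusterfs"
becomes
        SBIN_DIR"/glusterfs"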

Change-Id: Ie748b4cbf59c3ee77f721ff6e0ab7151742ce0ab
BUG: 2825
Reviewed-on: http://review.gluster.com/458
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Amar Tumballi &lt;amar@gluster.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
$sbindir is the install path for gluster* binaries,
so this is what should be used in their invocation

Change-Id: Ie748b4cbf59c3ee77f721ff6e0ab7151742ce0ab
BUG: 2825
Reviewed-on: http://review.gluster.com/458
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Amar Tumballi &lt;amar@gluster.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: profile cmd incorrectly reports all bricks down.</title>
<updated>2011-09-16T05:06:30+00:00</updated>
<author>
<name>Krishnan Parthasarathi</name>
<email>kp@gluster.com</email>
</author>
<published>2011-09-15T11:39:00+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs-afrv1.git/commit/?id=4ee093305a0237368118e425723792a028b02a94'/>
<id>4ee093305a0237368118e425723792a028b02a94</id>
<content type='text'>
If no bricks of a volume are running 'local' to the glusterd on which
the 'profile info' command is issued, glusterd incorrectly reports
that all bricks of the volume are down.
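
For reference (illustrative volume name), the command in question is
        gluster volume profile test-vol info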

Change-Id: Idd703c991f0bcf59b76b9ef8f4ad8cd71960a55b
BUG: 3553
Reviewed-on: http://review.gluster.com/430
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vijay@gluster.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
If there are no bricks of a volume running 'local' to glusterd
where the 'profile info' command is issued, glusterd incorrectly
reports that all bricks of the volume are down.

Change-Id: Idd703c991f0bcf59b76b9ef8f4ad8cd71960a55b
BUG: 3553
Reviewed-on: http://review.gluster.com/430
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vijay@gluster.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>support for de-commissioning a node using 'remove-brick'</title>
<updated>2011-09-13T09:10:12+00:00</updated>
<author>
<name>Amar Tumballi</name>
<email>amar@gluster.com</email>
</author>
<published>2011-09-09T04:12:51+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs-afrv1.git/commit/?id=25daa42911d2ff697880ee29c591cac5f2abebed'/>
<id>25daa42911d2ff697880ee29c591cac5f2abebed</id>
<content type='text'>
To achieve this, we now create the volume file with the
'decommissioned-nodes' option in the distribute volume, then
perform the rebalance set of operations (with the 'force' flag set).

From now on, the 'remove-brick' operation (with the 'start' option)
tries to migrate data from the removed bricks to the existing bricks.

'remove-brick' also supports options similar to those of
'replace-brick' (example invocations follow the list):

* (no options) -&gt; works as 'force'; keeps the current behavior of
         remove-brick, i.e., no data migration, only the volume change.

* start  (starts remove-brick with data-migration/draining process,
          which takes care of migrating data and once complete, will
          commit the changes to volume file)
* pause  (stop data migration, but keep the volume file intact with
          whatever extra options are set)
* abort  (stop data migration, and fall back to the old configuration)
* commit (if the volume is stopped, commits the changes to the volume file)
* force  (stops the data-migration and commits the changes to
          volume file)
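
For example (illustrative volume and brick names),
        gluster volume remove-brick test-vol server1:/exports/brick1 start
        gluster volume remove-brick test-vol server1:/exports/brick1 commit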

Change-Id: I3952bcfbe604a0952e68b6accace7014d5e401d3
BUG: 1952
Reviewed-on: http://review.gluster.com/118
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vijay@gluster.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
to achieve this, we now create volume-file with
'decommissioned-nodes' option in distribute volume, then just
perform the rebalance set of operations (with 'force' flag set).

now onwards, the 'remove-brick' (with 'start' option) operation tries
to migrate data from removed bricks to existing bricks.

'remove-brick' also supports similar options as of replace-brick.

* (no options) -&gt; works as 'force', will have the current behavior
         of remove-brick, ie., no data-migration, volume changes.

* start  (starts remove-brick with data-migration/draining process,
          which takes care of migrating data and once complete, will
          commit the changes to volume file)
* pause  (stop data migration, but keep the volume file intact with
          extra options whatever is set)
* abort  (stop data-migration, and fall back to old configuration)
* commit (if volume is stopped, commits the changes to volumefile)
* force  (stops the data-migration and commits the changes to
          volume file)

Change-Id: I3952bcfbe604a0952e68b6accace7014d5e401d3
BUG: 1952
Reviewed-on: http://review.gluster.com/118
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vijay@gluster.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd / cli: mount-broker service</title>
<updated>2011-09-12T13:23:11+00:00</updated>
<author>
<name>Csaba Henk</name>
<email>csaba@gluster.com</email>
</author>
<published>2011-07-30T15:50:22+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs-afrv1.git/commit/?id=37ac355cbbd36497f914905615bffb3e35805f0a'/>
<id>37ac355cbbd36497f914905615bffb3e35805f0a</id>
<content type='text'>
Mountbroker is configured in the glusterd volfile through a DSL which
is restricted enough to be able to appear as the value of a volfile
knob.

Basically, the DSL describes set-theoretic requirements on the option
set sent by the cli (in the hope of getting a mount with those
options).

If the requirements are met, and the volume id and the uid who is to
"own" the mount can be unambiguously deduced from the given request,
glusterd performs the mount with the given parameters.

The use case of geo-replication is sugared by means of volume
options which then generate a complete mount-broker option set.

Demo:

- add the following options to your glusterd volfile:

    option mountbroker-root /tmp/mbr
    option mountbroker.fool EQL(volfile-id=pop*|user-map-root=*|volfile-server=localhost)&amp;MEET(user-map-root=john|user-map-root=jane)

- before starting glusterd, create /tmp/mbr owned by root with mode 0755

- with cli, do

   $ gluster system:: mount fool volfile-id=pop33 user-map-root=jane volfile-server=localhost

- on successful completion (volume pop33 exists and is started, jane is a valid username),
  the mount path will be echoed to you

- you can get rid of the mount by

   $ gluster system:: umount &lt;mount-path&gt;

Change-Id: I629cf64add0a45500d05becc3316f67cdb5b42ff
BUG: 3482
Reviewed-on: http://review.gluster.com/128
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vijay@gluster.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Mountbroker is configured in glusterd volfile through a DSL
which is restriced enough to be able to appear in the role
of the value of a volfile knob.

Basically the DSL describes set-theorical requirements
against the option set which is sent by the cli (in the
hope of getting a mount with these options).

If the requirements meet and the volume id and the uid
who is to "own" the mount can be unambigously deduced from
the given request, glusterd does the mount with the given
parameters.

The use case of geo-replication is sugared by means of volume
options which then generate a complete mount-broker option set.

Demo:

- add the following option to your glusterd volfile:

    option mountbroker-root /tmp/mbr
    option mountbroker.fool EQL(volfile-id=pop*|user-map-root=*|volfile-server=localhost)&amp;MEET(user-map-root=john|user-map-root=jane)

- before starting glusterd, create /tmp/mbr owned by root with mode 0755

- with cli, do

   $ gluster system:: mount fool volfile-id=pop33 user-map-root=jane volfile-server=localhost

- on succesful completion (volume pop33 exists and is started, jane is a valid username),
  the mount path will be echoed to you

- you can get rid of the mount by

   $ gluster system:: umount &lt;mount-path&gt;

Change-Id: I629cf64add0a45500d05becc3316f67cdb5b42ff
BUG: 3482
Reviewed-on: http://review.gluster.com/128
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vijay@gluster.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>modify to the way we used XDR definitions files (.x files)</title>
<updated>2011-09-07T17:48:52+00:00</updated>
<author>
<name>Amar Tumballi</name>
<email>amar@gluster.com</email>
</author>
<published>2011-08-29T12:23:24+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs-afrv1.git/commit/?id=f0f3b040dfa062021d3a193e5a19c380eb5e908d'/>
<id>f0f3b040dfa062021d3a193e5a19c380eb5e908d</id>
<content type='text'>
Earlier:
step 1: copy the existing &lt;xdr&gt;.x files to /tmp
step 2: generate '.[ch]' files using 'rpcgen &lt;xdr&gt;.x'
step 3: diff against the existing files and add only your part of the
        changes back to the original file (ignoring other changes).
step 4: update the separate file of wrapper functions, which convert
        structures to/from XDR buffers, with your new structure.
step 5: use these wrapper functions in the newly written procedures.
step 6: commit :-|

Now:
step 1: update (mostly adding only) the &lt;xdr&gt;.x file
step 2: run '&lt;path-to-src&gt;/extras/generate-xdr-files.sh &lt;xdr&gt;.x' command
step 3: implement rpc procedure to handle the request/response.
step 4: commit :-)
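
For example (illustrative .x file name),
        $ &lt;path-to-src&gt;/extras/generate-xdr-files.sh rpc/xdr/src/cli1-xdr.x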

Change-Id: I219f9159fc980438c86e847c6b030be96e595ea2
BUG: 3488
Reviewed-on: http://review.gluster.com/341
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@gluster.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Earlier:
step 1: copy the existing &lt;xdr&gt;.x files to /tmp
step 2: generate '.[ch]' files using 'rpcgen &lt;xdr&gt;.x'
step 3: check diff with the to the existing files, add only your part
        of changes back to the original file. (ignore other changes).
step 4: there is another file to write wrapper functions to convert
        structures to/from XDR buffers, update it with your new structure.
step 5: use these wrapper functions in the newly written procedures.
step 6: commit :-|

Now:
step 1: update (mostly adding only) the &lt;xdr&gt;.x file
step 2: run '&lt;path-to-src&gt;/extras/generate-xdr-files.sh &lt;xdr&gt;.x' command
step 3: implement rpc procedure to handle the request/response.
step 4: commit :-)

Change-Id: I219f9159fc980438c86e847c6b030be96e595ea2
BUG: 3488
Reviewed-on: http://review.gluster.com/341
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@gluster.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>cli: "profile info" output improvements</title>
<updated>2011-08-24T11:28:47+00:00</updated>
<author>
<name>Kaushal M</name>
<email>kaushal@gluster.com</email>
</author>
<published>2011-08-19T09:37:36+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs-afrv1.git/commit/?id=f16a44f94f76e6c677cee37090d059e8bb5443f5'/>
<id>f16a44f94f76e6c677cee37090d059e8bb5443f5</id>
<content type='text'>
Some changes to profile info output.

Change-Id: I94a4bec1ca1c0670b3d9643f8321683b59c665aa
BUG: 3028
Reviewed-on: http://review.gluster.com/260
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Amar Tumballi &lt;amar@gluster.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Some changes to profile info output.

Change-Id: I94a4bec1ca1c0670b3d9643f8321683b59c665aa
BUG: 3028
Reviewed-on: http://review.gluster.com/260
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Amar Tumballi &lt;amar@gluster.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>mgmt/glusterd, cli: Introduce gluster volume status &lt;volname&gt;</title>
<updated>2011-08-19T08:29:18+00:00</updated>
<author>
<name>Vijay Bellur</name>
<email>vijay@gluster.com</email>
</author>
<published>2011-08-18T17:49:22+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs-afrv1.git/commit/?id=0143a2ef653d0f7a337c8220f127655dadbca942'/>
<id>0143a2ef653d0f7a337c8220f127655dadbca942</id>
<content type='text'>
Change-Id: Iea835b9e448e736016da2e44e3c9bfff93f2fa78
BUG: 3439
Reviewed-on: http://review.gluster.com/259
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@gluster.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Change-Id: Iea835b9e448e736016da2e44e3c9bfff93f2fa78
BUG: 3439
Reviewed-on: http://review.gluster.com/259
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@gluster.com&gt;
</pre>
</div>
</content>
</entry>
</feed>
