<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs-nsr.git, branch master</title>
<subtitle>[no description]</subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs-nsr.git/'/>
<entry>
<title>Merge branch 'upstream'</title>
<updated>2014-04-28T14:18:50+00:00</updated>
<author>
<name>Jeff Darcy</name>
<email>jdarcy@redhat.com</email>
</author>
<published>2014-04-28T14:18:50+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs-nsr.git/commit/?id=e139b4d0ba2286c0d4d44ba81260c2b287016019'/>
<id>e139b4d0ba2286c0d4d44ba81260c2b287016019</id>
<content type='text'>
Conflicts:
	rpc/xdr/src/glusterfs3-xdr.c
	rpc/xdr/src/glusterfs3-xdr.h
	xlators/features/changelog/src/Makefile.am
	xlators/features/changelog/src/changelog-helpers.h
	xlators/features/changelog/src/changelog.c
	xlators/mgmt/glusterd/src/glusterd-sm.c

Change-Id: I9972a5e6184503477eb77a8b56c50a4db4eec3e2
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Conflicts:
	rpc/xdr/src/glusterfs3-xdr.c
	rpc/xdr/src/glusterfs3-xdr.h
	xlators/features/changelog/src/Makefile.am
	xlators/features/changelog/src/changelog-helpers.h
	xlators/features/changelog/src/changelog.c
	xlators/mgmt/glusterd/src/glusterd-sm.c

Change-Id: I9972a5e6184503477eb77a8b56c50a4db4eec3e2
</pre>
</div>
</content>
</entry>
<entry>
<title>features/libgfchangelog: APIs to process history changelogs.</title>
<updated>2014-04-28T11:53:36+00:00</updated>
<author>
<name>Kotresh H R</name>
<email>khiremat@redhat.com</email>
</author>
<published>2014-02-13T18:23:27+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs-nsr.git/commit/?id=f2bac9f9d5b9956969ddd25a54bc636b82f6923e'/>
<id>f2bac9f9d5b9956969ddd25a54bc636b82f6923e</id>
<content type='text'>
1. Create directories in the following fashion for the history APIs'
   usage when a consumer is registered with the libgfchangelog
   shared library through gf_changelog_register:
       scratch_dir/.history
       scratch_dir/.history/.current
       scratch_dir/.history/.processed
       scratch_dir/.history/.processing

2. Added a new file 'gf-history-changelog.c'; the following APIs
   are provided for consumers to process history changelogs
   (a usage sketch follows the list).

    1. gf_history_changelog_scan:
            Scan the .processing directory and generate a list
            of change entries.
    2. gf_history_changelog_next_change:
            Return the next history changelog file entry.
            Zero means all history changelogs are consumed.
    3. gf_history_changelog_done:
            Move the processed history changelog file from
            .processing to .processed.
    4. gf_history_changelog_start_fresh:
            For a set of changelogs, start from the beginning.
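
Illustrative consumer loop, as a sketch only: the exact signatures are
not spelled out in this message and are assumed here to mirror the
existing gf_changelog_scan/next_change/done calls; the buffer handling
and the consume() helper are assumptions.

    char cl[PATH_MAX] = {0,};
    ssize_t nr = 0;

    /* after gf_changelog_register (brick, scratch_dir, ...) */
    while ((nr = gf_history_changelog_scan ()) &gt; 0) {
            /* consume every history changelog produced by the scan */
            while (gf_history_changelog_next_change (cl, PATH_MAX) &gt; 0) {
                    consume (cl);                   /* consumer-defined */
                    gf_history_changelog_done (cl); /* move to .processed */
            }
    }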

NOTE: Though this patch provides the above functionalities,
      it is considered functionally complete only together with
      the patch (http://review.gluster.org/#/c/6930/).

Change-Id: I200780c7278e0a6c008910d93faad5858a4b3e76
Original-author: Kotresh H R &lt;khiremat@redhat.com&gt;
Signed-off-by: Kotresh H R &lt;khiremat@redhat.com&gt;
Signed-off-by: Ajeet Jha &lt;ajha@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6998
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Venky Shankar &lt;vshankar@redhat.com&gt;
Tested-by: Venky Shankar &lt;vshankar@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
1. Create directories in the following fashion for the history APIs'
   usage when a consumer is registered with the libgfchangelog
   shared library through gf_changelog_register:
       scratch_dir/.history
       scratch_dir/.history/.current
       scratch_dir/.history/.processed
       scratch_dir/.history/.processing

2. Added a new file 'gf-history-changelog.c'; the following APIs
   are provided for consumers to process history changelogs.

    1. gf_history_changelog_scan:
            Scan the .processing directory and generate a list
            of change entries.
    2. gf_history_changelog_next_change:
            Return the next history changelog file entry.
            Zero means all history changelogs are consumed.
    3. gf_history_changelog_done:
            Move the processed history changelog file from
            .processing to .processed.
    4. gf_history_changelog_start_fresh:
            For a set of changelogs, start from the beginning.

NOTE: Though this patch provides the above functionalities,
      it is considered functionally complete only together with
      the patch (http://review.gluster.org/#/c/6930/).

Change-Id: I200780c7278e0a6c008910d93faad5858a4b3e76
Original-author: Kotresh H R &lt;khiremat@redhat.com&gt;
Signed-off-by: Kotresh H R &lt;khiremat@redhat.com&gt;
Signed-off-by: Ajeet Jha &lt;ajha@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6998
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Venky Shankar &lt;vshankar@redhat.com&gt;
Tested-by: Venky Shankar &lt;vshankar@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd/snapshot: Compare and update snapshots during peer handshake</title>
<updated>2014-04-28T11:02:22+00:00</updated>
<author>
<name>Avra Sengupta</name>
<email>asengupt@redhat.com</email>
</author>
<published>2014-04-22T00:52:57+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs-nsr.git/commit/?id=54a5a42848870ee17b923c6c37d65fdfe4a5fec9'/>
<id>54a5a42848870ee17b923c6c37d65fdfe4a5fec9</id>
<content type='text'>
During a peer-handshake, after the volumes and the list of missed
snapshots have synced, the node performs the pending deletes and
restores on this list. At this point, the current snapshot list in
the node is up to date, and hence when conflicts arise during the
snapshot handshake, the peer hosting the bricks is given precedence.
However, if there is a conflict and both peers are in the same
state, i.e. either both are hosting bricks or neither is hosting
bricks, then a decision can't be taken and a peer-reject happens.

glusterd_compare_and_update_snap() implements the following algorithm to
perform the above task (a compact sketch of the decision follows the steps):
Step  1: Start.
Step  2: Check if the peer is missing a delete on the said snap.
         If yes, goto step 6.
Step  3: Check if there is a conflict between the peer's data and the
         local snap. If no, goto step 5.
Step  4: As there is a conflict, check if both the peer and the local nodes
         are hosting bricks. Based on the results perform the following:
         Peer Hosts Bricks    Local Node Hosts Bricks       Action
               Yes                     Yes                Goto Step 7
               No                      No                 Goto Step 7
               Yes                     No                 Goto Step 8
               No                      Yes                Goto Step 6
Step  5: Check if the local node is missing the peer's data.
         If yes, goto step 9.
Step  6: It's a no-op. Goto step 10
Step  7: Peer Reject. Goto step 10
Step  8: Delete local node's data.
Step  9: Accept Peer Data.
Step 10: Stop
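
The same decision, written as a compact sketch (identifier names here
are illustrative only, not the actual glusterd code):

    if (peer_missed_a_delete_on_snap)                      /* step 2 */
            action = NO_OP;                                /* step 6 */
    else if (conflict_between_peer_and_local_snap) {       /* step 3 */
            if (peer_hosts_bricks == local_hosts_bricks)   /* step 4 */
                    action = PEER_REJECT;                  /* step 7 */
            else if (peer_hosts_bricks)
                    action = DELETE_LOCAL_DATA;            /* step 8 */
            else
                    action = NO_OP;                        /* step 6 */
    } else if (local_missing_peer_data)                    /* step 5 */
            action = ACCEPT_PEER_DATA;                     /* step 9 */
    else
            action = NO_OP;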

Change-Id: I79be0f0f5f2a4f5c72277a4e77c2be732af432e1
BUG: 1061685
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7525
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
During a peer-handshake, after the volumes and the list of missed
snapshots have synced, the node performs the pending deletes and
restores on this list. At this point, the current snapshot list in
the node is up to date, and hence when conflicts arise during the
snapshot handshake, the peer hosting the bricks is given precedence.
However, if there is a conflict and both peers are in the same
state, i.e. either both are hosting bricks or neither is hosting
bricks, then a decision can't be taken and a peer-reject happens.

glusterd_compare_and_update_snap() implements the following algorithm to
perform the above task:
Step  1: Start.
Step  2: Check if the peer is missing a delete on the said snap.
         If yes, goto step 6.
Step  3: Check if there is a conflict between the peer's data and the
         local snap. If no, goto step 5.
Step  4: As there is a conflict, check if both the peer and the local nodes
         are hosting bricks. Based on the results perform the following:
         Peer Hosts Bricks    Local Node Hosts Bricks       Action
               Yes                     Yes                Goto Step 7
               No                      No                 Goto Step 7
               Yes                     No                 Goto Step 8
               No                      Yes                Goto Step 6
Step  5: Check if the local node is missing the peer's data.
         If yes, goto step 9.
Step  6: It's a no-op. Goto step 10
Step  7: Peer Reject. Goto step 10
Step  8: Delete local node's data.
Step  9: Accept Peer Data.
Step 10: Stop

Change-Id: I79be0f0f5f2a4f5c72277a4e77c2be732af432e1
BUG: 1061685
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7525
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: Rename the export dictionary as peer_data</title>
<updated>2014-04-28T11:02:03+00:00</updated>
<author>
<name>Avra Sengupta</name>
<email>asengupt@redhat.com</email>
</author>
<published>2014-04-21T03:32:00+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs-nsr.git/commit/?id=a7c8d514c0487019d218c327deb52f7d09645875'/>
<id>a7c8d514c0487019d218c327deb52f7d09645875</id>
<content type='text'>
During a glusterd handshake, a dictionary is passed among
the peers which contains info of volumes, global opts,
and now also info of snaps and the list of missed snaps.

As it now contains more than just volume-specific data,
the dict is renamed in the code-base from "vols" to "peer_data".

Change-Id: Ib457172789ddd0d8978b08bceab0988c48e9eea7
BUG: 1061685
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7524
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
During a glusterd handshake, a dictionary is passed among
the peers which contains info of volumes, global opts,
and now also info of snaps and the list of missed snaps.

As it now contains more than just volume-specific data,
the dict is renamed in the code-base from "vols" to "peer_data".

Change-Id: Ib457172789ddd0d8978b08bceab0988c48e9eea7
BUG: 1061685
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7524
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd/snapshot: Recreate the mount dirs and mount the lvm snapshots on node reboot.</title>
<updated>2014-04-28T10:51:29+00:00</updated>
<author>
<name>Avra Sengupta</name>
<email>asengupt@redhat.com</email>
</author>
<published>2014-04-03T03:36:28+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs-nsr.git/commit/?id=b46d0ba04901ebca81d0f477e3e9ac6ba8607946'/>
<id>b46d0ba04901ebca81d0f477e3e9ac6ba8607946</id>
<content type='text'>
The lvm snapshots of the bricks are mounted at /var/run/gluster/snaps/ or
/run/gluster/snaps. These paths reside on a tmpfs and are removed on reboot.
So when glusterd starts, we need to recreate these paths, activate the
respective logical volumes (lvm snapshots of the bricks), and mount
these logical volumes at their respective paths.

Change-Id: Ic5ef61e79a25d9830df717c592391965fe09db62
BUG: 1061685
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7452
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The lvm snapshots of the bricks are mounted at /var/run/gluster/snaps/ or
/run/gluster/snaps. These paths reside on a tmpfs and are removed on reboot.
So when glusterd starts, we need to recreate these paths, activate the
respective logical volumes (lvm snapshots of the bricks), and mount
these logical volumes at their respective paths.

Change-Id: Ic5ef61e79a25d9830df717c592391965fe09db62
BUG: 1061685
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7452
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd/snapshot: Perform missed snap deletes and restores.</title>
<updated>2014-04-28T10:50:18+00:00</updated>
<author>
<name>Avra Sengupta</name>
<email>asengupt@redhat.com</email>
</author>
<published>2014-04-07T06:02:10+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs-nsr.git/commit/?id=5d9172e0b3e14795db7aba321cfcac428a201399'/>
<id>5d9172e0b3e14795db7aba321cfcac428a201399</id>
<content type='text'>
Replacing is_volume_restored(gf_boolean_t) with
restored_from_snap(uuid_t) in glusterd_volinfo_t.

Also moved gd_restore_snap_volume from glusterd-volgen.c
to glusterd-snapshot.c.

Change-Id: Ic615a1658cfaffa98d4590506ac82f20bf709ad6
BUG: 1089906
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7455
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Replacing is_volume_restored(gf_boolean_t) with
restored_from_snap(uuid_t) in glusterd_volinfo_t.

Also moved gd_restore_snap_volume from glusterd-volgen.c
to glusterd-snapshot.c.

Change-Id: Ic615a1658cfaffa98d4590506ac82f20bf709ad6
BUG: 1089906
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7455
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>gNFS: gNFS drc cache failed to detect duplicates.</title>
<updated>2014-04-28T07:50:24+00:00</updated>
<author>
<name>Yuan Ding</name>
<email>beback198611@gmail.com</email>
</author>
<published>2014-04-21T14:10:13+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs-nsr.git/commit/?id=49733d307f010bfeb3d2440402ff51a5366262f5'/>
<id>49733d307f010bfeb3d2440402ff51a5366262f5</id>
<content type='text'>
After the drc cache gets full, the message "DRC failed to detect duplicates"
keeps getting printed in the log.
The root cause is that drc_compare_reqs uses the wrong compare type. This
function should use drc_cache_op_t as its input type, since all rbtree-related
code in the drc cache (except rpcsvc_drc_lookup) passes drc_cache_op_t as the
compare type. Only rpcsvc_drc_lookup used rpcsvc_request_t; it has been
modified too.
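
The shape of the fix, as a sketch only: the comparator interprets both
rbtree arguments as the stored node type. The field name and the third
parameter below are assumptions, not the actual rpc-drc.c code.

    static int
    drc_compare_ops_sketch (const void *item, const void *rb_node_data,
                            void *param)
    {
            /* interpret both rbtree arguments as the stored node type */
            const drc_cache_op_t *op1 = item;
            const drc_cache_op_t *op2 = rb_node_data;

            (void) param;
            if (op1-&gt;xid == op2-&gt;xid)        /* xid is an assumed field */
                    return 0;
            return (op1-&gt;xid &lt; op2-&gt;xid) ? -1 : 1;
    }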

Change-Id: I925c097debe6b82f267986961fd4e7755f3de9af
BUG: 1089676
Signed-off-by: Yuan Ding &lt;beback198611@gmail.com&gt;
Reviewed-on: http://review.gluster.org/7519
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-by: Santosh Pradhan &lt;spradhan@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
After the drc cache gets full, the message "DRC failed to detect duplicates"
keeps getting printed in the log.
The root cause is that drc_compare_reqs uses the wrong compare type. This
function should use drc_cache_op_t as its input type, since all rbtree-related
code in the drc cache (except rpcsvc_drc_lookup) passes drc_cache_op_t as the
compare type. Only rpcsvc_drc_lookup used rpcsvc_request_t; it has been
modified too.

Change-Id: I925c097debe6b82f267986961fd4e7755f3de9af
BUG: 1089676
Signed-off-by: Yuan Ding &lt;beback198611@gmail.com&gt;
Reviewed-on: http://review.gluster.org/7519
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-by: Santosh Pradhan &lt;spradhan@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>Barrier: Barrier translator options configuration</title>
<updated>2014-04-28T05:25:32+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2014-03-03T12:30:59+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs-nsr.git/commit/?id=22f47322d246c94d0bec8e893e4837a67d39f544'/>
<id>22f47322d246c94d0bec8e893e4837a67d39f544</id>
<content type='text'>
Add barrier enable/disable and barrier-timeout option configuration in the barrier translator.

Change-Id: I7cbf9cd4f5e55d42dcc6b7cd6827234566c7b6f3
BUG: 1060002
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7177
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Add barrier enable/disable and barrier-timeout option configuration in the barrier translator.

Change-Id: I7cbf9cd4f5e55d42dcc6b7cd6827234566c7b6f3
BUG: 1060002
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7177
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd/snapshot: Adding snap_vol_id and snap_uuid to missed_snap_list</title>
<updated>2014-04-28T05:00:20+00:00</updated>
<author>
<name>Avra Sengupta</name>
<email>asengupt@redhat.com</email>
</author>
<published>2014-04-07T05:25:28+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs-nsr.git/commit/?id=d7b3e068290c41b13ecd664771814202d7d26881'/>
<id>d7b3e068290c41b13ecd664771814202d7d26881</id>
<content type='text'>
Persisting missing snapshot info on disk as well as in memory in
the following format:
-------------NODE-UUID--------------:--------------SNAP-UUID-------------=---------SNAP-VOL-ID------------:BRICKNUM:-------BRICKPATH--------:OPERATION:STATUS
927cb5fe-63da-48f5-82f6-e6a09ddc81c4:8258b18f-d408-483d-8239-204039dc6397=a17b4fe42c5a45f7a916438643edaa13:   3    :/brick/brick-dirs/brick3:    1    :   1
927cb5fe-63da-48f5-82f6-e6a09ddc81c4:8258b18f-d408-483d-8239-204039dc6397=a17b4fe42c5a45f7a916438643edaa13:   3    :/brick/brick-dirs/brick3:    3    :   1
927cb5fe-63da-48f5-82f6-e6a09ddc81c4:8258b18f-d408-483d-8239-204039dc6397=83a3cc05453b46b2a7eda4c9a9208638:   3    :/brick/brick-dirs/brick3:    1    :   1

This data will be stored on disk at /var/lib/glusterd/snaps/missed_snaps_list

In memory we maintain the data as a list of glusterd_missed_snap_info
in conf; the key for this list is formed by the first two fields,
i.e. NODE-UUID:SNAP-UUID.

For every NODE-UUID:SNAP-UUID, there can be multiple operations missed
on multiple bricks. So we maintain a list of glusterd_snap_op_t
for every node of glusterd_missed_snap_info.
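
A sketch of that in-memory shape (member names here are illustrative,
not the actual glusterd definitions):

    struct missed_snap_info_sketch {
            char             *node_uuid;     /* first key field  */
            char             *snap_uuid;     /* second key field */
            struct list_head  snap_ops;      /* list of glusterd_snap_op_t */
            struct list_head  missed_snaps;  /* chained into conf */
    };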

This list is maintained or updated during snapshot create, delete, and restore
operations, which are the only operations that, if missed, are recorded in this
list.

During snapshot create, if a node or a brick is down, we don't
receive their mount point info. The snap_status of such bricks is marked as
-1, and their brick details are added to this list.

During snapshot delete, we check from the originator node whether any other
nodes holding bricks of the said snap are down. Those are also added to the list.
Also, if the node is up, but the snapshot was pending for a snap
brick and its snap_status is -1, we add that to the list too.
When a subsequent delete entry is processed for an already existing
create entry, we just mark the create entry's status as done (2), and don't
add the delete entry to the list.

During snapshot restore, we check from the originator node whether any other
nodes holding bricks of the said snap are down. Those are also added to the list.
Also, if the node is up, but the snapshot was pending for a snap
brick and its snap_status is -1, we add that to the list too.
Like delete, when a subsequent restore entry is processed for an already existing
create entry, we just mark the create entry's status as done (2), and don't
add the restore entry to the list.

Change-Id: I54f63e28d3c40555d0f84528f38227103171f594
BUG: 1061685
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7454
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Persisting missing snapshot info on disk as well as in memory in
the following format:
-------------NODE-UUID--------------:--------------SNAP-UUID-------------=---------SNAP-VOL-ID------------:BRICKNUM:-------BRICKPATH--------:OPERATION:STATUS
927cb5fe-63da-48f5-82f6-e6a09ddc81c4:8258b18f-d408-483d-8239-204039dc6397=a17b4fe42c5a45f7a916438643edaa13:   3    :/brick/brick-dirs/brick3:    1    :   1
927cb5fe-63da-48f5-82f6-e6a09ddc81c4:8258b18f-d408-483d-8239-204039dc6397=a17b4fe42c5a45f7a916438643edaa13:   3    :/brick/brick-dirs/brick3:    3    :   1
927cb5fe-63da-48f5-82f6-e6a09ddc81c4:8258b18f-d408-483d-8239-204039dc6397=83a3cc05453b46b2a7eda4c9a9208638:   3    :/brick/brick-dirs/brick3:    1    :   1

This data will be stored on disk at /var/lib/glusterd/snaps/missed_snaps_list

In memory we maintain the data as a list of glusterd_missed_snap_info
in conf; the key for this list is formed by the first two fields,
i.e. NODE-UUID:SNAP-UUID.

For every NODE-UUID:SNAP-UUID, there can be multiple operations missed
on multiple bricks. So we maintain a list of glusterd_snap_op_t
for every node of glusterd_missed_snap_info.

This list is maintained or updated during snapshot create, delete, and restore
operations, which are the only operations that, if missed, are recorded in this
list.

During snapshot create, if a node or a brick is down, we don't
receive their mount point info. The snap_status of such bricks is marked as
-1, and their brick details are added to this list.

During snapshot delete, we check from the originator node whether any other
nodes holding bricks of the said snap are down. Those are also added to the list.
Also, if the node is up, but the snapshot was pending for a snap
brick and its snap_status is -1, we add that to the list too.
When a subsequent delete entry is processed for an already existing
create entry, we just mark the create entry's status as done (2), and don't
add the delete entry to the list.

During snapshot restore, we check from the originator node whether any other
nodes holding bricks of the said snap are down. Those are also added to the list.
Also, if the node is up, but the snapshot was pending for a snap
brick and its snap_status is -1, we add that to the list too.
Like delete, when a subsequent restore entry is processed for an already existing
create entry, we just mark the create entry's status as done (2), and don't
add the restore entry to the list.

Change-Id: I54f63e28d3c40555d0f84528f38227103171f594
BUG: 1061685
Signed-off-by: Avra Sengupta &lt;asengupt@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7454
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>Update references to the mailing list to gluster-devel@gluster.org</title>
<updated>2014-04-28T04:29:36+00:00</updated>
<author>
<name>Niels de Vos</name>
<email>ndevos@redhat.com</email>
</author>
<published>2014-04-27T13:03:58+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs-nsr.git/commit/?id=d2cdc392accdd35995370ee5b52aee5e5af7dee4'/>
<id>d2cdc392accdd35995370ee5b52aee5e5af7dee4</id>
<content type='text'>
gluster-devel@nongnu.org has moved to gluster-devel@gluster.org. All
occurrences in the current (non-legacy) documentation and code have been
adjusted.

Change-Id: I053162e633f7ea14fd3eed239ded017df165147c
BUG: 1091705
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7573
Reviewed-by: Justin Clift &lt;justin@gluster.org&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
Tested-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
gluster-devel@nongnu.org has moved to gluster-devel@gluster.org. All
occurrences in the current (non-legacy) documentation and code have been
adjusted.

Change-Id: I053162e633f7ea14fd3eed239ded017df165147c
BUG: 1091705
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: http://review.gluster.org/7573
Reviewed-by: Justin Clift &lt;justin@gluster.org&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
Tested-by: Anand Avati &lt;avati@redhat.com&gt;
</pre>
</div>
</content>
</entry>
</feed>
