Change-Id: Ic3ed2c96d8fc3a15fedaa80517a2c79c0c858963
BUG: 1075611
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/7652
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
|
Change-Id: I23df6d179e9d66a71721e9844a34c5b96586f90f
BUG: 1075611
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/7462
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
|
Change-Id: I84716cc07f3cbd8c1b2825a5676d6693fed6fade
BUG: 1075611
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/7578
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
|
This patch allows a user to bump up the cluster op-version by doing
# gluster volume set all cluster.op-version <OP-VERSION>
The op-version will be bumped only if
- all the peers in the cluster support it, and
- the new op-version is greater than the current cluster op-version.
This set operation makes no change other than updating and saving the
cluster op-version in the glusterd.info file. It will NOT,
- change any existing volume
- add the option to the global options list
- pin the cluster op-version to the given version; it can still be
bumped up by other volume set commands.
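For example (the op-version value below is only illustrative):

    # bump the cluster op-version; the new value is persisted in glusterd.info
    gluster volume set all cluster.op-version 30600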
Change-Id: I084b4fcc45e79dc2ca7b7680d7bb371bb175af39
BUG: 1092592
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/7603
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
|
When a brick is started and the glusterfsd process requests its
volfile, the brick_name is sent in the req dict. In glusterd, after
fetching the spec, the brick_name is looked up in the
missed_snap_list, and any missed snap creates on the same brick are
performed. After this, glusterd responds with the specfile.
Also collate brick data from the nodes hosting the bricks during
restore. In case the data is absent, the local node's data is used.
This is needed to ensure that during a restore we collect the
information created when a missed snap create is performed.
Change-Id: I47cefdeba96f2702be810965734cf0fac61d3d2d
BUG: 1061685
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/7551
Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
|
Fetch the mount directory path for a brick, during
volume create, add-brick, and replace-brick.
When a snap-create is missed, use this mount directory
information to create the brick path for the missed snap brick.
Change-Id: Iad3eec96a32cf340f26bdf3f28e2f529e4b77e31
BUG: 1061685
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/7550
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
|
Glusterd writes to a temp file opened with O_SYNC, then renames it
over the actual file and performs fsync on the parent directory.
Until this rename happens, syncing writes to the file can be
deferred. In this patch the O_SYNC open of the temp file is removed,
and the fd is instead fsynced once before the rename.
Change-Id: Ie7da161b0daec845c7dcfab4154cc45c2f49d825
BUG: 908277
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/7370
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
|
Change-Id: I8de4d47bb2a54b915243ea029cce2585fba34876
BUG: 1089172
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-on: http://review.gluster.org/7651
Reviewed-by: Justin Clift <justin@gluster.org>
Tested-by: Justin Clift <justin@gluster.org>
Reviewed-by: Anand Avati <avati@redhat.com>
|
On OSX '/var' is a symlink, and 'glusterd' initialization fails on it
unnecessarily; fix that.
Change-Id: I83adc16cfc0e0deaa18acf74ba99299ba4a21d60
BUG: 1061685
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-on: http://review.gluster.org/7558
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
|
While creating a volume or adding a brick, the _POSIX_PATH_MAX
validation is done on the absolute pathname instead of the relative
pathname. Because of this, a brick path shorter than _POSIX_PATH_MAX
may still fail the validation if the directory length is greater than
(_POSIX_PATH_MAX - strlen(brickpath/volume name)).
This fix also corrects a CLI response message which said the volume
file is too long when it should say the brick path is too long (when
the brick-path length validation passes but the volfile length
validation fails).
It is also important to note that with the current design of volfile
naming, it cannot be guaranteed that both volname and brickpath can
have a maximum of _POSIX_PATH_MAX characters.
Change-Id: I1283d1f9dea96ae797620002c8723719f26a866d
BUG: 1085330
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/7420
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
* Also changed the order of peers retrieval and snapshot retrieval
upon glusterd start, so that the snapshot bricks can be properly
resolved while cleaning up the snapshots.
Change-Id: I120704e4412a9cadb8d90a9b7969f2b4a1196bc5
BUG: 1061685
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/7494
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
1. gluster volume set <volname> nfs-ganesha.enable ON/OFF
If the option is set to ON, the volume field in the nfs-ganesha
configuration file is edited, Gluster-NFS is disabled on that volume,
and the volume is exported using nfs-ganesha.
2. gluster volume set <volname> nfs-ganesha.host IP
This is used to provide the IP of the nfs-ganesha host.
Note: nfs-ganesha.host MUST be set before using nfs-ganesha.enable ON.
The switch from Gluster-NFS to nfs-ganesha is mostly done by the
hook-scripts in the post phase of the 'set' option. As a result,
gluster volume reset does not function as expected: by default
nfs-ganesha is set to off, but the process is not killed. Hence a few
changes had to be made for the 'reset' option as well; those changes
have also been added.
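A usage sketch (the volume name and address are placeholders):

    # the ganesha host must be set first
    gluster volume set vol1 nfs-ganesha.host 192.168.1.10
    # then switch the volume from Gluster-NFS to nfs-ganesha
    gluster volume set vol1 nfs-ganesha.enable ON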
Change-Id: I7fdc14ee49d1724af96eda33c6a3ec08b1020788
BUG: 1092283
Signed-off-by: Meghana <mmadhusu@redhat.com>
Reviewed-on: http://review.gluster.org/7321
Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
If restore fails for some reason, we should revert the restore
operation. To do so, we take a backup of the vols folder before doing
a restore, and if the restore fails we revert the changes done.
Change-Id: I97f72aec3a34fc122bf137beb336e94db3a04dff
BUG: 1061685
Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-on: http://review.gluster.org/7548
Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
Previously, when a user triggered 'gluster volume remove-brick
VOLNAME BRICK start', the command 'gluster volume rebalance
<volname> status' showed output even though the user had not
triggered "rebalance start"; likewise, when a user triggered
'gluster volume rebalance <volname> start', the command 'gluster
volume remove-brick VOLNAME BRICK status' showed output even though
neither rebalance start nor remove-brick start had been run.
The regression test failed with the previous patch; file test/dht.rc
and test/bug/bug-973073 were edited to avoid the regression test
failure.
With this fix, the rebalance and remove-brick status messages are
differentiated.
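To illustrate, each status command now reports only its own operation
(names are placeholders):

    gluster volume remove-brick vol1 server1:/bricks/b1 status  # output only after remove-brick start
    gluster volume rebalance vol1 status                        # output only after rebalance start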
Signed-off-by: ggarg <ggarg@redhat.com>
Change-Id: I7f92ad247863b9f5fbc0887cc2ead07754bcfb4f
BUG: 1089668
Reviewed-on: http://review.gluster.org/7517
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Humble Devassy Chirammal <humble.devassy@gmail.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
Previously, snapshots were activated on creation by default, and
there was no option to activate or deactivate them on demand.
This patch allows the user to activate and deactivate snapshots on
demand. The CLI goes as follows:
1) Activate a snap using the command "gluster snapshot activate <snapname> [force]"
2) Deactivate a snap using the command "gluster snapshot deactivate <snapname>"
Note: Even now the snapshot will be activated during creation.
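A quick sketch of the workflow (snap and volume names are
placeholders):

    gluster snapshot create snap1 vol1    # activated on creation
    gluster snapshot deactivate snap1
    gluster snapshot activate snap1 force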
Change-Id: I0946d800780f26c63fa1fcaf29aabc900140448f
BUG: 1061685
Signed-off-by: Joseph Fernandes <josferna@redhat.com>
Reviewed-on: http://review.gluster.org/7476
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
The read-only xlator is moved from the server graph to the client
graph so that AFR & DHT healing can take place at the server.
Change-Id: I140ec962330c59d3b44f9bc8084a1544a1fd6c54
BUG: 1061685
Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-on: http://review.gluster.org/7582
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Reviewed-by: Sachin Pandit <spandit@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
When the respective services are down, the NFS/SHD processes
disconnecting from glusterd would lead to repeated logging of
disconnect-related messages, owing to the rpc reconnect logic in
glusterfs(d). This patch addresses that by logging the disconnect
only on the first disconnect event.
Change-Id: I4008d2436721f4ba093270df4ccb3fc885f22ca0
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/7468
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
Quota config and cksum files need to be copied before taking a
snapshot, so that when the snapshot is restored these files are
copied back to their original place and the restored snap volume can
make use of these quota files.
Before taking a snapshot, the quota files are copied to
/var/lib/glusterd/snaps/<snapname>/quota/
Change-Id: Id175f28d4ee47be64d7491c6aae81a1794928490
BUG: 1061685
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/7527
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
Geo-rep status and conf files need to be copied before taking a
snapshot. The idea here is that when the snapshot is restored, these
config and status files are placed back in the geo-replication
folder, so that geo-replication can start in the same state it was in
when the snapshot was taken.
Details: Before a snapshot is taken, copy the status and config files
present in /var/lib/glusterd/geo-replication/. The files copied are
gsyncd.conf and the status files of each session belonging to a
volume whose snapshot is about to be taken.
Change-Id: I0234ecd846883350c59777c2505290729de0ce05
BUG: 1061685
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/7495
Reviewed-by: Kotresh HR <khiremat@redhat.com>
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
If entries like state_file or pid-file are missing in gsyncd.conf, or
if gsyncd.conf itself is missing, glusterd looks for the missing
configs in gsyncd_template.conf.
Status will display "Config Corrupted" as long as the entry is
missing in the config file. A missing state-file entry in both the
config and the template will not allow starting a geo-rep session.
However, stop force will successfully stop an already running
session, even if the state-file entries are missing in both the
config file and the template, as long as either of them has a
pid-file entry.
If the pid-file entry is missing in the gsyncd.conf file, starting a
geo-rep session will not be allowed.
If the pid-file entry is missing in an already started session, stop
force will fetch it from the config template and stop the session.
If the pid-file entry is missing in both the config and the template,
stop force will fail with an appropriate error stating that the
pid-file entry is missing.
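A hedged CLI sketch of the behaviour described above (session names
are placeholders):

    # inspect the entry glusterd falls back on
    gluster volume geo-replication master-vol slave-host::slave-vol config state-file
    # with a corrupted config, stop force still works as long as a
    # pid-file entry exists in either the config or the template
    gluster volume geo-replication master-vol slave-host::slave-vol stop force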
Change-Id: I81d7cbc4af085d82895bbef46ca732555aa5365d
BUG: 1059092
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/6856
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
As we now have the new barrier translator in place, we make use of it
during the snapshot phase: during snapshot create (pre-commit) we
enable the barrier feature, and after the commit we disable it.
Change-Id: I94212b1c06b0d9b12255ee98313e2d8549b34b17
BUG: 1061685
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/7561
Reviewed-by: Kaushal M <kaushal@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
Change-Id: I7db752390bb742fb9f6cacce84563ff782ae352b
BUG: 1087677
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/7608
Reviewed-by: Sachin Pandit <spandit@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Vijay Bellur <vbellur@redhat.com>
|
Change-Id: I8efa08cc9832ad509fba65a88bb0cddbaf056404
BUG: 1075611
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/7475
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
Implement the force option in snapshot create, i.e.
1) Creation of a snapshot fails if the original volume's bricks are
down.
2) With the force option, creation of the snapshot continues even if
the original volume's bricks are down.
This is also the fix for bugs 1089527 and 1083502.
Change-Id: I8de0242adf8ee0af00db9fa8701d86fabc12e7fc
BUG: 1090042
Signed-off-by: Joseph Fernandes <josferna@redhat.com>
Reviewed-on: http://review.gluster.org/7520
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
Problem : snapshot delete used to fail when executed in a loop, as
there was a race between the process kill and the umount.
Solution : Before an umount is issued, check if the process is still
running; if so, issue a process termination. Give the umount
operation three tries.
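A shell sketch of the new ordering (glusterd implements this
internally in C; the pid file and mount point below are
hypothetical):

    # make sure the brick process is gone before unmounting
    pid=$(cat /var/run/gluster/snap-brick.pid 2>/dev/null)
    if [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null; then
        kill -TERM "$pid"
    fi
    # give the umount three tries, as the fix does
    for try in 1 2 3; do
        umount /run/gluster/snaps/snap1/brick1 && break
        sleep 1
    done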
Change-Id: I7f4315e5d7d4a156dd513ec77443ead6ccd37b2e
BUG: 1090449
Signed-off-by: Vijaikumar M <vmallika@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/7532
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
e.g. when snapshot create is fired simultaneously on a node.
Cause: In glusterd_mgmt_v3_initiate_snap_phases(), the function
glusterd_mgmt_v3_post_validate() asserts on the NULL value of
req_dic. req_dic is not initialized because
glusterd_mgmt_v3_initiate_lockdown() is not able to acquire the lock
and reaches the "out" section before initializing req_dic
(via glusterd_mgmt_v3_build_payload).
Fix: Call glusterd_mgmt_v3_post_validate() only if the lock is
acquired.
Change-Id: I7cb55b6c0013ad1c8bbb922a62c34aab097bafe9
BUG: 1090047
Signed-off-by: Joseph Fernandes <josferna@redhat.com>
Reviewed-on: http://review.gluster.org/7500
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
The command should only accept decimal numeric values.
Syntax : gluster snapshot config [volname]
[snap-max-hard-limit <count>]
[snap-max-soft-limit <percentage>]
Problem : Snapshot config used to treat an alphanumeric value
starting with a digit as an integer (example: "9abc" was converted
to "9").
Solution : Refined the code to check that the entered value is
numeric.
This patch also fixes some minor problems related to snapshot config:
1) Output correction in gluster snapshot config snap-max-soft-limit.
2) Setting the soft limit to greater than 100% displayed "Invalid
snap-max-soft-limit 0"; the error message showed "zero" instead of
the offending value, and now displays the relevant value.
3) The output for setting a snap-max-hard-limit greater than the
allowed value needed a space in between.
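For example (values are illustrative):

    gluster snapshot config vol1 snap-max-hard-limit 9abc  # now rejected: not numeric
    gluster snapshot config vol1 snap-max-hard-limit 256   # accepted
    gluster snapshot config vol1 snap-max-soft-limit 90    # percentage, must not exceed 100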
Change-Id: Ie7c7045722fe57b2b3c50c873664b67c28eb3853
BUG: 1087203
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/7457
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
Change-Id: I39c362c0908166707e10e8820cc1ee9a0989dcbe
BUG: 1089172
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-on: http://review.gluster.org/7584
Reviewed-by: Anand Avati <avati@redhat.com>
Tested-by: Anand Avati <avati@redhat.com>
|
This patch refactors the existing client ping timer implementation
and makes use of the common code for implementing both the client
ping timer and the glusterd ping timer.
A new gluster rpc program for ping is introduced. The ping timer is
only started for peers that have this new program. The default
glusterd ping timeout is 30 seconds, configurable by setting the
option 'ping-timeout' in glusterd.vol.
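A sketch of the relevant glusterd.vol stanza (other options elided;
the stock file usually lives at /etc/glusterfs/glusterd.vol):

    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option ping-timeout 30
    end-volume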
Also, this patch introduces changes in the glusterd-handshake path.
The client programs for a peer are now set in the callback of
dump_versions, for both the older handshake and the newer op-version
handshake. This is the only place in the handshake process where we
know which programs a peer supports.
Change-Id: I035815ac13449ca47080ecc3253c0a9afbe9016a
BUG: 1038261
Signed-off-by: Vijaikumar M <vmallika@redhat.com>
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/5202
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
|
This patch adds a new
'gluster volume barrier <VOLNAME> {enable|disable}'
cli command. This helps in testing the brick op code path when testing
the barrier xlator.
This patch can be reverted later if not required for end users.
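For example (the volume name is a placeholder):

    gluster volume barrier vol1 enable
    # ... exercise the brick-op code path and the barrier xlator ...
    gluster volume barrier vol1 disable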
Change-Id: Icd86a2d13e7f276dda1ecbb2593d60638ece7dcd
BUG: 1060002
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/6958
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
This patch introduces a new 'barrier' brick-op which will be used to
activate/deactivate barriering on the bricks. This includes
barriering in the barrier xlator and in the changelog xlator. All the
required code has been included: a bricks select function, a payload
builder and a brick-op handler.
Change-Id: I91d9d77f691c2e89823f7dc4e84900ec40dc4dd2
BUG: 1060002
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/6943
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
Problem : Currently we acquire a lock when given "gluster snapshot
config <volname>". As this is just a read-only command of type
DISPLAY, we need not acquire a lock.
Solution : This patch checks if the given command is of type DISPLAY.
If so, the glusterd_v3_mgmt framework is not called, as reading
information from the local node is enough.
This patch also fixes an "Assertion failed: volname" while doing the
system config change when snap create was in progress.
Change-Id: Ie8991f2cd746987b11152006e113e8706516138b
BUG: 1087677
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/7458
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
Gluster does not display performance.nfs.* options in help.
In Gluster NFS, write-behind is the only performance xlator
which gets loaded. Gluster volume set help should display all
the options provided by write-behind xlator.
Change-Id: I4a41151a6c15eeed8e8d123a6044c6f0c42b56b0
BUG: 1090826
Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
Reviewed-on: http://review.gluster.org/7546
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
During a peer-handshake, after the volumes have synced and the list
of missed snapshots has synced, the node will perform the pending
deletes and restores on this list. At this point, the current
snapshot list in the node will be updated, and hence in case of
conflicts arising during snapshot handshake, the peer hosting the
bricks will be given precedence. However, if there is a conflict and
both peers are in the same state, i.e. either both are hosting bricks
or both are not, then a decision can't be taken and a peer-reject
will happen.
glusterd_compare_and_update_snap() implements the following algorithm to
perform the above task:
Step 1: Start.
Step 2: Check if the peer is missing a delete on the said snap.
If yes, goto step 6.
Step 3: Check if there is a conflict between the peer's data and the
local snap. If no, goto step 5.
Step 4: As there is a conflict, check if both the peer and the local nodes
are hosting bricks. Based on the results perform the following:
        Peer Hosts Bricks    Local Node Hosts Bricks    Action
        Yes                  Yes                        Goto Step 7
        No                   No                         Goto Step 7
        Yes                  No                         Goto Step 8
        No                   Yes                        Goto Step 6
Step 5: Check if the local node is missing the peer's data.
If yes, goto step 9.
Step 6: It's a no-op. Goto step 10
Step 7: Peer Reject. Goto step 10
Step 8: Delete local node's data.
Step 9: Accept Peer Data.
Step 10: Stop
Change-Id: I79be0f0f5f2a4f5c72277a4e77c2be732af432e1
BUG: 1061685
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/7525
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
During a glusterd handshake, a dictionary is passed among the peers
which contains info of volumes, global opts, and now also info of
snaps and the list of missed snaps.
As it now contains more than just volume-specific data, the dict is
renamed in the code-base from "vols" to "peer_data".
Change-Id: Ib457172789ddd0d8978b08bceab0988c48e9eea7
BUG: 1061685
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/7524
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
The lvm snapshots of the bricks are mounted at /var/run/gluster/snaps/
or /run/gluster/snaps. These paths, being on a tmpfs, are removed on
node reboot. So when glusterd starts, we need to recreate these
paths, activate the respective logical volumes (the lvm snapshots of
the bricks), and mount these logical volumes at their respective
paths.
Change-Id: Ic5ef61e79a25d9830df717c592391965fe09db62
BUG: 1061685
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/7452
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
Replace is_volume_restored (gf_boolean_t) with restored_from_snap
(uuid_t) in glusterd_volinfo_. Also move gd_restore_snap_volume from
glusterd-volgen.c to glusterd-snapshot.c.
Change-Id: Ic615a1658cfaffa98d4590506ac82f20bf709ad6
BUG: 1089906
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/7455
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
barrier enable/disable, barrier-timeout configuration in the barrier
translator.
Change-Id: I7cbf9cd4f5e55d42dcc6b7cd6827234566c7b6f3
BUG: 1060002
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/7177
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
|
Persisting missing snapshot info on disk as well as in memory in
the following format:
-------------NODE-UUID--------------:--------------SNAP-UUID-------------=---------SNAP-VOL-ID------------:BRICKNUM:-------BRICKPATH--------:OPERATION:STATUS
927cb5fe-63da-48f5-82f6-e6a09ddc81c4:8258b18f-d408-483d-8239-204039dc6397=a17b4fe42c5a45f7a916438643edaa13: 3 :/brick/brick-dirs/brick3: 1 : 1
927cb5fe-63da-48f5-82f6-e6a09ddc81c4:8258b18f-d408-483d-8239-204039dc6397=a17b4fe42c5a45f7a916438643edaa13: 3 :/brick/brick-dirs/brick3: 3 : 1
927cb5fe-63da-48f5-82f6-e6a09ddc81c4:8258b18f-d408-483d-8239-204039dc6397=83a3cc05453b46b2a7eda4c9a9208638: 3 :/brick/brick-dirs/brick3: 1 : 1
This data will be stored on disk at /var/lib/glusterd/snaps/missed_snaps_list
In memory we maintain the data as a list of glusterd_missed_snap_info
in conf; the key for this list is the first two fields,
i.e. NODE-UUID:SNAP-UUID.
For every NODE-UUID:SNAP-UUID there can be multiple operations missed
on multiple bricks, so we maintain a list of glusterd_snap_op_t for
every node of glusterd_missed_snap_info.
This list is maintained or updated during the snapshot create, delete,
and restore operations, which are the only operations that, if
missed, are recorded in this list.
During snapshot create, if a node or a brick is down, we don't
receive its mount point info. The snap_status of such bricks is
marked as -1, and their brick details are added to this list.
During snapshot delete, we check from the originator node if any
other nodes holding bricks of the said snap are down; those are also
added to the list. Also, if a node is up but the snapshot was pending
for one of its snap bricks (snap_status of -1), we add that to the
list too.
When a subsequent delete entry is processed for an already existing
create entry, we just mark the create entry's status as done (2), and
don't add the delete entry to the list.
Snapshot restore is handled the same way: we check from the
originator node if any other nodes holding bricks of the said snap
are down and add those to the list, along with any up node whose snap
brick has a pending snapshot (snap_status of -1). And like delete,
when a subsequent restore entry is processed for an already existing
create entry, we just mark the create entry's status as done (2), and
don't add the restore entry to the list.
Change-Id: I54f63e28d3c40555d0f84528f38227103171f594
BUG: 1061685
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/7454
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
In a handshake, create a union of the missed_snap_lists of the two
peers.
If an entry is already present, it's a no-op.
If an entry is pending and the peer's entry is done, mark our own
entry as done.
If an entry is done and the peer's entry is pending, it's a no-op.
If it's a new entry, add it.
Change-Id: Idbfa49cc34871631ba8c7c56d915666311024887
BUG: 1061685
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/7453
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
git@forge.gluster.org:~schafdog/glusterfs-core/osx-glusterfs
Working functionality on MacOSX
- GlusterD (management daemon)
- GlusterCLI (management cli)
- GlusterFS FUSE (using OSXFUSE)
- GlusterNFS (without NLM - issues with rpc.statd)
Change-Id: I20193d3f8904388e47344e523b3787dbeab044ac
BUG: 1089172
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Signed-off-by: Dennis Schafroth <dennis@schafroth.com>
Tested-by: Harshavardhana <harsha@harshavardhana.net>
Tested-by: Dennis Schafroth <dennis@schafroth.com>
Reviewed-on: http://review.gluster.org/7503
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
|
brickinfo->brick_id was introduced to establish persistence of client
xlator names and AFR changelog attributes
(http://review.gluster.org/7155). The snapshot volumes must also use
the same IDs during snapshot create and restore to maintain
persistence.
Change-Id: I13d66d19b63520061ba9ec5f0ce661cf3b9eeafe
BUG: 1066778
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/7477
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
Change-Id: Ic292dcd8e477066c1079f0f1e170f5153459b029
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/7514
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
|
There is an overwhelming number of instances of the following pattern
in the glusterd module.
...
char *dynstr = gf_strdup (str);
if (!dynstr)
goto err;
ret = dict_set_dynstr (dict, key, dynstr);
if (ret)
goto err;
...
With this change, it would look as below:
ret = dict_set_dynstr_with_alloc (dict, key, str);
if (ret)
goto err;
Change-Id: I6a47b1cbab4834badadc48c56d0b5c8c06c6dd4d
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/7379
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
|
Change-Id: Ic4b701a6621578848ff67ae4ecb5a10b5f32f93b
BUG: 1075611
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/7372
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
This patch improves the validation for the 'peer detach' command.
A validation is added in the peer detach code flow to check whether
any volumes exist with bricks on the peer being detached (even force
performs this validation).
This patch also guarantees that peer detach doesn't fail for a volume
that has all its bricks on the peer being detached, provided there
are no other bricks on this peer.
The following steps need to be followed for removing a downed and unrecoverable
peer.
* If a replacement system is available
- add it to the cluster
- use replace-brick to migrate bricks of the downed peer to the new
peer (since data cannot be recovered anyway use the 'replace-brick
commit force' command)
or,
If no replacement system is available,
- remove bricks of the downed peer using 'remove-brick'
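A hedged sketch of both paths (host, volume and brick names are
placeholders; on a replicated volume remove-brick also needs the new
replica count):

    # a replacement system is available
    gluster peer probe new-host
    gluster volume replace-brick vol1 dead-host:/bricks/b1 new-host:/bricks/b1 commit force
    # no replacement system: drop the downed peer's bricks, then detach
    gluster volume remove-brick vol1 dead-host:/bricks/b1 force
    gluster peer detach dead-host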
Change-Id: Ie85ac5b66e87bec365fdedd8352b645bb25e1c33
BUG: 983590
Signed-off-by: Kaushal M <kaushal@redhat.com>
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/5325
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
This is the initial patch for the Snapshot feature. Current patch
includes following features:
* Snapshot create
* Snapshot delete
* Snapshot restore
* Snapshot list
* Snapshot info
* Snapshot status
* Snapshot config
Change-Id: I2f46920c0d61c515f6a60e0f8b46fff886d9f6a9
BUG: 1061685
Signed-off-by: shishir gowda <sgowda@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Signed-off-by: Vijaikumar M <vmallika@redhat.com>
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
Signed-off-by: Joseph Fernandes <josferna@redhat.com>
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/7128
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
Change-Id: I5eca01a131307ba3be2aed4922eea73025ff284c
BUG: 1081013
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: http://review.gluster.org/7360
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
|
From glusterd log:
----------
E [glusterd-store.c:1981:glusterd_store_retrieve_volume] 0-: Unknown key:
brick-0
----------
The message is emitted from glusterd_store_retrieve_volume() when it
reads the volinfo file, because it doesn't do anything with that
key-value pair. Suppress the error; the key is needed by
glusterd_store_retrieve_bricks(), which re-reads it anyway.
Also change the log level to WARNING, since we do not error out if an
unknown key is encountered while parsing the volinfo file.
Change-Id: Icd7962d9e16e0f90e6a37ee053dcafe97d2cab94
BUG: 1079279
Reviewed-on: http://review.gluster.org/7314
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
|
The cluster op-version must be at least 4 for add/remove-brick to
proceed. This change is required for the new afr-changelog xattr
changes that will be done for glusterFS 3.6
(http://review.gluster.org/#/c/7155/).
In add-brick, the check is done only when the replica count is
increased, because only that affects the AFR xattrs.
In remove-brick, the check is unconditional, failing which there
would be inconsistencies in the client xlator names among the
volfiles of different peers.
Change-Id: If981da2f33899aed585ab70bb11c09a093c9d8e6
BUG: 1066778
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/7122
Reviewed-by: Kaushal M <kaushal@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|