The aux mount is created on the first limit/remove_limit/list command
and remains until the volume is stopped or deleted, or quota is
disabled, at which point it is lazily unmounted. If the process is
terminated uncleanly, the mount entry remains and subsequent quota
list/limit-usage/remove commands fail with a "Transport endpoint is
not connected" error.
There is a second issue: an inadvertent rm -rf on /var/run/gluster
risks data loss for the user. Ideally, /var/run is a temporary path
for application use and should never put persistent storage at risk.
Solution:
1) Unmount the aux mount after each use.
2) Clean up any stale mount before mounting.
One caveat of mounting/unmounting on each command is that we cannot
use the same mount point for both list and limit commands. The list
command needs the mount to remain accessible in the CLI after the
response from glusterd, so a limit command executed in parallel could
have unmounted it (had we used the same mount point).
Hence we use separate mount points for list and limit commands.
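The stale-mount cleanup in step (2) comes down to the Linux
lazy-unmount idiom sketched below. This is an illustration only, not
the actual glusterd helper; the function name is hypothetical:

    #include <errno.h>
    #include <sys/mount.h>

    /* Detach any stale aux mount left behind by an uncleanly terminated
     * process before mounting again. MNT_DETACH makes the unmount lazy,
     * matching how the aux mount is normally torn down. */
    static int
    cleanup_stale_aux_mount (const char *mountpoint)
    {
            if (umount2 (mountpoint, MNT_DETACH) == 0)
                    return 0;    /* stale mount removed */
            if (errno == EINVAL || errno == ENOENT)
                    return 0;    /* nothing was mounted there */
            return -1;           /* genuine failure */
    }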
> Reviewed-on: https://review.gluster.org/16938
> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> Reviewed-by: Manikandan Selvaganesh <manikandancs333@gmail.com>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
> Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
> (cherry picked from commit 2ae4b4058691b324535d802f4e6d24cce89a10e5)
Change-Id: I4f9e39da2ac2b65941399bffb6440db8a6ba59d0
BUG: 1449782
Signed-off-by: Sanoj Unnikrishnan <sunnikri@redhat.com>
Reviewed-on: https://review.gluster.org/17242
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Check whether S32gluster_enable_shared_storage.sh is present at
/var/lib/glusterd/hooks/1/set/post/ during staging, before proceeding
with the command. Fail the command with an appropriate error message
if it is not present.
> Reviewed-on: http://review.gluster.org/15718
> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
(cherry picked from commit 29587a91716e1e55bd172d63340c40249fb343c9)
Change-Id: I84e3912f1cdffb927f8a40d74d52be43ee69388b
BUG: 1377448
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/15741
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Change-Id: Ia24ad18c43d56a751988e562323ede26d7785848
BUG: 1317278
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/14519
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
glusterd creates the export conf file for ganesha using a hook script
during volume start, and ganesha_manage_export() for the volume set
command, but this routine is not invoked in the glusterd restart
scenario.
Consider the following case: in a three-node cluster a volume gets
exported via ganesha while one of the nodes is offline (glusterd is
not running). When the node comes back online, the volume is not
exported on that node because of the issue above.
Also, unused variables have been removed from
glusterd_handle_ganesha_op().
For this patch to work, the pcs cluster should be running on that node.
Upstream reference
>Change-Id: I5b2312c2f3cef962b1f795b9f16c8f0a27f08ee5
>BUG: 1330097
>Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
>Reviewed-on: http://review.gluster.org/14063
>Smoke: Gluster Build System <jenkins@build.gluster.com>
>NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
>CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
>Reviewed-by: soumya k <skoduri@redhat.com>
>Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
>(cherry picked from commit f71e2fa49af185779b9f43e146effd122d4e9da0)
Change-Id: I5b2312c2f3cef962b1f795b9f16c8f0a27f08ee5
BUG: 1336801
Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
Reviewed-on: http://review.gluster.org/14397
Smoke: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Tested-by: Kaleb KEITHLEY <kkeithle@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Previously the quota crawl was done from a single mount point, which
is a very slow process when a huge number of files exist in the
volume.
This RFE spawns a crawl process for each brick in the volume, so
files are crawled in parallel and independently per brick. This
improves the crawl speed for the entire file system.
This patch also fixes the problems below:
* Previously, the mount directory was created under /tmp. If someone
  cleaned up the /tmp directory, there was a real danger of losing
  volume data. The mount point is now created under
  /var/run/gluster/tmp instead.
* Previously, the file-system crawl was performed from all the nodes,
  which is redundant and degrades performance. This is fixed with
  this patch.
Change-Id: Icabedeb44182139ace9c8106793803122388cab8
BUG: 1290766
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/12952
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
As per community consensus, we have decided to rename nsr to jbr
(Journal-Based Replication). This patch renames the "nsr" code to
"jbr".
Change-Id: Id2a9837f2ec4da89afc32438b91a1c302bb4104f
BUG: 1328043
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/13899
Smoke: Gluster Build System <jenkins@build.gluster.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
glusterd_is_brickpath_available
glusterd_is_brickpath_available () used to call realpath() to check
whether the new brick path matches any of the existing ones. The
problem with this is that if the underlying file system is bad for any
one of the existing bricks, realpath() would fail and we would refuse
to create the new brick even when it should be allowed.
The fix is to use string comparison against a new real_path field in
brickinfo that stores the absolute path.
Change-Id: I1250ea5345f00fca0f6128056ebd08750d604f0a
BUG: 1299710
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/13258
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
op on glusterd restart
While a remove-brick is in progress, if glusterd is restarted, the
decommission flag is lost because it is not persisted in the store;
as a result glusterd does not block a remove-brick commit while
rebalance is already in progress.
Change-Id: Ibbf12f3792d65ab1293fad1e368568be141a1cd6
BUG: 1303269
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/13323
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
When rebalance restarts after a glusterd restart, glusterd does not
connect to the rebalance process because the defrag variable in
volinfo is NULL.
Initializing the variable lets the RPC connection be established.
Change-Id: Id820cad6a3634a9fc976427fbe1c45844d3d4b9b
BUG: 1303028
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/13319
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Allows the user to convert an afr volume to an nsr volume using the
cluster.nsr option in the volume set command:
gluster volume set <volname> cluster.nsr <on/off>
Change-Id: Ia1c5aa89d27535f7275d474cf312dc5efb8e222f
BUG: 1158654
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: http://review.gluster.org/12943
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
The start command doesn't restart the tier daemon if the daemon is
already running on any one node. Hence, to bring up tierd on the nodes
where the daemon is down, the force command is implemented: it skips
the check for a running tierd.
Change-Id: I0037d3e5ecfe56637d0da201a97903c435d26436
BUG: 1292112
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Reviewed-on: http://review.gluster.org/12983
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Tested-by: Dan Lambright <dlambrig@redhat.com>
When exporting/importing volinfo during the handshake, the quota conf
version and the quota xattr version were using the same key,
'quota-version', so wrong values were updated when importing the
quota version values.
Change-Id: If939d6f5bc4851d4114963877be72dda21834f0f
BUG: 1287996
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/12865
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
The CLI command for bitrot scrub status is:
gluster volume bitrot <volname> scrub status
The command shows the statistics of the bitrot scrubber. It prints
some common scrubber tunables for volume <VOLNAME>, followed by the
scrubber statistics of the individual nodes.
Sample output for a single node:
Volume name : <VOLNAME>
State of scrub: Active
Scrub frequency: biweekly
Bitrot error log location: /var/log/glusterfs/bitd.log
Scrubber error log location: /var/log/glusterfs/scrub.log
=========================================================
Node name:
Number of Scrubbed files:
Number of Unsigned files:
Last completed scrub time:
Duration of last scrub:
Error count:
=========================================================
This is just the infrastructure. The list of bad files, last scrub
time, and error count values will be taken care of by the
http://review.gluster.org/#/c/12503/ and
http://review.gluster.org/#/c/12654/ patches.
Change-Id: I3ed3c7057c9d0c894233f4079a7f185d90c202d1
BUG: 1207627
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/10231
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
The new tiering feature currently has its GD_OP_DETACH_TIER and
GD_OP_TIER_MIGRATE values in the middle of the glusterd_op_ enum
array. In a multi-node cluster, when one of the nodes is upgraded
from a lower version to a higher one, executing a command can end up
with mismatched op enums at the receiver's end, causing command
execution to fail.
The fix is to put every new glusterd operation enum value at the end
of the enum array.
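A hedged illustration of why mid-enum insertion breaks mixed-version
clusters; the op names surrounding the two new ones are examples, not
the exact glusterd enum:

    /* Inserting new ops in the middle renumbers every op after them, so
     * an old node and an upgraded node disagree on the value sent over
     * the wire for the same operation. Appending at the end is safe. */
    typedef enum {
            GD_OP_NONE,            /* 0 on both versions             */
            GD_OP_CREATE_VOLUME,   /* 1 on both versions             */
            GD_OP_DETACH_TIER,     /* new: shifts the ops below by 2 */
            GD_OP_TIER_MIGRATE,
            GD_OP_STATUS_VOLUME,   /* now 4; old nodes still send 2  */
    } glusterd_op_t;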
Change-Id: I640f811065e8c84add624237aa80fed43fde5967
BUG: 1276643
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/12473
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
When quota is disabled, the clean-up process may terminate without
completely cleaning up the quota xattrs. If quota is then enabled
again, the stale xattrs can mess up the accounting.
A version number is now suffixed to all quota xattrs. This version
number is specific to the marker xlator, i.e., when quota xattrs are
requested by quotad/clients, marker removes the version suffix from
the key before sending the response.
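The versioned-key scheme can be sketched as follows; the quota size
key is shown and the exact suffix format should be treated as
illustrative:

    #include <stdio.h>

    /* Build the on-disk, versioned xattr key. The version is bumped on
     * each quota enable, so stale xattrs from an earlier, incompletely
     * cleaned-up enable are ignored instead of being double-counted. */
    static void
    build_quota_key (char *key, size_t len, int version)
    {
            snprintf (key, len, "trusted.glusterfs.quota.size.%d", version);
            /* marker strips the ".<version>" suffix before replying to
             * quotad/clients, which still see the unversioned key. */
    }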
Change-Id: I1ca2c11460645edba0f6b68db70d476d8d26e1eb
BUG: 1272411
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/12386
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
While taking a snapshot, the export file used by the volume should be
copied to the snap directory, so that when the snapshot is restored
the volume can retain all of its configuration for exporting via
nfs-ganesha. The export file is stored at "/etc/ganesha/export" in
the format "export.<volname>.conf".
The fix handles the given cases in the following manner:
case a: nfs-ganesha (global) is ON during snapshot and restore.
i.) The volume was exported during snapshot. When we restore the
snapshot, the volume should be exported back with the old
configuration file.
ii.) The volume was unexported during snapshot. When we restore the
snapshot, the volume should be unexported again.
case b: nfs-ganesha is ON during snapshot and OFF during restore.
The volume was exported during snapshot. When we restore the
snapshot, the conf file is copied to the corresponding location and,
if nfs-ganesha is enabled again, the volume will be exported.
For clones, the export conf file is created in /etc/ganesha/export
and the clone is then exported via ganesha.
Change-Id: Ideecda15bd4db58e991cf6c8de7bb93f3db6cd20
BUG: 1257709
Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
Reviewed-on: http://review.gluster.org/12034
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
In glusterd_snapshot_clone_postvalidate(), we were deleting the snap
object and snap volume by looking up the snapname. Hence, it was
deleting the original snapshot from which the clone was being
created.
Instead it should fetch the clonename, the respective clone volume,
and its corresponding snap object, and delete those.
Also, glusterd_snap_remove() needs to differentiate a clone snap
object from a snapshot snap object: for a clone snap object there is
no persisted data under /var/run/gluster/snaps/, so it shouldn't try
to delete anything there.
Change-Id: I02bb22a3898d5720e318a02d6cc32d25f75d317d
BUG: 1272339
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/12364
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
afr uses the translator name for locking purposes, so it is mandatory
to keep afr/ec xlator names constant across graph changes.
Currently, when a tier is attached, afr names are appended with
either hot or cold, which breaks the above-mentioned constraint.
Change-Id: I3699dcdaa8190bab3ba81cbc01e8fa126d37ba0d
BUG: 1261276
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/12134
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Tested-by: Dan Lambright <dlambrig@redhat.com>
When we trigger a detach-tier start on a tiered volume, the volume
status task shows "remove brick" instead of "Detach tier".
Status of volume: vol1
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick 10.70.42.171:/data/gluster/hbr1 49154 0 Y 25098
Cold Bricks:
Brick 10.70.42.171:/data/gluster/p1 49152 0 Y 25101
Brick 10.70.42.171:/data/gluster/p2 49153 0 Y 25112
NFS Server on localhost N/A N/A N N/A
Task Status of Volume vol1
------------------------------------------------------------------------------
Task : Tier migrate
ID : e11d5a3d-b1ae-4c3f-8f95-b28993c60939
Status : in progress
Status of volume: vol1
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick 10.70.42.171:/data/gluster/hbr1 49154 0 Y 25098
Cold Bricks:
Brick 10.70.42.171:/data/gluster/p1 49152 0 Y 25101
Brick 10.70.42.171:/data/gluster/p2 49153 0 Y 25112
NFS Server on localhost N/A N/A N N/A
Task Status of Volume vol1
------------------------------------------------------------------------------
Task : Detach tier
ID : 76d700b1-5bbd-43ed-95fd-1640b2b4af31
Status : completed
Change-Id: I4bd3b340d4e700e8afed00e1478b8a8b54dfe2e2
BUG: 1261837
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Signed-off-by: Hari Gowtham <hgowtham@redhat.com>
Reviewed-on: http://review.gluster.org/12149
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Tested-by: Dan Lambright <dlambrig@redhat.com>
The number of epoll worker threads can be configured by adding the
following option to glusterd.vol:
option event-threads <NUM-OF-EPOLL_WORKERS>
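For example, a glusterd.vol carrying this option could look like the
sketch below; the other option lines are common defaults shown for
context, not part of this change:

    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option event-threads 4
    end-volume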
BUG: 1242421
Change-Id: I2a9e2d81c64beaf54872081f9ce45355cf4dfca7
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/11630
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
brick
The brick path we use to create the shared storage is
/var/run/gluster/ss_brick.
The problem with this brick path is that /var/run/gluster is a tmpfs,
so all the brick/shared-storage data is wiped when the node restarts.
Hence we use /var/lib/glusterd/ss_brick as the brick path for the
shared storage volume instead, as this brick and the shared storage
volume are created internally by us (albeit on the user's request)
and contain only internal state data, no user data.
Change-Id: I808d1aa3e204a5d2022086d23bdbfdd44a2cfb1c
BUG: 1218573
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/11533
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
This patch is part one of a change to prevent data loss in a
replicate volume on doing a replace-brick commit force operation.
Problem: After doing replace-brick commit force, there is a chance
that self-heal happens from the replaced (sink) brick rather than the
source brick, leading to data loss.
Solution: During the commit phase of replace-brick, after the old
brick is brought down, create a temporary mount and perform a
setfattr operation (on a virtual xattr) indicating to AFR that the
replaced brick should be marked as the sink.
As a part of this change, the replace-brick command is moved to the
mgmt_v3 framework rather than the op-state-machine framework.
Many thanks to Krishnan Parthasarathi for helping me out on this.
Change-Id: If0d51b5b3cef5b34d5672d46ea12eaa9d35fd894
BUG: 1207829
Signed-off-by: Anuradha <atalur@redhat.com>
Reviewed-on: http://review.gluster.org/10076
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Problem: In GLUSTERD_GET_DEFRAG_PROCESS we use PATH_MAX (4096) as the
maximum size of the input for the target path, but we have allocated
only a NAME_MAX (255) sized buffer for the target.
This crash is not seen with a source install, but is seen with RPMs.
The reason is __fortify_fail: this check happens when the
_FORTIFY_SOURCE flag is enabled. That option catches possible
overflow scenarios, like the bug here, and crashes the process.
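A simplified, hedged reproduction of the pattern; the buffer and
function names are illustrative, not the actual macro internals:

    #include <limits.h>
    #include <stdio.h>

    static void
    build_target (const char *input)   /* input may be PATH_MAX long */
    {
            char target[NAME_MAX];     /* but only 255 bytes here */

            /* sprintf (target, "%s", input);
             * The bug: with -D_FORTIFY_SOURCE=2, glibc's checked sprintf
             * detects the overflow and __fortify_fail() aborts. */
            snprintf (target, sizeof (target), "%s", input); /* bounded */
    }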
Change-Id: I26261be85936d2e94a526fdcaa8d3249f8af11c3
BUG: 1228093
Signed-off-by: Susant Palai <spalai@redhat.com>
Reviewed-on: http://review.gluster.org/11090
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: N Balachandran <nbalacha@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
shared storage
Introducing a global volume set option (cluster.enable-shared-storage)
which helps create and set up the shared storage meta volume.
gluster volume set all cluster.enable-shared-storage enable
On enabling this option, the system analyzes the number of currently
connected peers in the cluster and chooses three such peers
(including the node the command is issued from). From these peers a
volume (gluster_shared_storage) is created. Depending on the number
of peers available, the volume is either a replica 3 volume (if there
are 3 connected peers) or a replica 2 volume (if there are 2
connected peers). "/var/run/gluster/ss_brick" serves as the brick
path on each node for the shared storage volume. We also mount the
shared storage at "/var/run/gluster/shared_storage" on all the nodes
in the cluster as part of enabling this option. If there is only one
node in the cluster, or only one node is up, the command fails.
Once the volume is created and mounted, maintenance of the volume
(adding bricks, removing bricks, etc.) is expected to be the onus of
the user.
On disabling the option, we provide the user a warning and, on
affirmation from the user, we stop the shared storage volume and
unmount it from all the nodes in the cluster.
gluster volume set all cluster.enable-shared-storage disable
Change-Id: Idd92d67b93f444244f99ede9f634ef18d2945dbc
BUG: 1222013
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/10793
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
ENUM          RETCODE   ERROR
-------------------------------------------------------------
EG_INTRNL     30800     Internal Error
EG_OPNOTSUP   30801     Gluster Op Not Supported
EG_ANOTRANS   30802     Another Transaction in Progress
EG_BRCKDWN    30803     One or more brick is down
EG_NODEDWN    30804     One or more node is down
EG_HRDLMT     30805     Hard Limit is reached
EG_NOVOL      30806     Volume does not exist
EG_NOSNAP     30807     Snap does not exist
EG_RBALRUN    30808     Rebalance is running
EG_VOLRUN     30809     Volume is running
EG_VOLSTP     30810     Volume is not running
EG_VOLEXST    30811     Volume exists
EG_SNAPEXST   30812     Snapshot exists
EG_ISSNAP     30813     Volume is a snap volume
EG_GEOREPRUN  30814     Geo-Replication is running
EG_NOTTHINP   30815     Bricks are not thinly provisioned
Change-Id: I49a170cdfd77df11fe677e09f4e063d99b159275
BUG: 1212413
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/10588
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Instead of including config.h in each file, have the config.h
definitions supplied from the compiler command line (the -include
option).
When a .c file tested for a certain #define and config.h was not
included, incorrect assumptions were made. With this change, that
can not happen again.
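In automake terms the idea is roughly the one-liner below; the exact
placement of the flag is illustrative, not the precise build change:

    AM_CPPFLAGS = -include $(top_builddir)/config.h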
BUG: 1222319
Change-Id: I4f9097b8740b81ecfe8b218d52ca50361f74cb64
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/10808
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
When the promotion/demotion daemon starts, it uses the same pidfile
as rebalance. This patch introduces a separate pid file for it.
Change-Id: Ic484c53f51e00ae6b2d697748a9600b14829e23b
BUG: 1221970
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/10792
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System
The key concept here is to determine whether a directory is "clean" by
comparing its last-known-good topology to the current one for the
volume. These are stored as "commit hashes" on the directory and the
volume root respectively. The volume's commit hash changes whenever a
brick is added or removed, and a fix-layout is done. A directory's
commit hash changes only when a full rebalance (not just fix-layout)
is done on it. If all bricks are present and have a directory
commit hash that matches the volume commit hash, then we can assume
that every file is in its "proper" place. Therefore, if we look for
a file in that proper place and don't find it, we can assume it's not
on any other subvolume and *safely* skip the global (broadcast to all)
lookup.
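A hedged sketch of the lookup decision this enables; the names are
illustrative, not the actual DHT symbols:

    #include <stdint.h>

    /* A directory is "clean" when every subvolume is up and every
     * per-directory commit hash matches the volume commit hash. A miss
     * on the hashed subvolume is then authoritative, and the broadcast
     * lookup to all subvolumes can be safely skipped. */
    static int
    dir_is_clean (uint32_t vol_hash, const uint32_t *dir_hash,
                  const int *subvol_up, int subvols)
    {
            for (int i = 0; i < subvols; i++) {
                    if (!subvol_up[i] || dir_hash[i] != vol_hash)
                            return 0;   /* fall back to global lookup */
            }
            return 1;                   /* skip the broadcast lookup */
    }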
Change-Id: Id6ce4593ba1f7daffa74cfab591cb45960629ae3
BUG: 1219637
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Signed-off-by: Shyam <srangana@redhat.com>
Reviewed-on: http://review.gluster.org/7702
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
>> gluster volume info patchy
Volume Name: patchy
Type: Tier
Volume ID: 8bf1a1ca-6417-484f-821f-18973a7502a8
Status: Created
Number of Bricks: 8
Transport-type: tcp
Hot Tier :
Hot Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick1: hostname:/home/brick30
Brick2: hostname:/home/brick31
Cold Bricks:
Cold Tier Type : Disperse
Number of Bricks: 1 x (4 + 2) = 6
Brick3: hostname:/home/brick20
Brick4: hostname:/home/brick21
Brick5: hostname:/home/brick23
Brick6: hostname:/home/brick24
Brick7: hostname:/home/brick25
Brick8: hostname:/home/brick26
Change-Id: I7b9025af81263ebecd641b4b6897b20db8b67195
BUG: 1212400
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/10339
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
This fix adds support to view the number of promoted or demoted files
from the CLI. The mechanism is isomorphic to checking the status of
volumes being rebalanced:
gluster volume rebalance <vol> tier status
Change-Id: I1b11ca27355ceec36c488967c23531202030e205
BUG: 1213063
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Signed-off-by: Dan Lambright <dlambrig@redhat.com>
Reviewed-on: http://review.gluster.org/10292
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
The replace-brick operation with data migration support has been
deprecated in gluster.
With this fix the replace-brick command supports only one form:
gluster volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK> {commit force}
Change-Id: Ib81d49e5d8e7eaa4ccb5830cfec2bc081191b43b
BUG: 1094119
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/10101
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
The command gluster volume status <VOLNAME> should show the status of
the bitrot and scrubber daemons and their pid information.
Along with displaying bitrot and scrubber daemon information in the
gluster volume status command, there should be commands to show their
individual status separately. Those commands are the following.
Command to show only the bitd daemon information:
gluster volume status <VOLNAME> bitd
Command to show only the scrubber daemon information:
gluster volume status <VOLNAME> scrub
Change-Id: Id86aae1156c8c599347c98e2a538f294d37376e4
BUG: 1209752
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/10175
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Kaushal M <kaushal@redhat.com>
When ganesha.enable is set to on and features.ganesha is enabled, a
few behaviour changes should be seen in other volume operations.
1. ganesha.enable can be set to 'on' only when features.ganesha is
set to 'enable'.
2. When a gluster volume is started and the ganesha.enable key was
set to 'on', it should automatically export the volume via
NFS-Ganesha.
3. When ganesha.enable is set to 'on' and a volume is stopped, that
volume should be unexported via NFS-Ganesha.
4. gluster vol reset <volname>
If ganesha.enable was set to on, unexport the volume via NFS-Ganesha.
5. gluster vol reset all
If features.ganesha is set to enable, set it to disable as part of
reset all. This translates to tearing down the cluster.
All the above problems are fixed by checking the global key and
value; depending on the value, specific functions are called. Also,
functions related to global commands are moved to cli-cmd-global.c.
The commit phase of features.ganesha enable/disable runs the
ganesha-ha.sh setup/teardown respectively. Before the script begins,
it is important that the NFS-Ganesha service starts on all the HA
nodes. Having the service start commands in the commit phase could
lead to problems, so the prerequisite service start commands are
moved to the 'stage' phase.
Change-Id: I5a256f94f8e1310ddcd5369f329b7168b2a24c47
BUG: 1200265
Signed-off-by: Meghana Madhusudhan <mmadhusu@redhat.com>
Reviewed-on: http://review.gluster.org/10283
Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
On restart, glusterd first starts all the bricks present in the
volumes and then starts all the services. While starting the services
it may pass volinfo as NULL, which causes an assert failure in
glusterd_bitdsvc_manager and a glusterd crash.
Change-Id: Ia14cf5022da88516cdd576eb2d1e0e7b17a3782b
BUG: 1207029
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/10241
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Using a uint64_t for the peerinfo generation number was overkill for
how the generation number is used within GlusterD. It also prevented
GlusterD from running on 32-bit architectures, as uatomic_add_return
doesn't support 64-bit values on 32-bit architectures.
This change was developed on the git branch at [1]. This commit is a
combination of the following commits on the development branch.
b78dba4 Use 32-bit generation number
2c37e4b Change other generation number variables to uint32_t
[1]: https://github.com/kshlm/glusterfs/tree/urcu
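A hedged sketch of the bump-on-create pattern with the 32-bit type;
variable and function names are illustrative:

    #include <stdint.h>
    #include <urcu/uatomic.h>

    static uint32_t global_generation;   /* generation of the peer list */

    /* Every new peerinfo takes the next generation number. A 32-bit
     * uatomic_add_return is supported on 32-bit architectures too,
     * unlike the 64-bit variant. */
    static uint32_t
    next_generation (void)
    {
            return uatomic_add_return (&global_generation, 1);
    }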
Change-Id: I0f310f56a4fb97d6bcbc23255a379ed5bb1ed9e1
BUG: 1205186
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/10425
Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Emmanuel Dreyfus <manu@netbsd.org>
Tested-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
The logic for adding the 'glusterd_brickinfo->group' member and using
it to find the brick position has been taken from
http://review.gluster.org/#/c/9919. Thanks to Jeff Darcy for that.
This patch is a part of the arbiter logic implementation for 3-way
AFR, details of which can be found at
http://review.gluster.org/#/c/9656/
Change-Id: Idbfe4f29ee8e098e0102def8f38b32314316b188
BUG: 1199985
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/10257
Tested-by: NetBSD Build System
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Transaction peer lists were used in GlusterD to identify the peers
belonging to a transaction. This was needed to prevent newly added
peers from performing partial transactions, which could be incorrect.
This was accomplished by creating a separate transaction peers list
at the beginning of every transaction. A transaction peers list
referenced the peerinfo data structures of the peers which were
present at the beginning of the transaction. RCU protection of
peerinfos referenced by the transaction peers list is a hard problem
and difficult to do correctly.
To have proper RCU protection of peerinfos, the transaction peers
lists have been replaced by an alternative method to identify peers
that belong to a transaction: using the global peers list along with
generation numbers.
This change introduces a global peer-list generation number, and a
generation number for each peerinfo object. Whenever a peerinfo
object is created, the global generation number is bumped, and the
peerinfo's generation number is set to the bumped global generation.
With the above changes, the algorithm to identify peers belonging to
a transaction with RCU protection is as follows:
- At the beginning of a transaction, the current global generation
number is saved.
- To identify whether a peer belongs to the transaction:
  - Start an RCU read critical section.
  - For each peer in the global peers list, if the peer's generation
    number is not greater than the saved generation number, continue
    with the action on the peer.
  - End the RCU read critical section.
The above algorithm guarantees that
- the peer list is not modified while a transaction is iterating
through it, and
- the transaction actions are only done on peers that were present
when the transaction started.
But, as a transaction could iterate over the peers list multiple
times, the algorithm cannot guarantee that the same set of peers will
be selected every time: a peer could get deleted between two
iterations of the list within a transaction. This problem existed
with transaction peers lists as well, but unlike before it will no
longer lead to invalid memory access and potential crashes. That
problem will be addressed separately.
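A hedged sketch of the selection loop described above, using liburcu;
the struct and names are illustrative, not the actual glusterd code:

    #include <stdint.h>
    #include <urcu.h>
    #include <urcu/list.h>
    #include <urcu/rculist.h>

    struct peerinfo {
            uint32_t             generation;
            struct cds_list_head list;
    };

    static CDS_LIST_HEAD (peers);

    /* txn_generation is the global generation saved at txn start. */
    static void
    act_on_txn_peers (uint32_t txn_generation)
    {
            struct peerinfo *peer;

            rcu_read_lock ();
            cds_list_for_each_entry_rcu (peer, &peers, list) {
                    if (peer->generation > txn_generation)
                            continue;   /* joined after txn start */
                    /* ... perform the transaction action on peer ... */
            }
            rcu_read_unlock ();
    }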
This change was developed on the git branch at [1]. This commit is a
combination of the following commits on the development branch.
52ded5b Add timespec_cmp
44aedd8 Add create timestamp to peerinfo
7bcbea5 Fix some silly mistakes
13e3241 Add start time to opinfo
17a6727 Use timestamp comparisions to identify xaction peers instead
of a xaction peer list
3be05b6 Correct check for peerinfo age
70d5b58 Use read-critical sections for peer list iteration
ba4dbca Use peerinfo timestamp checks in op-sm instead of xaction peer
list
d63f811 Add more peer status checks when iterating peers list in
glusterd-syncop
1998a2a Timestamp based peer list traversal of mgmtv3 xactions
f3c1a42 Remove transaction peer lists
b8b08ee Remove unused labels
32e5f5b Remove 'npeers' usage
a075fb7 Remove 'npeers' from mgmt-v3 framework
12c9df2 Use generation number instead of timestamps.
9723021 Remove timespec_cmp
80ae2c6 Remove timespec.h include
a9479b0 Address review comments on 10147/4
[1]: https://github.com/kshlm/glusterfs/tree/urcu
Change-Id: I9be1033525c0a89276f5b5d83dc2eb061918b97f
BUG: 1205186
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/10147
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
On Linux systems we should use the libuuid from the distribution and not
bundle and statically link the contrib/uuid/ bits.
libglusterfs/src/compat-uuid.h has been introduced and should become an
abstraction layer for different UUID APIs. Non-Linux operating systems
should implement their compatibility layer there.
Once all operating systems have an implementation in compat-uuid.h, we
can remove contrib/uuid/ from the repository completely.
Change-Id: I345e5357644be2521685e00358bb8c83c4ea0577
BUG: 1206587
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/10129
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
glusterfs relies on the Linux uuid implementation, whose API is
incompatible with most other systems' uuid implementations. As a
result, libglusterfs has to embed contrib/uuid, which is the Linux
implementation, on non-Linux systems.
This embedded implementation is incompatible with the system's
built-in one, but the symbols have the same names.
Usually this is not a problem: when we link with -lglusterfs, libc's
symbols are trumped. However, there is a problem when a program not
linked with -lglusterfs dlopen()s a glusterfs component. In such a
case, libc's uuid implementation is already loaded in the calling
program and will be used instead of libglusterfs's implementation,
causing crashes.
A possible workaround is to pre-load libglusterfs in the calling
program (using LD_PRELOAD on NetBSD, for instance), but such a
mechanism is neither portable nor flexible. A much better approach is
to rename libglusterfs's uuid_* functions to gf_uuid_* to avoid any
possible conflict. This is what this change attempts.
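A before/after sketch of a call site; it assumes the uuid_t type and
the new gf_uuid_* wrappers, and the function itself is illustrative:

    static void
    make_gfid (uuid_t gfid, uuid_t copy)
    {
            /* before: resolved against libc/libuuid once dlopen()ed
             *   uuid_generate (gfid);
             *   uuid_copy (copy, gfid);                             */

            /* after: unambiguous glusterfs-private symbols */
            gf_uuid_generate (gfid);
            gf_uuid_copy (copy, gfid);
    }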
BUG: 1206587
Change-Id: I9ccd3e13afed1c7fc18508e92c7beb0f5d49f31a
Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-on: http://review.gluster.org/10017
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Remove xaction_peers from glusterd_conf_t, which was left out by the
http://review.gluster.org/#/c/9980/ patch.
Change-Id: I8494ec181ec11922861d7bad12c46d45e036637b
BUG: 1204727
Signed-off-by: anand <anekkunt@redhat.com>
Reviewed-on: http://review.gluster.org/10006
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* BitRot enable/disable CLI per volume
* Volfile generation for Scrubber
* Relevant glusterd infrastructure
Change-Id: I1212af63f93ecc52b22ee6da920e1664f66a1e39
BUG: 1170075
Original-Author: Raghavendra Bhat <raghavendra@redhat.com>
Original-Author: Venky Shankar <vshankar@redhat.com>
Original-Author: Gaurav Kumar Garg <ggarg@redhat.com>
Original-Author: Anand Nekkunti <anekkunt@redhat.com>
Reviewed-on: http://review.gluster.org/9986
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* Implement the skeleton of bit-rot xlator.
Original-Author: Raghavendra Bhat <raghavendra@redhat.com>
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Signed-off-by: Anand Nekkunti <anekkunt@redhat.com>
Change-Id: If33218bdc694f5f09cb7b8097c4fdb74d7a23b2d
BUG: 1170075
Reviewed-on: http://review.gluster.org/9710
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Vijay Bellur <vbellur@redhat.com>
This patch adds the arbiter translator into the tree. This is a server
side xlator used for replica 3 volumes. It sits above posix and will be
loaded on the 3rd (last) brick of every afr subvolume in a replica 3
configuration. It intercepts inode read/write operations: reads are
unwound with ENOTCONN, inode writes are unwound with success without
actually passing them down to posix. Metadata operations are allowed to
pass through.
The CLI for creating a 3 way replica with arbiter is also added but kept
disabled (A 'normal' 3 way replica is created instead).
This patch is a part of the arbiter logic implementation for 3 way AFR,
details of which can be found at http://review.gluster.org/#/c/9656/
Change-Id: I395b81f49d5da52c466daf5c8518f1bbad9c16fa
BUG: 1199985
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/9840
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
A tiered volume is a normal volume with some number of new bricks
representing "hot" storage. The "hot" bricks can be attached or
detached dynamically to a normal volume. When this happens, a new graph
is constructed. The root of the new graph is an instance of the tier
translator. One subvolume of the tier translator leads to the old volume,
and another leads to the new hot bricks.
attach-tier <VOLNAME> [<replica> <COUNT>] <NEW-BRICK> ... [force]
volume detach-tier <VOLNAME> [replica <COUNT>] <BRICK>
... <start|stop|status|commit|force>
gluster volume rebalance <volume> tier start
gluster volume rebalance <volume> tier stop
gluster volume rebalance <volume> tier status
The "tier start" CLI command starts a server side daemon. The daemon
initiates file level migration based on caching policies. The daemon's
status can be monitored and stopped.
Note development on the "tier status" command is incomplete. It will be
added in a subsequent patch.
When the "hot" storage is detached, the tier translator is removed
from the graph and the tiered volume reverts to its original state as
described in the volume's info file.
For more background and design see the feature page [1].
[1]
http://www.gluster.org/community/documentation/index.php/Features/data-classification
Change-Id: Ic8042ce37327b850b9e199236e5be3dae95d2472
BUG: 1194753
Signed-off-by: Dan Lambright <dlambrig@redhat.com>
Reviewed-on: http://review.gluster.org/9753
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Vijay Bellur <vbellur@redhat.com>
**********************************************************************
                    ChangeTimeRecorder (CTR) Xlator
**********************************************************************
ChangeTimeRecorder (CTR) is a server-side xlator (translator) which
sits just above the posix xlator. The main role of this xlator is to
record the access/write patterns on files residing on the brick. It
records the read (data only) and write (data and metadata) times, and
also counts how many times a file is read or written. This xlator
also captures the hard links to a file (as required by data tiering
to move files).
The CTR xlator is the consumer of libgfdb.
To enable/disable the CTR xlator:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
gluster volume set <volume-name> features.ctr-enabled {on/off}
To enable/disable frequency counter recording in the CTR xlator:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
gluster volume set <volume-name> features.record-counters {on/off}
Change-Id: I5d3cf056af61ac8e3f8250321a27cb240a214ac2
BUG: 1194753
Signed-off-by: Joseph Fernandes <josferna@redhat.com>
Reviewed-on: http://review.gluster.org/9935
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
CLI command for the bitrot feature:
volume bitrot <volname> enable|disable
The above command enables/disables the bitrot feature for the given
volume.
BUG: 1170075
Change-Id: Ie84002ef7f479a285688fdae99c7afa3e91b8b99
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Signed-off-by: Anand nekkunti <anekkunt@redhat.com>
Signed-off-by: Dominic P Geevarghese <dgeevarg@redhat.com>
Reviewed-on: http://review.gluster.org/9866
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
A new global CLI option has been introduced for NFS-Ganesha:
gluster features.ganesha enable/disable
This option is persistent and is inherited by new volumes created
after it is set.
gluster features.ganesha enable
carries out the following functions:
1. Disables gluster-nfs across the cluster.
2. Starts the NFS-Ganesha server on a subset of nodes and exports '/'.
3. Creates the HA cluster for NFS-Ganesha.
4. Writes the option into the global config file.
gluster features.ganesha disable
1. Stops the NFS-Ganesha server.
2. Tears down the HA cluster for NFS-Ganesha.
With this change, the older volume set options with the keys
"nfs-ganesha.host" and "nfs-ganesha.enable" are no longer supported.
This commit has only the CLI-related changes; another patch will be
submitted to support this feature entirely.
Change-Id: Ie4b66a16c23b33b795738654b9a68f8e2c34efe3
BUG: 1188184
Signed-off-by: Meghana Madhusudhan <mmadhusu@redhat.com>
Reviewed-on: http://review.gluster.org/9538
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
A dummy translator has been introduced as a placeholder for functions
related to managing NFS-Ganesha exports. A volume set option is
introduced to manage volume-level exports:
gluster vol set <volname> ganesha.enable ON/OFF
1. gluster volume set <volname> ganesha.enable ON
creates the export config file with a unique export ID and sends a
DBus signal to export this volume dynamically.
2. gluster vol set <volname> ganesha.enable OFF
unexports the specific volume and deletes the config file for that
volume.
This change also removes the handling of the older keys
"nfs-ganesha.enable" and "nfs-ganesha.host".
Change-Id: I8d4a0b542326a6a0c8e4711600b106274d666587
BUG: 1188184
Signed-off-by: Meghana Madhusudhan <mmadhusu@redhat.com>
Reviewed-on: http://review.gluster.org/9585
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
The peer list and the peerinfo objects are now protected using RCU.
Design patterns described in Paul McKenney's RCU dissertation [1]
(sections 5 and 6) have been used to convert existing non-RCU
protected code to RCU protected code.
Currently, we are only targeting a guarantee on the existence of the
peerinfo objects, i.e., we are only looking to protect deletes, not
all updates. We chose this because protecting all updates is a much
more complex task.
The steps used to accomplish this are:
1. Remove all long-lived direct references to peerinfo objects (apart
from the peerinfo list). This includes references in
glusterd_peerctx_t (RPC), glusterd_friend_sm_event_t (friend state
machine) and others. This way no one holds a reference to a deleted
peerinfo object.
2. Replace the direct references with indirect references, i.e., use
the peer uuid and peer hostname as indirect references to the
peerinfo object. Any reader or updater now uses the indirect
references to get to the actual peerinfo object, using
glusterd_peerinfo_find. Cases where a peerinfo cannot be found are
handled gracefully.
3. Readers get and use a peerinfo object only within an RCU read
critical section. This prevents the object from being deleted/freed
while in actual use.
4. The deletion of a peerinfo object is done in an ordered manner
(glusterd_peerinfo_destroy). The object is first removed from the
peerinfo list using an atomic list remove, but the list head is not
reset, to allow existing list readers to complete correctly. We wait
for readers to complete before resetting the list head, which removes
the object from the list completely. After this, no new readers can
get a reference to the object and it can be freed.
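A hedged sketch of the reader-side pattern from step 3;
glusterd_peerinfo_find is the lookup named above, while the wrapper
function and surrounding types are illustrative:

    static void
    use_peer (uuid_t peer_uuid)
    {
            glusterd_peerinfo_t *peerinfo = NULL;

            rcu_read_lock ();
            peerinfo = glusterd_peerinfo_find (peer_uuid, NULL);
            if (peerinfo) {
                    /* read fields here; never keep the pointer past
                     * rcu_read_unlock () */
            }
            rcu_read_unlock ();
    }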
This change was developed on the git branch at [2]. This commit is a
combination of the following commits on the development branch.
d7999b9 Protect the glusterd_conf_t->peers_list with RCU.
0da85c4 Synchronize before INITing peerinfo list head after removing
from list.
32ec28a Add missing rcu_read_unlock
8fed0b8 Correctly exit read critical section once peer is found.
63db857 Free peerctx only on rpc destruction
56eff26 Cleanup style issues
e5f38b0 Indirection for events and friend_sm
3c84ac4 In __glusterd_probe_cbk goto unlock only if peer already
exists
141d855 Address review comments on 9695/1
aaeefed Protection during peer updates
6eda33d Revert "Synchronize before INITing peerinfo list head after
removing from list."
f69db96 Remove unneeded line
b43d2ec Address review comments on 9695/4
7781921 Address review comments on 9695/5
eb6467b Add some missing semi-colons
328a47f Remove synchronize_rcu from
glusterd_friend_sm_transition_state
186e429 Run part of glusterd_friend_remove in critical section
55c0a2e Fix gluster (peer status/ pool list) with no peers
93f8dcf Use call_rcu to free peerinfo
c36178c Introduce composite struct, gd_rcu_head
[1]: http://www.rdrop.com/~paulmck/RCU/RCUdissertation.2004.07.14e1.pdf
[2]: https://github.com/kshlm/glusterfs/tree/urcu
Change-Id: Ic1480e59c86d41d25a6a3d159aa3e11fbb3cbc7b
BUG: 1191030
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/9695
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
This patch replaces usage of the libglusterfs list data structures
and API in glusterd with the list data structures and API from
liburcu. The liburcu data structures and APIs are a drop-in
replacement for the libglusterfs lists.
All usages have been changed to keep the code consistent and free
from confusion.
NOTE: glusterd_conf_t->xprt_list still uses the libglusterfs data
structures and API, as it holds rpc_transport_t objects, which is not a
part of glusterd and is not being changed in this patch.
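An illustrative mapping of the drop-in replacement at a typical call
site; the struct is an example, not glusterd code:

    #include <urcu/list.h>

    struct peer_entry {
            int                  id;
            struct cds_list_head list;   /* was: struct list_head */
    };

    static CDS_LIST_HEAD (peers);        /* was: INIT_LIST_HEAD () on a
                                          * struct list_head */

    static void
    add_peer (struct peer_entry *e)
    {
            /* was: list_add_tail (&e->list, &peers); */
            cds_list_add_tail (&e->list, &peers);
    }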
This change was developed on the git branch at [1]. This commit is a
combination of the following commits on the development branch.
6dac576 Replace libglusterfs lists with liburcu lists
a51b5ab Fix compilation issues
d98a06f Fix merge issues
a5d918e Remove merge remnant
1cca113 More style cleanup
1917be3 Address review comments on 9624/1
8d10f13 Use cds_lists for glusterd_svc_t
524ad5d Add rculist header in glusterd-conn-helper.c
646f294 glusterd: add list_add_order API honouring rcu
[1]: https://github.com/kshlm/glusterfs/tree/urcu
Change-Id: Ic613c5b6e496a677b9d3de15fc042a0492109fb0
BUG: 1191030
Signed-off-by: Kaushal M <kaushal@redhat.com>
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/9624
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-by: Anand Nekkunti <anekkunt@redhat.com>