| Commit message | Author | Age | Files | Lines |
| |
For a first-time setup, or if "gluster nfs-ganesha enable" fails, the global option
"nfs-ganesha" won't be stored in glusterd's dictionary. In both cases
"gluster nfs-ganesha disable" throws the following error:
"nfs-ganesha: failed: nfs-ganesha is already (null)d."
This patch also adds the missing prompt for nfs-ganesha disable in 3.10.
Change-Id: I7c9fd6dabedc0cfb14c5190b3554bc63a6bc0340
BUG: 1427759
Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
Reviewed-on: https://review.gluster.org/16791
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: soumya k <skoduri@redhat.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
| |
With this, we will be able to trigger statedumps on remote Gluster
clients, mainly targeted at applications using libgfapi.
Design:
A SIGUSR signal is the most common way of taking a statedump in Gluster,
but it cannot be used for libgfapi-based processes, as the process
loading the library might have already consumed the SIGUSR signal. Hence
the command-based approach.
One issues a Gluster command to initiate a statedump on the
libgfapi-based client. The command takes a hostname and PID as
arguments. All the glusterds in the cluster check whether they are
connected to the specified hostname, and send an RPC request to all the
connected clients from that hostname (via the mgmt connection).
> URL: http://review.gluster.org/16357
> BUG: 1169302
> Signed-off-by: Poornima G <pgurusid@redhat.com>
> [ndevos: minor fixes and split patch in smaller pieces]
> Reviewed-on-master: https://review.gluster.org/9228
> Reviewed-by: Niels de Vos <ndevos@redhat.com>
> Tested-by: Niels de Vos <ndevos@redhat.com>
> Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
> Reviewed-by: Samikshan Bairagya <samikshan@gmail.com>
BUG: 1418981
Change-Id: Icbe4d2f026b32a2c7d5535e1bfb2cdaaff042e91
Signed-off-by: Shyam <srangana@redhat.com>
Reviewed-on: https://review.gluster.org/16601
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
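As an editorial illustration of the flow described above: a minimal invocation sketch, assuming the subcommand introduced by this patch series takes the client's hostname and PID as shown (volume name, hostname and PID are placeholders):

    # Assumed syntax: gluster volume statedump <VOLNAME> client <hostname>:<pid>
    # Run on any glusterd node; the dump itself is produced by the libgfapi client process.
    gluster volume statedump gfapi-vol client app01.example.com:4312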
| |
Back-port of: https://review.gluster.org/#/c/16463/
PROBLEM: gluster v rebalance <volname> tier start works even after
the switch of tier to the service framework.
This lets the user have two tierd processes for the same volume.
FIX: Checking for each process would make the new code hard
to maintain, so support for the old commands is removed.
>Change-Id: I5b0974b2dbb74f0bee8344b61c7f924300ad73f2
>BUG: 1415590
>Signed-off-by: hari gowtham <hgowtham@redhat.com>
>Reviewed-on: https://review.gluster.org/16463
>Smoke: Gluster Build System <jenkins@build.gluster.org>
>Tested-by: hari gowtham <hari.gowtham005@gmail.com>
>NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
>CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
>Reviewed-by: N Balachandran <nbalacha@redhat.com>
>Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Change-Id: Ib996d89b1bd250176a3f5eeb369b71b0a4f95968
BUG: 1419868
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Reviewed-on: https://review.gluster.org/16555
Smoke: Gluster Build System <jenkins@build.gluster.org>
Tested-by: hari gowtham <hari.gowtham005@gmail.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
| |
tierd is implemented by separating it from the rebalance process.
The commands affected (illustrated after this entry):
1) Attach tier will trigger this process instead of the old one.
2) tier start and tier start force will also trigger this process.
3) volume status [tier] will show the tier daemon as a process instead
of a task, and both tier status and tier detach status work.
4) tier stop is implemented.
5) detach tier is implemented separately, along with a new detach tier
status.
6) volume tier <volname> status will work using the changes.
7) volume set works.
This patch has separated the tier translator from the legacy
DHT rebalance code. It now sends the RPCs from the CLI
to glusterd separately from the DHT rebalance code.
The daemon is now a service, similar to the snapshot daemon,
and can be viewed using the volume status command.
The code for the validation and commit phases is the same
as the earlier tier validation code in DHT rebalance.
The "brickop" phase has been changed so that the status
command can use this framework.
The service management framework is now used.
DHT rebalance does not use this framework.
This service framework takes care of:
*) spawning the daemon, killing it, and other such operations.
*) volume set options, which are written to the volfile.
*) restart and reconfigure functions. Restart is used to restart
the daemon at two points:
1) after gluster goes down and comes up, and
2) to stop detach tier.
*) reconfigure is used to make immediate volfile changes
without restarting the daemon.
It also has the code to rewrite the volfile for topological
changes (which come into play during add-brick and remove-brick).
With this patch the log, pid, and volfile are separated
and put into their respective directories.
Change-Id: I3681d0d66894714b55aa02ca2a30ac000362a399
BUG: 1313838
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Reviewed-on: http://review.gluster.org/13365
Smoke: Gluster Build System <jenkins@build.gluster.org>
Tested-by: hari gowtham <hari.gowtham005@gmail.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
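A hedged sketch of the affected commands listed in this entry, using a placeholder volume and brick paths (the attach arguments follow the tier syntax described elsewhere in this log):

    gluster volume tier tiervol attach replica 2 node1:/bricks/hot1 node2:/bricks/hot2
    gluster volume tier tiervol start            # spawns tierd through the service framework
    gluster volume tier tiervol status           # tierd is reported as a daemon, not a task
    gluster volume tier tiervol detach start
    gluster volume tier tiervol detach status
    gluster volume status tiervol                # tier daemon listed like other services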
| |
Earlier this command worked only when the master and slave volume args were
given. Now it also works when no volume args are given (i.e. 'gluster
volume geo-replication status detail') and when only the master volume is
given (i.e. 'gluster volume geo-replication <master-vol> status detail').
BUG: 1410071
Signed-off-by: SonaArora <sarora@redhat.com>
Change-Id: I1a39241a4a5d70c20b2326599b3cccd8f1f6dc78
Reviewed-on: http://review.gluster.org/16347
Smoke: Gluster Build System <jenkins@build.gluster.org>
Tested-by: Kotresh HR <khiremat@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Kotresh HR <khiremat@redhat.com>
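For illustration, the three invocations now supported (master and slave names are placeholders; the full master/slave form follows the synopsis used elsewhere in this log):

    gluster volume geo-replication status detail                                 # all sessions
    gluster volume geo-replication mastervol status detail                       # all sessions of one master
    gluster volume geo-replication mastervol slavehost::slavevol status detail   # a single session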
| |
When there are already existing non-granular indices created that are
yet to be healed, if the granular-entry-heal option is toggled from 'off' to
'on', AFR self-heal, whenever it kicks in, will look for granular
indices in 'entry-changes'. Because of the absence of name indices,
the granular entry healing logic will fail to heal these directories, and
worse yet will unset pending extended attributes on the assumption that
there are no entries that need heal.
To get around this, a new CLI is introduced which invokes the glfsheal
program to figure out whether, at the time an attempt is made to enable
granular entry heal, there are pending heals on the volume OR there
are one or more bricks that are down. If either of these is true, the
command fails with an appropriate error.
New CLI: gluster volume heal <VOL> granular-entry-heal {enable,disable}
Change-Id: I1f4fe8162813b9068e198965d94169fee4adc099
BUG: 1370410
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/15747
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
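A short sketch of the new CLI in use; the volume name is a placeholder:

    # Fails with an appropriate error if heals are pending or any brick is down
    gluster volume heal testvol granular-entry-heal enable
    # Switch back to non-granular entry indices
    gluster volume heal testvol granular-entry-heal disable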
| |
The checkpoint command accepts "now" or any other time
in "%Y-%m-%d %H:%M:%S" format as the label.
This patch adds validation for the input label; checkpoint set
will fail for an invalid label.
BUG: 1388401
Change-Id: I23518c151ab4b294f64cae3b78baaacb3d8f7b82
Signed-off-by: Aravinda VK <avishwan@redhat.com>
Reviewed-on: http://review.gluster.org/15721
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Saravanakumar Arumugam <sarumuga@redhat.com>
Reviewed-by: Kotresh HR <khiremat@redhat.com>
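A hedged example of the two accepted label forms; the session names are placeholders, and the 'config checkpoint' wording is an assumption since this entry only defines the accepted labels:

    gluster volume geo-replication mastervol slavehost::slavevol config checkpoint now
    gluster volume geo-replication mastervol slavehost::slavevol config checkpoint "2017-03-01 10:30:00"
    # Any label that is neither "now" nor a valid "%Y-%m-%d %H:%M:%S" time is now rejected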
| |
Change-Id: Ice70003f7295d2d8af9877e1ecf6f3c81422b30c
BUG: 1376693
Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
Reviewed-on: http://review.gluster.org/15805
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: soumya k <skoduri@redhat.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
| |
http://review.gluster.org/14085 fixes a "pragma leak" where the
generated rpc/xdr headers have a pair of pragmas that disable these
warnings. With the warnings disabled, many unused variables have
crept into the code base.
And 14085 won't pass its own smoke test until all these warnings are
fixed.
Change-Id: Ib78fed1fab00ba22a90f2148a343170488fa05cf
BUG: 1369124
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/15523
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
| |
The command basically allows a replace-brick with the src and
dst bricks being the same.
Usage:
gluster v reset-brick <volname> <hostname:brick-path> start
This command kills the brick to be reset. Once it is run, the
admin can do whatever other manual operations are needed,
like configuring some options for the brick. Once that is done,
resetting the brick can be completed with the following command.
gluster v reset-brick <vname> <hostname:brick> <hostname:brick> commit {force}
This does the job of resetting the brick. The 'force' option should be used
when the brick already contains the volinfo id.
Problem: On doing a disk replacement of a brick in a replicate volume,
the following 2 scenarios may occur:
a) there is a chance that reads are served from this replaced-disk brick,
which leads to empty reads; b) potential data loss if the next writes succeed
only on the replaced brick, and heal is done to the other bricks from this one.
Solution: After disk replacement, make sure that the reset-brick command is
run for that brick so that pending markers are set for the brick and it
is not chosen as a source for reads and heal. But, as of now, replace-brick
for the same brick path is not allowed. In order to fix the above-mentioned
problem, same-brick-path replace-brick is needed.
With this patch, reset-brick commit {force} will be allowed even when the
source and destination <hostname:brickpath> are identical as long as
1) the destination brick is not alive, and
2) the source and destination bricks have the same brick uuid and path.
Also, the destination brick after replace-brick will use the same port
as the source brick.
Change-Id: I440b9e892ffb781ea4b8563688c3f85c7a7c89de
BUG: 1266876
Signed-off-by: Anuradha Talur <atalur@redhat.com>
Reviewed-on: http://review.gluster.org/12250
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Ashish Pandey <aspandey@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
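Putting the two steps above together, a sketch of the disk-replacement workflow (volume, host and brick path are placeholders):

    # 1. Take the brick offline before servicing the disk
    gluster volume reset-brick repvol node1:/bricks/b1 start
    # ... replace the disk, recreate and remount the filesystem at the same path ...
    # 2. Bring the same brick back; 'force' is needed if the brick already carries a volinfo id
    gluster volume reset-brick repvol node1:/bricks/b1 node1:/bricks/b1 commit force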
| |
CID = 1357865, 1357866
Change-Id: I05eb345e9357d889872d259a600759392b188bb3
BUG: 789278
Signed-off-by: Muthu-vigneshwaran <mvignesh@redhat.com>
Reviewed-on: http://review.gluster.org/15148
Tested-by: Muthu Vigneshwaran
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Prashanth Pai <ppai@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Manikandan Selvaganesh <mselvaga@redhat.com>
| |
http://review.gluster.org/14085 fixes a "leak", via the
generated rpc/xdr headers, of pragmas that mask these warnings.
However, 14085 won't pass the smoke test until all the warnings are
fixed.
Change-Id: Ifc33762cf62259961ceb35ae9ac3cbec7094b703
BUG: 1369124
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
Reviewed-on: http://review.gluster.org/15238
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>
Reviewed-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
| |
Currently there is no CLI that can be used to get the
local state representation of the cluster as maintained in glusterd
in a readable as well as parseable format.
The CLI added has the following usage:
# gluster get-state [daemon] [odir <path/to/output/dir>] [file <filename>]
This dumps data points that reflect the local state
representation of the cluster as maintained in glusterd (no other
daemons are supported as of now) to a file inside the specified
output directory. The default output directory and filename are
/var/run/gluster and glusterd_state_<timestamp> respectively. The
option for specifying the daemon name leaves room to add support for
other daemons in the future. The following data points are captured
as of now to represent the state from the local glusterd's point of view:
* Peer:
- Primary hostname
- uuid
- state
- connection status
- List of hostnames
* Volumes:
- name, id, transport type, status
- counts: bricks, snap, subvol, stripe, arbiter, disperse,
redundancy
- snapd status
- quorum status
- tiering related information
- rebalance status
- replace bricks status
- snapshots
* Bricks:
- Path, hostname (this information is shown for all bricks)
- port, rdma port, status, mount options, filesystem type and
signed in status for bricks running locally.
* Services:
- name, online status for initialised services
* Others:
- Base port, last allocated port
- op-version
- MYUUID
Change-Id: I4a45cc5407ab92d8afdbbd2098ece851f7e3d618
BUG: 1353156
Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
Reviewed-on: http://review.gluster.org/14873
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
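For illustration, using the usage line above (the first form relies on the defaults named in this entry; the second passes everything explicitly):

    # Writes /var/run/gluster/glusterd_state_<timestamp> by default
    gluster get-state
    # Explicit daemon name, output directory and filename
    gluster get-state glusterd odir /tmp file gdstate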
| |
The bitrot scrubber takes 'hourly/daily/biweekly/monthly'
as the values for 'scrub-frequency'. There is no way
to schedule the scrubbing when the admin wants it.
On-demand scrubbing brings in the new option 'ondemand'
with which the admin can start scrubbing on demand.
It starts the scrubbing immediately.
On-demand scrubbing succeeds only if the scrubber
is 'Active (Idle)' (waiting for its next frequency
cycle to start scrubbing). It is not entertained when
the scrubber is 'Paused' or already running.
Here is the command-line syntax:
gluster volume bitrot <vol name> scrub ondemand
Change-Id: I84c28904367eed827a7dae8d6a535c14b28e9f4d
BUG: 1366195
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/15111
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
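A quick sketch of triggering an on-demand run (volume name is a placeholder; scrub status is the command documented elsewhere in this log):

    gluster volume bitrot testvol scrub status     # confirm the scrubber is 'Active (Idle)'
    gluster volume bitrot testvol scrub ondemand   # start scrubbing immediately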
| |
get_new_dict/dict_destroy is causing confusion, with dict_new/dict_destroy or
get_new_dict/dict_unref being used instead of the matching dict_new/dict_unref pair.
Change-Id: I4cc69f5b6711d720823395e20fd624a0c6c1168c
BUG: 1296043
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/13183
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
| |
Bitrot scrub-frequency supports "hourly|daily|weekly|biweekly|monthly",
but this is painful for testing, as the minimum scrub interval is an hour.
Hence a scrub interval of a minute is introduced to ease testing.
It is intentionally not exposed in the bitrot command help as it is
only for testing.
e.g.,
gluster vol bitrot <volname> scrub-frequency minute
Change-Id: I155a65298d3fad5ae9e529d9c7d4b0d25fa297c0
BUG: 1351537
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/14836
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
| |
Set the stime xattr at all the brick roots to (0,0) if the argument
reset-sync-time has been provided on the command line.
To avoid testing against directory-specific stime, the remote
stime is assumed to be minus_infinity if the root directory
stime is set to (0,0) before the directory scan begins.
This triggers a full volume resync to the slave in the case of a
geo-rep session recreation with the same master-slave volume
pair.
Command synopsis:
gluster volume geo-replication <MASTERVOL> <SLAVE>::<SLAVEVOL> delete \
[reset-sync-time]
The gluster CLI man page is updated to include the new sub-command reset-sync-time.
Change-Id: Ie4ce03b9425ed9bb81eda8681058c0fc6f990948
BUG: 1311926
Signed-off-by: Milind Changire <mchangir@redhat.com>
Reviewed-on: http://review.gluster.org/14051
Reviewed-by: Kotresh HR <khiremat@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Aravinda VK <avishwan@redhat.com>
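A one-line illustration of the synopsis above (session names are placeholders); recreating the session with the same master-slave pair afterwards then triggers the full resync described here:

    # Delete the session and zero the brick-root stime markers
    gluster volume geo-replication mastervol slavehost::slavevol delete reset-sync-time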
| |
A downstream coverity run has revealed the following coverity defects.
Since the downstream code is a clone of a specific upstream branch, the defects
hold true for upstream as well.
Defect type: NEGATIVE_RETURNS
xlators/mgmt/glusterd/src/glusterd-rpc-ops.c:641: negative_returns: "op_errno"
is passed to a parameter that cannot be negative.
Defect type: BUFFER_SIZE_WARNING
xlators/mgmt/glusterd/src/glusterd-volume-ops.c:2124: buffer_size_warning:
Calling strncpy with a maximum size argument of 261 bytes on destination array
"volinfo->volname" of size 261 bytes might leave the destination string
unterminated.
Defect type: BUFFER_SIZE_WARNING
xlators/mgmt/glusterd/src/glusterd-volgen.c:4888: buffer_size_warning: Calling
strncpy with a maximum size argument of 261 bytes on destination array
"volinfo->volname" of size 261 bytes might leave the destination string
unterminated.
Defect type: STRING_OVERFLOW
xlators/mgmt/glusterd/src/glusterd-volgen.c:3449: string_overflow: You might
overrun the 256 byte destination string "tmp_volname" by writing 261 bytes from
"volinfo->volname".
Defect type: BUFFER_SIZE_WARNING
xlators/mgmt/glusterd/src/glusterd-utils.c:3392: buffer_size_warning: Calling
strncpy with a maximum size argument of 261 bytes on destination array
"new_volinfo->volname" of size 261 bytes might leave the destination string
unterminated.
Defect type: NO_EFFECT
xlators/mgmt/glusterd/src/glusterd-utils.c:7359: remediation: Was
"rebal->rebalance_id" formerly declared as a pointer?
Defect type: USE_AFTER_FREE
xlators/mgmt/glusterd/src/glusterd-utils.c:7115: pass_freed_arg: Passing freed
pointer "volinfo" as an argument to "glusterd_friend_contains_vol_bricks".
Defect type: DEADCODE
cli/src/cli-cmd-parser.c:1767: dead_error_begin: Execution cannot reach this
statement: "ret = -1;".
Change-Id: Ie941bdf31923e2f39618dd94bfae16fdb3ad65f1
BUG: 789278
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/14818
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
| |
A negative case like `gluster volume create volname arbiter 3 /bricks{1..3}`
must not crash. The 'arbiter' keyword is valid only for (3-way) replica volumes.
When run without the fix, the .t that is added will crash and create a core,
but will still pass all TESTs. Since the regression framework fails
the .t if it creates a core, we can consider it a valid test 'that
fails without the fix'.
Change-Id: Ie2d7ced66025ea3617d30f6f823b22401e6d2fde
BUG: 1346821
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/14738
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
| |
Do not refer to obsolete syntax when throwing an error on detach tier.
Throw a warning if the user tries to commit a detach tier before
starting (or completing) the decommission process.
Change-Id: I9df28c1b42b8ab1790e1db5c0cd518432146c1d1
BUG: 1337227
Signed-off-by: Dan Lambright <dlambrig@redhat.com>
Reviewed-on: http://review.gluster.org/14438
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
| |
1. Provide a command to convert replica 2 volumes to arbiter volumes.
Existing self-heal logic will automatically heal the file hierarchy into
the arbiter brick, the progress of which can be monitored using the
heal info command.
Syntax: gluster volume add-brick <VOLNAME> replica 3 arbiter 1
<HOST:arbiter-brick-path>
2. Add checks when removing bricks from arbiter volumes:
- When converting from arbiter to replica 2 volume, allow only arbiter
brick to be removed.
- When converting from arbiter to plain distribute volume, allow only if
arbiter is one of the bricks that is removed.
3. Some clean-up:
- Use GD_MSG_DICT_GET_SUCCESS instead of GD_MSG_DICT_GET_FAILED to
log messages that are not failures.
- Remove unused variable `brick_list`
- Move 'brickinfo->group' related functions to glusterd-utils.
Change-Id: Ic87b8c7e4d7d3ab03f93e7b9f372b314d80947ce
BUG: 1318289
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/14126
Smoke: Gluster Build System <jenkins@build.gluster.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
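For example, converting an existing replica 2 volume using the syntax above (host and brick path are placeholders):

    # Add a third, arbiter brick to a replica 2 volume
    gluster volume add-brick repvol replica 3 arbiter 1 node3:/bricks/arbiter1
    # Self-heal then populates the arbiter brick; watch progress with:
    gluster volume heal repvol info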
| |
Also fix a missing bang (!) in the "#!/bin/bash" line of shell scripts.
Change-Id: I567a4be8f0f31f6285550f243fe802895f6bc43b
BUG: 1336793
Reported-by: Patrick Matthäi <pmatthaei@debian.org>
Signed-off-by: Kaleb S KEITHLEY <kkeithle@redhat.com>
Reviewed-on: http://review.gluster.org/14398
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kotresh HR <khiremat@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
| |
Extended the CLI to include support for split-brain resolution based on
mtime. The command syntax is:
gluster volume heal <VOLNAME> split-brain latest-mtime <FILE>
where <FILE> can be either the full file name as seen from the root of the
volume or the gfid-string representation of the file.
Change-Id: I7a16f72ff1a4495aa69f43f22758a9404e958b4f
BUG: 1321322
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/13828
Smoke: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
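An illustrative invocation of the new policy (volume and path are placeholders):

    # <FILE> may also be given as the file's gfid-string instead of its path
    gluster volume heal repvol split-brain latest-mtime /dir/a.txt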
| |
Setting any user.* option can have multiple words in the key, and hence the
process of trimming whitespace is skipped for such keys.
Change-Id: Ia295428d7cda55e5470fef668325bea497f2333d
BUG: 1316499
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/13669
Smoke: Gluster Build System <jenkins@build.gluster.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Kaushal M <kaushal@redhat.com>
| |
snapshot info was using a different dictionary key
compared to other snapshot commands. This was
causing a recurring error log entry in cli.log.
Change-Id: I9eb9a4541c10a45ec7673a23c89c85a7dce9f3ec
BUG: 1316819
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/13677
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
| |
cli/src/cli-cmd-parser.c (chenk)
cli/src/cli-xml-output.c (spandit)
cli/src/cli.c (chenk)
libglusterfs/src/common-utils.c (vmallika)
libglusterfs/src/gfdb/gfdb_sqlite3.c (jfernand +1)
rpc/rpc-transport/socket/src/socket.c (?)
xlators/cluster/afr/src/afr-transaction.c (?)
xlators/cluster/dht/src/dht-common.h (srangana +2)
xlators/cluster/dht/src/dht-selfheal.c (srangana +2)
xlators/debug/io-stats/src/io-stats.c (R. Wareing)
xlators/features/barrier/src/barrier.c (vshastry)
xlators/features/bit-rot/src/bitd/bit-rot-scrub.h (vshankar +1)
xlators/features/shard/src/shard.c (kdhananj +1)
xlators/mgmt/glusterd/src/glusterd-ganesha.c (skoduri)
xlators/mgmt/glusterd/src/glusterd-handler.c (atinmu)
xlators/mgmt/glusterd/src/glusterd-op-sm.h (atinmu)
xlators/mgmt/glusterd/src/glusterd-snapshot.c (spandit)
xlators/mgmt/glusterd/src/glusterd-syncop.c (atinmu)
xlators/mgmt/glusterd/src/glusterd-volgen.c (atinmu)
xlators/protocol/client/src/client-messages.h (mselvaga +1)
xlators/storage/bd/src/bd-helper.c (M. Mohan Kumar)
xlators/storage/bd/src/bd.c (M. Mohan Kumar)
xlators/storage/posix/src/posix.c (nbalacha +1)
Change-Id: I85934fbcaf485932136ef3acd206f6ebecde61dd
BUG: 1293133
Signed-off-by: Kaleb S KEITHLEY <kkeithle@redhat.com>
Reviewed-on: http://review.gluster.org/13031
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
| |
When the limit value is set to 0, quota assumes that the limit is not
set and does not enforce it. The patch fixes this issue by
restricting the user from setting a limit value of 0.
Change-Id: I316653165cd9766600f95b5f23e5b8abcc2b41e7
BUG: 1302554
Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Reviewed-on: http://review.gluster.org/13309
Smoke: Gluster Build System <jenkins@build.gluster.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Kaushal M <kaushal@redhat.com>
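A hedged sketch of the behaviour change, assuming the usual limit-usage sub-command (volume, path and size are placeholders):

    # Now rejected: a limit of 0 no longer leaves quota silently unenforced
    gluster volume quota testvol limit-usage /projects 0
    # Set a real limit instead
    gluster volume quota testvol limit-usage /projects 10GB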
| |
The start command doesn't restart the tier daemon if the daemon
is already running on one of the nodes. Hence, to bring up tierd on the
nodes where the daemon is down, the force command is implemented.
It skips the check for a running tierd.
Change-Id: I0037d3e5ecfe56637d0da201a97903c435d26436
BUG: 1292112
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Reviewed-on: http://review.gluster.org/12983
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Tested-by: Dan Lambright <dlambrig@redhat.com>
| |
Currently the scrub status command does not display the list of all the bad
files. All the bad files are available in the bitd daemon.
With this patch the scrub status command will display the list of all the
bad files.
Change-Id: If09babafaf5d7cf158fa79119abbf5b986027748
BUG: 1207627
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/12720
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
| |
The CLI command for bitrot scrub status will be:
gluster volume bitrot <volname> scrub status
The above command will show the statistics of the bitrot scrubber.
Upon execution it will show some common
scrubber tunables of volume <VOLNAME>, followed by
the scrubber statistics of individual nodes.
Sample output for a single node:
Volume name : <VOLNAME>
State of scrub: Active
Scrub frequency: biweekly
Bitrot error log location: /var/log/glusterfs/bitd.log
Scrubber error log location: /var/log/glusterfs/scrub.log
=========================================================
Node name:
Number of Scrubbed files:
Number of Unsigned files:
Last completed scrub time:
Duration of last scrub:
Error count:
=========================================================
This is just infrastructure. The list of bad files, last scrub
time and error count values will be taken care of by the
http://review.gluster.org/#/c/12503/ and
http://review.gluster.org/#/c/12654/ patches.
Change-Id: I3ed3c7057c9d0c894233f4079a7f185d90c202d1
BUG: 1207627
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/10231
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
| |
Geo-replication uses the default ssh port 22 for setup,
i.e., to distribute ssh keys to slaves. In container
environments, a custom port number might be used.
Hence, to support a custom ssh port number, an option
is provided in the geo-rep create command to take the
same.
Change-Id: I0fb61959b1c085342b8e4c21ac4e076fba5462f1
BUG: 1276028
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/12504
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Aravinda VK <avishwan@redhat.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
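A hedged example; the 'ssh-port' option name and the trailing 'push-pem' are assumptions about the create command (this entry only states that an option was added), and all names and the port are placeholders:

    # Create a geo-rep session whose ssh key distribution uses port 2222
    gluster volume geo-replication mastervol slavehost::slavevol create ssh-port 2222 push-pem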
| |
command
Change-Id: Idf7664d509156ce46ef4308ffc07fb556a0aedd2
BUG: 1268755
Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Reviewed-on: http://review.gluster.org/12297
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: mohammed rafi kc <rkavunga@redhat.com>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
| |
The current detach-tier CLI command supports 'commit force'.
This is being deprecated in favour of 'force'.
So the new syntax would be:
volume detach-tier <VOLNAME> <start|stop|status|commit|force>
Change-Id: Ie86dfd72341078c0a1be94767f523730911312ef
BUG: 1261862
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/12151
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Tested-by: Dan Lambright <dlambrig@redhat.com>
| |
Currently the tier feature piggybacks on the rebalance command
syntax to obtain status, and this is clumsy. Introduce a new
tier command that can do tier-specific operations, starting
with volume status to display counters.
Old commands:
gluster volume attach-tier <vol> [replica count] {bricklist..}
gluster volume detach-tier <vol> {start|stop|commit}
New commands:
gluster volume tier <vol> attach [replica count] {bricklist} |
detach {start|stop|commit} |
status
Change-Id: Ic07b3c6260588162de7d34380f8cbd3d8a7f35d3
BUG: 1255693
Signed-off-by: Dan Lambright <dlambrig@redhat.com>
Reviewed-on: http://review.gluster.org/11984
Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
| |
There are three kinds of inline functions: plain inline, extern inline,
and static inline. All three have been removed from .c files, except
those in "contrib" which aren't our problem. Inlines in .h files, which
are overwhelmingly "static inline" already, have generally been left
alone. Over time we should be able to "lower" these into .c files, but
that has to be done in a case-by-case fashion requiring more manual
effort. This part was easy to do automatically without (as far as I can
tell) any ill effect.
In the process, several pieces of dead code were flagged by the
compiler, and were removed.
Change-Id: I56a5e614735c9e0a6ee420dab949eac22e25c155
BUG: 1245331
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: http://review.gluster.org/11769
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
| |
Problem: snapshot delete all command fails with --xml option
Fix: Provided xml support for delete all command
Change-Id: I77cad131473a9160e188c783f442b6a38a37f758
BUG: 1257533
Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-on: http://review.gluster.org/12027
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
| |
Currently, if an absolute path is not entered in
"gluster volume quota <vol-name> list <path>",
it just shows the header (Path Hard-limit Soft-limit...)
instead of showing an error message.
With this patch, it shows an error asking the user to enter an absolute path.
Change-Id: I2c3d34bfdc7b924d00b11f8649b73a5069cbc2dc
BUG: 1245558
Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Reviewed-on: http://review.gluster.org/11738
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
| |
Change-Id: I14c049c84c468b6415a1de45441b2fed94e8ed4b
BUG: 1240654
Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Reviewed-on: http://review.gluster.org/11566
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
| |
Problem:
Snapshot fails for a volume configured with geo-rep
if the geo-rep session is created with the root user using the following
syntax.
gluster vol geo-rep <master_vol> root@<slave_host>::<slave_vol>
It works fine if created with the following syntax.
gluster vol geo-rep <master_vol> <slave_host>::<slave_vol>
Cause:
Geo-rep maintains a persistent dictionary of the slave associated
with the master volume. The dictionary saves the slave info
along with 'root@', as sent from the cli.
While constructing the working dir path to copy
configuration files, snapshot constructs it using this dictionary.
But the actual working dir is created without considering
'root@'. Hence the issue.
Fix:
The fix is done at two layers.
1. Parse and neglect 'root@' in the cli itself.
2. For existing geo-rep sessions and upgrade scenarios,
parse and neglect 'root@' in the snapshot code as well.
Change-Id: If4e04f7f776ef71df4dd1e7e053ef75db98762b2
BUG: 1231789
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/11233
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
| |
Consider a volume that is exported via NFS-Ganesha. Stopping this
volume will automatically unexport the volume. Starting this volume
should automatically export it. Although the logic was already there,
there was a bug in it. Fixing the same by introducing a hook script.
Also with the new CLI options, the hook script S31ganesha-set.sh
is no longer required. Hence, removing the same.
Adding a comment to tell the user that one of the CLI
commands will take a few minutes to complete.
Change-Id: Ibff769ca04fef0c2a129c83fe31fc9c869350e8d
BUG: 1231738
Signed-off-by: Meghana Madhusudhan <mmadhusu@redhat.com>
Reviewed-on: http://review.gluster.org/11247
Reviewed-by: soumya k <skoduri@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
| |
Currently bitrot uses a 120-second waiting time for an object to be signed
after all fops are released. This signing waiting time value should be tunable.
The command for changing the signing waiting time will be:
# gluster volume bitrot <VOLNAME> signing-time <waiting time value in seconds>
Change-Id: I89f3121564c1bbd0825f60aae6147413a2fbd798
BUG: 1228680
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Reviewed-on: http://review.gluster.org/11105
| |
shared storage
Introducing a global volume set option (cluster.enable-shared-storage)
which helps create and set up the shared storage meta volume.
gluster volume set all cluster.enable-shared-storage enable
On enabling this option, the system analyzes the number of peers
in the cluster which are currently connected, and chooses three
such peers (including the node the command is issued from). From these
peers a volume (gluster_shared_storage) is created. Depending on the
number of peers available, the volume is either a replica 3
volume (if there are 3 connected peers) or a replica 2 volume (if there
are 2 connected peers). "/var/run/gluster/ss_brick" serves as the
brick path on each node for the shared storage volume. We also mount
the shared storage at "/var/run/gluster/shared_storage" on all the nodes
in the cluster as part of enabling this option. If there is only one node
in the cluster, or only one node is up, then the command will fail.
Once the volume is created and mounted, the maintenance of the
volume, like adding bricks, removing bricks etc., is expected to
be the onus of the user.
On disabling the option, we provide the user a warning, and on
affirmation from the user we stop the shared storage volume and unmount
it from all the nodes in the cluster.
gluster volume set all cluster.enable-shared-storage disable
Change-Id: Idd92d67b93f444244f99ede9f634ef18d2945dbc
BUG: 1222013
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/10793
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
| |
heal-op was being incorrectly set to
GF_SHD_OP_SBRAIN_HEAL_FROM_BIGGER_FILE.
Change-Id: I4d4461c7737feae30102e82f7788083017485669
BUG: 1221128
Signed-off-by: Anuradha <atalur@redhat.com>
Reviewed-on: http://review.gluster.org/10771
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
| |
Instead of including config.h in each file, have the
config.h included from the compiler command line (the -include option).
When a .c file tests for a certain #define and config.h was not
included, incorrect assumptions were made. With this change, it cannot
happen again.
BUG: 1222319
Change-Id: I4f9097b8740b81ecfe8b218d52ca50361f74cb64
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/10808
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
| |
This patch reimplements the existing scrub-frequency mechanism used
to schedule scrubber runs. The existing mechanism uses periodic
sleeps (waking up periodically on minimum granularity) and
performs a number of tracking checks based on counters and
sleep times. This patch does away with all the nifty counters
and uses the timer-wheel to schedule scrub runs.
Scheduling changes are performed by merely calculating the new
expiry time and calling mod_timer() [mod_timer_pending() in
some cases], making the code more debuggable and easier to
follow. This also introduces an "hourly" scrubbing tunable as an
aid for testing scrubbing during the development/testing cycle.
One could also implement on-demand scrubbing with ease: by
invoking mod_timer() with an expiry of one (1) second, thereby
scheduling a scrub run the very next second.
Change-Id: I6c7c5f0c6c9f886bf574d88c04cde14b76e60a8b
BUG: 1224596
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Reviewed-on: http://review.gluster.org/10893
Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
Tested-by: NetBSD Build System
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
| |
Fix for handling cli output for attach-tier and
detach-tier
Change-Id: I4d17f4b09612754fe1b8cec6c2e14927029b9678
BUG: 1211562
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/10284
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
| |
Some operations like add-brick, remove-brick, rebalance and
replace-brick are not supported on a tiered volume,
but there is no code-level check for this. This patch
adds that check.
Change-Id: I12689f4e902cf0cceaf6f7f29c71057305024977
BUG: 1205624
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/10349
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
| |
The replace-brick operation with data migration support has been
deprecated from gluster.
With this fix the replace-brick command will support only one command:
gluster volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK> {commit force}
Change-Id: Ib81d49e5d8e7eaa4ccb5830cfec2bc081191b43b
BUG: 1094119
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/10101
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
| |
Inode quota is a new feature implemented in glusterfs-3.7.
If quota is enabled in an older version and is upgraded
to a new version, we can hit a setxattr spike during self-heal
of inode quotas. So, when quota is enabled, turn off
inode-quotas with an xlator option.
With this patch, we still account for inode quotas, but only
when a write operation is performed on a particular file.
The user will be able to query inode quotas once the inode-quota
xlator option is enabled.
Change-Id: I52fb28bf7024989ce7bb08ac63a303bf3ec1ec9a
BUG: 1209430
Signed-off-by: vmallika <vmallika@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/10152
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
| |
The global option "gluster features.ganesha enable"
writes into the global 'option' file. The snapshot
feature also writes into the same file.
To handle multiple concurrent transactions correctly,
a new lock has to be introduced on this file.
Every operation using this file needs
to contend for the new lock type.
Change-Id: Ia8a324d2a466717b39f2700599edd9f345b939a9
BUG: 1200254
Signed-off-by: Meghana Madhusudhan <mmadhusu@redhat.com>
Reviewed-on: http://review.gluster.org/10130
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: soumya k <skoduri@redhat.com>
Tested-by: NetBSD Build System
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>