Backport of http://review.gluster.org/12311
Change-Id: I9c71ae264665b7bba609c7f86cf42a52a6b47260
BUG: 1269702
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/12312
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
This patch is a backport of: http://review.gluster.org/10231
The CLI command for bitrot scrub status will be:
gluster volume bitrot <volname> scrub status
The above command will show the statistics of the bitrot scrubber.
Upon execution it will show some common scrubber tunable values
of volume <VOLNAME>, followed by the scrubber statistics of the
individual nodes.
Sample output for a single node:
Volume name : <VOLNAME>
State of scrub: Active
Scrub frequency: biweekly
Bitrot error log location: /var/log/glusterfs/bitd.log
Scrubber error log location: /var/log/glusterfs/scrub.log
=========================================================
Node name:
Number of Scrubbed files:
Number of Unsigned files:
Last completed scrub time:
Duration of last scrub:
Error count:
=========================================================
This is just infrastructure. The list of bad files, the last scrub
time and the error count values will be taken care of by the
http://review.gluster.org/#/c/12503/ and
http://review.gluster.org/#/c/12654/ patches.
>> Change-Id: I3ed3c7057c9d0c894233f4079a7f185d90c202d1
>> BUG: 1207627
>> Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
>> Reviewed-on: http://review.gluster.org/10231
>> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
>> Tested-by: NetBSD Build System <jenkins@build.gluster.org>
>> Tested-by: Gluster Build System <jenkins@build.gluster.com>
Change-Id: I45ed94e5e0e78a1e007c30eb0b252f74cf3c9187
BUG: 1283881
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/12704
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
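An illustrative usage sketch (not part of the patch; the volume name is a placeholder and the standard bitrot CLI options are assumed):
# enable bitrot detection, tune the scrub frequency, then query the scrubber
gluster volume bitrot myvol enable
gluster volume bitrot myvol scrub-frequency biweekly
gluster volume bitrot myvol scrub status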
|
Add a --resolve-gids commandline option to the glusterfs binary. This
option gets set when executing "mount -t glusterfs -o resolve-gids ...".
This option is most useful in combination with the "acl" mount option.
POSIX ACL permission checking is done on the FUSE-client side to improve
performance (in addition to the checking on the bricks).
The fuse-bridge reads /proc/$PID/status by default, and this file
contains at most 32 groups. Any local (client-side) permission check
that requires more than those first 32 groups will fail.
By enabling the "resolve-gids" option, the fuse-bridge calls
getgrouplist() to retrieve all the groups of the user accessing the
mountpoint. This is comparable to how "nfs.server-aux-gids" works.
Note that when a user belongs to more than ~93 groups, the volume option
server.manage-gids needs to be enabled too. Without this option, the
RPC-layer will need to reduce the number of groups to make them fit in
the RPC-header.
Cherry picked from commit 64a5bf3749c67fcc00773a2716d0c7b61b0b4417:
> Change-Id: I7ede90d0e41bcf55755cced5747fa0fb1699edb2
> BUG: 1246275
> Signed-off-by: Niels de Vos <ndevos@redhat.com>
> Reviewed-on: http://review.gluster.org/11732
> Tested-by: NetBSD Build System <jenkins@build.gluster.org>
> Reviewed-by: Ravishankar N <ravishankar@redhat.com>
> Tested-by: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>
> Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Change-Id: I7ede90d0e41bcf55755cced5747fa0fb1699edb2
BUG: 1246397
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/11875
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
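A usage sketch combining the two options (volume, server and mount-point names are placeholders; only resolve-gids and server.manage-gids come from the text above):
# let the bricks resolve large group lists server-side (needed beyond ~93 groups)
gluster volume set myvol server.manage-gids on
# mount with POSIX ACLs and client-side group resolution via getgrouplist()
mount -t glusterfs -o acl,resolve-gids server1:/myvol /mnt/myvol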
|
When new bricks are added in the middle of an on-going
fop like 'rm', the volfile changes without waiting for
the newly added bricks to get a port. Fops are sent to all
bricks and may fail on some with ENOTCONN, as these bricks
may not have a port yet.
This patch ensures that the volfile change happens only
after all the bricks have a port.
> Backport of http://review.gluster.org/#/c/11342/
> Change-Id: I7ed2413475f80d0cc8849fed33036ade8d75a191
> BUG: 1233151
> Signed-off-by: Sakshi <sabansal@redhat.com>
> Reviewed-on: http://review.gluster.org/11342
> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
> Tested-by: Atin Mukherjee <amukherj@redhat.com>
Change-Id: I7ed2413475f80d0cc8849fed33036ade8d75a191
BUG: 1265890
Signed-off-by: Sakshi <sabansal@redhat.com>
Reviewed-on: http://review.gluster.org/12223
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Tested-by: Dan Lambright <dlambrig@redhat.com>
|
The @owner argument tells the RPC layer which xlator owns
the connection and to which xlator THIS needs to be set during
network notifications like CONNECT and DISCONNECT.
Code paths that originate from the head of a (volume) graph and use
STACK_WIND ensure that the RPC local endpoint has the right xlator saved
in the frame of the call (callback pair). This guarantees that the
callback is executed in the right xlator context.
The client handshake processes, which include fetching of brick ports from
glusterd and setting the lk-version on the brick for the session, don't have
the correct xlator set in their frames. The problem lies with RPC
notifications: they don't have the provision to set THIS to the xlator
that is registered with the corresponding RPC programs. E.g., the
RPC_CLNT_CONNECT event received by protocol/client doesn't have THIS set
to its xlator. This implies that call(-callbacks) originating from this
thread don't have the right xlator set either.
The fix is to save the xlator registered with the RPC connection
during rpc_clnt_new. E.g., protocol/client's xlator would be saved with
the RPC connection that it 'owns'. RPC notifications such as CONNECT,
DISCONNECT, etc. inherit THIS from the RPC connection's xlator.
Change-Id: I9dea2c35378c511d800ef58f7fa2ea5552f2c409
BUG: 1253212
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/11436
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
(cherry picked from commit f7668938cd7745d024f3d2884e04cd744d0a69ab)
Reviewed-on: http://review.gluster.org/11908
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
|
Backport of http://review.gluster.org/#/c/9893/
Cherry picked from 59e1c79ac6953e552fbdc8c627f5dc1d02eeacc8
>Change-Id: I29bdeefb755805858e3cb1817b679cb6f9a476a9
>BUG: 1194640
>Signed-off-by: Hari Gowtham <hgowtham@dhcp35-85.lab.eng.blr.redhat.com>
>Reviewed-on: http://review.gluster.org/9893
>Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
>Tested-by: NetBSD Build System <jenkins@build.gluster.org>
>Tested-by: Gluster Build System <jenkins@build.gluster.com>
>Signed-off-by: Hari Gowtham <hgowtham@redhat.com>
Change-Id: I29bdeefb755805858e3cb1817b679cb6f9a476a9
BUG: 1217722
Signed-off-by: Hari Gowtham <hgowtham@dhcp35-85.lab.eng.blr.redhat.com>
Signed-off-by: Hari Gowtham <hgowtham@redhat.com>
Reviewed-on: http://review.gluster.org/11279
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
|
Problem 1: glusterd was crashing due to a race between the cleanup thread and the RPC event thread.
Scenario:
As observed, thread X is in the process of exiting the process. It has already
run the exit handlers, which clean up things that require cleaning up. This includes
liburcu resources. By the time thread Y called rcu_bp_register(), the liburcu resources
had been cleaned up. rcu_bp_register() tries to access these non-existent resources,
which leads to the segmentation fault.
Note1:
The crash happens when the process is almost at the point of stopping (exiting); it doesn't have any
serious impact on functionality apart from creating the core dump file and the log message.
Fix: Do a proper cleanup before calling exit().
Note2: Other xlators have cleanup issues, so only the glusterd cleanup function is invoked.
Note3: This patch also solves the SELinux issue.
Problem 2: glusterd runs as rpm_script_t when it is executed from the rpm scriptlet; files created
in this context are labeled rpm_script_t, so glusterd is unable to access these files when it runs
in the glusterd_t context.
Fix: Fini cleans up the files while glusterd is exiting, so the files are recreated by glusterd at
startup with the proper SELinux context label.
Backport of:
>Change-Id: Idcfd087f51c18a729bdf44a146f9d294e2fca5e2
>BUG: 1209461
>Signed-off-by: anand <anekkunt@redhat.com>
>Reviewed-on: http://review.gluster.org/10894
>Tested-by: Gluster Build System <jenkins@build.gluster.com>
>Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
>Tested-by: NetBSD Build System <jenkins@build.gluster.org>
>Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
>Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: I59579e675bd73d7a19f7b965bd2c3c0fcd95d241
BUG: 1230026
Signed-off-by: anand <anekkunt@redhat.com>
Reviewed-on: http://review.gluster.org/11155
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
|
Instantiate a process wide global instance of the timer wheel
data structure. Spawning glusterfs* process with option arg
"--global-timer-wheel" instantiates a global instance of
timer-wheel under global context (->ctx).
Translators can make use of this process wide instance [via a
call to glusterfs_global_timer_wheel()] instead of maintaining
an instance of their own and possibly consuming more memory.
The Linux kernel, too, has a single timer-wheel instance that
subsystems such as IO, networking, etc. make use of.
The bitrot daemon would be an early consumer of this: bitrot translator
instances for multiple volumes would track objects belonging to
their respective bricks in this global expiry tracking data
structure. This is also a first step to move GlusterFS timer
mechanism to use timer-wheel.
> Change-Id: Ie882df607e07acaced846ea269ebf1ece306d6ae
> BUG: 1170075
> Signed-off-by: Venky Shankar <vshankar@redhat.com>
> Reviewed-on: http://review.gluster.org/10380
> Tested-by: NetBSD Build System
> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
> Tested-by: Gluster Build System <jenkins@build.gluster.com>
Change-Id: I35c840daa9996a059699f8ea5af54c76ede7e09c
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
BUG: 1220041
Reviewed-on: http://review.gluster.org/10716
Tested-by: Gluster Build System <jenkins@build.gluster.com>
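For illustration, a client process could be spawned with the new argument like this (server, volume and mount point are placeholders):
# instantiate the process-wide timer wheel at startup
glusterfs --global-timer-wheel --volfile-server=server1 --volfile-id=myvol /mnt/myvol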
|
The command gluster volume status <VOLNAME> should show the status of the
bitrot and scrubber daemons and their pid information.
Along with displaying bitrot and scrubber daemon information in the gluster
volume status command, there should be commands to show their individual
status separately.
The commands to show the individual status of the bitrot and scrubber
daemons are the following.
Command to show only bitd daemon information:
gluster volume status <VOLNAME> bitd
Command to show only scrubber daemon information:
gluster volume status <VOLNAME> scrub
Change-Id: Id86aae1156c8c599347c98e2a538f294d37376e4
BUG: 1218123
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/10175
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Kaushal M <kaushal@redhat.com>
(cherry picked from commit da1416051d19d612d131acfde8589bc8658979b5)
Reviewed-on: http://review.gluster.org/10519
Tested-by: Gluster Build System <jenkins@build.gluster.com>
|
sssd uses 300 seconds by default too. There is no need to overload sssd
with requests that it would have cached.
Cherry picked from commit 34833364e9839f0036bccd58ec0a8a963e69263e:
> BUG: 1215187
> Change-Id: I3f04ea8cc90180d863253a9f46d62b71810a7b34
> Signed-off-by: Niels de Vos <ndevos@redhat.com>
> Reviewed-on: http://review.gluster.org/10371
> Tested-by: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: I3f04ea8cc90180d863253a9f46d62b71810a7b34
BUG: 1215189
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/10523
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
NFS now has the ability to use a separate file for "netgroups" and
"exports". An administrator should have the ability to check the
validity of the files before applying the configuration.
The "glusterfsd" command now has the following additional arguments that
can be used to check the configuration:
--print-netgroups: Validate the netgroups file and print it out
--print-exports: Validate the exports file and print it out
BUG: 1143880
Change-Id: I24c40d50110d49d8290f9fd916742f7e4d0df85f
URL: http://www.gluster.org/community/documentation/index.php/Features/Exports_Netgroups_Authentication
Original-author: Shreyas Siravara <shreyas.siravara@gmail.com>
CC: Richard Wareing <rwareing@fb.com>
CC: Jiffin Tony Thottan <jthottan@redhat.com>
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/9365
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Vijay Bellur <vbellur@redhat.com>
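A hypothetical validation run (the file paths are placeholders; only the two new flags come from this change):
# validate the exports and netgroups files before applying them
glusterfsd --print-exports /etc/glusterfs/exports
glusterfsd --print-netgroups /etc/glusterfs/netgroups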
|
... from all bricks in the volume
This patch is important in the context of MT epoll. With MT epoll,
notification events from client xlators could reach cluster xlators like
afr, dht, ec, stripe etc. in different orders.
For example, in a distributed replicate volume of 2 bricks, namely Brick1
and Brick2, the following network events are observed by a mount
process.
- connection to Brick1 is broken.
- connection to Brick1 has been restored.
- connection to Brick2 is broken.
- connection to Brick2 has been restored.
Without establishing a total ordering of events, we can't guarantee that
cluster xlators like afr, dht perceive them in the same order. While we
would expect afr (say) to perceive it as only one of Brick1 and Brick2
going down at any given time, it is possible for the notification of
Brick2 going offline to race with the notification of Brick1 coming back
online.
Change-Id: I78f5a52bfb05593335d0e9ad53ebfff98995593d
BUG: 1104462
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/9591
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
Add the ability to configure the number of event threads
for various gluster services.
Currently with the multi thread epoll patch, it is possible
to have more than one thread waiting on socket activity and
processing the same. This thread count is currently static,
which this commit makes dynamic.
The current services which use the IO path, i.e. brick processes and
any client process (nfs, FUSE, gfapi, heal,
rebalance, etc.), gain 2 settable parameters to control the number
of threads that are processing events. These settings are:
- client.event-threads <n>
- server.event-threads <n>
The client setting affects the client graph consumers, and the
server setting affects the brick processes. These are processed
and inited/reconfigured using the client/server protocol xlators.
Other services (say glusterd) would need to extend similar
configuration settings to take advantage of multi threaded event
processing.
At present glusterd is not enabled with this commit, as it does not
stand to gain from this multi-threading (as I understand it).
Change-Id: Id8422fc57a9f95a135158eb6477ccf9d3c9ea4d9
BUG: 1104462
Signed-off-by: Shyam <srangana@redhat.com>
Reviewed-on: http://review.gluster.org/9488
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
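An illustrative tuning sketch (volume name and thread counts are placeholders):
# raise the number of epoll event threads for clients and for brick processes
gluster volume set myvol client.event-threads 4
gluster volume set myvol server.event-threads 4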
|
* Bring in option to disable memory accounting for a glusterfs process
This reverses the changes done by the commit
7fba3a88f1ced610eca0c23516a1e720d75160cd.
* Change the key from "memory-accounting" to "no-memory-accounting", as by
default all glusterfs processes enable memory accounting now. So to
disable memory accounting for some process, the "no-mem-accounting" argument
has to be passed.
Change-Id: I39c7cefb0fe764ea3e48f4e73e1305b084c5f497
BUG: 1184366
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/9469
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
This got introduced due to 656711d935000c16. Coverity
also picked this up as CIDs 1256176, 1256178, 1256180.
Signed-off-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: If12fa0075634383975846181917a2f9650f790e3
BUG: 789278
Reviewed-on: http://review.gluster.org/9213
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
|
For an rdma-only type volume, client connection establishment
with the server takes more than three seconds. This is because a
tcp,rdma type volume has 2 ports, one for tcp and one for rdma;
the tcp port is stored with the brick name and the rdma port is
stored as "brickname.rdma" during pmap_signin.
During the handshake, when trying to get the brick port
for rdma clients, since we are not aware of the server
transport type, we append '.rdma' to the brick name.
So for a tcp,rdma volume there will be an entry with
'.rdma', but the lookup will fail for an rdma-only type volume.
So we try again, this time without appending '.rdma',
using a flag variable need_different_port, and it succeeds,
but the reconnection happens only after 3 seconds.
With this patch, for an rdma-only type volume
we append '.rdma' during the pmap_signin. So during the
handshake we get the correct port on the first try itself.
Since we don't need to retry, we can remove the
need_different_port flag variable.
Change-Id: Ie8e3a7f532d4104829dbe995e99b35e95571466c
BUG: 1153569
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/8934
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
|
When we try to mount a tcp,rdma volume as rdma
transport using the FUSE protocol, the mount will
hang if the brick is down. When we kill a process,
the signal is received in the glusterfsd process and
it calls pmap_signout with the port listening on tcp only.
In the case of tcp,rdma there will be two ports,
and the port which is listening for rdma will not
be signed out.
So the mount process will try to connect to a port
which is not open and it will keep trying to connect.
This patch calls pmap_signout for the rdma port also,
so when the mount tries to get the brick port, it will fail.
Change-Id: I23676f65f96eb90b69b76478f7a21412a6aba70f
BUG: 1143886
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/8762
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
|
Some code does not check the xlator_mem_acct_init()
return value, and thus fails to capture wrong memory accounting
initialization. This patch fixes the same.
Change-Id: I01eab19d6cef472afd850b0f964132c01523492a
BUG: 1123768
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Reviewed-on: http://review.gluster.org/7728
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
|
The aux-gfid-mount works on non-Linux systems, and it is required
to pass tests/basic/gfid-access.t
BUG: 764655
Change-Id: Ic6c8ef425e091440a139bbd25fadbf4f82e378cb
Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-on: http://review.gluster.org/8446
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Humble Devassy Chirammal <humble.devassy@gmail.com>
Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
Tested-by: Harshavardhana <harsha@harshavardhana.net>
|
Change-Id: Ie5d437aa6c52c180fd8d54680c5f882e75c0bf7e
BUG: 1089172
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-on: http://review.gluster.org/8448
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
|
- Break away from the previously hard-coded '/var/lib/glusterd';
instead rely on the 'configure' value from 'localstatedir'
- Provide 's/lib/db' as the default working directory for the gluster
management daemon for BSD and Darwin based installations
- loff_t is really off_t on Darwin
- fix-off the warnings generated by clang on FreeBSD/Darwin
- Now 'tests/*' use GLUSTERD_WORKDIR, a common variable for all
platforms.
- Define proper environment for running tests, define correct PATH
and LD_LIBRARY_PATH when running tests, so that the desired version
of glusterfs is used, regardless where it is installed.
(Thanks to manu@netbsd.org for this additional work)
Change-Id: I2339a0d9275de5939ccad3e52b535598064a35e7
BUG: 1111774
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-on: http://review.gluster.org/8246
Tested-by: Gluster Build System <jenkins@build.gluster.com>
|
The feature is controlled by presence of the following file:
/var/lib/glusterd/secure-access
See the comment near the definition of SECURE_ACCESS_FILE in glusterfs.h
for the rationale. With this enabled, the following rules apply to
connections:
UNIX-domain sockets never have SSL.
Management-port sockets (both connecting and accepting, in
daemons and CLI) have SSL based on presence of the file.
Other IP sockets have SSL based on the existing client.ssl and
server.ssl volume options.
Transport multi-threading is explicitly turned off in glusterd (it would
otherwise be turned on when SSL is) due to multi-threading issues.
Tests have been elided to avoid risk of leaving a file which will cause
all subsequent tests to run with management SSL still enabled.
IMPLEMENTATION NOTE
The implementation is a bit messy, and consists of two stages. First we
decide whether to set the relevant fields in our context structure, based
on presence of the sentinel file OR a command-line override. Later we
decide whether a particular connection should actually use SSL, based on the
context flags plus what kind of connection we're making[1] and what kind of
daemon we're in[2].
[1] inbound, outbound to glusterd port, other outbound
[2] glusterd, glusterfsd, other
TESTING NOTE
Instead of just running one special test for this feature, the ideal
would be to run all tests with management SSL enabled. However, it
would be inappropriate or premature to set up an optional feature in the
patch itself. Therefore, the method of choice is to submit a separate
patch on top, which modifies "cleanup" in include.rc to recreate the
secure-access file and associated SSL certificate/key files before each
test.
Change-Id: I0e04d6d08163893e24ec8c031748c5c447d7f780
BUG: 1114604
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: http://review.gluster.org/8094
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
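A minimal sketch of turning this on (the sentinel path is quoted from the message above; daemons typically need a restart to pick it up):
# create the sentinel file on every node (and on CLI hosts) so that
# management-port connections use SSL
touch /var/lib/glusterd/secure-access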
|
While accessing the procedures of a given RPC program in
rpcsvc_get_program_vector_sizer(), boundary conditions were not being
checked, which would cause a buffer overflow and subsequently a SEGV.
Make sure rpcsvc_actor_t arrays have numactors number of actors.
FIX:
Validate the RPC procedure number before fetching the actor.
Special thanks to: Murray Ketchion, Grant Byers
Change-Id: I8b5abd406d47fab8fca65b3beb73cdfe8cd85b72
BUG: 1096020
Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
Reviewed-on: http://review.gluster.org/7726
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
|
When a brick is started, and the glusterfsd process requests
for volfile, the brick_name is sent in the req dict. In
glusterd, after fetching the spec the brick_name is looked
up in the missed_snap_list, and any missing snap creates on
the same brick are performed. After this, the glusterd
responds back with the specfile.
Also collate brick data from the nodes hosting the bricks
during restore. In case the data is absent, the local node's
data is used. This is needed to ensure that, during a restore,
we collect the information created when a missed snap create
is performed.
Change-Id: I47cefdeba96f2702be810965734cf0fac61d3d2d
BUG: 1061685
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/7551
Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
|
The meta translator exposes details about glusterfs itself
in the form of a virtual namespace.
Loading the translator on the client side creates the
meta virtual view under $mntpoint/.meta by default. The
directory is not listed (even with ls -a) and can be
accessed by doing a "cd /mnt/.meta"
Change-Id: I5ffdf39203841a9562a8280a1f79dc76d4dded5d
BUG: 1089216
Signed-off-by: Anand Avati <avati@redhat.com>
Reviewed-on: http://review.gluster.org/7509
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
|
Memory accounting operations are constant-time and involve only
a few pointer dereferences and integer increments
(no loops or searches, etc.).
The benefits of having memory usage info outweigh the minor
accounting overhead.
Change-Id: If9bc6db5ffd0e00f0fd64b2f6eed094bf3543996
BUG: 1089216
Signed-off-by: Anand Avati <avati@redhat.com>
Reviewed-on: http://review.gluster.org/7543
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
|
Changelog barriers unlink, rename, rmdir fops on barrier 'on'
notification from glusterfsd mgmt layer and unbarriers the
same on barrier 'off' notification during snapshot.
Please see the following link for more details.
http://www.gluster.org/community/documentation/index.php/Changelog_Design_changes_for_snapshot
Signed-off-by: Kotresh H R <khiremat@redhat.com>
Change-Id: Iea9c62fafc86242f9404e03679b1941aa9c88c9a
Signed-off-by: Kotresh H R <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/7415
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Varun Shastry <vshastry@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
|
Change-Id: I8efa08cc9832ad509fba65a88bb0cddbaf056404
BUG: 1075611
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/7475
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
This patch introduces a new 'barrier' brick-op which will be used to
activate/deactivate the barriering on the bricks. This includes
barriering in the barrier xlator and in the changelog xlator. All the
required code has been added, including a brick select function, a payload
builder and a brick-op handler.
Change-Id: I91d9d77f691c2e89823f7dc4e84900ec40dc4dd2
BUG: 1060002
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/6943
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
git@forge.gluster.org:~schafdog/glusterfs-core/osx-glusterfs
Working functionality on MacOSX
- GlusterD (management daemon)
- GlusterCLI (management cli)
- GlusterFS FUSE (using OSXFUSE)
- GlusterNFS (without NLM - issues with rpc.statd)
Change-Id: I20193d3f8904388e47344e523b3787dbeab044ac
BUG: 1089172
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Signed-off-by: Dennis Schafroth <dennis@schafroth.com>
Tested-by: Harshavardhana <harsha@harshavardhana.net>
Tested-by: Dennis Schafroth <dennis@schafroth.com>
Reviewed-on: http://review.gluster.org/7503
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
|
Change-Id: Ic4b701a6621578848ff67ae4ecb5a10b5f32f93b
BUG: 1075611
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/7372
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
This is the initial patch for the Snapshot feature. Current patch
includes following features:
* Snapshot create
* Snapshot delete
* Snapshot restore
* Snapshot list
* Snapshot info
* Snapshot status
* Snapshot config
Change-Id: I2f46920c0d61c515f6a60e0f8b46fff886d9f6a9
BUG: 1061685
Signed-off-by: shishir gowda <sgowda@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Signed-off-by: Vijaikumar M <vmallika@redhat.com>
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
Signed-off-by: Joseph Fernandes <josferna@redhat.com>
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/7128
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
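An illustrative CLI walk-through (snapshot and volume names are placeholders):
# create, inspect and delete a snapshot of a volume
gluster snapshot create snap1 myvol
gluster snapshot list myvol
gluster snapshot info snap1
gluster snapshot status snap1
gluster snapshot delete snap1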
|
... by retaining GLFS_NUM_MESSAGES as 33 which is its correct value.
Also replace all occurrences of gf_log with gf_msg/gf_msg_debug.
Change-Id: Ibfbe1d645de521e8d59ca406f78b1a8eb08aa7e0
BUG: 1075611
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/7371
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
Currently there are quite a slew of logs in Gluster that do not
lend themselves to trivial analysis by various tools that help
collect and monitor logs, due to the textual nature of the logs.
This FEAT is to make this better by giving logs message IDs so
that the tools do not have to do complex log parsing to break
it down to problem areas and suggest troubleshooting options.
With this patch, a new set of logging APIs are introduced that
take additionally a message ID and an error number, so as to
print the message ID and the descriptive string for the error.
New APIs:
- gf_msg, gf_msg_debug/trace, gf_msg_nomem, gf_msg_callingfn
These APIs follow the functionality of the previous gf_log*
counterparts, and hence are 1:1 replacements, with the delta
that, gf_msg, gf_msg_callingfn take additional parameters as
specified above.
Defining the log messages:
Each invocation of gf_msg/gf_msg_callingfn, should provide an ID
and an errnum (if available). Towards this, a common message id
file is provided, which contains defines to various messages and
their respective strings. As other messages are changed to the
new infrastructure APIs, it is intended that this file is edited
to add these messages as well.
Framework enhanced:
The logging framework is also enhanced to be able to support
different logging backends in the future. Hence new configuration
options for logging framework and logging formats are introduced.
Backward compatibility:
Currently the framework supports logging in the traditional
format, with the inclusion of an error string based on the errnum
passed in. Hence the shift to these new APIs would retain the log
file names, locations, and format with the exception of an
additional error string where applicable.
Testing done:
Tested the new APIs with different messages in normal code paths
Tested with configurations set to gluster logs (syslog pending)
Tested nomem variants, inducing the message in normal code paths
Tested ident generation for normal code paths (other paths
pending)
Tested with sample gfapi program for gfapi messages
Test code is stripped from the commit
Pending work (not to be addressed in this patch (future)):
- Logging framework should be configurable
- Logging format should be configurable
- Once all messages move to the new APIs deprecate/delete older
APIs to prevent misuse/abuse using the same
- Repeated log messages should be suppressed (as a configurable
option)
- Logging framework assumes that only one init is possible, but
there is no protection around the same (in existing code)
- gf_log_fini is not invoked anywhere and does very little
cleanup (in existing code)
- DOxygen comments to message id headers for each message
Change-Id: Ia043fda99a1c6cf7817517ef9e279bfcf35dcc24
BUG: 1075611
Signed-off-by: ShyamsundarR <srangana@redhat.com>
Reviewed-on: http://review.gluster.org/6547
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
Previous cleanup of this function had removed some lines
which had left dead code. Just removing that.
Fix for coverity CID: 1167461.
Change-Id: I2a34fc407ce0eb4c4ba759c8ce6574a00b37020a
BUG: 789278
Signed-off-by: Raghavendra Talur <rtalur@redhat.com>
Reviewed-on: http://review.gluster.org/6937
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
Fix for coverity bug CID:1124340
Change-Id: Ibf8700bdeaaddade02e63470a773c5fe2aabc645
BUG: 789278
Signed-off-by: Raghavendra Talur <rtalur@redhat.com>
Reviewed-on: http://review.gluster.org/6984
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
|
BUG: 789278
CID: 1124805
Change-Id: I5926a4e59790f142d65f034726c9c369c2ad7808
Signed-off-by: Jose A. Rivera <jarrpa@redhat.com>
Reviewed-on: http://review.gluster.org/6909
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
* As of now, clients mounting within the storage pool using that machine's
ip/hostname are trusted clients (i.e. clients local to the glusterd).
* Be careful when the request itself comes in as nfsnobody (ex: posix tests).
So move the squashing part to protocol/server, when it creates a new frame
for the request, instead of the auth part of the rpc layer.
* For nfs servers, do root-squashing without checking if it is a trusted
client, as all the nfs servers would be running within the storage pool and
hence will be trusted clients for the bricks.
* Provide one more option for mounting which actually says root-squash
should/should not happen. This value is given priority only for the trusted
clients. For non trusted clients, the volume option takes the priority. But
for trusted clients if root-squash should not happen, then they have to be
mounted with root-squash=no option. (This is done because by default
blocking root-squashing for the trusted clients will cause problems for smb
and UFO clients for which the requests have to be squashed if the option is
enabled).
* For geo-replication and defrag clients do not do root-squashing.
* Introduce a new option in open-behind for doing read after successful open.
Change-Id: I8a8359840313dffc34824f3ea80a9c48375067f0
BUG: 954057
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/4863
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
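A usage sketch for a trusted client opting out (names are placeholders; the server.root-squash volume option is an assumption here, only the root-squash mount option comes from this change):
# keep volume-wide root squashing enabled
gluster volume set myvol server.root-squash on
# a trusted client inside the storage pool mounts without squashing
mount -t glusterfs -o root-squash=no server1:/myvol /mnt/myvol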
|
volfile-max-fetch-attempts and fetch-attempts were not deprecated
properly in 'b610f1be7cd71b8f3e51c224c8b6fe0e7366c8cf'.
Provide a way to maintain backward compatibility for broken third-party apps.
Change-Id: I597b50df08823e74691c5a20a4da4d13aab4b7ff
BUG: 1045309
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Reviewed-on: http://review.gluster.org/6544
Reviewed-by: Humble Devassy Chirammal <humble.devassy@gmail.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
|
This patch only removes lines of code. For personal gratification, giving a
detailed explanation of what the problem was.
When glusterd spawns the local brick process, say when a reboot of the node
occurs, the glusterd_brick_start() and subsequently the
glusterd_volume_start_glusterfs() functions get called twice: from
glusterd_spawn_daemons() and glusterd_do_volume_quorum_action() respectively.
This causes a race, best described by a pseudo-code of the current behaviour.
glusterd_volume_start_glusterfs()
{
    if (!brick process running) {
        step-a) reap pid file (i.e. unlink it)
        step-b) fork a brick process which creates and locks the pid file
                and binds the process to a socket.
    }
}
Time Event
---- -----
T1 Call-1 arrives, completes step-a, starts step-b
T2 Call-2 arrives, enters step-a as Call-1's forked child is not
yet running.
T3 Call-1's forked child is alive, creates pidfile and locks it,binds
its address to a socket.
T4 Call-2 performs step-a; i.e. unlinks the pid file created by Call-1 !!
(files can still be unlinked despite a lockf on them)
T5 Call-2 does step-b, and the forked child process creates a *new* pid file
with it's pid and locks this file.
T6 But Call-2's brick process is not able to bind to socket as it
is already in use (courtesy T3) and hence exits (so no locks anymore on the pidfile).
Result:
- Pid file now contains PID of an extinct brick process.
- `gluster volume status` shows this PID value. It also notices that there is no
lock held on pid file by the currently running brick process (created by Call-1)
and hence shows N/A for the online status.
Also, as a result of events at T4, "ls -l /proc/<brick process PID>/fd/pidfile"
shows up as deleted.
Fix:
1. Do not unlink the pid file, i.e. avoid step-a. Now at T5, Call-2's brick process
cannot obtain the lock on the pid file (Call-1's process still holds it) and exits.
2. Unrelated, but remove lock-unlock sequence in glusterfs_pidfile_setup()
which does not seem to be doing anything useful.
Change-Id: I18d3e577bb56f68d85d3e0d0bfd268e50ed4f038
BUG: 1035586
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/6786
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
Change-Id: Ica246f99b8cdb6c0cf0e9143f50be056e37d3b7f
BUG: 1045309
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/6550
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
1. Check the frame before calling STACK_DESTROY (frame->root).
Signed-off-by: surabhi <sbhaloth@redhat.com>
Change-Id: I21d27a8b4e556c00cd123afe8512e010a1a1f80d
BUG: 789278
Signed-off-by: surabhi <sbhaloth@redhat.com>
Reviewed-on: http://review.gluster.org/6749
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
Issue:
1. In "unlink (export_path)" "export_path" might contain an arbitrary value left from earlier
computations.
2. In "(msg[0] != '\0')" msg might contain an arbitrary value
Change-Id: Icca8f557fd6b5e046dff1d5a84a72061975868d0
BUG: 789278
Signed-off-by: Lalatendu Mohanty <lmohanty@redhat.com>
Reviewed-on: http://review.gluster.org/6701
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
E.g. In glusterfs_volfile_fetch(), req.xdata.xdata_val is allocated
in dict_allocate_and_serialize() but not freed after mgmt_submit_request().
A survey of dict_allocate_and_serialize/_submit_request in
glusterfsd-mgmt.c shows no consistent pattern of freeing the xdata_val
and also the dict, which is a little disturbing. (Yes, clearly not
every place this occurs needs to be freed the same way.)
Change-Id: Ic306d60b157e97c822a562bfdf21896e40db632a
BUG: 1036102
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
Reviewed-on: http://review.gluster.org/6363
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
|
re-work.
Following are the cli commands that are new/re-worked:
======================================================
volume quota <VOLNAME> {enable|disable|list [<path> ...]|remove <path>| default-soft-limit <percent>} |
volume quota <VOLNAME> {limit-usage <path> <size> [<percent>]} |
volume quota <VOLNAME> {alert-time|soft-timeout|hard-timeout} {<time>}
volume status [all | <VOLNAME> [nfs|shd|<BRICK>|quotad]] [detail|clients|mem|inode|fd|callpool]
volume statedump <VOLNAME> [nfs|quotad] [all|mem|iobuf|callpool|priv|fd|inode|history]
glusterd changes:
=================
* Quota limits are now set as extended attributes by glusterd from
the aux mount created by the cli.
* The gfids of the directories on which quota limits are set
for a given volume are stored in
/var/lib/glusterd/vols/<volname>/quota.conf file in binary format,
and whose cksum and version is stored in
/var/lib/glusterd/vols/<volname>/quota.cksum.
Original-author: Krutika Dhananjay <kdhananj@redhat.com>
Original-author: Krishnan Parthasarathi <kparthas@redhat.com>
BUG: 969461
Change-Id: If32bba36c67f9c2a30417af9c6389045b2b7c13b
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-on: http://review.gluster.org/6003
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
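An illustrative sequence based on the syntax above (volume name, directory and size are placeholders):
# enable quota, set a 10GB limit on a directory, then list configured limits
gluster volume quota myvol enable
gluster volume quota myvol limit-usage /projects 10GB
gluster volume quota myvol list /projects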
|
Fix the bug which was using the timeout value as a flag to indicate
if it was set (and hence would fail when timeout=0 would evaluate
as False)
Change-Id: Ie9a8f28d35603458cdac26c9a4e0343e7eda7344
BUG: 1032438
Signed-off-by: Anand Avati <avati@redhat.com>
Reviewed-on: http://review.gluster.org/6308
Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
|
Remove bd_map xlator and CLI related changes.
Change-Id: If7086205df1907127c1a1fa4ba603f1c48421d09
BUG: 1028672
Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
Reviewed-on: http://review.gluster.org/5747
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
|
Remove server_ctx and locks_ctx from client_ctx directly and store them
as discrete entities in the scratch_ctx.
Hooking up dump will be done in phase 3.
BUG: 849630
Change-Id: I94cea328326db236cdfdf306cb381e4d58f58d4c
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
Reviewed-on: http://review.gluster.org/5678
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
|
* The new and the old graphs which have been constructed for comparison
whenever there is a volfile change (either a reconfigure of the existing
graph or the creation of a new graph) should be freed. Otherwise frequent
graph changes will lead to a huge memory leak.
Change-Id: I4faddb1aa9393b34cd2de6732e537a60f600026a
BUG: 948178
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/5388
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
|
This patch also changes the behavior of glfs_set_logging().
If the logfile argument is not provided to glfs_set_logging(),
libgfapi uses set_log_file_path() to create a logfile.
Change-Id: I49ec66c7f16f5604ff2f7cf7b365b08a05b5460d
BUG: 764890
Signed-off-by: Vijay Bellur <vbellur@redhat.com>
Reviewed-on: http://review.gluster.org/5910
Reviewed-by: Anand Avati <avati@redhat.com>
Tested-by: Anand Avati <avati@redhat.com>