| Commit message | Author | Age | Files | Lines |
While investigating gfapi memory consumption with valgrind, valgrind
reported several memory access issues.
It also showed the timer 'registry' being recreated shortly after being
freed during teardown, due to the way the code is currently written.
Passing ctx as the data argument to gf_timer_proc() is prone to memory
access issues if ctx is freed before gf_timer_proc() terminates (and in
fact this does happen, at least under valgrind). gf_timer_proc() doesn't
need ctx for anything, it only needs ctx->timer, so just pass that.
Nothing outside of timer.c ever calls gf_timer_registry_init(), so make
it and gf_timer_proc() static.
Backport of mainline:
> http://review.gluster.org/14247
> BUG: 1333925
Change-Id: Ia28454dda0cf0de2fec94d76441d98c3927a906a
BUG: 1342620
Signed-off-by: Kaleb S KEITHLEY <kkeithle@redhat.com>
Reviewed-on: http://review.gluster.org/14644
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
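A minimal sketch (with illustrative names, not the actual timer.c code) of the pattern the commit describes: the timer thread is handed only the registry it needs, so the wider context can be freed without racing against the thread.

```c
#include <pthread.h>
#include <stdlib.h>

/* Illustrative stand-in for ctx->timer; field names are hypothetical. */
struct timer_registry {
    int active;                 /* cleared on teardown              */
    /* ... timer list, mutex, condition variable, ...               */
};

/* Thread body: receives the registry directly, never the whole ctx. */
static void *timer_proc(void *data)
{
    struct timer_registry *reg = data;
    while (reg->active) {
        /* pop and fire expired timers ... */
    }
    return NULL;
}

static int start_timer_thread(struct timer_registry *reg, pthread_t *tid)
{
    /* Pass reg, not the enclosing context, so the context's lifetime
       is decoupled from the timer thread's lifetime. */
    return pthread_create(tid, NULL, timer_proc, reg);
}
```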
Give the administrator the ability to set oom_score_adj for the
glusterfs process. Applies to Linux only.
This is a backport of cb8f5e01f639cb6e8715b33bb725210cb0493887.
Change-Id: Iff13c2f4cb28457871c6ebeff6130bce4a8bf543
BUG: 1341697
Signed-off-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Reviewed-on: http://review.gluster.org/14399
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: http://review.gluster.org/14605
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
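A minimal sketch of the underlying Linux mechanism the option exposes (writing an adjustment to /proc/self/oom_score_adj); the glusterfsd option parsing and plumbing are not shown, and the helper name is hypothetical.

```c
#include <stdio.h>

/* Write an OOM-score adjustment (-1000..1000) for the current process.
 * Returns 0 on success, -1 on failure. Linux only. */
static int set_oom_score_adj(int adj)
{
    FILE *fp = fopen("/proc/self/oom_score_adj", "w");
    if (!fp)
        return -1;

    int ret = (fprintf(fp, "%d", adj) > 0) ? 0 : -1;
    fclose(fp);
    return ret;
}
```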
Change-Id: Ia24ad18c43d56a751988e562323ede26d7785848
BUG: 1317278
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/14519
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Also add the missing bang (!) in the #!/bin/bash shebang of some shell scripts.
Backport of
> http://review.gluster.org/#/c/14398/
> BUG: 1336793
> Change-Id: I567a4be8f0f31f6285550f243fe802895f6bc43b
Change-Id: I003554daede9937a41010e085c74e77957c5c608
BUG: 1336794
Reported-by: Patrick Matthäi <pmatthaei@debian.org>
Signed-off-by: Kaleb S KEITHLEY <kkeithle@redhat.com>
Reviewed-on: http://review.gluster.org/14400
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Intro:
Currently glusterd maintains the portmap registry, which contains the
ports between 49152 and 65535 that are free to use. This registry is
initialized once and updated as and when glusterd sees ports being used.
Glusterd first looks in the portmap registry for a port marked FREE,
then checks whether that port is currently free using a connect() call,
and then passes it to the brick process, which has to bind on it.
Problem:
There is a time gap between glusterd checking the port with connect()
and the brick process actually binding on it. In this gap another
process may occupy the port, in which case the brick fails to bind and
exits.
Case 1:
To avoid a gluster client process occupying the port supplied by
glusterd, the client port map range was separated from the brick port
map range in http://review.gluster.org/#/c/13998/
Case 2 (handled by this patch):
To avoid a foreign process occupying the port supplied by glusterd,
this patch implements a mechanism that returns the EADDRINUSE error
code to glusterd, upon which a new port is allocated and the brick
process is restarted with the newly allocated port.
Note: In case of glusterd restarts, i.e. runner_run_nowait(), there is
no way to handle Case 2, because runner_run_nowait() does not wait for
the return/exit code of the executed command (the brick process).
Hence, in that case we cannot know with which error the brick failed to
connect.
This patch also fixes runner_end() to perform some cleanup w.r.t.
return values.
Backport of:
> Change-Id: Iec52e7f5d87ce938d173f8ef16aa77fd573f2c5e
> BUG: 1322805
> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
> Reviewed-on: http://review.gluster.org/14043
> Tested-by: Prasanna Kumar Kalever <pkalever@redhat.com>
> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
> Smoke: Gluster Build System <jenkins@build.gluster.com>
> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Change-Id: Id7d8351a0082b44310177e714edc0571ad0f7195
BUG: 1333711
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Reviewed-on: http://review.gluster.org/14235
Tested-by: Prasanna Kumar Kalever <pkalever@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
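A minimal sketch, under hypothetical names, of how a bind attempt can surface EADDRINUSE so the caller (glusterd in the commit above) can allocate a new port and retry; the real brick/glusterd RPC plumbing is not shown.

```c
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Try to bind a TCP listener on the given port. Returns 0 on success,
 * EADDRINUSE if another process grabbed the port in the meantime, or
 * -1 on any other error. */
static int try_bind_port(unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(port);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        int err = errno;
        close(fd);
        return (err == EADDRINUSE) ? EADDRINUSE : -1;
    }
    close(fd);          /* a real brick would keep listening on fd */
    return 0;
}
```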
Problem:
Currently the mount process always exits with the return value of
waitpid(), which is the pid of the child, not the exit status of the
child process.
Solution:
Extract the actual exit code/status when the child terminated normally,
that is, by calling exit(3) or _exit(2), or by returning from main().
Change-Id: Iefec6e27b5a5a98a22f016e49967978853662e37
BUG: 1331042
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Reviewed-on: http://review.gluster.org/14094
Tested-by: Prasanna Kumar Kalever <pkalever@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
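A minimal sketch of the standard waitpid() status handling the fix relies on; the mount/glusterfsd plumbing around it is not shown.

```c
#include <sys/types.h>
#include <sys/wait.h>

/* Reap a child and return its real exit code (0-255) when it exited
 * normally; return -1 otherwise (waitpid failure, killed by signal). */
static int wait_for_child(pid_t child)
{
    int status = 0;

    if (waitpid(child, &status, 0) != child)
        return -1;                      /* waitpid itself failed      */
    if (WIFEXITED(status))
        return WEXITSTATUS(status);     /* exit(3), _exit(2), main()  */
    return -1;                          /* terminated by a signal     */
}
```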
CID 1124846: string overflow
CID 1124363: checked return value
CID 1210982: unsigned compare
Change-Id: I5995d98c07750615657668535fcc23ac30b3523b
BUG: 789278
Signed-off-by: Sakshi Bansal <sabansal@redhat.com>
Reviewed-on: http://review.gluster.org/9608
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Problem:
Say we have a 10-node gluster volume mounted using node 1 (N1) as the
volfile server and the rest as backup volfile servers:
$ mount -t glusterfs -obackup-volfile-servers=<N2>:<N3>:...:<N10> <N1>:/vol /mnt
If N1 goes down we can still access the same mount point, but if we add
or remove bricks of the volume while its volfile server (N1) is down,
that information is not passed to the client, because the connection
between glusterfs and glusterd (of N1) is broken. As a result we cannot
store files on the newly added bricks until N1 comes back.
Solution:
If N1 goes down, iterate through the nodes specified in the
backup-volfile-servers list and try to establish the connection between
glusterfs and glusterd, so we don't have to wait for N1 to come back to
store files on bricks that were successfully added while N1 was down.
Change-Id: I653c9f081a84667630608091bc243ffc3859d5cd
BUG: 1289916
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Reviewed-on: http://review.gluster.org/13002
Tested-by: Prasanna Kumar Kalever <pkalever@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Poornima G <pgurusid@redhat.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
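A minimal sketch of the failover iteration described above, with hypothetical names; the real client walks its configured volfile-server list inside the RPC/volfile-fetch code, which is not shown.

```c
#include <stddef.h>

/* Walk a NULL-terminated list of volfile servers and return the index
 * of the first one we can connect to, or -1 if all are unreachable.
 * connect_fn is a stand-in for the real connect/fetch logic. */
static int pick_volfile_server(const char *servers[],
                               int (*connect_fn)(const char *host))
{
    for (size_t i = 0; servers[i] != NULL; i++) {
        if (connect_fn(servers[i]) == 0)
            return (int)i;  /* fetch volfile updates from this server */
    }
    return -1;
}
```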
Backport @ http://review.gluster.org/#/c/13626/3
Fix a typo, and consolidate the selinux and capability checks in
getxattr and setxattr.
Change-Id: I4303de3d4dd00853169b07577311e03cbb912ed7
BUG: 1316327
Signed-off-by: Poornima G <pgurusid@redhat.com>
Reviewed-on: http://review.gluster.org/13653
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Vijay Bellur <vbellur@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Originally all security.* xattrs were forbidden if selinux was
disabled, which kept Samba's acl_xattr module from working, as it
stores the NTACL in security.NTACL. To fix this,
http://review.gluster.org/#/c/12826/ was sent, which forbade only
security.selinux. That in turn opened up a getxattr call on
security.capability before every write fop (and others).
Capabilities can be used without selinux, hence if selinux is disabled,
security.capability cannot simply be forbidden. Hence a new mount
option called capability is added.
Only when the "--capability" or "--selinux" mount option is used is
security.capability sent to the brick; otherwise it is forbidden.
Change-Id: I77f60e0fb541deaa416159e45c78dd2ae653105e
BUG: 1309462
Signed-off-by: Poornima G <pgurusid@redhat.com>
Reviewed-on: http://review.gluster.org/13540
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
The current glusterd code base has a memory leak: the memory allocated
by dict_allocate_and_serialize() in the gd_syncop_mgmt_v3_lock() and
gd_syncop_mgmt_v3_unlock() functions is not freed before those
functions return.
The fix is to free that memory once the functions are done with it.
Thanks to Carlos and Roman for finding the issue and the fix.
Change-Id: Id67aa794c84969830ca7ea8c2374f80c64d7a639
BUG: 1287517
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Signed-off-by: Carlos Chinea <carlos.chinea@nokia.com>
Signed-off-by: Roman Tereshonkov <roman.tereshonkov@nokia.com>
Reviewed-on: http://review.gluster.org/12927
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
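A minimal sketch of the leak pattern and its fix, using plain malloc/free stand-ins instead of the real dict_allocate_and_serialize()/GF_FREE calls; serialize_request() is hypothetical.

```c
#include <stdlib.h>
#include <string.h>

/* Stand-in for dict_allocate_and_serialize(): hands back a heap
 * buffer that the caller is responsible for freeing. */
static int serialize_request(const char *payload, char **buf, size_t *len)
{
    *len = strlen(payload) + 1;
    *buf = malloc(*len);
    if (!*buf)
        return -1;
    memcpy(*buf, payload, *len);
    return 0;
}

static int send_lock_request(const char *payload)
{
    char  *buf = NULL;
    size_t len = 0;

    if (serialize_request(payload, &buf, &len) != 0)
        return -1;

    /* ... hand buf/len to the RPC layer ... */

    free(buf);   /* the step the commit adds (GF_FREE upstream) */
    return 0;
}
```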
Problem:
Glusterd does not work using the ipv6 transport. The idea is that, with
a proper glusterd.vol configuration:
1. glusterd needs to listen on the default port (24007) as an IPv6 TCP
listener.
2. Volume creation/deletion/mounting/add-bricks/delete-bricks/peer-probe
need to work using ipv6 addresses.
3. Bricks need to listen on ipv6 addresses.
All of the above functionality is needed to say that glusterd supports
the ipv6 transport, and it is currently broken.
Fix:
When the "option transport.address-family inet6" option is present in
the glusterd.vol file, glusterd creates its listeners using ipv6
sockets only, and the same information is saved inside the brick volume
files used by the glusterfsd brick processes when they start.
Tests Run:
Regression tests using ./run-tests.sh
IPv4: Ran manually till tests/basic/rpm.t.
IPv6: (Need to add the above-mentioned config and also add an entry for
"hostname ::1" in /etc/hosts.)
Started failing at ./tests/basic/glusterd/arbiter-volume-probe.t and
ran successfully till there.
Unit tests using IPv6:
peer probe
add-bricks
remove-bricks
create volume
replace-bricks
start volume
stop volume
delete volume
Change-Id: Iebc96e6cce748b5924ce5da17b0114600ec70a6e
BUG: 1117886
Signed-off-by: Nithin D <nithind1988@yahoo.in>
Reviewed-on: http://review.gluster.org/11988
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
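A minimal sketch of the socket-level change in plain BSD sockets terms (an AF_INET6 listener on the management port); glusterd itself goes through its rpc-transport layer rather than calling these directly.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Create an IPv6 TCP listener on the given port (e.g. 24007 for the
 * management daemon). Returns the listening fd or -1. */
static int make_inet6_listener(unsigned short port)
{
    int fd = socket(AF_INET6, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in6 addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin6_family = AF_INET6;
    addr.sin6_addr   = in6addr_any;
    addr.sin6_port   = htons(port);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(fd, SOMAXCONN) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```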
After sending a status notification, rebalance immediately destroys the
frame, so by the time its callback runs the frame is already corrupted.
Rebalance crashes when this corrupted frame is accessed. To avoid this
we must destroy the frame only after the callback has completed.
Change-Id: If383017a61f09275256e51c44a1efa28feace87b
BUG: 1300152
Signed-off-by: Sakshi <sabansal@redhat.com>
Reviewed-on: http://review.gluster.org/13262
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
NSR needs logging that is different than our existing changelog in
several ways:
* Full data, not just metadata
* Pre-op, not post-op
* High performance
* Supports the concept of time-bounded "terms"
Others (for example EC) might need the same thing. This patch adds such
a translator. It also adds code to dump the resulting journals, and to replay
them using syncops, plus (very rudimentary) tests for all of the above.
Change-Id: I29680a1b4e0a9e7d5a8497fef302c46434b86636
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: http://review.gluster.org/12450
Smoke: Gluster Build System <jenkins@build.gluster.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Problem:
If index heal is launched when some of the bricks are down, the
glustershd of that node sends a -1 op_ret to glusterd, which eventually
propagates it to the CLI. Also, glusterd sometimes sends an err_str and
sometimes does not (depending on whether the failure happens in the
brick-op phase or the commit-op phase). So the message that gets
displayed varies in each case:
"Launching heal operation to perform index self heal on volume testvol has been
unsuccessful"
(OR)
"Commit failed on <host>. Please check log file for details."
Fix:
1. Modify afr_xl_op() to return -1 if index healing fails on at least
one brick.
2. Ignore glusterd's error string in gf_cli_heal_volume_cbk and print a
more meaningful message.
The patch also fixes a bug in glusterfs_handle_translator_op(): if we
encounter an error in the notify of one xlator, we break out of the
loop instead of sending the notify to the remaining xlators.
Change-Id: I957f6c4b4d0a45453ffd5488e425cab5a3e0acca
BUG: 1302291
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/13303
Reviewed-by: Anuradha Talur <atalur@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
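A minimal sketch of the break-vs-continue fix in the notify loop, with stand-in types; this is not the glusterfsd code itself.

```c
#include <stddef.h>

/* Notify every translator in a NULL-terminated array and report
 * whether any of them failed, instead of stopping at the first
 * failure. notify_fn stands in for the real notify call. */
static int notify_all(void *xls[], int (*notify_fn)(void *xl))
{
    int ret = 0;

    for (size_t i = 0; xls[i] != NULL; i++) {
        if (notify_fn(xls[i]) != 0)
            ret = -1;       /* remember the failure ...              */
        /* ... but keep going: the buggy code did `break;` here,
           leaving the remaining xlators un-notified.                */
    }
    return ret;
}
```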
glusterfsd fails if glusterd is bound to a specific IP address.
This patch lets glusterfsd fetch the volfile over a Unix domain socket:
glusterfs -s <unix socket path> --volfile-server-transport unix
--volfile-id <volume-name> <mount-point>
The patch checks whether volfile-server-transport is of type "unix";
if it is, it uses rpc_transport_unix_options_build() to get the volfile.
Change-Id: I81b881e7ac5a3a4f2ac83c789c385cf547f0d53e
BUG: 1279484
Signed-off-by: Mohamed Ashiq <mliyazud@redhat.com>
Signed-off-by: Humble Devassy Chirammal <hchiramm@redhat.com>
Reviewed-on: http://review.gluster.org/12556
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
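A minimal sketch of connecting over a Unix domain socket, the transport the commit enables for volfile fetching; the real client builds its options via rpc_transport_unix_options_build(), which is not shown.

```c
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Connect to a local daemon over a Unix domain socket path.
 * Returns a connected fd or -1. */
static int connect_unix(const char *path)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```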
The CLI command for bitrot scrub status will be:
gluster volume bitrot <volname> scrub status
The above command shows the statistics of the bitrot scrubber.
On execution it prints some common scrubber tunable values of volume
<VOLNAME>, followed by the scrubber statistics of the individual nodes.
Sample output for a single node:
Volume name : <VOLNAME>
State of scrub: Active
Scrub frequency: biweekly
Bitrot error log location: /var/log/glusterfs/bitd.log
Scrubber error log location: /var/log/glusterfs/scrub.log
=========================================================
Node name:
Number of Scrubbed files:
Number of Unsigned files:
Last completed scrub time:
Duration of last scrub:
Error count:
=========================================================
This is just the infrastructure. The list of bad files, the last scrub
time and the error count values will be taken care of by the
http://review.gluster.org/#/c/12503/ and
http://review.gluster.org/#/c/12654/ patches.
Change-Id: I3ed3c7057c9d0c894233f4079a7f185d90c202d1
BUG: 1207627
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/10231
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Various xlators and other components invoke system calls directly
instead of using the libglusterfs/syscall.[ch] wrappers.
Where the wrappers are not used, there should be a comment in the
source explaining why.
Change-Id: I1f47820534c890a00b452fa61f7438eb2b3f667c
BUG: 1267967
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
Reviewed-on: http://review.gluster.org/12276
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Change-Id: I9c71ae264665b7bba609c7f86cf42a52a6b47260
BUG: 1269696
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/12311
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
When new bricks are added in the middle of an on-going fop like 'rm',
the volfile changes without waiting for the newly added bricks to get a
port. Fops are sent to all bricks and may fail on some with ENOTCONN,
as these bricks may not have a port yet.
This patch ensures that the volfile change happens only after all the
bricks have a port.
Change-Id: I7ed2413475f80d0cc8849fed33036ade8d75a191
BUG: 1233151
Signed-off-by: Sakshi <sabansal@redhat.com>
Reviewed-on: http://review.gluster.org/11342
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Atin Mukherjee <amukherj@redhat.com>
The @owner argument tells RPC layer the xlator that owns
the connection and to which xlator THIS needs be set during
network notifications like CONNECT and DISCONNECT.
Code paths that originate from the head of a (volume) graph and use
STACK_WIND ensure that the RPC local endpoint has the right xlator saved
in the frame of the call (callback pair). This guarantees that the
callback is executed in the right xlator context.
The client handshake process, which includes fetching brick ports from
glusterd and setting the lk-version on the brick for the session,
doesn't have the correct xlator set in its frames. The problem lies
with RPC notifications: they have no provision to set THIS to the
xlator registered with the corresponding RPC programs. For example, the
RPC_CLNT_CONNECT event received by protocol/client doesn't have THIS
set to its xlator. This implies that calls (and callbacks) originating
from this thread don't have the right xlator set either.
The fix would be to save the xlator registered with the RPC connection
during rpc_clnt_new. e.g, protocol/client's xlator would be saved with
the RPC connection that it 'owns'. RPC notifications such as CONNECT,
DISCONNECT, etc inherit THIS from the RPC connection's xlator.
Change-Id: I9dea2c35378c511d800ef58f7fa2ea5552f2c409
BUG: 1235582
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/11436
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Add a --resolve-gids commandline option to the glusterfs binary. This
option gets set when executing "mount -t glusterfs -o resolve-gids ...".
This option is most useful in combination with the "acl" mount option.
POSIX ACL permission checking is done on the FUSE-client side to improve
performance (in addition to the checking on the bricks).
The fuse-bridge reads /proc/$PID/status by default, and this file
lists at most 32 groups. Any local (client-side) permission check that
requires more than the first 32 groups will fail.
By enabling the "resolve-gids" option, the fuse-bridge will call
getgrouplist() to retrieve all the groups from the user accessing the
mountpoint. This is comparable to how "nfs.server-aux-gids" works.
Note that when a user belongs to more than ~93 groups, the volume option
server.manage-gids needs to be enabled too. Without this option, the
RPC-layer will need to reduce the number of groups to make them fit in
the RPC-header.
Change-Id: I7ede90d0e41bcf55755cced5747fa0fb1699edb2
BUG: 1246275
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/11732
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
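A minimal sketch of the getgrouplist() lookup that the resolve-gids option relies on; the fuse-bridge integration and the handling of the resolved groups are not shown.

```c
#define _DEFAULT_SOURCE
#include <grp.h>
#include <pwd.h>
#include <stdlib.h>

/* Resolve a user's full group list. Returns the number of groups and
 * stores a malloc'ed gid array in *out, or returns -1 on failure. */
static int resolve_gids(const char *user, gid_t **out)
{
    struct passwd *pw = getpwnam(user);
    if (!pw)
        return -1;

    int    ngroups = 32;            /* the /proc limit mentioned above */
    gid_t *groups  = malloc(ngroups * sizeof(*groups));
    if (!groups)
        return -1;

    if (getgrouplist(user, pw->pw_gid, groups, &ngroups) == -1) {
        /* Buffer too small; ngroups now holds the required count. */
        gid_t *bigger = realloc(groups, ngroups * sizeof(*groups));
        if (!bigger ||
            getgrouplist(user, pw->pw_gid, bigger, &ngroups) == -1) {
            free(bigger ? bigger : groups);
            return -1;
        }
        groups = bigger;
    }

    *out = groups;
    return ngroups;
}
```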
Change-Id: I29bdeefb755805858e3cb1817b679cb6f9a476a9
BUG: 1194640
Signed-off-by: Hari Gowtham <hgowtham@dhcp35-85.lab.eng.blr.redhat.com>
Reviewed-on: http://review.gluster.org/9893
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Problem 1: glusterd was crashing due to a race between the cleanup
thread and the rpc event thread.
Scenario:
As observed, thread X is in the process of exiting the process. It has
already run the exit handlers, which clean up things that require
cleaning up, including the liburcu resources. By the time thread Y
calls rcu_bp_register(), the liburcu resources have been cleaned up.
rcu_bp_register() tries to access these non-existent resources, which
leads to the segmentation fault.
Note 1:
The crash happens when the process is almost at the point of stopping
(exiting), so it has no serious impact on functionality apart from
creating the core dump file and the log message.
Fix: Do proper cleanup before calling exit().
Note 2: Other xlators have cleanup issues, so only the glusterd cleanup
function is invoked.
Note 3: This patch also solves the selinux issue.
Problem 2: glusterd runs as rpm_script_t when it is executed from the
rpm scriptlet; files created in this context are labelled rpm_script_t,
so glusterd is unable to access these files when it runs in the
glusterd_t context.
Fix: fini cleans up these files while glusterd is exiting, so they are
recreated by glusterd with the proper SELinux context label when it
starts again.
Change-Id: Idcfd087f51c18a729bdf44a146f9d294e2fca5e2
BUG: 1209461
Signed-off-by: anand <anekkunt@redhat.com>
Reviewed-on: http://review.gluster.org/10894
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Instead of including config.h in each file, have config.h included
from the compiler command line (the -include option).
When a .c file tested for a certain #define and config.h was not
included, incorrect assumptions were made. With this change, that can
not happen again.
BUG: 1222319
Change-Id: I4f9097b8740b81ecfe8b218d52ca50361f74cb64
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/10808
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
The command gluster volume status <VOLNAME> should show the status of
the bitrot and scrubber daemons and their pid information.
Along with displaying bitrot and scrubber daemon information in the
gluster volume status command, there should be commands to show their
individual status separately.
The commands to show the individual status of the bitrot and scrubber
daemons will be the following.
Command to show only the bitd daemon information:
gluster volume status <VOLNAME> bitd
Command to show only the scrubber daemon information:
gluster volume status <VOLNAME> scrub
Change-Id: Id86aae1156c8c599347c98e2a538f294d37376e4
BUG: 1209752
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/10175
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Kaushal M <kaushal@redhat.com>
sssd uses 300 seconds by default too. There is no need to overload sssd
with requests that it would have cached.
BUG: 1215187
Change-Id: I3f04ea8cc90180d863253a9f46d62b71810a7b34
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/10371
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Instantiate a process wide global instance of the timer wheel
data structure. Spawning glusterfs* process with option arg
"--global-timer-wheel" instantiates a global instance of
timer-wheel under global context (->ctx).
Translators can make use of this process wide instance [via a
call to glusterfs_global_timer_wheel()] instead of maintaining
an instance of their own and possibly consuming more memory.
The Linux kernel, too, has a single timer-wheel instance that
subsystems such as IO, networking, etc. make use of.
The bitrot daemon would be an early consumer of this: bitrot translator
instances for multiple volumes would track objects belonging to their
respective bricks in this global expiry-tracking data structure. This
is also a first step towards moving the GlusterFS timer mechanism to
timer-wheel.
Change-Id: Ie882df607e07acaced846ea269ebf1ece306d6ae
BUG: 1170075
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Reviewed-on: http://review.gluster.org/10380
Tested-by: NetBSD Build System
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
NFS now has the ability to use a separate file for "netgroups" and
"exports". An administrator should have the ability to check the
validity of the files before applying the configuration.
The "glusterfsd" command now has the following additional arguments that
can be used to check the configuration:
--print-netgroups: Validate the netgroups file and print it out
--print-exports: Validate the exports file and print it out
BUG: 1143880
Change-Id: I24c40d50110d49d8290f9fd916742f7e4d0df85f
URL: http://www.gluster.org/community/documentation/index.php/Features/Exports_Netgroups_Authentication
Original-author: Shreyas Siravara <shreyas.siravara@gmail.com>
CC: Richard Wareing <rwareing@fb.com>
CC: Jiffin Tony Thottan <jthottan@redhat.com>
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/9365
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Vijay Bellur <vbellur@redhat.com>
... from all bricks in the volume
This patch is important in the context of MT epoll. With MT epoll,
notification events from client xlators could reach cluster xlators like
afr, dht, ec, stripe etc. in different orders.
For example, in a distributed replicate volume of 2 bricks, namely
Brick1 and Brick2, the following network events are observed by a mount
process:
- connection to Brick1 is broken.
- connection to Brick1 has been restored.
- connection to Brick2 is broken.
- connection to Brick2 has been restored.
Without establishing a total ordering of events, we can't guarantee that
cluster xlators like afr, dht perceive them in the same order. While we
would expect afr (say) to perceive it as only one of Brick1 and Brick2
going down at any given time, it is possible for the notification of
Brick2 going offline to race with the notification of Brick1 coming back
online.
Change-Id: I78f5a52bfb05593335d0e9ad53ebfff98995593d
BUG: 1104462
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/9591
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Add the ability to configure the number of event threads
for various gluster services.
Currently with the multi thread epoll patch, it is possible
to have more than one thread waiting on socket activity and
processing the same. This thread count is currently static,
which this commit makes dynamic.
The current services which use the IO path, i.e. brick processes and
any client process (NFS, FUSE, gfapi, heal, rebalance, etc.), gain 2
settable parameters to control the number of threads that process
events. These settings are:
- client.event-threads <n>
- server.event-threads <n>
The client setting affects the client graph consumers, and the
server setting affects the brick processes. These are processed
and inited/reconfigured using the client/server protocol xlators.
Other services (say glusterd) would need to extend similar
configuration settings to take advantage of multi threaded event
processing.
At present glusterd is not enabled with this commit, as it does not
stand to gain from this multi-threading (as I understand it).
Change-Id: Id8422fc57a9f95a135158eb6477ccf9d3c9ea4d9
BUG: 1104462
Signed-off-by: Shyam <srangana@redhat.com>
Reviewed-on: http://review.gluster.org/9488
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* Bring in an option to disable memory accounting for a glusterfs
process.
This reverses the changes done by commit
7fba3a88f1ced610eca0c23516a1e720d75160cd.
* Change the key from "memory-accounting" to "no-memory-accounting", as
by default all glusterfs processes now enable memory accounting. So to
disable memory accounting for some process, the "no-mem-accounting"
argument has to be passed.
Change-Id: I39c7cefb0fe764ea3e48f4e73e1305b084c5f497
BUG: 1184366
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/9469
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
This got introduced due to 656711d935000c16. Coverity
also picked this up as CIDs 1256176, 1256178, 1256180.
Signed-off-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: If12fa0075634383975846181917a2f9650f790e3
BUG: 789278
Reviewed-on: http://review.gluster.org/9213
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
For an rdma-only volume, client connection establishment with the
server takes more than three seconds. A tcp,rdma volume has 2 ports,
one for tcp and one for rdma; the tcp port is stored under the brick
name and the rdma port is stored as "brickname.rdma" during
pmap_signin. During the handshake, when trying to get the brick port
for rdma clients, we are not aware of the server transport type, so we
append '.rdma' to the brick name. For a tcp,rdma volume there is such
an entry, but for an rdma-only volume the lookup fails. We then retry
without appending '.rdma', using the flag variable
need_different_port, and that succeeds, but the reconnection happens
only after 3 seconds.
With this patch, for an rdma-only volume we append '.rdma' during
pmap_signin, so during the handshake we get the correct port on the
first try. Since we no longer need to retry, we can remove the
need_different_port flag variable.
Change-Id: Ie8e3a7f532d4104829dbe995e99b35e95571466c
BUG: 1153569
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/8934
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
When we try to mount a tcp,rdma volume as rdma transport using the
FUSE protocol, the mount hangs if the brick is down. When we kill the
brick process, glusterfsd receives the signal and calls pmap_signout
only for the port listening on tcp. For a tcp,rdma brick there are two
ports, and the rdma port is never signed out. So the mount process
keeps trying to connect to a port which is not open.
This patch calls pmap_signout for the rdma port as well, so when the
mount tries to get the brick port it fails instead of hanging.
Change-Id: I23676f65f96eb90b69b76478f7a21412a6aba70f
BUG: 1143886
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/8762
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
Some code does not check the return value of xlator_mem_acct_init()
and thus fails to catch a failed memory accounting initialization.
This patch fixes that.
Change-Id: I01eab19d6cef472afd850b0f964132c01523492a
BUG: 1123768
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Reviewed-on: http://review.gluster.org/7728
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
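A minimal sketch of the pattern being enforced, in generic form; mem_acct_table_init() is a hypothetical stand-in for xlator_mem_acct_init().

```c
/* Pretend initialization of a memory-accounting table. */
static int mem_acct_table_init(int num_types)
{
    return (num_types > 0) ? 0 : -1;
}

/* The point of the commit: capture and propagate the init function's
 * return value instead of silently dropping it. */
static int xlator_init_mem_acct(int num_types)
{
    /* Old pattern: mem_acct_table_init(num_types);  (result ignored) */
    if (mem_acct_table_init(num_types) != 0)
        return -1;          /* new pattern: surface the failure       */
    return 0;
}
```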
The aux-gfid-mount works on non-Linux systems, and it is required to
pass tests/basic/gfid-access.t.
BUG: 764655
Change-Id: Ic6c8ef425e091440a139bbd25fadbf4f82e378cb
Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-on: http://review.gluster.org/8446
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Humble Devassy Chirammal <humble.devassy@gmail.com>
Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
Tested-by: Harshavardhana <harsha@harshavardhana.net>
Change-Id: Ie5d437aa6c52c180fd8d54680c5f882e75c0bf7e
BUG: 1089172
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-on: http://review.gluster.org/8448
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
- Break away from the previously hard-coded '/var/lib/glusterd';
instead rely on the 'localstatedir' value from 'configure'
- Provide 's/lib/db' as default working directory for gluster
management daemon for BSD and Darwin based installations
- loff_t is really off_t on Darwin
- fix-off the warnings generated by clang on FreeBSD/Darwin
- Now 'tests/*' use GLUSTERD_WORKDIR a common variable for all
platforms.
- Define proper environment for running tests, define correct PATH
and LD_LIBRARY_PATH when running tests, so that the desired version
of glusterfs is used, regardless where it is installed.
(Thanks to manu@netbsd.org for this additional work)
Change-Id: I2339a0d9275de5939ccad3e52b535598064a35e7
BUG: 1111774
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-on: http://review.gluster.org/8246
Tested-by: Gluster Build System <jenkins@build.gluster.com>
The feature is controlled by presence of the following file:
/var/lib/glusterd/secure-access
See the comment near the definition of SECURE_ACCESS_FILE in glusterfs.h
for the rationale. With this enabled, the following rules apply to
connections:
UNIX-domain sockets never have SSL.
Management-port sockets (both connecting and accepting, in
daemons and CLI) have SSL based on presence of the file.
Other IP sockets have SSL based on the existing client.ssl and
server.ssl volume options.
Transport multi-threading is explicitly turned off in glusterd (it would
otherwise be turned on when SSL is) due to multi-threading issues.
Tests have been elided to avoid risk of leaving a file which will cause
all subsequent tests to run with management SSL still enabled.
IMPLEMENTATION NOTE
The implementation is a bit messy, and consists of two stages. First we
decide whether to set the relevant fields in our context structure, based
on presence of the sentinel file OR a command-line override. Later we
decide whether a particular connection should actually use SSL, based on the
context flags plus what kind of connection we're making[1] and what kind of
daemon we're in[2].
[1] inbound, outbound to glusterd port, other outbound
[2] glusterd, glusterfsd, other
TESTING NOTE
Instead of just running one special test for this feature, the ideal
would be to run all tests with management SSL enabled. However, it
would be inappropriate or premature to set up an optional feature in the
patch itself. Therefore, the method of choice is to submit a separate
patch on top, which modifies "cleanup" in include.rc to recreate the
secure-access file and associated SSL certificate/key files before each
test.
Change-Id: I0e04d6d08163893e24ec8c031748c5c447d7f780
BUG: 1114604
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: http://review.gluster.org/8094
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
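A minimal sketch of the sentinel-file decision described above, using the path named in the commit; the command-line override and the per-connection logic in the transport layer are simplified away.

```c
#include <stdbool.h>
#include <unistd.h>

/* Decide whether management connections should use SSL: either the
 * command-line override was given, or the sentinel file exists. */
static bool use_mgmt_ssl(bool cli_override)
{
    if (cli_override)
        return true;
    return access("/var/lib/glusterd/secure-access", F_OK) == 0;
}
```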
While accessing the procedures of a given RPC program in
rpcsvc_get_program_vector_sizer(), boundary conditions were not
checked, which could cause a buffer overflow and subsequently a SEGV.
Make sure rpcsvc_actor_t arrays have numactors number of actors.
FIX:
Validate the RPC procedure number before fetching the actor.
Special thanks to: Murray Ketchion, Grant Byers
Change-Id: I8b5abd406d47fab8fca65b3beb73cdfe8cd85b72
BUG: 1096020
Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
Reviewed-on: http://review.gluster.org/7726
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
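A minimal sketch of the bounds check the fix adds before indexing the actor table; the rpc_actor_t type here is a stand-in for the real rpcsvc structures.

```c
#include <stddef.h>

typedef struct {
    int  procnum;
    int (*handler)(void *req);
} rpc_actor_t;

/* Return the actor for a procedure number, or NULL if the number is
 * outside the table; the old code indexed the array unconditionally. */
static const rpc_actor_t *
get_actor(const rpc_actor_t *actors, size_t numactors, size_t procnum)
{
    if (!actors || procnum >= numactors)
        return NULL;
    return &actors[procnum];
}
```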
When a brick is started, and the glusterfsd process requests
for volfile, the brick_name is sent in the req dict. In
glusterd, after fetching the spec the brick_name is looked
up in the missed_snap_list, and any missing snap creates on
the same brick are performed. After this, the glusterd
responds back with the specfile.
Also collate brick data from the node's hosting the bricks
during restore. In case the data is absent, the local node's
data is used. This is needed to ensure that, during a restore
we collect the information created when a missed snap create
is performed.
Change-Id: I47cefdeba96f2702be810965734cf0fac61d3d2d
BUG: 1061685
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/7551
Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
The meta translator exposes details about glusterfs itself
in the form of a virtual namespace.
Loading the translator on the client side creates the
meta virtual view under $mntpoint/.meta by default. The
directory is not listed (even with ls -a) and can be
accessed by doing a "cd /mnt/.meta"
Change-Id: I5ffdf39203841a9562a8280a1f79dc76d4dded5d
BUG: 1089216
Signed-off-by: Anand Avati <avati@redhat.com>
Reviewed-on: http://review.gluster.org/7509
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
Memory accounting operations are constant-time: they involve a few
pointer dereferences and integer increments (no loops or searches,
etc.).
The benefits of having memory usage information outweigh the minor
accounting overhead.
Change-Id: If9bc6db5ffd0e00f0fd64b2f6eed094bf3543996
BUG: 1089216
Signed-off-by: Anand Avati <avati@redhat.com>
Reviewed-on: http://review.gluster.org/7543
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
The changelog translator barriers unlink, rename and rmdir fops on the
barrier 'on' notification from the glusterfsd mgmt layer, and
unbarriers them on the barrier 'off' notification during snapshot.
Please see the following link for more details.
http://www.gluster.org/community/documentation/index.php/Changelog_Design_changes_for_snapshot
Signed-off-by: Kotresh H R <khiremat@redhat.com>
Change-Id: Iea9c62fafc86242f9404e03679b1941aa9c88c9a
Signed-off-by: Kotresh H R <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/7415
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Varun Shastry <vshastry@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Change-Id: I8efa08cc9832ad509fba65a88bb0cddbaf056404
BUG: 1075611
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/7475
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
This patch introduces a new 'barrier' brick-op which will be used to
activate/deactivate the barriering on the bricks. This includes
barriering in the barrier xlator and in the changelog xlator. All the
required code has been added, including a bricks select function, a
payload builder and a brick-op handler.
Change-Id: I91d9d77f691c2e89823f7dc4e84900ec40dc4dd2
BUG: 1060002
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/6943
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
git@forge.gluster.org:~schafdog/glusterfs-core/osx-glusterfs
Working functionality on MacOSX
- GlusterD (management daemon)
- GlusterCLI (management cli)
- GlusterFS FUSE (using OSXFUSE)
- GlusterNFS (without NLM - issues with rpc.statd)
Change-Id: I20193d3f8904388e47344e523b3787dbeab044ac
BUG: 1089172
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Signed-off-by: Dennis Schafroth <dennis@schafroth.com>
Tested-by: Harshavardhana <harsha@harshavardhana.net>
Tested-by: Dennis Schafroth <dennis@schafroth.com>
Reviewed-on: http://review.gluster.org/7503
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Change-Id: Ic4b701a6621578848ff67ae4ecb5a10b5f32f93b
BUG: 1075611
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/7372
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
This is the initial patch for the Snapshot feature. Current patch
includes following features:
* Snapshot create
* Snapshot delete
* Snapshot restore
* Snapshot list
* Snapshot info
* Snapshot status
* Snapshot config
Change-Id: I2f46920c0d61c515f6a60e0f8b46fff886d9f6a9
BUG: 1061685
Signed-off-by: shishir gowda <sgowda@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Signed-off-by: Vijaikumar M <vmallika@redhat.com>
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
Signed-off-by: Joseph Fernandes <josferna@redhat.com>
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/7128
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>