Made the below changes:
tests/basic/uss.t: removed the older way of getting the list of snapshots
bugs/bug/bug-1109770.t: also added a USS disable test to check snapd behavior
Change-Id: I57b6bc8fa82bcaa544f483ad382e1bb4d11ef122
BUG: 1092850
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/8081
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: I2b5784c48eedcccb17690de438addd29075926bd
BUG: 1092850
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/8104
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Problem: Currently, in the nameless lookup code path, even if layout
anomalies are detected at the end of the lookup, no layout healing is
done, as there is no code to heal it. So there can be a race between
mkdir and lookup. Assume a mkdir is in progress from one mount point,
say M1: directories are created on some nodes, but the layout is not
set yet. Now if a nameless lookup goes from M2, the lookup will be
successful, as the directory is present on some of the nodes, but it
won't heal the layout. If a create fop follows that lookup, the file
creation will fail because the layout is absent.
Fix: Included the layout self-heal code in the nameless lookup path.
At the end of the lookup, the layout will be computed as it would have
been in a named lookup, but it will be set only on those nodes where
the directory is present. So if a create fop follows, the probability
of getting a subvolume with a proper hash-range is now high, which
reduces the race window.
Other: Whenever a directory is created, we have to choose a brick from
which we start allocating the layout in a circular fashion. To
calculate this starting brick, I have changed the candidate from the
name of the directory to the gfid of the directory. But to compute
where a given file belongs, we will still use the name of the file:
the hash computed from the name of the file should fall in one of the
directory's hash ranges. The calculation of the hash for a file acts
as a consumer, and the setting of the directory layout based on the
gfid acts as a producer; the two are independent of each other.
Change-Id: I3808c55082cd1b5c72d2c77cbbc063f55aa38bee
BUG: 1095888
Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
Reviewed-on: http://review.gluster.org/7493
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
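
A minimal sketch of the gfid-based starting-brick selection described
above, assuming a stand-in FNV-1a hash; DHT's real hash function and
layout code differ:

```c
#include <stdint.h>
#include <stdio.h>

/* FNV-1a over the 16-byte gfid (stand-in for DHT's real hash). */
static uint32_t
gfid_hash(const unsigned char gfid[16])
{
    uint32_t h = 2166136261u;
    for (int i = 0; i < 16; i++) {
        h ^= gfid[i];
        h *= 16777619u;
    }
    return h;
}

/* Start allocating hash ranges from this subvolume, walking the list
 * circularly; a file's placement still comes from hashing its *name*
 * into one of the resulting ranges. */
static int
layout_start_subvol(const unsigned char gfid[16], int subvol_count)
{
    return gfid_hash(gfid) % subvol_count;
}

int main(void)
{
    unsigned char gfid[16] = { 0xde, 0xad, 0xbe, 0xef };
    printf("start subvol: %d\n", layout_start_subvol(gfid, 4));
    return 0;
}
```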
Change-Id: I9266bbf4f67a2135f9a81b32fe88620be11af6ea
BUG: 1109889
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/8084
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Kaushal M <kaushal@redhat.com>
Change-Id: I7169be3532232754b9461c4e1b27bf6bc857f7a6
BUG: 1092850
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/8083
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: I702372c6c8341b54710c531662e3fd738cfb5f9a
BUG: 1109770
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/8076
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Replaced sleep with EXPECT_WITHIN.
A full heal doesn't generate indices until the files/dirs are
re-created, so wait until they are re-created and then wait for heal
completion.
Change-Id: I82399f6a17f94ecc101db45b83d8ef7bfa9c64dd
BUG: 1092850
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/8069
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
With change 28209283a67f13802cc0c1d3df07c676926810a2, the root squash option
is enabled by default even for trusted clients. This prevented the quota
auxiliary mount from setting the limit.
This patch adds the quota aux mount process to the list of 'special' clients
which have root squash disabled by default.
Change-Id: Ie6583dd3deb170563daf001239c51bcff1ce078b
BUG: 1104692
Signed-off-by: Varun Shastry <vshastry@redhat.com>
Reviewed-on: http://review.gluster.org/7967
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
Change-Id: I355ae02bed54753480279ddb058cc4b19ace6792
BUG: 1092850
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/8068
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
feature.
This patch provides an interface to enable or disable the auto-delete
feature.
Syntax : gluster snapshot config auto-delete <enable/disable>
DETAILS :
1) When the auto-delete feature is disabled and the soft-limit is
reached, the user is given a warning about exceeding the soft-limit
along with the successful snapshot creation message (the oldest
snapshot is not deleted). Upon reaching the hard-limit, further
snapshot creation is not allowed.
Example :
------------------------------------------------------------------
|Case - 1: Upon reaching soft-limit
|
|Snapshot create : snap successfully created.
|Warning : soft-limit of volume (vol) is reached. Snapshot creation
|is not possible once hard-limit is reached.
|
|-----------------------------------------------------
|Case - 2: Upon reaching hard-limit
|
|Snapshot create : snap creation failed.
|Error : hard-limit of volume (vol) is reached, hence it is not
|possible to take further snapshots. Please delete a few snapshots
|of the volume (vol) before taking another snapshot.
------------------------------------------------------------------
2) When the auto-delete feature is enabled, then as soon as the
soft-limit is reached the oldest snapshot is deleted for every
successful snapshot creation (same as the existing behaviour). This
ensures that the number of snapshots created does not exceed
snap-max-hard-limit.
Change-Id: Ie3ca64bbd2c763371f541cd2e378314e73b695b4
BUG: 1105415
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/8017
Tested-by: Justin Clift <justin@gluster.org>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
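
A minimal sketch of the decision logic described above, with the
limits expressed as absolute counts for simplicity; the type and
function names are illustrative, not glusterd's actual code:

```c
#include <stdbool.h>
#include <stdio.h>

typedef enum { SNAP_OK, SNAP_OK_WARN_SOFT, SNAP_ERR_HARD } snap_status_t;

/* Decide what a snapshot create should do, given the current snapshot
 * count, the limits, and whether auto-delete is enabled. */
static snap_status_t
snap_create_check(unsigned count, unsigned soft, unsigned hard,
                  bool auto_delete, bool *delete_oldest)
{
    *delete_oldest = false;
    if (count >= hard)
        return SNAP_ERR_HARD;        /* creation refused */
    if (count >= soft) {
        if (auto_delete) {
            *delete_oldest = true;   /* keep count under the hard limit */
            return SNAP_OK;
        }
        return SNAP_OK_WARN_SOFT;    /* create, but warn the user */
    }
    return SNAP_OK;
}

int main(void)
{
    bool del = false;
    snap_status_t s = snap_create_check(10, 9, 12, true, &del);
    printf("status=%d delete_oldest=%d\n", s, del);
    return 0;
}
```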
Problem with script:
EXPECT_WITHIN fails the test if the command it executes fails.
There is a possibility that the file the script tries to 'cat' may not
exist. In those cases it would fail, leading to spurious failures.
Fix:
Add a function which returns an empty string when the file doesn't
exist and 'cat's the file when it does.
Change-Id: I0abfb343f2fce1034ee4b5b680e3783c4f6e8486
BUG: 1092850
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/8057
Reviewed-by: Sachin Pandit <spandit@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
I found that the order of execution in afr-v2 self-heal sometimes
causes the links to disappear. I need to fix that issue and then
submit this test again.
Change-Id: Ia886feb796b7854645813f486b7b7ac4e944ed17
BUG: 1101143
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/8055
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: I39cc497f12c83aa055acb6e88e4c3e1e8774e577
BUG: 1089668
Signed-off-by: ggarg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/8050
Reviewed-by: Sachin Pandit <spandit@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Problem :
This test case checks whether a global option is updated when glusterd
goes down and comes back up. There might be a scenario where the
handshake is not yet complete and we issue a volume info in between;
in that case, when we grep for a key in the gluster volume info
output, it might not be there yet. Hence, to buy some time, we can
issue a peer status command and verify that the peer count is correct.
Change-Id: I2933d55ce8c80517a555fa3c2e4cd768cde30abf
BUG: 1104642
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/8041
Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Problem:
A directory rename while a brick is down can cause the gfid handle of
that directory to be deleted until the next lookup happens on that
directory.
*) Self-heal does not have the intelligence to detect renames at the
moment, so it has to delete the directory 'd' using special flags,
because it has to perform an 'rm -rf' of that directory as it is not
empty. The posix xlator implements this by renaming the deleted
directory into the 'landfill' directory in '.glusterfs', where the
janitor thread performs the actual rm -rf by traversing the directory.
The janitor thread wakes up every 10 minutes to check if there are any
directories to be deleted, and deletes them. As part of deleting, it
also deletes the gfid-handles.
Steps to hit the problem:
1) On a replicate volume, create a directory 'd' with a file 'f' in
it, so that the directory 'd' is not empty.
2) Bring one of the bricks down (call it brick-a; the other one is
brick-b).
3) Rename 'd' to 'd1'.
4) When brick-a comes online again, self-heal deletes directory 'd'
and creates directory 'd1' on brick-a for performing self-heal. So on
brick-a, the gfid-handle pointing to 'd' is deleted and recreated to
point to 'd1'.
5) The directory 'd' with all its directory hierarchy (for now just
the file 'f') ends up under the 'landfill' directory.
6) The janitor thread wakes up and deletes directory 'd' and the
gfid-handle of 'd' without realizing that the handle now points to
'd1'. Thus 'd1' loses its gfid-handle.
Fix:
Delete the gfid-handle for a directory only when the gfid-handle is
stale.
Change-Id: I21265b3bd3852f0967d916aaa21108ae5c9e7373
BUG: 1101143
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/7879
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
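
A minimal sketch of the "delete only when stale" check described in
the fix; the helper name is hypothetical and the posix xlator's real
handle layout differs. A gfid handle is modelled here as a symlink
whose target names the directory it belongs to:

```c
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

/* Remove the gfid handle only if it still points at the directory we
 * are deleting; if a rename has re-pointed it (to 'd1' in the commit
 * message's example), it must be preserved. */
static int
unlink_gfid_handle_if_stale(const char *handle, const char *dir_being_deleted)
{
    char target[PATH_MAX];
    ssize_t len = readlink(handle, target, sizeof(target) - 1);
    if (len < 0)
        return -1;             /* handle already gone or unreadable */
    target[len] = '\0';
    if (strcmp(target, dir_being_deleted) != 0)
        return 0;              /* not stale: points to the renamed dir */
    return unlink(handle);     /* stale: safe to delete */
}

int main(void)
{
    /* with no such symlink present this simply reports an error */
    int ret = unlink_gfid_handle_if_stale("/tmp/gfid-handle", "/tmp/dir-d");
    printf("ret = %d\n", ret);
    return 0;
}
```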
volume information.
Problem : Global options maintained by glusterd were getting synced
only when there was a change in volume information.
Solution : Import the global options irrespective of changes in volume
information.
Change-Id: I9e59b3cb25bdc19601a09fcf8df2e31a8481ece0
BUG: 1104642
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/7970
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
code path.
Problem : If we try to set the volume snap limit to more than 256, it
always says the value cannot exceed 256, irrespective of the system
max limit.
Solution : Don't do the validation on the CLI side.
Change-Id: I292c0bc91a1806cd4906fca0151dd98135e6e49a
BUG: 1098122
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/7777
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
glusterd_brick_op()
In glusterd_brick_op(), the txn_id must be fetched before failing the
transaction for any other reason. Moved the fetching of txn_id to the
beginning of the function, and initialized txn_id to
priv->global_txn_id where it wasn't initialized.
Change-Id: I44d7daa444f00a626f24670c92324725f6c5fb35
BUG: 1102656
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/7926
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
PROBLEM:
As part of file creation, DHT sends a statfs call to all of its
sub-volumes and expects in return the local space consumption and
availability of each one of them. This information is used by DHT to
ensure that at least min-free-disk space is left on each sub-volume
when there ARE other sub-volumes with more space available.
But when quota-deem-statfs is enabled, the quota xlator on every brick
unwinds the statfs call with the volume-wide consumption of disk
space. This leads to a miscalculation in the min-free-disk algorithm,
at some point misleading DHT into thinking all sub-volumes have equal
available space, in which case DHT keeps sending new file creates to
subvol-0, causing it to become 100% full even though there ARE other
subvols with ample space available.
FIX:
The fix is to make quota_statfs() behave as if the quota xlator
weren't enabled, thereby making every brick return only its local
consumption and disk space availability.
Change-Id: I211371a1eddb220037bd36a128973938ea8124c2
BUG: 1099890
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/7845
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
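
A minimal sketch of a min-free-disk style subvolume check, showing why
identical volume-wide statfs replies defeat it; the names are
illustrative and DHT's real placement logic (which diverts from the
hashed subvolume only when it is below min-free-disk) differs:

```c
#include <stdio.h>

struct subvol_stat {
    unsigned long long free_bytes;
    unsigned long long total_bytes;
};

/* Pick a subvolume with at least min_free percent available. If every
 * brick unwinds the same volume-wide numbers (the quota-deem-statfs
 * bug described above), this always settles on subvol 0. */
static int
pick_subvol(const struct subvol_stat *st, int n, double min_free)
{
    for (int i = 0; i < n; i++) {
        double pct = 100.0 * st[i].free_bytes / st[i].total_bytes;
        if (pct >= min_free)
            return i;
    }
    return 0; /* fall back to the hashed subvolume */
}

int main(void)
{
    struct subvol_stat st[2] = { { 10, 100 }, { 90, 100 } };
    printf("chosen subvol: %d\n", pick_subvol(st, 2, 20.0));
    return 0;
}
```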
Change-Id: Idbf27dbe088e646a8ab81cedc5818413795895ea
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Signed-off-by: Anand Subramanian <anands@redhat.com>
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/7700
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
node_state.info
credit: kaushal@redhat.com
spalai@redhat.com
Change-Id: I08d0771e2168a4a6ebd473e8a937b8b2eda1341a
BUG: 1075087
Signed-off-by: Susant Palai <spalai@redhat.com>
Reviewed-on: http://review.gluster.org/7214
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Also fixed nfs.rc so that the regression build works on my Fedora VM.
Change-Id: Ife36343bf1a590430e24065b9bcdf5bed3ae546d
BUG: 1092850
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/7837
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Change-Id: I479ab941b3b2da3b16f624400fbd300f08326268
BUG: 1092850
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/7799
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Change-Id: I59a5e0cb78f2b670761a65272b8ab1d7bdb3668a
BUG: 1092850
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/7773
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
In the barrier notify function, if we fail to set the barrier option,
execution goes to default_notify, which returns 0, and the command
returns success.
Fix : We need not call default_notify when handling
GF_EVENT_TRANSLATOR_OP in the barrier xlator's notify.
Change-Id: Ia2c361b43cca7791c29829d69dcd6fc7923102f6
BUG: 1092841
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/7609
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Sachin Pandit <spandit@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
The new volume option 'server.manage-gids' can be enabled in
environments where a user belongs to more than the current absolute
maximum of 93 groups. This option triggers the following behavior:
1. The AUTH_GLUSTERFS structure sent by GlusterFS clients (fuse, nfs or
libgfapi) will contain only one (1) auxiliary group, instead of
a full list. This reduces network usage and prevents problems in
encoding the AUTH_GLUSTERFS structure which should fit in 400 bytes.
2. The single group in the RPC calls received by the server is replaced
by resolving the groups server-side. Permission checks and the like in
lower xlators are applied against the full list of groups the user
belongs to, not the single auxiliary group that the client sent.
Change-Id: I9e540de13e3022f8b63ff893ecba511129a47b91
BUG: 1053579
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/7501
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-by: Anand Avati <avati@redhat.com>
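
A minimal sketch of server-side group resolution as described in point
2, using the standard getgrouplist() call; this is not the actual
brick-side code:

```c
#include <grp.h>
#include <pwd.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

static int
print_server_side_groups(uid_t uid)
{
    struct passwd *pw = getpwuid(uid);
    if (!pw)
        return -1;

    gid_t groups[128];
    int ngroups = 128;
    if (getgrouplist(pw->pw_name, pw->pw_gid, groups, &ngroups) < 0) {
        /* user is in more than 128 groups; a real implementation
         * would retry with a buffer sized from the updated ngroups */
        return -1;
    }
    for (int i = 0; i < ngroups; i++)
        printf("gid: %u\n", (unsigned)groups[i]);
    return 0;
}

int main(void) { return print_server_side_groups(getuid()) ? 1 : 0; }
```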
- use '%' when we mean it for clarity
- in bash we need to evaluate counter decrements
Change-Id: Ibd17126945e8a335fa2671d658a2e0c71049fd1e
BUG: 874554
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-on: http://review.gluster.org/7687
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: I2d6ebae3ced1910f2dee43eeb9fc430e9f31073f
BUG: 1061685
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/7587
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Change-Id: I635fc0fa955b33590f1c5b4dfec22d591ea8575c
BUG: 1032894
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/6592
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
As glusterd_do_replace_brick() is spawned through gf_timer_call_after(),
by the time it is called the event is freed and the txn_id is lost.
Hence use a calloc'd copy, which is freed as part of the rb_ctx dict.
Change-Id: I3e309fe1a7ba96ad1d1ce01f4d2aa18178f59244
BUG: 1095097
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/7686
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
The meta translator exposes details about glusterfs itself
in the form of a virtual namespace.
Loading the translator on the client side creates the
meta virtual view under $mntpoint/.meta by default. The
directory is not listed (even with ls -a) and can be
accessed by doing a "cd /mnt/.meta"
Change-Id: I5ffdf39203841a9562a8280a1f79dc76d4dded5d
BUG: 1089216
Signed-off-by: Anand Avati <avati@redhat.com>
Reviewed-on: http://review.gluster.org/7509
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
While creating a volume or adding a brick, the validation against
_POSIX_PATH_MAX is done on the absolute pathname instead of the
relative pathname, due to which a brickpath shorter than
_POSIX_PATH_MAX may still fail the validation if the directory length
is greater than (_POSIX_PATH_MAX - strlen(brickpath/volume name)).
This fix also addresses a cli response message correction which said
the volume file is too long instead of the brick path being too long
(when the brickpath length validation doesn't fail but the volfile
length validation does).
It is also important to note that with the current design of volfile
naming, it cannot be guaranteed that the volname and brickpath can
have a maximum of _POSIX_PATH_MAX characters.
Change-Id: I1283d1f9dea96ae797620002c8723719f26a866d
BUG: 1085330
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/7420
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
posix_handle_mkdir_hashes()
Whenever a new directory is created, its corresponding gfid file must
also be created. This was done by first calling MAKE_HANDLE_PATH() to
get the path of the gfid file, then calling
posix_handle_mkdir_hashes() to create the parent directories of the
gfid, and finally creating the soft-link.
In normal circumstances, the gfid we want to create won't exist and
MAKE_HANDLE_PATH() will return a simple path to the new gfid. However,
if the volume is damaged and a self-heal is running, it is possible
that we try to create an already existing gfid. In this case,
MAKE_HANDLE_PATH() will return a path to the directory instead of the
path to the gfid.
To solve this problem, every time a path to a gfid is needed,
MAKE_HANDLE_ABSPATH() is called instead of MAKE_HANDLE_PATH().
Change-Id: Ic319cc38c170434db8e86e2f89f0b8c28c0d611a
BUG: 859581
Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
Reviewed-on: http://review.gluster.org/5075
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Previously, when the user triggered 'gluster volume remove-brick
VOLNAME BRICK start', the command 'gluster volume rebalance <volname>
status' showed output even though the user had not triggered
'rebalance start'; and when the user triggered 'gluster volume
rebalance <volname> start', the command 'gluster volume remove-brick
VOLNAME BRICK status' showed output even though the user had not run
remove-brick start.
The regression test failed with the previous patch, so the files
test/dht.rc and test/bug/bug-973073 were edited to avoid the
regression failure.
Now, with this fix, rebalance and remove-brick status messages are
differentiated.
Signed-off-by: ggarg <ggarg@redhat.com>
Change-Id: I7f92ad247863b9f5fbc0887cc2ead07754bcfb4f
BUG: 1089668
Reviewed-on: http://review.gluster.org/7517
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Humble Devassy Chirammal <humble.devassy@gmail.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Implement the force option in snapshot create, i.e.
1) Creation of a snapshot fails if the original volume's bricks are
down.
2) With the force option, creation of the snapshot continues even if
the original volume's bricks are down.
This is the fix for bugs 1089527 and 1083502.
Change-Id: I8de0242adf8ee0af00db9fa8701d86fabc12e7fc
BUG: 1090042
Signed-off-by: Joseph Fernandes <josferna@redhat.com>
Reviewed-on: http://review.gluster.org/7520
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
command should only accept a decimal numeric value.
Syntax : gluster snapshot config [volname]
[snap-max-hard-limit <count>]
[snap-max-soft-limit <percentage>]
Problem : Snapshot config used to treat an alphanumeric value starting
with a digit as an integer (example: "9abc" was converted to "9").
Solution : Refined the code to check that the entered value is
numeric.
This patch also fixes some minor problems related to snapshot config:
1) Output correction in gluster snapshot config snap-max-soft-limit.
2) Setting the soft limit to greater than 100% displayed "Invalid
snap-max-soft-limit 0". The error message used to display "zero";
changed this to display the relevant value.
3) The output for setting a snap-max-hard-limit greater than the
allowed maximum needs a space in between.
Change-Id: Ie7c7045722fe57b2b3c50c873664b67c28eb3853
BUG: 1087203
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/7457
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
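
A minimal sketch of the strict numeric parsing the solution describes,
using the classic strtol()+endptr idiom; atoi()-style parsing accepts
"9abc" as 9, while this rejects it:

```c
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

static int
parse_limit(const char *s, long *out)
{
    char *end = NULL;
    errno = 0;
    long v = strtol(s, &end, 10);
    if (errno || end == s || *end != '\0' || v < 0)
        return -1;            /* not a clean decimal number */
    *out = v;
    return 0;
}

int main(void)
{
    long v;
    printf("9abc -> %s\n", parse_limit("9abc", &v) ? "rejected" : "ok");
    printf("256  -> %s\n", parse_limit("256", &v) ? "rejected" : "ok");
    return 0;
}
```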
This patch fixes the inconsistent quota usage logging when the soft limit is reached.
Change-Id: I47e7f1e65ed4b8306a999a20cc8f6b1772d47627
BUG: 1087198
Signed-off-by: Varun Shastry <vshastry@redhat.com>
Reviewed-on: http://review.gluster.org/7451
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Previously it failed without showing any information about why it
failed. Now, with this fix, when "." or any other non-alphanumeric
character is present in the volume name, it gives proper error
messages.
Change-Id: I17e8e69c08345c4d760f3ba333fe841e754bc9c8
BUG: 921215
Signed-off-by: ggarg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/7364
Reviewed-by: Humble Devassy Chirammal <humble.devassy@gmail.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
git@forge.gluster.org:~schafdog/glusterfs-core/osx-glusterfs
Working functionality on MacOSX
- GlusterD (management daemon)
- GlusterCLI (management cli)
- GlusterFS FUSE (using OSXFUSE)
- GlusterNFS (without NLM - issues with rpc.statd)
Change-Id: I20193d3f8904388e47344e523b3787dbeab044ac
BUG: 1089172
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Signed-off-by: Dennis Schafroth <dennis@schafroth.com>
Tested-by: Harshavardhana <harsha@harshavardhana.net>
Tested-by: Dennis Schafroth <dennis@schafroth.com>
Reviewed-on: http://review.gluster.org/7503
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
RFE: Support wildcards in "nfs.rpc-auth-allow" and
"nfs.rpc-auth-reject", e.g.
*.redhat.com
192.168.1[1-5].*
192.168.1[1-5].*, *.redhat.com, 192.168.21.9
Along with wildcards, support subnetworks or IP ranges, e.g.
192.168.10.23/24
The option will be validated for the following categories:
1) Anonymous i.e. "*"
2) Wildcard pattern i.e. a string containing any of '*', '?', '['
3) IPv4 address
4) IPv6 address
5) FQDN
6) subnetwork or IPv4 range
Currently this does not support IPv6 subnetworks.
Change-Id: Iac8caf5e490c8174d61111dad47fd547d4f67bf4
BUG: 1086097
Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
Reviewed-on: http://review.gluster.org/7485
Reviewed-by: Poornima G <pgurusid@redhat.com>
Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
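
A minimal sketch of wildcard matching for such host rules using the
standard fnmatch() call; Gluster's real validator covers all six
categories listed above, this shows only the wildcard case:

```c
#include <fnmatch.h>
#include <stdio.h>

static int
host_matches(const char *pattern, const char *host)
{
    return fnmatch(pattern, host, 0) == 0;
}

int main(void)
{
    printf("%d\n", host_matches("*.redhat.com", "build.redhat.com")); /* 1 */
    printf("%d\n", host_matches("192.168.1[1-5].*", "192.168.13.7")); /* 1 */
    printf("%d\n", host_matches("192.168.1[1-5].*", "192.168.21.9")); /* 0 */
    return 0;
}
```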
This is the initial patch for the Snapshot feature. Current patch
includes following features:
* Snapshot create
* Snapshot delete
* Snapshot restore
* Snapshot list
* Snapshot info
* Snapshot status
* Snapshot config
Change-Id: I2f46920c0d61c515f6a60e0f8b46fff886d9f6a9
BUG: 1061685
Signed-off-by: shishir gowda <sgowda@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Signed-off-by: Vijaikumar M <vmallika@redhat.com>
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
Signed-off-by: Joseph Fernandes <josferna@redhat.com>
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/7128
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
The GlusterFS protocol currently uses AUTH_GLUSTERFS_V2 in the RPC/AUTH
header. This header contains the uid, gid and auxiliary groups of the
user/process that accesses the Gluster Volume.
The AUTH_GLUSTERFS_V2 structure allows up to 65535 auxiliary groups to
be passed on. Unfortunately, the RPC/AUTH header is limited to 400 bytes
by the RPC specification: http://tools.ietf.org/html/rfc5531#section-8.2
In order to not cause complete failures on the client-side when trying
to encode a AUTH_GLUSTERFS_V2 that would result in more than 400 bytes,
we can calculate the expected size of the other elements:
1 | pid
1 | uid
1 | gid
1 | groups_len
XX | groups_val (GF_MAX_AUX_GROUPS=65535)
1 | lk_owner_len
YY | lk_owner_val (GF_MAX_LOCK_OWNER_LEN=1024)
----+-------------------------------------------
5 | total xdr-units
one XDR-unit is defined as BYTES_PER_XDR_UNIT = 4 bytes
MAX_AUTH_BYTES = 400 is the maximum, this is 100 xdr-units.
XX + YY can be 95 to fill the 100 xdr-units.
Note that the on-wire protocol has tighter requirements than the
internal structures. It is possible for xlators to use more groups and
a bigger lk_owner than that can be sent by a GlusterFS-client.
This change prevents overflows when allocating the RPC/AUTH header. Two
new macros are introduced to calculate the number of groups that fit in
the RPC/AUTH header, when taking the size of the lk_owner in account. In
case the list of groups exceeds the maximum possible, only the first
groups are passed over the RPC/GlusterFS protocol to the bricks.
A warning is added to the logs, so that most system administrators will
get informed.
Reducing the number of groups is not a new invention. The
RPC/AUTH header (AUTH_SYS or AUTH_UNIX) that NFS uses has a limit of 16
groups. Most, if not all, NFS-clients will reduce any bigger number of
groups to 16. (nfs.server-aux-gids can be used to workaround the limit
of 16 groups, but the Gluster NFS-server will be limited to a maximum of
93 groups, or fewer in case the lk_owner structure contains more items.)
Change-Id: I8410e59d0fd246d601b54b961d3ae9cb5a858c10
BUG: 1053579
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/7202
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
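
A minimal sketch of the header arithmetic above; the macro names here
are illustrative, not the ones the patch introduces. One XDR unit is 4
bytes, the AUTH header is capped at 400 bytes = 100 units, and
pid/uid/gid/groups_len/lk_owner_len take 5 fixed units, leaving 95
units to share between the groups and the lk_owner:

```c
#include <stdio.h>

#define BYTES_PER_XDR_UNIT 4
#define MAX_AUTH_BYTES     400
#define FIXED_XDR_UNITS    5   /* pid, uid, gid, groups_len, lk_owner_len */

/* lk_owner is sent as opaque bytes, rounded up to whole XDR units. */
static int
max_groups_for_lk_owner(int lk_owner_len)
{
    int lk_units = (lk_owner_len + BYTES_PER_XDR_UNIT - 1) / BYTES_PER_XDR_UNIT;
    return MAX_AUTH_BYTES / BYTES_PER_XDR_UNIT - FIXED_XDR_UNITS - lk_units;
}

int main(void)
{
    /* an 8-byte lk_owner leaves room for 93 groups */
    printf("max groups: %d\n", max_groups_for_lk_owner(8));
    return 0;
}
```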
BUG: 1084653
Change-Id: I057bbd2e50803344552314b32d2d0e6240bf9604
Signed-off-by: Justin Clift <justin@gluster.org>
Reviewed-on: http://review.gluster.org/7404
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
BUG: 1084147
Change-Id: Ie1ff8852a501690e681072c54620d305b5e20d6a
Signed-off-by: Justin Clift <justin@gluster.org>
Reviewed-on: http://review.gluster.org/7395
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Problem : When gluster volume remove-brick is executed without any
option, it defaults to a force commit, which results in data loss.
Fix : remove-brick can no longer be executed without an explicit
option; the user needs to provide the option on the command line, else
the command throws back a usage error.
Earlier usage : volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK> ...
[start|stop|status|commit|force]
Current usage : volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK> ...
<start|stop|status|commit|force>
Change-Id: I2a49131f782a6c0dcd03b4dc8ebe5907999b0b49
BUG: 1077682
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/7292
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Shyamsundar Ranganathan <sam.somari@gmail.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Problem : __is_file_migratable used to return ENOTSUP in all cases,
hence adding to the failure count, and the remove-brick status showed
failure for all the files.
Solution : Added 'ret = -2' to gf_defrag_handle_hardlink so it is
deemed a success. Otherwise dht_migrate_file would try to migrate each
of the hard links, which is not intended.
Change-Id: Iff74f6634fb64e4b91fc5d016e87ff1290b7a0d6
BUG: 1066798
Signed-off-by: Susant Palai <spalai@redhat.com>
Reviewed-on: http://review.gluster.org/7124
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
start
Problem : For the remove-brick start operation, all client volfiles
are reconfigured except the nfs server volfile. Hence, even after the
layout is fixed by the rebalance process, the nfs clients don't see
the change and go on creating directories and files in the
decommissioned brick, which leads to data loss after remove-brick
commit.
Solution : Reconfigure the nfs server volfile for remove-brick start.
credit: kaushal@redhat.com
spalai@redhat.com
Change-Id: Ib8cd8b45a9e1f888d5e00dff65cdf77c1613a2af
BUG: 1070734
Signed-off-by: Susant Palai <spalai@redhat.com>
Reviewed-on: http://review.gluster.org/7162
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
- Remove client side self-healing completely (opendir, openfd, lookup)
- Re-work readdir-failover to work reliably in case of NFS
- Remove unused/dead lock recovery code
- Consistently use xdata in both calls and callbacks in all FOPs
- Per-inode event generation, used to force inode ctx refresh
- Implement dirty flag support (in place of pending counts)
- Eliminate inode ctx structure, use read subvol bits + event_generation
- Implement inode ctx refreshing based on event generation
- Provide backward compatibility in transactions
- remove unused variables and functions
- make code more consistent in style and pattern
- regularize and clean up inode-write transaction code
- regularize and clean up dir-write transaction code
- regularize and clean up common FOPs
- reorganize transaction framework code
- skip setting xattrs in pending dict if nothing is pending
- re-write self-healing code using syncops
- re-write simpler self-heal-daemon
Change-Id: I1e4080c9796c8a2815c2dab4be3073f389d614a8
BUG: 1021686
Signed-off-by: Anand Avati <avati@redhat.com>
Reviewed-on: http://review.gluster.org/6010
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
A program that calls mmap() on a newly created sparse file, may receive
a SIGBUS signal. If SIGBUS is not handled, a segmentation fault will
occur and the program will exit.
A bug in the write-behind translator causes the writes of a sparse
file created with open(), seek(), write() to be cached: the last
write() may not be sent to the server until write-behind deems it
necessary.
* open(.., O_TRUNC, ...)/creat() the file, it is 0 bytes big
* seek() into the file, use offset 31
* write() 1 byte to the file
* the range from byte 0-30 are unwritten so called 'sparse'
The following illustration tries to capture this:
Legend:
[ = start of file
_ = unallocated/unwritten bytes
# = allocated bytes in the file
] = end of file
[_______________#]
| |
'- byte 0 '- byte 31
Without this change, reading from byte 0-30 will return an error, and
reading the same area through an mmap()'d pointer will trigger a SIGBUS.
Reading from this range did not trigger the outstanding write() to be
flushed. The brick that receives the read() (translated over the network
from mmap()) does not know that the file has been extended, and returns
-EINVAL. This error gets transported back from the brick to the
glusterfs-fuse client, and translated by the Linux kernel/VFS into
SIGBUS triggered by mmap().
In order to solve this, a new attribute to the wb_inode structure is
introduced; the current size of the file. All FOPs that can modify the
size, are expected to update wb_inode->size. This makes it possible for
extending writes with an offset bigger than EOF to mark the unwritten
area as modified/pending.
Change-Id: If5ba6646732e6be26568541ea9b12852a5d0b988
BUG: 1058663
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/6835
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
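
A small self-contained reproduction of the scenario above (against a
local filesystem it simply prints the expected bytes); on an affected
glusterfs-fuse mount, the mmap() read of the hole triggered SIGBUS
instead of returning zeroes:

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int fd = open("sparse.bin", O_RDWR | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return 1;

    lseek(fd, 31, SEEK_SET);           /* bytes 0-30 become a hole */
    if (write(fd, "x", 1) != 1)        /* file is now 32 bytes, sparse */
        return 1;

    char *p = mmap(NULL, 32, PROT_READ, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED)
        return 1;

    /* Reading the hole must return zeroes; the bug turned this into
     * SIGBUS because the last write() had not reached the server. */
    printf("byte 0 = %d, byte 31 = %c\n", p[0], p[31]);

    munmap(p, 32);
    close(fd);
    return 0;
}
```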
Problem:
When a replica brick is added to a volume, set-user-ID and set-group-ID
permission bits of files are not set correctly in the new brick. The issue
is in the posix_setattr() call where we do a chmod followed by a chown.
But according to the man pages for chown:
When the owner or group of an executable file are changed by an unprivileged
user the S_ISUID and S_ISGID mode bits are cleared. POSIX does not specify
whether this also should happen when root does the chown().
Fix:
Swap the chmod and chown calls in posix_setattr()
Change-Id: I094e47a995c210d2fdbc23ae7a5718286e7a9cf8
BUG: 1058797
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/6862
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
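
A minimal sketch of the swapped ordering in the fix, with illustrative
names rather than posix_setattr() itself: chown() runs first, then
chmod(), so the kernel's clearing of S_ISUID/S_ISGID on an ownership
change cannot undo the mode we set:

```c
#include <fcntl.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

static int
setattr_mode_and_owner(const char *path, mode_t mode, uid_t uid, gid_t gid)
{
    if (chown(path, uid, gid) < 0)   /* may clear setuid/setgid bits */
        return -1;
    if (chmod(path, mode) < 0)       /* restore the full mode last */
        return -1;
    return 0;
}

int main(void)
{
    int fd = open("testfile", O_CREAT | O_WRONLY, 0644);
    if (fd >= 0)
        close(fd);
    return setattr_mode_and_owner("testfile", 04755, getuid(), getgid()) ? 1 : 0;
}
```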