libglusterfs devel package headers were referenced in code using the
include semantics of a program's own headers ("header.h"). While this
works, it can be improved, especially when dealing with out-of-tree
xlator builds or out-of-tree devel package usage in general.
Towards this, the following changes are done:
- moved all devel headers under a glusterfs directory
- included these headers using the system header notation (<>) in all
code outside of libglusterfs
- included these headers using the own-program notation ("") within
libglusterfs
This change, although big, only moves the headers around and corrects
how they are included from other sources. It helps us include the
libglusterfs headers correctly, without namespace conflicts.
Change-Id: Id2a98854e671a7ee5d73be44da5ba1a74252423b
Updates: bz#1193929
Signed-off-by: ShyamsundarR <srangana@redhat.com>
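For illustration, a minimal sketch of the two include styles after this
change; the specific header names are examples, assuming the new layout:

    /* Outside libglusterfs (e.g. an out-of-tree xlator build): system
     * notation, resolved through the devel package's include path. */
    #include <glusterfs/glusterfs.h>
    #include <glusterfs/xlator.h>

    /* Within libglusterfs itself: own-program notation, resolved
     * relative to the source tree. */
    #include "glusterfs/glusterfs.h"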
|
CID: 1394649 1394657
Issue: Explicit null dereferenced
Change-Id: Ic1040ffa5548e1ecd49cfdc9a8716be445cbdf0f
Updates: bz#789278
Signed-off-by: Susant Palai <spalai@redhat.com>
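For context, this Coverity class flags a pointer that is known to be
NULL on some path and is dereferenced anyway. A generic illustration of
the defect and its usual fix, not the actual code behind these CIDs:

    struct entry {
        char *name;
    };

    static const char *
    entry_name(struct entry *e)
    {
        if (!e) /* the fix: guard the path where e is explicitly NULL */
            return "(none)";
        return e->name;
    }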
|
Change-Id: Ia84cc24c8924e6d22d02ac15f611c10e26db99b4
Signed-off-by: Nigel Babu <nigelb@redhat.com>
|
Add classification to those translators that already have `xlator_api_t`
defined and used.
Updates: #430
Change-Id: I9d2772cb2c4ed4ab06aaa546500cf3b7d00bddac
Signed-off-by: Amar Tumballi <amarts@redhat.com>
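A sketch of what such a classification looks like in a translator's
xlator_api_t; the .category field and GF_MAINTAINED value follow this
series, while the identifier and the elided entries are illustrative:

    #include <glusterfs/xlator.h>

    xlator_api_t xlator_api = {
        /* ... the xlator's usual init/fini/fops/cbks entries ... */
        .identifier = "example-xlator", /* illustrative name */
        .category = GF_MAINTAINED,      /* the classification added */
    };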
|
Addresses CID: 1394648, 1394653
Change-Id: Ie75d4a268bba090faa5c3fe0e87f0e5cef3ff773
updates: bz#789278
Signed-off-by: Vijay Bellur <vbellur@redhat.com>
|
This is a plugin which provides an interface to retrieve files from
Amazon S3 that have been archived into S3.
Users need to provide the plugin's configuration for cloudsync to
retrieve the file from S3.
TODO:
1. A separate commit to the developer-guide will describe the usage
of this plugin in more detail.
2. Create target files in the AWS bucket with "gfid" names, which
helps avoid name collisions.
Change-Id: I2e4a586f4e3f86164de9178e37673a07f317e7d9
Updates: #387
Signed-off-by: Susant Palai <spalai@redhat.com>
|
This patch brings in the configuration option for plugins.
For a new plugin, an entry has to be added to the cs_plugin structure,
e.g.:
struct cs_plugin plugins[] = {
    {
        .name = "amazons3",
        .library = "libamazons3.so",
        .description = "amazon s3 store."
    },
    {.name = NULL},
};
The .library field gives the name of the shared library for the plugin.
To configure a plugin, the "feature.cloudsync-storetype" option needs to
be set to the remote-store type, e.g.:
gluster volume set VOLNAME cloudsync-storetype amazons3
This should be the same as the .name field in the cs_plugin structure.
cs_init will pick this up at run time to load the plugin.
Change-Id: I2cec10b206f71ac4e71d472631a3a5badf278b59
fixes: bz#1576842
Signed-off-by: Susant Palai <spalai@redhat.com>
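A minimal sketch of how such run-time loading is typically done with
dlopen(); cs_load_plugin and the struct layout here are illustrative
assumptions, not cloudsync's actual internals:

    #include <dlfcn.h>
    #include <string.h>

    struct cs_plugin {
        char *name;
        char *library;
        char *description;
    };

    static struct cs_plugin plugins[] = {
        {.name = "amazons3",
         .library = "libamazons3.so",
         .description = "amazon s3 store."},
        {.name = NULL},
    };

    /* Hypothetical helper: map the configured store type to its entry
     * in plugins[] and load the matching shared library, as cs_init is
     * described to do at run time. */
    static void *
    cs_load_plugin(const char *storetype)
    {
        for (int i = 0; plugins[i].name; i++) {
            if (strcmp(plugins[i].name, storetype) == 0)
                return dlopen(plugins[i].library, RTLD_NOW | RTLD_LOCAL);
        }
        return NULL; /* unknown store type */
    }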
|
Problem:
The values for inode/fd were populated from the ctx received
from the server xlator.
Without brickmux, every brick of a volume belonged to its own
process, so searching by the server xlator and populating from
it worked.
With brickmux, a number of bricks can be confined to a single
process. These bricks can be from different volumes too (if
we use the max-bricks-per-process option).
If they are from different volumes, using the server xlator
to populate the status causes problems.
Fix:
Use the brick to validate and populate the inode/fd status.
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Change-Id: I2543fa5397ea095f8338b518460037bba3dfdbfd
fixes: bz#1566067
|
spec-files:
https://review.gluster.org/#/c/18854/
Overview:
* Cloudsync maintains three file states in its inode-ctx, i.e.
1 - LOCAL,
2 - REMOTE,
3 - DOWNLOADING.
* A data-modifying fop is allowed only if the state is LOCAL.
If the state is REMOTE or DOWNLOADING, the client will download
the file, or wait for a download initiated by another client to
finish.
* Multiple downloads and uploads from different clients are
synchronized by inodelk.
* In POSIX a state check is done (part of a different commit) before
allowing the fop to continue. If the state is REMOTE/DOWNLOADING,
the fop is unwound with EREMOTE. The client will then download the
file and continue with the fop again.
* Basic algorithm for a fop (say a write fop; a C sketch follows
this list):
- If LOCAL -> resume fop
- If REMOTE ->
    - INODELK
    - STAT (this gets the state and heals it if needed)
    - DOWNLOAD
    - resume fop
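A minimal sketch of the LOCAL/REMOTE/DOWNLOADING gate described above;
the enum and function names are illustrative assumptions, not
cloudsync's actual symbols:

    #include <errno.h>

    enum cs_file_state {
        CS_LOCAL = 1,
        CS_REMOTE = 2,
        CS_DOWNLOADING = 3,
    };

    /* Decide whether a data-modifying fop may proceed. Anything other
     * than LOCAL is unwound with EREMOTE; the client then takes the
     * inodelk, STATs to refresh (and heal) the state, downloads, and
     * retries the fop. */
    static int
    cs_allow_modifying_fop(enum cs_file_state state)
    {
        if (state == CS_LOCAL)
            return 0;    /* resume the fop */
        return -EREMOTE; /* caller must download (or wait) first */
    }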
Note:
* Developers will need to write plugins for download, based on the
remote store they choose. In phase 1, support will be added for
one remote store per volume. In the future, more options for multiple
remote stores will be explored.
TODOs:
- Implement stat/lookup/readdirp to return size info from the xattr
- Make plugins configurable
- Implement the unlink fop
- Add metrics collection
- Add sharding support
Design Contributions:
Aravinda V K <avishwan@redhat.com>
Amar Tumballi <amarts@redhat.com>
Ram Ankireddypalle <areddy@commvault.com>
Susant Palai <spalai@redhat.com>
updates: #387
Change-Id: Iddf711ee7ab4e946ae3e472ff62791a7b85e6d4b
Signed-off-by: Susant Palai <spalai@redhat.com>