Remove bd_map xlator and CLI related changes.
Change-Id: If7086205df1907127c1a1fa4ba603f1c48421d09
BUG: 1028672
Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
Reviewed-on: http://review.gluster.org/5747
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Add support for a new ZEROFILL fop. Zerofill writes zeroes to a file in
the specified range. This fop will be useful when a whole file needs to
be initialized with zeroes (for example, for provisioning zero-filled
VM disk images or for scrubbing VM disk images).
The client/application can issue this fop for zeroing out, and the
Gluster server will zero out the required range of bytes, i.e.
server-offloaded zeroing. In the absence of this fop, the
client/application has to repeatedly issue write (zero) fops to the
server, which is a very inefficient method because of the overhead
involved in RPC calls and acknowledgements.
WRITESAME is a SCSI T10 command that takes a block of data as input and
writes the same data to other blocks; this write is handled completely
within the storage and is hence known as an offload. Linux now has
support for the SCSI WRITESAME command, which is exposed to the user in
the form of the BLKZEROOUT ioctl. The BD xlator can exploit the
BLKZEROOUT ioctl to implement this fop, so zeroing-out operations can be
completely offloaded to the storage device, making them highly
efficient.
The fop takes two arguments, offset and size. It zeroes out 'size' bytes
in an opened file starting from the 'offset' position.
This patch adds zerofill support to the following areas:
- libglusterfs
- io-stats
- performance/md-cache,open-behind
- quota
- cluster/afr,dht,stripe
- rpc/xdr
- protocol/client,server
- io-threads
- marker
- storage/posix
- libgfapi
Client applications can exploit this fop by using glfs_zerofill,
introduced in libgfapi. FUSE support for this fop has not been added, as
there is no system call for it.
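Below is a minimal client-side sketch of using glfs_zerofill; the volume
name, server address and file path are assumptions for illustration
(link with -lgfapi):

#include <glusterfs/api/glfs.h>
#include <fcntl.h>
#include <stdio.h>

int main (void)
{
        glfs_t *fs = glfs_new ("testvol");        /* hypothetical volume */
        if (!fs)
                return 1;
        glfs_set_volfile_server (fs, "tcp", "localhost", 24007);
        if (glfs_init (fs))
                return 1;
        glfs_fd_t *fd = glfs_open (fs, "/disk.img", O_RDWR);
        if (fd) {
                /* Zero out 1 MiB starting at offset 0; the zeroing is
                 * performed by the server (and, with the BD xlator,
                 * potentially by the storage device itself). */
                if (glfs_zerofill (fd, 0, 1048576))
                        perror ("glfs_zerofill");
                glfs_close (fd);
        }
        glfs_fini (fs);
        return 0;
}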
Changes from previous version 3:
* Removed redundant memory failure log messages
Changes from previous version 2:
* Rebased and fixed build error
Changes from previous version 1:
* Rebased for latest master
TODO:
* Add zerofill support to the trace xlator
* Expose the zerofill capability as part of gluster volume info
Here is a performance comparison of server-offloaded zerofill vs.
zeroing out using repeated writes.
[root@llmvm02 remote]# time ./offloaded aakash-test log 20
real 3m34.155s
user 0m0.018s
sys 0m0.040s
[root@llmvm02 remote]# time ./manually aakash-test log 20
real 4m23.043s
user 0m2.197s
sys 0m14.457s
[root@llmvm02 remote]# time ./offloaded aakash-test log 25;
real 4m28.363s
user 0m0.021s
sys 0m0.025s
[root@llmvm02 remote]# time ./manually aakash-test log 25
real 5m34.278s
user 0m2.957s
sys 0m18.808s
The argument 'log' is a file which we want to use for logging purposes,
and the third argument is the size in GB.
As we can see, there is a performance improvement of around 20% with
this fop.
Change-Id: I081159f5f7edde0ddb78169fb4c21c776ec91a18
BUG: 1028673
Signed-off-by: Aakash Lal Das <aakash@linux.vnet.ibm.com>
Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
Reviewed-on: http://review.gluster.org/5327
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Currently, to know the number of files to be healed, the user either has
to go to the backend and check the number of entries present in the
indices/xattrop directory of each brick, or run the gluster volume heal
vol-name info command. If a volume consists of a large number of bricks,
going to each backend and counting the entries is time-consuming, and
the info command also consumes a lot of time if the number of entries in
the indices/xattrop directory is very large.
So, as a feature, a new command is implemented.
Command 1: gluster volume heal vn statistics heal-count
This command will get the number of entries present in every brick of a
volume. The output displays only the entries count.
Command 2: gluster volume heal vn statistics heal-count
replica 192.168.122.1:/home/user/brickname
This form is used if we are concerned with just one replica: providing
any one brick of a replica will get the number of entries to be healed
for that replica only.
Example:
Replicate volume with replica count 2.
Backend status:
--------------
[root@dhcp-0-17 xattrop]# ls -lia | wc -l
1918
NOTE: Out of 1918, 2 entries are <xattrop-gfid> dummy
entries, so the actual number of entries to be healed is
1916.
[root@dhcp-0-17 xattrop]# pwd
/home/user/2ty/.glusterfs/indices/xattrop
Command output:
--------------
Gathering count of entries to be healed on volume volume3 has been successful
Brick 192.168.122.1:/home/user/22iu
Status: Brick is Not connected
Entries count is not available
Brick 192.168.122.1:/home/user/2ty
Number of entries: 1916
Change-Id: I72452f3de50502dc898076ec74d434d9e77fd290
BUG: 1015990
Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
Reviewed-on: http://review.gluster.org/6044
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
"gluster volume heal volumename statistics" command gives the summary
of the afr crawl done based on the entries present in the xattrop
directory. Whenever afr crawls are attempted, the beginning time of
crawl, end time of crawl, no of files healed, heal-failed count and
number of files in split brain are shown along with the type of the
crawl. If crawl is already in progress then it will give the number
of files healed, heal failed count and number of files in split-brain
from the beginning of the crawl and instead of telling the end time of
the crawl, "CRAWL IN PROGRESS" message will be shown.
Output format:
command: "gluster volume heal volume-name statistics"
Output:
Gathering afr crawl statistics on volume volume-name
has been successful
------------------------------------------------
Crawl statistics for brick no 0
Hostname of brick 192.168.122.248
Starting time of crawl: Wed Jul 10 15:52:38 2013
Ending time of crawl: Wed Jul 10 15:52:38 2013
Type of crawl: INDEX
No. of entries healed: 0
No. of entries in split-brain: 0
No. of heal failed entries: 0
Starting time of crawl: Wed Jul 10 15:52:38 2013
Ending time of crawl: Wed Jul 10 15:52:38 2013
Type of crawl: INDEX
No. of entries healed: 0
No. of entries in split-brain: 0
No. of heal failed entries: 0
------------------------------------------------
Crawl statistics for brick no 1
Hostname of brick 192.168.122.1
Starting time of crawl: Wed Jul 10 15:52:42 2013
Ending time of crawl: Wed Jul 10 15:52:42 2013
Type of crawl: INDEX
No. of entries healed: 0
No. of entries in split-brain: 0
No. of heal failed entries: 0
Starting time of crawl: Wed Jul 10 15:52:42 2013
Ending time of crawl: Wed Jul 10 15:52:42 2013
Type of crawl: INDEX
No. of entries healed: 0
No. of entries in split-brain: 0
No. of heal failed entries: 0
--------------------------------------------------
Change-Id: I10bf9d10b005741db9973fb1352e0dd59ed99aa9
BUG: 949400
Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
Reviewed-on: http://review.gluster.org/4790
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Commands:
gluster system:: execute gsec_create
gluster volume geo-rep <master> <slave-url> create [push-pem] [force]
gluster volume geo-rep <master> <slave-url> start [force]
gluster volume geo-rep <master> <slave-url> stop [force]
gluster volume geo-rep <master> <slave-url> delete
gluster volume geo-rep <master> <slave-url> config
gluster volume geo-rep <master> <slave-url> status
Geo-replication is now distributed: the session will be created, and
gsyncd will be spawned, on all relevant nodes instead of only one node.
geo-rep: collecting status-detail related data
Added a persistent store for saving information about TotalFilesSynced,
TotalSyncTime and TotalBytesSynced.
Changes in the status information in the socket:
Existing(Ex):
FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;
New(Ex):
FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;SyncTime=0.69978;
TotalSyncTime=2.890044;TotalFilesSynced=6;TotalBytesSynced=143640;
Persistent details stored in
/var/lib/glusterd/geo-replication/${mastervol}/${eSlave}-detail.status
Change-Id: I1db7fc13ffca2e415c05200b0109b1254067f111
BUG: 847839
Original Author: Avra Sengupta <asengupt@redhat.com>
Original Author: Venky Shankar <vshankar@redhat.com>
Original Author: Aravinda VK <avishwan@redhat.com>
Original Author: Amar Tumballi <amarts@redhat.com>
Original Author: Csaba Henk <csaba@redhat.com>
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/5132
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Vijay Bellur <vbellur@redhat.com>
While looking at the newly introduced procedures FALLOCATE and DISCARD,
it seems that these were added with already existing procedure numbers.
This makes the protocol incompatible with existing roll-outs.
It is very confusing when new procedures are added somewhere in the
middle of the array, because this causes the numbers of existing
procedures to change. It is much preferred to add new procedures at the
end of the array. This change not only corrects the enum that generates
the procedure numbers, but also the ordering in the client and server
fops-arrays, for clarity.
Correcting this greatly simplifies adding support for these new
procedures in Wireshark and will prevent confusion for people reading
network traces (with or without Wireshark).
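The illustrative enum below (procedure names assumed, not the actual
protocol table) shows why inserting mid-array breaks wire compatibility:

enum example_procnum {
        PROC_WRITE = 0,    /* 0 for both old and new peers            */
        PROC_FALLOCATE,    /* inserted here it takes the value 1...   */
        PROC_READ,         /* ...so READ silently moves from 1 to 2   */
        PROC_MAXVALUE      /* new procedures belong just before this  */
};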
Change-Id: Ib9e7978531d016c7230d756b855cb94cb0793b0f
BUG: 974976
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/5215
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Add support for the DISCARD file operation. Discard punches a hole
in a file in the provided range. Block de-allocation is implemented
via fallocate() (as requested via fuse and passed on to the brick
fs) but a separate fop is created within gluster to emphasize the
fact that discard changes file data (the discarded region is
replaced with zeroes) and must invalidate caches where appropriate.
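As a client-side illustration, a hole can be punched through a
FUSE-mounted gluster file with fallocate(2); the mount path here is an
assumption, and on Linux FALLOC_FL_PUNCH_HOLE must be combined with
FALLOC_FL_KEEP_SIZE:

#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/falloc.h>
#include <stdio.h>
#include <unistd.h>

int main (void)
{
        int fd = open ("/mnt/glusterfs/disk.img", O_RDWR);
        if (fd < 0)
                return 1;
        /* De-allocate 1 MiB at offset 4 MiB; subsequent reads of the
         * punched range return zeroes. */
        if (fallocate (fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                       4 * 1048576, 1048576))
                perror ("fallocate");
        close (fd);
        return 0;
}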
BUG: 963678
Change-Id: I34633a0bfff2187afeab4292a15f3cc9adf261af
Signed-off-by: Brian Foster <bfoster@redhat.com>
Reviewed-on: http://review.gluster.org/5090
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Implement support for the fallocate file operation. fallocate
allocates blocks for a particular inode such that future writes
to the associated region of the file are guaranteed not to fail
with ENOSPC.
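For illustration, a client on a FUSE mount (path assumed) can trigger
the new fop through the standard preallocation interface:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main (void)
{
        int fd = open ("/mnt/glusterfs/vm.img", O_RDWR | O_CREAT, 0644);
        if (fd < 0)
                return 1;
        /* Reserve 1 GiB up front; writes within this range are then
         * guaranteed not to fail with ENOSPC. posix_fallocate(3)
         * returns an error number instead of setting errno. */
        int err = posix_fallocate (fd, 0, (off_t)1 << 30);
        if (err)
                fprintf (stderr, "posix_fallocate: %s\n", strerror (err));
        close (fd);
        return 0;
}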
This patch adds fallocate support to the following areas:
- libglusterfs
- mount/fuse
- io-stats
- performance/md-cache,open-behind
- quota
- cluster/afr,dht,stripe
- rpc/xdr
- protocol/client,server
- io-threads
- marker
- storage/posix
- libgfapi
BUG: 949242
Change-Id: Ice8e61351f9d6115c5df68768bc844abbf0ce8bd
Signed-off-by: Brian Foster <bfoster@redhat.com>
Reviewed-on: http://review.gluster.org/4969
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Usage: gluster system:: uuid get
This is needed since we generate the uuid of a node in a lazy manner,
i.e. we generate a uuid for the node only on the first volume or peer
operation, when the node needs an external identity. With this command,
we can force[1] the uuid generation without a volume or peer operation
being performed.
[1]: Querying for the uuid (uuid get) forces the uuid to come into
existence.
Change-Id: I62c8b6754117756aa4d773dd48af4ddeb1a1d878
BUG: 971661
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/5175
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
This reverts commit dad16a51ba7e6b1c57529423c57257dbce97ee93
The test script was causing "silent" failures during execution.
Change-Id: I26dbb8ed22256071cb415cc3aff572ef8372600e
Reviewed-on: http://review.gluster.org/4268
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
This command is necessary when the local disk/filesystem containing a brick
is unexpectedly lost and then recreated. Since 961bc80c, trying to start
the brick will fail because the trusted.glusterfs.volume-id xattr is
missing, and if we can't start it then we can't replace-brick or self-heal
so we're stuck in a permanently degraded state. This command provides a
way to label the empty brick with the proper volume ID so that further
repair actions become possible.
Change-Id: I1c1e5273a018b7a6b8d0852daf111ddc3fddfdc2
BUG: 860297
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: http://review.gluster.org/4259
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
A new CLI command is added to support cloning/snapshotting of an LV
device. The syntax is:
$ gluster bd clone <volname>:<vg>/<lv> <newlv>
$ gluster bd snapshot <volname>:<vg>/<lv> <snap_lv> <size>
BUG: 805138
Change-Id: Idc2ac14525a3998329c742bf85a06326cac8cd54
Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
Reviewed-on: http://review.gluster.org/3719
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
CLI commands are added to create/delete an LV device.
The following command creates an LV in a given VG:
$ gluster bd create <volname>:<vgname>/<lvname> <size>
The following command deletes an LV in a given VG:
$ gluster bd delete <volname>:<vgname>/<lvname>
BUG: 805138
Change-Id: Ie4e100eca14e2ee32cf2bb4dd064b17230d673bf
Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
Reviewed-on: http://review.gluster.org/3718
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
A commonly faced problem among glusterfs users is: after a fresh
installation of glusterfs in a virtual machine, the VM image is cloned
to make multiple instances of the server. This breaks glusterd because,
right after glusterfs installation on the first boot, glusterd would
have created the node UUID, and this gets inherited into the clone. The
result is weird behavior at the time of peer probe, where glusterd does
not (yet) deal with UUID collisions in a user-friendly way.
To handle this, the gluster peer reset command is implemented; upon
execution it changes the uuid of the local glusterd.
Change-Id: If207dd2ad93ab94ef1a3253f409c21c442975f87
BUG: 811493
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/3637
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Feature-page:
http://www.gluster.org/community/documentation/index.php/Features/Server-quorum
Change-Id: I747b222519e71022462343d2c1bcd3626e1f9c86
BUG: 839595
Signed-off-by: Pranith Kumar K <pranithk@gluster.com>
Reviewed-on: http://review.gluster.org/3811
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Brings in a new rpc program MGMT_HANDSHAKE, which implements the op-version
handshake. This is required for bringing in the op-version feature as described
in http://www.gluster.org/community/documentation/index.php/Features/Opversion
Change-Id: I4333fd2714dbbd3a2a3fca5862cbb3c56615529e
BUG: 814534
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/3688
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
's/3_1/3_3/g' in case of glusterfs protocol
's/3_1_/_/g' in case of CLI and mgmt protocol
Change-Id: I6e6510d02c05f68f290c52ed284c04576326e12c
Signed-off-by: Amar Tumballi <amarts@redhat.com>
BUG: 764890
Reviewed-on: http://review.gluster.com/3632
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
A commonly faced problem among glusterfs users is: after a fresh
installation of glusterfs in a virtual machine, the VM image is cloned
to make multiple instances of the server. This breaks glusterd because,
right after glusterfs installation on the first boot, glusterd would
have created the node UUID, and this gets inherited into the clone. The
result is weird behavior at the time of peer probe, where glusterd does
not (yet) deal with UUID collisions in a user-friendly way.
With this patch, the peer which got the probe request will compare the
uuid of the machine which sent the probe request with its own uuid, and
send the proper error to the cli if the uuids are the same.
Change-Id: I091741ec863431fb6480a09a3f4c68a0906a3339
BUG: 811493
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.com/3612
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
glusterfs_ctx->notify can be used by any xlator to talk to
glusterfsd-mgmt.
Note: this is for any rpc communication initiated by the xlator, and not
from glusterd.
Change-Id: Ic0e4af106fe1e98d797ca621facda8839b87598a
BUG: 835757
Signed-off-by: shishir gowda <sgowda@redhat.com>
Reviewed-on: http://review.gluster.com/3618
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Note that the license was not changed in any of the following:
.../argp-standalone/...
.../booster/...
.../cli/...
.../contrib/...
.../extras/...
.../glusterfsd/...
.../glusterfs-hadoop/...
.../mod_clusterfs/...
.../scheduler/...
.../swift/...
The license was not changed in any of the non-building xlators. The
license was not changed in any of the xlators that seemed — to me — to
be clearly server-side only, e.g. protocol/server
Note too that copyright was changed along with the license; I did
not change the copyright in files where the license did not change.
If you find any errors or omissions, please don't hesitate to let me know.
The complete list of files with the license change is:
libglusterfs/src/byte-order.h
libglusterfs/src/call-stub.c
libglusterfs/src/call-stub.h
libglusterfs/src/checksum.c
libglusterfs/src/checksum.h
libglusterfs/src/circ-buff.c
libglusterfs/src/circ-buff.h
libglusterfs/src/common-utils.c
libglusterfs/src/common-utils.h
libglusterfs/src/compat-errno.c
libglusterfs/src/compat-errno.h
libglusterfs/src/compat.c
libglusterfs/src/compat.h
libglusterfs/src/daemon.c
libglusterfs/src/daemon.h
libglusterfs/src/defaults.c
libglusterfs/src/defaults.h
libglusterfs/src/dict.c
libglusterfs/src/dict.h
libglusterfs/src/event-history.c
libglusterfs/src/event-history.h
libglusterfs/src/event.c
libglusterfs/src/event.h
libglusterfs/src/fd-lk.c
libglusterfs/src/fd-lk.h
libglusterfs/src/fd.c
libglusterfs/src/fd.h
libglusterfs/src/gf-dirent.c
libglusterfs/src/gf-dirent.h
libglusterfs/src/globals.c
libglusterfs/src/globals.h
libglusterfs/src/glusterfs.h
libglusterfs/src/graph-print.c
libglusterfs/src/graph-utils.h
libglusterfs/src/graph.c
libglusterfs/src/hashfn.c
libglusterfs/src/hashfn.h
libglusterfs/src/iatt.h
libglusterfs/src/inode.c
libglusterfs/src/inode.h
libglusterfs/src/iobuf.c
libglusterfs/src/iobuf.h
libglusterfs/src/latency.c
libglusterfs/src/latency.h
libglusterfs/src/list.h
libglusterfs/src/lkowner.h
libglusterfs/src/locking.h
libglusterfs/src/logging.c
libglusterfs/src/logging.h
libglusterfs/src/mem-pool.c
libglusterfs/src/mem-pool.h
libglusterfs/src/mem-types.h
libglusterfs/src/options.c
libglusterfs/src/options.h
libglusterfs/src/rbthash.c
libglusterfs/src/rbthash.h
libglusterfs/src/run.c
libglusterfs/src/run.h
libglusterfs/src/scheduler.c
libglusterfs/src/scheduler.h
libglusterfs/src/stack.c
libglusterfs/src/stack.h
libglusterfs/src/statedump.c
libglusterfs/src/statedump.h
libglusterfs/src/syncop.c
libglusterfs/src/syncop.h
libglusterfs/src/syscall.c
libglusterfs/src/syscall.h
libglusterfs/src/timer.c
libglusterfs/src/timer.h
libglusterfs/src/trie.c
libglusterfs/src/trie.h
libglusterfs/src/xlator.c
libglusterfs/src/xlator.h
libglusterfsclient/src/libglusterfsclient-dentry.c
libglusterfsclient/src/libglusterfsclient-internals.h
libglusterfsclient/src/libglusterfsclient.c
libglusterfsclient/src/libglusterfsclient.h
rpc/rpc-lib/src/auth-glusterfs.c
rpc/rpc-lib/src/auth-null.c
rpc/rpc-lib/src/auth-unix.c
rpc/rpc-lib/src/protocol-common.h
rpc/rpc-lib/src/rpc-clnt.c
rpc/rpc-lib/src/rpc-clnt.h
rpc/rpc-lib/src/rpc-transport.c
rpc/rpc-lib/src/rpc-transport.h
rpc/rpc-lib/src/rpcsvc-auth.c
rpc/rpc-lib/src/rpcsvc-common.h
rpc/rpc-lib/src/rpcsvc.c
rpc/rpc-lib/src/rpcsvc.h
rpc/rpc-lib/src/xdr-common.h
rpc/rpc-lib/src/xdr-rpc.c
rpc/rpc-lib/src/xdr-rpc.h
rpc/rpc-lib/src/xdr-rpcclnt.c
rpc/rpc-lib/src/xdr-rpcclnt.h
rpc/rpc-transport/rdma/src/name.c
rpc/rpc-transport/rdma/src/name.h
rpc/rpc-transport/rdma/src/rdma.c
rpc/rpc-transport/rdma/src/rdma.h
rpc/rpc-transport/socket/src/name.c
rpc/rpc-transport/socket/src/name.h
rpc/rpc-transport/socket/src/socket.c
rpc/rpc-transport/socket/src/socket.h
xlators/cluster/afr/src/afr-common.c
xlators/cluster/afr/src/afr-dir-read.c
xlators/cluster/afr/src/afr-dir-read.h
xlators/cluster/afr/src/afr-dir-write.c
xlators/cluster/afr/src/afr-dir-write.h
xlators/cluster/afr/src/afr-inode-read.c
xlators/cluster/afr/src/afr-inode-read.h
xlators/cluster/afr/src/afr-inode-write.c
xlators/cluster/afr/src/afr-inode-write.h
xlators/cluster/afr/src/afr-lk-common.c
xlators/cluster/afr/src/afr-mem-types.h
xlators/cluster/afr/src/afr-open.c
xlators/cluster/afr/src/afr-self-heal-algorithm.c
xlators/cluster/afr/src/afr-self-heal-algorithm.h
xlators/cluster/afr/src/afr-self-heal-common.c
xlators/cluster/afr/src/afr-self-heal-common.h
xlators/cluster/afr/src/afr-self-heal-data.c
xlators/cluster/afr/src/afr-self-heal-entry.c
xlators/cluster/afr/src/afr-self-heal-metadata.c
xlators/cluster/afr/src/afr-self-heal.h
xlators/cluster/afr/src/afr-self-heald.c
xlators/cluster/afr/src/afr-self-heald.h
xlators/cluster/afr/src/afr-transaction.c
xlators/cluster/afr/src/afr-transaction.h
xlators/cluster/afr/src/afr.c
xlators/cluster/afr/src/afr.h
xlators/cluster/afr/src/pump.c
xlators/cluster/afr/src/pump.h
xlators/cluster/dht/src/dht-common.c
xlators/cluster/dht/src/dht-common.h
xlators/cluster/dht/src/dht-diskusage.c
xlators/cluster/dht/src/dht-hashfn.c
xlators/cluster/dht/src/dht-helper.c
xlators/cluster/dht/src/dht-inode-read.c
xlators/cluster/dht/src/dht-inode-write.c
xlators/cluster/dht/src/dht-layout.c
xlators/cluster/dht/src/dht-linkfile.c
xlators/cluster/dht/src/dht-mem-types.h
xlators/cluster/dht/src/dht-rebalance.c
xlators/cluster/dht/src/dht-rename.c
xlators/cluster/dht/src/dht-selfheal.c
xlators/cluster/dht/src/dht.c
xlators/cluster/dht/src/nufa.c
xlators/cluster/dht/src/switch.c
xlators/cluster/stripe/src/stripe-helpers.c
xlators/cluster/stripe/src/stripe-mem-types.h
xlators/cluster/stripe/src/stripe.c
xlators/cluster/stripe/src/stripe.h
xlators/features/index/src/index-mem-types.h ¹
xlators/features/index/src/index.c ¹
xlators/features/index/src/index.h ¹
xlators/performance/io-cache/src/io-cache.c
xlators/performance/io-cache/src/io-cache.h
xlators/performance/io-cache/src/ioc-inode.c
xlators/performance/io-cache/src/ioc-mem-types.h
xlators/performance/io-cache/src/page.c
xlators/performance/io-threads/src/io-threads.c
xlators/performance/io-threads/src/io-threads.h
xlators/performance/io-threads/src/iot-mem-types.h
xlators/performance/md-cache/src/md-cache-mem-types.h
xlators/performance/md-cache/src/md-cache.c
xlators/performance/quick-read/src/quick-read-mem-types.h
xlators/performance/quick-read/src/quick-read.c
xlators/performance/quick-read/src/quick-read.h
xlators/performance/read-ahead/src/page.c
xlators/performance/read-ahead/src/read-ahead-mem-types.h
xlators/performance/read-ahead/src/read-ahead.c
xlators/performance/read-ahead/src/read-ahead.h
xlators/performance/symlink-cache/src/symlink-cache.c
xlators/performance/write-behind/src/write-behind-mem-types.h
xlators/performance/write-behind/src/write-behind.c
xlators/protocol/auth/addr/src/addr.c ¹
xlators/protocol/auth/login/src/login.c ¹
xlators/protocol/client/src/client-callback.c
xlators/protocol/client/src/client-handshake.c
xlators/protocol/client/src/client-helpers.c
xlators/protocol/client/src/client-lk.c
xlators/protocol/client/src/client-mem-types.h
xlators/protocol/client/src/client.c
xlators/protocol/client/src/client.h
xlators/protocol/client/src/client3_1-fops.c
¹ Copyright only, license reverted to original
Change-Id: If560e826c61b6b26f8b9af7bed6e4bcbaeba31a8
BUG: 820551
Signed-off-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Reviewed-on: http://review.gluster.com/3304
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>
Event_notify can be used by others to communicate with glusterd.
A cbk event is also added for future use.
The req has an op and a dict.
The rsp has an op_ret, an op_errno and a dict.
With this, the rebalance process can update the status before exiting.
Signed-off-by: shishir gowda <shishirng@gluster.com>
Change-Id: If5c0ec00514eb3a109a790b2ea273317611e4562
BUG: 807126
Reviewed-on: http://review.gluster.com/3013
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>
The major changes are:
* "volume status" now supports getting details of the self-heal daemon
processes for replica volumes. A new CLI option "shd", similar to "nfs",
has been introduced for this. The "detail", "fd" and "clients" status
ops are not supported for self-heal daemons.
* The default/normal output of "volume status" has been enhanced to
contain information about nfs-server and self-heal daemon processes as
well. Some tweaks have been done to the cli output to show appropriate
output.
Also, changes have been done to rebalance/remove-brick status, so that
hostnames are displayed instead of uuids.
Change-Id: I3972396dcf72d45e14837fa5f9c7d62410901df8
BUG: 803676
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.com/3016
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kp@gluster.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>
Enables usage of the volume monitoring operations "volume status",
"volume top" and "volume profile" for nfs servers. These operations can
be performed on nfs servers by passing "nfs" as an option in the cli.
The output is similar to the normal brick output for these commands.
The new syntaxes for the changed commands are as below:
#gluster volume profile <VOLNAME> {start|info|stop} [nfs]
#gluster volume top <VOLNAME> {[open|read|write|opendir|readdir [nfs]]
|[read-perf|write-perf [nfs|{bs <size> count <count>}]]}
[brick <brick>] [list-cnt <count>]
#gluster volume status [all | <VOLNAME> [nfs|<BRICK>]]
[detail|clients|mem|inode|fd|callpool]
Change-Id: Ia6eb50c60aecacf9b413d3ea993f4cdd90ec0e07
BUG: 795267
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.com/2820
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kp@gluster.com>
Change-Id: Id92d3276e65a6c0fe61ab328b58b3954ae116c74
BUG: 763820
Signed-off-by: Pranith Kumar K <pranithk@gluster.com>
Reviewed-on: http://review.gluster.com/2775
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>
Currently (without this patch), on a disconnect the server cleans up the
transport, which in turn closes the fd's and releases the locks acquired
on those fd's by that client. On a reconnect, the client just reopens
the fd's but doesn't reacquire the locks. The application that had
previously acquired the locks is still under the assumption that it owns
those locks, which might meanwhile have been granted to other clients
(if they request them) by the server, leading to data corruption.
This patch allows the client to reacquire the fcntl locks (held on the
fd's) during the client-server handshake.
* The server identifies the client via process-uuid-xl (which is a
combination of uuid and client-protocol name, and is assumed to be
unique) and an lk-version number.
* The server maintains a list of (process-uuid-xl, lk-version) pairs for
the accepted connections. On a connect, the server traverses the list
for a matching pair; if a matching pair is not found the server returns
an lk-version with value 0, else it returns the lk-version it has in
store (see the sketch below).
* On a disconnect, the server and client enter a grace period, and on
completion of the grace period the client bumps up its lk-version number
(which means it will reacquire the locks the next time) and the server
will destroy the connection. If reconnection happens within the grace
period, the server will find the matching (process-uuid-xl, lk-version)
pair in its list, which guarantees that the fd's and their corresponding
locks are still valid for this client.
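A minimal sketch of the version lookup described above; the struct and
function names are assumptions, not the actual server code:

#include <stdint.h>
#include <string.h>

struct client_entry {
        char                *process_uuid_xl; /* uuid + client-protocol name */
        uint32_t             lk_version;
        struct client_entry *next;
};

/* Return the stored lk-version for a reconnecting client, or 0 when no
 * matching entry exists (i.e. the client must reacquire its locks). */
static uint32_t
lookup_lk_version (struct client_entry *list, const char *uuid_xl)
{
        for (; list; list = list->next)
                if (strcmp (list->process_uuid_xl, uuid_xl) == 0)
                        return list->lk_version;
        return 0;
}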
Configurable options:
To set the grace-timeout, the following options are used:
option server.grace-timeout value
option client.grace-timeout value
To enable or disable lk-heal:
option lk-heal [on|off]
The gluster volume set command can be used to set these configurable
options.
Change-Id: Id677ef1087b300d649f278b8b2aa0d94eae85ed2
BUG: 795386
Signed-off-by: Mohammed Junaid <junaid@redhat.com>
Reviewed-on: http://review.gluster.com/2766
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>
Rebalance will no longer use any maintenance clients; they are replaced
by syncops, with the volfile. Brickop (communication between the
glusterd <-> glusterfs processes) is used for the status and stop
commands.
Depth-first traversal of directories is maintained, but data is migrated
as and when encountered:
fix-layout (dir)
do
    complete migrate-data of dir
    fix-layout (subdir)
done
The rebalance state is saved in the volfile, for restartability.
A disconnect event and the pidfile state determine the defrag-status.
Signed-off-by: shishirng <shishirng@gluster.com>
Change-Id: Iec6c80c84bbb2142d840242c28db3d5f5be94d01
BUG: 763844
Reviewed-on: http://review.gluster.com/2540
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Change-Id: I8e7cd51d6e3dd968cced1ec4115b6811f2ab5c1b
BUG: 789858
Signed-off-by: Krishnan Parthasarathi <kp@gluster.com>
Reviewed-on: http://review.gluster.com/2552
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>
This patch enables the gluster cli to output data in XML format. XML
output can be obtained by passing "--xml" as an argument.
A new "volume list" command, which lists the volumes present in a
cluster, has been added. This can be used for obtaining a quick list of
volumes.
Several commands, including "volume top", "volume profile", "volume
status", "volume info" and "volume list", have custom XML output
routines. Other commands use one of the two generic output routines,
cli_xml_output_str() and cli_xml_output_dict().
NOTE: When using "all" for "volume status" and "volume info", the XML
output will have multiple roots.
Change-Id: I6117baa02ec06fda116177dbd401f66521263ac6
BUG: 790713
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.com/2753
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>
This patch enhances and extends the "volume status" command with information
obtained from the statedump of the bricks of volumes.
Adds new status types: clients, inode, fd, mem, callpool
The new syntax of "volume status" is,
#gluster volume status [all|{<volname> [<brickname>]
[misc-details|clients|inode|fd|mem|callpool]}]
Change-Id: I8d019718465bbc3de727653a839de7238f45da5c
BUG: 765495
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.com/2637
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kp@gluster.com>
So that operations can be done on an fd for extended attribute removal.
Change-Id: Ie026f1b53793aeb4ae33e96ea5408c7a97f34bf6
Signed-off-by: Amar Tumballi <amar@gluster.com>
BUG: 766571
Reviewed-on: http://review.gluster.com/778
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@gluster.com>
Change-Id: I0d7eb0ac67ad83e5f21b60cc2acc514ac602ea42
BUG: 772469
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.com/2604
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amar@gluster.com>
* Support "gluster volume status (all)" option to display all
volumes' status.
* On option "detail" appended to "gluster volume status *",
amount of storage free, total storage, and backend filesystem
details like inode size, inode count, free inodes, fs type,
device name of each brick is displayed.
* One can also obtain [detailed]status of only one brick.
* Format of the enhanced volume status command is:
"gluster volume status [all|<vol>] [<brick>] [detail]"
* Some generic functions have been added to common-utils:
skipword
get_nth_word
These functions enable parsing and fetching
of words in a sentence.
glusterd_get_brick_root (in glusterd)
These are self explanatory.
Change-Id: I6f40c1e19810f8504cd3b1786207364053d82e28
BUG: 765464
Signed-off-by: Rajesh Amaravathi <rajesh@redhat.com>
Reviewed-on: http://review.gluster.com/777
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amar@gluster.com>
There is no more backward compatibility between glusterd <-> glusterd.
Change-Id: Ibfcca1c7e315a90b2639c4cba8da19b11875051a
BUG: 3158
Reviewed-on: http://review.gluster.com/610
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Shishir Gowda <shishirng@gluster.com>
Reviewed-by: Krishnan Parthasarathi <kp@gluster.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>
Performs cleanup on the detached peer and in the cluster after a peer
detach, and also adds a new check before starting the detach.
Cleanup:
On the detached peer, cleanup removes the entries of those volumes on
the peer that do not have all their bricks on it. This prevents these
stale volumes from being added to a new cluster when the peer is
attached to one.
In the cluster, all those volumes which have all their bricks on the
detached peer are removed.
Checks:
Checks that all the peers in the cluster, except the peer being
detached, are online and connected before starting the detach. Using
force will bypass this check and perform the detach.
Change-Id: I4fef9ea3cc72ce8c4ce0a82b4ee8a1663a502061
BUG: 1926
Reviewed-on: http://review.gluster.com/431
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kp@gluster.com>
Changes:
1. Add a new 'volume statedump' command, that performs statedumps of
all the bricks in the volume and saves them in a specified location.
2. Add new server option 'server.statedump-path'.
3. Remove multiple function definitions in glusterd.h
Statedump Information:
The 'volume statedump' command performs statedumps on all the bricks in
a given volume. The syntax of the command is:
gluster volume statedump <VOLNAME> [type]...
Types include:
* all
* mem
* iobuf
* callpool
* priv
* fd
* inode
It defaults to 'all' when no type is specified.
The statedump files are created by default in the /tmp directory of the
server on which the bricks are present.
This path can be changed by setting the 'server.statedump-path' option.
The statedump files are named as:
<brick-name>.<pid of brick process>.dump
Change-Id: I01c0e1a8aad490da818e086d89f292bd2ed06fd4
BUG: 1964
Reviewed-on: http://review.gluster.com/321
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amar@gluster.com>
This command is used in the context of proactive self-heal for
replicated volumes. The user invokes the following command when (s)he
suspects that self-heal needs to be done on a particular volume:
gluster volume heal <VOLNAME>
Change-Id: I3954353b53488c28b70406e261808239b44997f3
BUG: 3602
Reviewed-on: http://review.gluster.com/454
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>
Mountbroker is configured in the glusterd volfile through a DSL which is
restricted enough to be able to appear in the role of the value of a
volfile knob.
Basically, the DSL describes set-theoretical requirements against the
option set which is sent by the cli (in the hope of getting a mount with
these options).
If the requirements are met, and the volume id and the uid who is to
"own" the mount can be unambiguously deduced from the given request,
glusterd does the mount with the given parameters.
The use case of geo-replication is sugared by means of volume options
which then generate a complete mountbroker option set.
Demo:
- add the following options to your glusterd volfile:
option mountbroker-root /tmp/mbr
option mountbroker.fool EQL(volfile-id=pop*|user-map-root=*|volfile-server=localhost)&MEET(user-map-root=john|user-map-root=jane)
- before starting glusterd, create /tmp/mbr owned by root with mode 0755
- with the cli, do
$ gluster system:: mount fool volfile-id=pop33 user-map-root=jane volfile-server=localhost
- on successful completion (volume pop33 exists and is started, jane is
a valid username), the mount path will be echoed to you
- you can get rid of the mount by
$ gluster system:: umount <mount-path>
Change-Id: I629cf64add0a45500d05becc3316f67cdb5b42ff
BUG: 3482
Reviewed-on: http://review.gluster.com/128
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vijay@gluster.com>
Change-Id: Iea835b9e448e736016da2e44e3c9bfff93f2fa78
BUG: 3439
Reviewed-on: http://review.gluster.com/259
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@gluster.com>
Change-Id: I2d10f2be44f518f496427f257988f1858e888084
BUG: 3348
Reviewed-on: http://review.gluster.com/200
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@gluster.com>
Change-Id: I3914467611e573cccee0d22df93920cf1b2eb79f
BUG: 3348
Reviewed-on: http://review.gluster.com/182
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@gluster.com>
Signed-off-by: Venky Shankar <venky@gluster.com>
Signed-off-by: Anand Avati <avati@gluster.com>
BUG: 2714 (implement cli log level command)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=2714
Signed-off-by: Csaba Henk <csaba@lowlife.hu>
Signed-off-by: Anand Avati <avati@gluster.com>
BUG: 2785 (gsyncd logs on slave side go to /dev/null)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=2785
Signed-off-by: shishir gowda <shishirng@gluster.com>
Signed-off-by: Vijay Bellur <vijay@dev.gluster.com>
BUG: 2516 (Implement gluster volume top command)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=2516
Signed-off-by: Junaid <junaid@gluster.com>
Signed-off-by: Vijay Bellur <vijay@dev.gluster.com>
BUG: 2473 (Support for volume and directory level quota)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=2473
Signed-off-by: Pranith Kumar K <pranithk@gluster.com>
Signed-off-by: Vijay Bellur <vijay@dev.gluster.com>
BUG: 1965 (need a cmd to get io-stat details)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=1965
Signed-off-by: Amar Tumballi <amar@gluster.com>
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 2333 (make glusterd more rpc friendly)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=2333
Signed-off-by: Junaid <junaid@gluster.com>
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 1570 (geosync related changes)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=1570
Signed-off-by: Pranith Kumar K <pranithk@gluster.com>
Signed-off-by: Anand V. Avati <avati@dev.gluster.com>
BUG: 1966 (Unnecessarily verbose logs at the default log level)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=1966
Signed-off-by: Pranith Kumar K <pranithk@gluster.com>
Signed-off-by: Vijay Bellur <vijay@dev.gluster.com>
BUG: 1838 (handle peer detach gracefully in case of lost frames)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=1838
- Write the reconfigured options to the 'info' file to make them
persistent
- Implementation of volume set <volname> history
- Implementation of volume reset <volname>
Signed-off-by: Kaushik BV <kaushikbv@gluster.com>
Signed-off-by: Vijay Bellur <vijay@dev.gluster.com>
BUG: 1159 ()
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=1159