.\"  Copyright (c) 2006-2012 Red Hat, Inc. <http://www.redhat.com>
.\"  This file is part of GlusterFS.
.\"
.\"  This file is licensed to you under your choice of the GNU Lesser
.\"  General Public License, version 3 or any later version (LGPLv3 or
.\"  later), or the GNU General Public License, version 2 (GPLv2), in all
.\"  cases as published by the Free Software Foundation.
.\"
.\"
.TH GLUSTER 8 "07 March 2011" "Gluster Inc." "Gluster command line utility"
.SH NAME
gluster - Gluster Console Manager (command line utility)
.SH SYNOPSIS
.B gluster
.PP
To run the program and display the gluster prompt:
.PP
.B gluster
.PP
(or)
.PP
To specify a command directly:
.PP
.B gluster
.I [commands] [options]

.SH DESCRIPTION
The Gluster Console Manager is a command line utility for elastic volume management. You can run the gluster command on any export server. It enables administrators to perform storage operations, such as creating, expanding, shrinking, rebalancing, and migrating volumes, without scheduling server downtime.
.SH COMMANDS

.SS "Volume Commands"
.PP
.TP
\fB\ volume info [all|<VOLNAME>] \fR
Display information about all volumes, or the specified volume.
.TP
\fB\ volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK> ... \fR
Create a new volume of the specified type using the specified bricks and transport type (the default transport type is tcp).
To create a volume that uses both transports (tcp and rdma), specify 'transport tcp,rdma'.
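.IP
For example, a two-way replicated volume over tcp could be created as follows (the volume name, host names, and brick paths are illustrative):
.nf
# gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp1
.fi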
.TP
\fB\ volume delete <VOLNAME> \fR
Delete the specified volume.
.TP
\fB\ volume start <VOLNAME> \fR
Start the specified volume.
.TP
\fB\ volume stop <VOLNAME> [force] \fR
Stop the specified volume.
.TP
\fB\ volume rename <VOLNAME> <NEW-VOLNAME> \fR
Rename the specified volume.
.TP
\fB\ volume set <VOLNAME> <OPTION> <PARAMETER> [<OPTION> <PARAMETER>] ... \fR
Set the volume options.
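.IP
For example, to adjust the read cache size of a volume (assuming the installed release supports the performance.cache-size option; the volume name and value are illustrative):
.nf
# gluster volume set test-volume performance.cache-size 256MB
.fi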
.TP
\fB\ volume help \fR
Display help for the volume command.
.SS "Brick Commands"
.PP
.TP
\fB\ volume add-brick <VOLNAME> <NEW-BRICK> ... \fR
Add the specified brick to the specified volume.
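.IP
For example, to expand a volume by a pair of bricks (names are illustrative; for a replicated volume, bricks are typically added in multiples of the replica count):
.nf
# gluster volume add-brick test-volume server3:/exp3 server4:/exp4
.fi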
.TP
\fB\ volume remove-brick <VOLNAME> <BRICK> ... \fR
Remove the specified brick from the specified volume.
.IP
.B Note:
If you remove a brick, the data stored on that brick will no longer be available through the volume. You can migrate data from one brick to another using the
.B replace-brick
command.
.TP
\fB\ volume replace-brick <VOLNAME> (<BRICK> <NEW-BRICK>) start|pause|abort|status|commit \fR
Replace the specified brick.
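.IP
For example, a typical replace-brick sequence starts the migration, polls its status, and commits once the migration is complete (the volume name, bricks, and hosts are illustrative):
.nf
# gluster volume replace-brick test-volume server3:/exp3 server5:/exp5 start
# gluster volume replace-brick test-volume server3:/exp3 server5:/exp5 status
# gluster volume replace-brick test-volume server3:/exp3 server5:/exp5 commit
.fi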
.TP
\fB\ volume rebalance <VOLNAME> start \fR
Start rebalancing the specified volume.
.TP
\fB\ volume rebalance <VOLNAME> stop \fR
Stop rebalancing the specified volume.
.TP
\fB\ volume rebalance <VOLNAME> status \fR
Display the rebalance status of the specified volume.
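.IP
For example, a rebalance is usually started after adding bricks and then monitored until it completes (the volume name is illustrative):
.nf
# gluster volume rebalance test-volume start
# gluster volume rebalance test-volume status
.fi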
.SS "Log Commands"
.TP
\fB\ volume log filename <VOLNAME> [BRICK] <DIRECTORY> \fR
Set the log directory for the corresponding volume/brick.
.TP
\fB\ volume log locate <VOLNAME> [BRICK] \fR
Locate the log file for the corresponding volume/brick.
.TP
\fB\ volume log rotate <VOLNAME> [BRICK] \fR
Rotate the log file for the corresponding volume/brick.
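.IP
For example, using the log commands documented above (the volume name, brick, and directory are illustrative):
.nf
# gluster volume log filename test-volume /var/log/test-volume
# gluster volume log locate test-volume
# gluster volume log rotate test-volume server1:/exp1
.fi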
.SS "Peer Commands"
.TP
\fB\ peer probe <HOSTNAME> \fR
Probe the specified peer. If the given <HOSTNAME> belongs to a peer that has already been probed, the peer probe command adds the hostname to that peer's entry if required.
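.IP
For example, to add a new server to the trusted storage pool and then confirm the result (the host name is illustrative):
.nf
# gluster peer probe server2
# gluster peer status
.fi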
.TP
\fB\ peer detach <HOSTNAME> \fR
Detach the specified peer.
.TP
\fB\ peer status \fR
Display the status of peers.
.TP
\fB\ peer help \fR
Display help for the peer command.
.SS "Snapshot Commands"
.PP
.TP
\fB\ snapshot create <snapname> <volname> [description <description>] [force] \fR
Creates a snapshot of a GlusterFS volume. The user can provide a snap-name and a description to identify the snapshot. The description cannot be more than 1024 characters. To take a snapshot, the volume must exist and be in the started state.
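.IP
For example (the snapshot name, volume name, and description are illustrative):
.nf
# gluster snapshot create snap1 test-volume description "before upgrade"
.fi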
.TP
\fB\ snapshot restore <snapname> \fR
Restores an already taken snapshot of a GlusterFS volume. Snapshot restore is an offline activity, so the restore operation will fail if the volume is online (in the started state). Once a snapshot is restored, it is no longer available in the list of snapshots.
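.IP
Because restore is an offline operation, the volume is stopped first and started again afterwards (the volume and snapshot names are illustrative):
.nf
# gluster volume stop test-volume
# gluster snapshot restore snap1
# gluster volume start test-volume
.fi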
.TP
\fB\ snapshot delete <snapname> \fR
Deletes the specified snapshot.
.TP
\fB\ snapshot list [volname] \fR
Lists all snapshots taken. If volname is provided, only the snapshots belonging to that particular volume are listed.
.TP
\fB\ snapshot info [snapname | (volume <volname>)] \fR
This command gives information such as the snapshot name, the snapshot UUID, and the time at which the snapshot was created, and it lists the snap-volume-name, the number of snapshots already taken and the number of snapshots still available for that particular volume, and the state of the snapshot.
.TP
\fB\ snapshot status [snapname | (volume <volname>)] \fR
This command gives the status of the snapshot. The details included are the snapshot brick path, the volume group (LVM details), the status of the snapshot bricks, the PID of the bricks, the data percentage filled for the volume group to which the snapshots belong, and the total size of the logical volume.

If snapname is specified, the status of the mentioned snapshot is displayed. If volname is specified, the status of all snapshots belonging to that volume is displayed. If neither snapname nor volname is specified, the status of all snapshots present in the system is displayed.
.TP
\fB\ snapshot config [volname] ([snap-max-hard-limit <count>] [snap-max-soft-limit <percent>]) | ([auto-delete <enable|disable>]) \fR
Displays and sets the snapshot config values.

The snapshot config command without any keywords displays the snapshot config values of all volumes in the system. If volname is provided, the snapshot config values of that volume are displayed.

The snapshot config command along with keywords can be used to change the existing config values. If volname is provided, the config value of that volume is changed; otherwise, the system limit is set or changed.

snap-max-soft-limit and auto-delete are global options that are inherited by all volumes in the system and cannot be set for individual volumes.

The system limit takes precedence over the volume specific limit.

When the auto-delete feature is enabled, then once the soft-limit is reached, the oldest snapshot is deleted with every successful snapshot creation.

When the auto-delete feature is disabled, then once the soft-limit is reached, the user gets a warning with every successful snapshot creation.

Upon reaching the hard-limit, further snapshot creations will not be allowed.
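.IP
For example, to display the current values, set a per-volume hard limit, and set the global soft limit and auto-delete behaviour (the volume name and values are illustrative):
.nf
# gluster snapshot config
# gluster snapshot config test-volume snap-max-hard-limit 100
# gluster snapshot config snap-max-soft-limit 80
# gluster snapshot config auto-delete enable
.fi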
.TP
\fB\ snapshot activate <snapname> \fR
Activates the specified snapshot.

Note: By default, the snapshot is activated during snapshot creation.
.TP
\fB\ snapshot deactivate <snapname> \fR
Deactivates the specified snapshot.
.SS "Other Commands"
.TP
\fB\ help \fR
Display the command options.
.TP
\fB\ quit \fR
Exit the gluster command line interface.

.SH FILES
/var/lib/glusterd/*
.SH SEE ALSO
.nf
\fBfusermount\fR(1), \fBmount.glusterfs\fR(8), \fBglusterfs\fR(8), \fBglusterd\fR(8)
\fR
.fi
.SH COPYRIGHT
.nf
Copyright(c) 2006-2011  Gluster, Inc.  <http://www.gluster.com>