\input texinfo
@setfilename user-guide.info
@settitle GlusterFS 2.0 User Guide
@afourpaper
@direntry
* GlusterFS: (user-guide). GlusterFS distributed filesystem user guide
@end direntry
@copying
This is the user manual for GlusterFS 2.0.
Copyright @copyright{} 2007-2009 @b{Z} Research, Inc. Permission is granted to
copy, distribute and/or modify this document under the terms of the
@acronym{GNU} Free Documentation License, Version 1.2 or any later
version published by the Free Software Foundation; with no Invariant
Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the
license is included in the chapter entitled ``@acronym{GNU} Free
Documentation License''.
@end copying
@titlepage
@title GlusterFS 2.0 User Guide [DRAFT]
@subtitle January 15, 2008
@author http://gluster.org/core-team.php
@author @b{Z} @b{Research}
@page
@vskip 0pt plus 1filll
@insertcopying
@end titlepage
@c Info stuff
@ifnottex
@node Top
@top GlusterFS 2.0 User Guide
@insertcopying
@menu
* Acknowledgements::
* Introduction::
* Installation and Invocation::
* Concepts::
* Translators::
* Usage Scenarios::
* Troubleshooting::
* GNU Free Documentation Licence::
* Index::
@detailmenu
--- The Detailed Node Listing ---
Installation and Invocation
* Pre requisites::
* Getting GlusterFS::
* Building::
* Running GlusterFS::
* A Tutorial Introduction::
Running GlusterFS
* Server::
* Client::
Concepts
* Filesystems in Userspace::
* Translator::
* Volume specification file::
Translators
* Storage Translators::
* Client and Server Translators::
* Clustering Translators::
* Performance Translators::
* Features Translators::
Storage Translators
* POSIX::
Client and Server Translators
* Transport modules::
* Client protocol::
* Server protocol::
Clustering Translators
* Unify::
* Replicate::
* Stripe::
Performance Translators
* Read Ahead::
* Write Behind::
* IO Threads::
* IO Cache::
Features Translators
* POSIX Locks::
* Fixed ID::
Miscellaneous Translators
* ROT-13::
* Trace::
@end detailmenu
@end menu
@end ifnottex
@c Info stuff end
@contents
@node Acknowledgements
@unnumbered Acknowledgements
GlusterFS continues to be a wonderful and enriching experience for all
of us involved.
GlusterFS development would not have been possible at this pace if
not for our enthusiastic users. People from around the world have
helped us with bug reports, performance numbers, and feature suggestions.
A huge thanks to them all.
Matthew Paine - for RPMs & general enthusiasm
Leonardo Rodrigues de Mello - for DEBs
Julian Perez & Adam D'Auria - for multi-server tutorial
Paul England - for HA spec
Brent Nelson - for many bug reports
Jacques Mattheij - for Europe mirror.
Patrick Negri - for TCP non-blocking connect.
@flushright
http://gluster.org/core-team.php (@email{list-hacking@@zresearch.com})
@b{Z} Research
@end flushright
@node Introduction
@chapter Introduction
GlusterFS is a distributed filesystem. It works at the file level,
not block level.
A network filesystem is one which allows us to access remote files. A
distributed filesystem is one that stores data on multiple machines
and makes them all appear to be part of the same filesystem.
Why do we need distributed filesystems?
@itemize @bullet
@item Scalability: A distributed filesystem allows us to store more data than what can be stored on a single machine.
@item Redundancy: We might want to replicate crucial data on to several machines.
@item Uniform access: One can mount a remote volume (for example your home directory) from any machine and access the same data.
@end itemize
@section Contacting us
You can reach us through the mailing list @strong{gluster-devel}
(@email{gluster-devel@@nongnu.org}).
@cindex GlusterFS mailing list
You can also find many of the developers on @acronym{IRC}, on the @code{#gluster}
channel on Freenode (@indicateurl{irc.freenode.net}).
@cindex IRC channel, #gluster
The GlusterFS documentation wiki is also useful: @*
@indicateurl{http://gluster.org/docs/index.php/GlusterFS}
For commercial support, you can contact @b{Z} Research at:
@cindex commercial support
@cindex Z Research, Inc.
@display
3194 Winding Vista Common
Fremont, CA 94539
USA.
Phone: +1 (510) 354 6801
Toll free: +1 (888) 813 6309
Fax: +1 (510) 372 0604
@end display
You can also email us at @email{support@@zresearch.com}.
@node Installation and Invocation
@chapter Installation and Invocation
@menu
* Pre requisites::
* Getting GlusterFS::
* Building::
* Running GlusterFS::
* A Tutorial Introduction::
@end menu
@node Pre requisites
@section Pre requisites
Before installing GlusterFS make sure you have the
following components installed.
@subsection @acronym{FUSE}
GlusterFS now has built-in support for the @acronym{FUSE} protocol.
You need a kernel with @acronym{FUSE} support to mount GlusterFS.
You do not need the @acronym{FUSE} package (library and utilities),
but be aware of the following issues:
@itemize
@item If you want unprivileged users to be able to mount GlusterFS filesystems,
you need a recent version of the @command{fusermount} utility. You already have
it if you have @acronym{FUSE} version 2.7.0 or higher installed; if not,
one will be compiled along with GlusterFS if you pass
@command{--enable-fusermount} to the @command{configure} script.
@item You need to ensure @acronym{FUSE} support is configured properly on
your system. In detail:
@itemize
@item If your kernel has @acronym{FUSE} as a loadable module, make sure it's
loaded.
@item Create @command{/dev/fuse} (major 10, minor 229) either by means of udev
rules or by hand.
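For example, it can be created by hand with:
@example
# mknod /dev/fuse c 10 229
@end example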
@item Optionally, if you want runtime control over your @acronym{FUSE} mounts,
mount the fusectl auxiliary filesystem:
@example
# mount -t fusectl none /sys/fs/fuse/connections
@end example
@end itemize
The @acronym{FUSE} packages shipped by the various distributions usually take
care of these things, so the easiest way to get the above tasks handled is
still to install the @acronym{FUSE} package(s).
@end itemize
To get the best performance from GlusterFS, it is recommended that you use
our patched version of the @acronym{FUSE} kernel module. See the ``Patched
FUSE'' subsection below for details.
@subsection Patched FUSE
The GlusterFS project maintains a patched version of @acronym{FUSE} meant to be used
with GlusterFS. The patches increase GlusterFS performance. It is recommended that
all users use the patched @acronym{FUSE}.
The patched @acronym{FUSE} tarball can be downloaded from:
@indicateurl{ftp://ftp.zresearch.com/pub/gluster/glusterfs/fuse/}
The specific changes made to @acronym{FUSE} are:
@itemize
@item The size of the communication channel between the @acronym{FUSE} kernel module and GlusterFS has been increased to 1MB, permitting large reads and writes to be sent in bigger chunks.
@item The kernel's read-ahead boundary has been extended up to 1MB.
@item The block size returned by the @command{stat()}/@command{fstat()} calls is tuned to 1MB, to make @command{cp} and similar commands perform I/O using that block size.
@item @command{flock()} locking support has been added (although some rework in GlusterFS is needed for full compliance).
@end itemize
@subsection libibverbs (optional)
@cindex InfiniBand, installation
@cindex libibverbs
This is only needed if you want GlusterFS to use InfiniBand as the
interconnect mechanism between server and client. You can get it from:
@indicateurl{http://www.openfabrics.org/downloads.htm}.
@subsection Bison and Flex
These should be already installed on most Linux systems. If not, use your distribution's
normal software installation procedures to install them. Make sure you install the
relevant developer packages also.
@node Getting GlusterFS
@section Getting GlusterFS
@cindex git
There are many ways to get hold of GlusterFS. For a production deployment,
the recommended method is to download the latest release tarball.
Release tarballs are available at: @indicateurl{http://gluster.org/download.php}.
If you want the bleeding-edge development source, you can get it
from the Git
@footnote{@indicateurl{http://git-scm.com}}
repository. First you must install Git itself. Then
you can check out the source:
@example
$ git clone git://git.sv.gnu.org/gluster.git glusterfs
@end example
@node Building
@section Building
You can skip this section if you're installing from @acronym{RPM}s
or @acronym{DEB}s.
GlusterFS uses the Autotools mechanism to build. As such, the procedure
is straightforward. First, change into the GlusterFS source directory.
@example
$ cd glusterfs-<version>
@end example
If you checked out the source from the Git repository, you'll need
to run @command{./autogen.sh} first. Note that you'll need to have
Autoconf and Automake installed for this.
Run @command{configure}.
@example
$ ./configure
@end example
The configure script accepts the following options:
@cartouche
@table @code
@item --disable-ibverbs
Disable the InfiniBand transport mechanism.
@item --disable-fuse-client
Disable the @acronym{FUSE} client.
@item --disable-server
Disable building of the GlusterFS server.
@item --disable-bdb
Disable building of Berkeley DB based storage translator.
@item --disable-mod_glusterfs
Disable building of Apache/lighttpd glusterfs plugins.
@item --disable-epoll
Use poll instead of epoll.
@item --disable-libglusterfsclient
Disable building of libglusterfsclient
@item --enable-fusermount
Build fusermount
@end table
@end cartouche
Build and install GlusterFS.
@example
$ make
# make install
@end example
The binaries (@command{glusterfsd} and @command{glusterfs}) will by
default be installed in @command{/usr/local/sbin/}. Translator,
scheduler, and transport shared libraries will be installed in
@command{/usr/local/lib/glusterfs/<version>/}. Sample volume
specification files will be in @command{/usr/local/etc/glusterfs/}.
This document itself can be found in
@command{/usr/local/share/doc/glusterfs/}. If you passed the @command{--prefix}
argument to the configure script, then replace @command{/usr/local} in the preceding
paths with the prefix.
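For example, to install everything under @command{/opt/glusterfs} instead
(the prefix shown is illustrative):
@example
$ ./configure --prefix=/opt/glusterfs
@end example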
@node Running GlusterFS
@section Running GlusterFS
@menu
* Server::
* Client::
@end menu
@node Server
@subsection Server
@cindex GlusterFS server
The GlusterFS server is necessary to export storage volumes to remote clients
(See @ref{Server protocol} for more info). This section documents the invocation
of the GlusterFS server program and all the command-line options accepted by it.
@cartouche
@table @code
Basic Options
@item -f, --volfile=<path>
Use the volume file as the volume specification.
@item -s, --volfile-server=<hostname>
Server to get the volume file from. This option overrides the @command{--volfile} option.
@item -l, --log-file=<path>
Specify the path for the log file.
@item -L, --log-level=<level>
Set the log level for the server. Log level should be one of @acronym{DEBUG},
@acronym{WARNING}, @acronym{ERROR}, @acronym{CRITICAL}, or @acronym{NONE}.
Advanced Options
@item --debug
Run in debug mode. This option sets --no-daemon, --log-level to DEBUG and
--log-file to console.
@item -N, --no-daemon
Run glusterfsd as a foreground process.
@item -p, --pid-file=<path>
Path for the @acronym{PID} file.
@item --volfile-id=<key>
Key of the volfile to be fetched from the server.
@item --volfile-server-port=<port-number>
Listening port number of volfile server.
@item --volfile-server-transport=[socket|ib-verbs]
Transport type to get volfile from server. [default: @command{socket}]
@item --xlator-options=<volume-name.option=value>
Add/override a translator option for a volume with specified value.
Miscellaneous Options
@item -?, --help
Show this help text.
@item --usage
Display a short usage message.
@item -V, --version
Show version information.
@end table
@end cartouche
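As an illustration, a typical server invocation combining the basic options
might look like this (the paths are examples only):
@example
# glusterfsd -f /tmp/glusterfsd.vol -l /var/log/glusterfsd.log -L WARNING
@end example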
@node Client
@subsection Client
@cindex GlusterFS client
The GlusterFS client process is necessary to access remote storage volumes and
mount them locally using @acronym{FUSE}. This section documents the invocation of the
client process and all its command-line arguments.
@example
# glusterfs [options] <mountpoint>
@end example
The @command{mountpoint} is the directory where you want the GlusterFS
filesystem to appear. Example:
@example
# glusterfs -f /usr/local/etc/glusterfs-client.vol /mnt
@end example
The command-line options are detailed below.
@tex
\vfill
@end tex
@page
@cartouche
@table @code
Basic Options
@item -f, --volfile=<path>
Use the volume file as the volume specification.
@item -s, --volfile-server=<hostname>
Server to get the volume file from. This option overrides the @command{--volfile} option.
@item -l, --log-file=<path>
Specify the path for the log file.
@item -L, --log-level=<level>
Set the log level for the client. Log level should be one of @acronym{DEBUG},
@acronym{WARNING}, @acronym{ERROR}, @acronym{CRITICAL}, or @acronym{NONE}.
Advanced Options
@item --debug
Run in debug mode. This option sets --no-daemon, --log-level to DEBUG and
--log-file to console.
@item -N, --no-daemon
Run @command{glusterfs} as a foreground process.
@item -p, --pid-file=<path>
Path for the @acronym{PID} file.
@item --volfile-id=<key>
Key of the volfile to be fetched from the server.
@item --volfile-server-port=<port-number>
Listening port number of volfile server.
@item --volfile-server-transport=[socket|ib-verbs]
Transport type to get volfile from server. [default: @command{socket}]
@item --xlator-options=<volume-name.option=value>
Add/override a translator option for a volume with specified value.
@item --volume-name=<volume name>
Volume name in client spec to use. Defaults to the root volume.
@acronym{FUSE} Options
@item --attribute-timeout=<n>
Attribute timeout for inodes in the kernel, in seconds. Defaults to 1 second.
@item --disable-direct-io-mode
Disable direct @acronym{I/O} mode in the @acronym{FUSE} kernel module. This is
set automatically if the kernel supports big writes (kernel 2.6.26 or newer).
@item -e, --entry-timeout=<n>
Entry timeout for directory entries in the kernel, in seconds.
Defaults to 1 second.
Miscellaneous Options
@item -?, --help
Show this help information.
@item -V, --version
Show version information.
@end table
@end cartouche
@node A Tutorial Introduction
@section A Tutorial Introduction
This section will show you how to quickly get GlusterFS up and running. We'll
configure GlusterFS as a simple network filesystem, with one server and one client.
In this mode of usage, GlusterFS can serve as a replacement for NFS.
We'll make use of two machines; call them @emph{server} and
@emph{client} (if you don't want to set up two machines, just run
everything that follows on the same machine). In the examples that
follow, the shell prompts will use these names to clarify the machine
on which the command is being run. For example, a command that should
be run on the server will be shown with the prompt:
@example
[root@@server]#
@end example
Our goal is to make a directory on the @emph{server} (say, @command{/export})
accessible to the @emph{client}.
First of all, get GlusterFS installed on both the machines, as described in the
previous sections. Make sure you have the @acronym{FUSE} kernel module loaded. You
can ensure this by running:
@example
[root@@server]# modprobe fuse
@end example
Before we can run the GlusterFS client or server programs, we need to write
two files called @emph{volume specifications} (equivalently referred to as @emph{volfiles}).
The volfile describes the @emph{translator tree} on a node. The next chapter will
explain the concepts of `translator' and `volume specification' in detail. For now,
just assume that the volfile is like an @acronym{NFS} @command{/etc/exports} file.
On the server, create a text file somewhere (we'll assume the path
@command{/tmp/glusterfsd.vol}) with the following contents.
@cartouche
@example
volume colon-o
type storage/posix
option directory /export
end-volume
volume server
type protocol/server
subvolumes colon-o
option transport-type tcp
option auth.addr.colon-o.allow *
end-volume
@end example
@end cartouche
A brief explanation of the file's contents follows. The first section defines a storage
volume, named ``colon-o'' (the volume names are arbitrary), which exports the
@command{/export} directory. The second section defines options for the translator
which will make the storage volume accessible remotely. It specifies @command{colon-o} as
a subvolume. This defines the @emph{translator tree}, about which more will be said
in the next chapter. The two options specify that the @acronym{TCP} protocol is to be
used (as opposed to InfiniBand, for example), and that access to the storage volume
is to be provided to clients with any @acronym{IP} address at all. If you wanted to
restrict access to this server to only your subnet for example, you'd specify
something like @command{192.168.1.*} in the second option line.
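That is, the last line of the server volfile would become:
@example
option auth.addr.colon-o.allow 192.168.1.*
@end example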
On the client machine, create the following text file (again, we'll assume
the path to be @command{/tmp/glusterfs-client.vol}). Replace
@emph{server-ip-address} with the @acronym{IP} address of your server machine. If you
are doing all this on a single machine, use @command{127.0.0.1}.
@cartouche
@example
volume client
type protocol/client
option transport-type tcp
option remote-host @emph{server-ip-address}
option remote-subvolume colon-o
end-volume
@end example
@end cartouche
Now we need to start both the server and client programs. To start the server:
@example
[root@@server]# glusterfsd -f /tmp/glusterfsd.vol
@end example
To start the client:
@example
[root@@client]# glusterfs -f /tmp/glusterfs-client.vol /mnt/glusterfs
@end example
You should now be able to see the files under the server's @command{/export} directory
in the @command{/mnt/glusterfs} directory on the client. That's it; GlusterFS is now
working as a network file system.
@node Concepts
@chapter Concepts
@menu
* Filesystems in Userspace::
* Translator::
* Volume specification file::
@end menu
@node Filesystems in Userspace
@section Filesystems in Userspace
A filesystem is usually implemented in kernel space. Kernel space
development is much harder than userspace development. @acronym{FUSE}
is a kernel module/library that allows us to write a filesystem
completely in userspace.
@acronym{FUSE} consists of a kernel module which interacts with the userspace
implementation using a device file @code{/dev/fuse}. When a process
makes a syscall on a @acronym{FUSE} filesystem, @acronym{VFS} hands the request to the
@acronym{FUSE} module, which writes the request to @code{/dev/fuse}. The
userspace implementation polls @code{/dev/fuse}, and when a request arrives,
processes it and writes the result back to @code{/dev/fuse}. The kernel then
reads from the device file and returns the result to the user process.
In case of GlusterFS, the userspace program is the GlusterFS client.
The control flow is shown in the diagram below. The GlusterFS client
services the request by sending it to the server, which in turn
hands it to the local @acronym{POSIX} filesystem.
@center @image{fuse,44pc,,,.pdf}
@center Fig 1. Control flow in GlusterFS
@node Translator
@section Translator
The @emph{translator} is the most important concept in GlusterFS. In
fact, GlusterFS is nothing but a collection of translators working
together, forming a translator @emph{tree}.
The idea of a translator is perhaps best understood using an
analogy. Consider the @acronym{VFS} in the Linux kernel. The
@acronym{VFS} abstracts the various filesystem implementations (such
as @acronym{EXT3}, ReiserFS, @acronym{XFS}, etc.) supported by the
kernel. When an application calls the kernel to perform an operation
on a file, the kernel passes the request on to the appropriate
filesystem implementation.
For example, let's say there are two partitions on a Linux machine:
@command{/}, which is an @acronym{EXT3} partition, and @command{/usr},
which is a ReiserFS partition. Now if an application wants to open a
file called, say, @command{/etc/fstab}, then the kernel will
internally pass the request to the @acronym{EXT3} implementation. If
on the other hand, an application wants to read a file called
@command{/usr/src/linux/CREDITS}, then the kernel will call upon the
ReiserFS implementation to do the job.
The ``filesystem implementation'' objects are analogous to GlusterFS
translators. A GlusterFS translator implements all the filesystem
operations. Whereas in @acronym{VFS} there is a two-level tree (with
the kernel at the root and all the filesystem implementation as its
children), in GlusterFS there exists a more elaborate tree structure.
We can now define translators more precisely. A GlusterFS translator
is a shared object (@command{.so}) that implements every filesystem
call. GlusterFS translators can be arranged in an arbitrary tree
structure (subject to constraints imposed by the translators). When
GlusterFS receives a filesystem call, it passes it on to the
translator at the root of the translator tree. The root translator may
in turn pass it on to any or all of its children, and so on, until the
leaf nodes are reached. The result of a filesystem call is
communicated in the reverse fashion, from the leaf nodes up to the
root node, and then on to the application.
So what might a translator tree look like?
@tex
\vfill
@end tex
@page
@center @image{xlator,44pc,,,.pdf}
@center Fig 2. A sample translator tree
The diagram depicts three servers and one GlusterFS client. It is important
to note that conceptually, the translator tree spans machine boundaries.
Thus, the client machine in the diagram, @command{10.0.0.1}, can access
the aggregated storage of the filesystems on the server machines @command{10.0.0.2},
@command{10.0.0.3}, and @command{10.0.0.4}. The translator diagram will make more
sense once you've read the next chapter and understood the functions of the
various translators.
@node Volume specification file
@section Volume specification file
The volume specification file describes the translator tree for both the
server and client programs.
A volume specification file is a sequence of volume definitions.
The syntax of a volume definition is explained below:
@cartouche
@example
@strong{volume} @emph{volume-name}
@strong{type} @emph{translator-name}
@strong{option} @emph{option-name} @emph{option-value}
@dots{}
@strong{subvolumes} @emph{subvolume1} @emph{subvolume2} @dots{}
@strong{end-volume}
@end example
@dots{}
@end cartouche
@table @asis
@item @emph{volume-name}
An identifier for the volume. This is just a human-readable name,
and can contain any alphanumeric character. For instance, ``storage-1'', ``colon-o'',
or ``forty-two''.
@item @emph{translator-name}
Name of one of the available translators. Example: @command{protocol/client},
@command{cluster/unify}.
@item @emph{option-name}
Name of a valid option for the translator.
@item @emph{option-value}
Value for the option. Everything following the ``option'' keyword to the end of the
line is considered the value; it is up to the translator to parse it.
@item @emph{subvolume1}, @emph{subvolume2}, @dots{}
Volume names of sub-volumes. The sub-volumes must already have been defined earlier
in the file.
@end table
There are a few rules you must follow when writing a volume specification file:
@itemize
@item Everything following a `@command{#}' is considered a comment and is ignored. Blank lines are also ignored.
@item All names and keywords are case-sensitive.
@item The order of options inside a volume definition does not matter.
@item An option value may not span multiple lines.
@item If an option is not specified, it will assume its default value.
@item A sub-volume must have already been defined before it can be referenced. This means you have to write the specification file ``bottom-up'', starting from the leaf nodes of the translator tree and moving up to the root.
@end itemize
A simple example volume specification file is shown below:
@cartouche
@example
# This is a comment line
volume client
type protocol/client
option transport-type tcp
option remote-host localhost # Also a comment
option remote-subvolume brick
# The subvolumes line may be absent
end-volume
volume iot
type performance/io-threads
option thread-count 4
subvolumes client
end-volume
volume wb
type performance/write-behind
subvolumes iot
end-volume
@end example
@end cartouche
@node Translators
@chapter Translators
@menu
* Storage Translators::
* Client and Server Translators::
* Clustering Translators::
* Performance Translators::
* Features Translators::
* Miscellaneous Translators::
@end menu
This chapter documents all the available GlusterFS translators in detail.
Each translator section will show its name (for example, @command{cluster/unify}),
briefly describe its purpose and workings, and list every option accepted by
that translator and their meaning.
@node Storage Translators
@section Storage Translators
The storage translators form the ``backend'' for GlusterFS. Currently,
the only available storage translator is the @acronym{POSIX}
translator, which stores files on a normal @acronym{POSIX}
filesystem. A pleasant consequence of this is that your data will
still be accessible if GlusterFS crashes or cannot be started.
Other storage backends are planned for the future. One of the possibilities is an
Amazon S3 translator. Amazon S3 is an unlimited online storage service accessible
through a web services @acronym{API}. The S3 translator will allow you to access
the storage as a normal @acronym{POSIX} filesystem.
@footnote{Some more discussion about this can be found at:
http://developer.amazonwebservices.com/connect/message.jspa?messageID=52873}
@menu
* POSIX::
* BDB::
@end menu
@node POSIX
@subsection POSIX
@example
type storage/posix
@end example
The @command{posix} translator uses a normal @acronym{POSIX}
filesystem as its ``backend'' to actually store files and
directories. This can be any filesystem that supports extended
attributes (@acronym{EXT3}, ReiserFS, @acronym{XFS}, ...). Extended
attributes are used by some translators to store metadata, for
example, by the replicate and stripe translators. See
@ref{Replicate} and @ref{Stripe}, respectively, for details.
@cartouche
@table @code
@item directory <path>
The directory on the local filesystem which is to be used for storage.
@end table
@end cartouche
@node BDB
@subsection BDB
@example
type storage/bdb
@end example
The @command{BDB} translator uses a @acronym{Berkeley DB} database as its
``backend'' to actually store files as key-value pairs in the database, while
directories are stored as regular @acronym{POSIX} directories. Note that
@acronym{BDB} does not provide extended attribute support for regular files.
Therefore, do not use @acronym{BDB} as the storage translator with any
translator that requires extended attributes on the ``backend''.
@cartouche
@table @code
@item directory <path>
The directory on the local filesystem which is to be used for storage.
@item mode [cache|persistent] (cache)
When @acronym{BDB} is run in @command{cache} mode, recovery of the backend is
not completely guaranteed. @command{persistent} mode guarantees that
@acronym{BDB} can recover the backend from @acronym{Berkeley DB} even if
GlusterFS crashes.
@item errfile <path>
The path of the file to be used as @command{errfile} for @acronym{Berkeley DB} to report
detailed error messages, if any. Note that all the contents of this file will be written
by @acronym{Berkeley DB}, not GlusterFS.
@item logdir <path>
The path of the directory to be used by @acronym{Berkeley DB} to store its
log files.
@end table
@end cartouche
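A minimal @acronym{BDB} volume definition might be sketched as follows (the
directory path shown is illustrative):
@example
volume bdb-store
type storage/bdb
option directory /export/bdb
option mode persistent
end-volume
@end example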
@node Client and Server Translators, Clustering Translators, Storage Translators, Translators
@section Client and Server Translators
The client and server translators enable GlusterFS to export a
translator tree over the network or to access a remote GlusterFS
server. These two translators implement GlusterFS's network protocol.
@menu
* Transport modules::
* Client protocol::
* Server protocol::
@end menu
@node Transport modules
@subsection Transport modules
The client and server translators are capable of using any of the
pluggable transport modules. Currently available transport modules are
@command{tcp}, which uses a @acronym{TCP} connection between client
and server to communicate; @command{ib-sdp}, which uses a
@acronym{TCP} connection over InfiniBand; and @command{ib-verbs}, which
uses high-speed InfiniBand connections.
Each transport module comes in two different versions, one to be used on
the server side and the other on the client side.
@subsubsection TCP
The @acronym{TCP} transport module uses a @acronym{TCP/IP} connection between
the server and the client.
@example
option transport-type tcp
@end example
The @acronym{TCP} client module accepts the following options:
@cartouche
@table @code
@item non-blocking-connect [no|off|on|yes] (on)
Whether to make the connection attempt asynchronous.
@item remote-port <n> (6996)
Server port to connect to.
@cindex DNS round robin
@item remote-host <hostname> *
Hostname or @acronym{IP} address of the server. If the host name resolves to
multiple IP addresses, all of them will be tried in a round-robin fashion. This
feature can be used to implement fail-over.
@end table
@end cartouche
The @acronym{TCP} server module accepts the following options:
@cartouche
@table @code
@item bind-address <address> (0.0.0.0)
The local interface on which the server should listen to requests. Default is to
listen on all interfaces.
@item listen-port <n> (6996)
The local port to listen on.
@end table
@end cartouche
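For instance, a client volume that connects to a server on a non-default
port might be sketched as follows (the host name and port are illustrative):
@example
volume client
type protocol/client
option transport-type tcp
option remote-host server.example.com
option remote-port 7001
option remote-subvolume brick
end-volume
@end example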
@subsubsection IB-SDP
@example
option transport-type ib-sdp
@end example
The kernel implements a socket interface for InfiniBand hardware;
@acronym{SDP} runs over ib-verbs. This module accepts the same options as
@command{tcp}.
@subsubsection ib-verbs
@example
option transport-type ib-verbs
@end example
@cindex infiniband transport
InfiniBand is a scalable switched fabric interconnect mechanism
primarily used in high-performance computing. InfiniBand can deliver
data throughput of the order of 10 Gbit/s, with latencies of a few microseconds.
The @command{ib-verbs} transport accesses the InfiniBand hardware through
the ``verbs'' @acronym{API}, which is the lowest level of software access possible
and which gives the highest performance. On InfiniBand hardware, it is always
best to use @command{ib-verbs}. Use @command{ib-sdp} only if you cannot get
@command{ib-verbs} working for some reason.
The @command{ib-verbs} client module accepts the following options:
@cartouche
@table @code
@item non-blocking-connect [no|off|on|yes] (on)
Whether to make the connection attempt asynchronous.
@item remote-port <n> (6996)
Server port to connect to.
@cindex DNS round robin
@item remote-host <hostname> *
Hostname or @acronym{IP} address of the server. If the host name resolves to
multiple IP addresses, all of them will be tried in a round-robin fashion. This
feature can be used to implement fail-over.
@end table
@end cartouche
The @command{ib-verbs} server module accepts the following options:
@cartouche
@table @code
@item bind-address <address> (0.0.0.0)
The local interface on which the server should listen to requests. Default is to
listen on all interfaces.
@item listen-port <n> (6996)
The local port to listen on.
@end table
@end cartouche
The following options are common to both the client and server modules.
If you are familiar with InfiniBand jargon, the mode used by GlusterFS is
``reliable connection-oriented channel transfer''.
@cartouche
@table @code
@item ib-verbs-work-request-send-count <n> (64)
Length of the send queue in datagrams.
@item ib-verbs-work-request-recv-count <n> (64)
Length of the receive queue in datagrams.
@item ib-verbs-work-request-send-size <size> (128KB)
Size of each datagram that is sent.
@item ib-verbs-work-request-recv-size <size> (128KB)
Size of each datagram that is received.
@item ib-verbs-port <n> (1)
Port number for ib-verbs.
@item ib-verbs-mtu [256|512|1024|2048|4096] (2048)
The Maximum Transmission Unit.
@item ib-verbs-device-name <device-name> (first device in the list)
InfiniBand device to be used.
@end table
@end cartouche
For maximum performance, you should ensure that the send/receive counts on both
the client and server are the same.
In general, @command{ib-verbs} is preferred over @command{ib-sdp}.
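For example, to lengthen the queues on both sides (the values are
illustrative; use the same counts in the client and server volumes):
@example
option ib-verbs-work-request-send-count 128
option ib-verbs-work-request-recv-count 128
@end example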
@node Client protocol
@subsection Client
@example
type protocol/client
@end example
The client translator enables the GlusterFS client to access a remote server's
translator tree.
@cartouche
@table @code
@item transport-type [tcp,ib-sdp,ib-verbs] (tcp)
The transport type to use. You should use the client versions of all the
transport modules (@command{tcp}, @command{ib-sdp},
@command{ib-verbs}).
@item remote-subvolume <volume_name> *
The name of the volume on the remote host to attach to. Note that
this is @emph{not} the name of the @command{protocol/server} volume on the
server; it can be any volume defined beneath the server volume.
@item transport-timeout <n> (120 seconds)
Inactivity timeout. If a reply is expected and no activity takes place
on the connection within this time, the transport connection will be
broken, and a new connection will be attempted.
@end table
@end cartouche
@node Server protocol
@subsection Server
@example
type protocol/server
@end example
The server translator exports a translator tree and makes it accessible to
remote GlusterFS clients.
@cartouche
@table @code
@item client-volume-filename <path> (<CONFDIR>/glusterfs-client.vol)
The volume specification file to use for the client. This is the file the
client will receive when it is invoked with the @command{--volfile-server}
option (@ref{Client}).
@item transport-type [tcp,ib-verbs,ib-sdp] (tcp)
The transport to use. You should use the server versions of all the transport
modules (@command{tcp}, @command{ib-sdp}, @command{ib-verbs}).
@item auth.addr.<volume name>.allow <IP address wildcard pattern>
IP addresses of the clients that are allowed to attach to the specified volume.
This can be a wildcard. For example, a wildcard of the form @command{192.168.*.*}
allows any host in the @command{192.168.x.x} subnet to connect to the server.
@end table
@end cartouche
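Putting these options together, a server volume that exports a brick over
@acronym{TCP} to a single subnet might be sketched as follows (the volume
@command{brick} is assumed to be defined earlier in the file):
@example
volume server
type protocol/server
subvolumes brick
option transport-type tcp
option auth.addr.brick.allow 192.168.*.*
end-volume
@end example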
@node Clustering Translators
@section Clustering Translators
The clustering translators are the most important GlusterFS
translators, since it is these that make GlusterFS a cluster
filesystem. These translators together enable GlusterFS to access an
arbitrarily large amount of storage, and provide @acronym{RAID}-like
redundancy and distribution over the entire cluster.
There are three clustering translators: @strong{unify}, @strong{replicate},
and @strong{stripe}. The unify translator aggregates storage from
many server nodes. The replicate translator provides file replication. The stripe
translator allows a file to be spread across many server nodes. The following sections
look at each of these translators in detail.
@menu
* Unify::
* Replicate::
* Stripe::
@end menu
@node Unify
@subsection Unify
@cindex unify (translator)
@cindex scheduler (unify)
@example
type cluster/unify
@end example
The unify translator presents a `unified' view of all its sub-volumes. That is,
it makes the union of all its sub-volumes appear as a single volume. It is the
unify translator that gives GlusterFS the ability to access an arbitrarily
large amount of storage.
For unify to work correctly, certain invariants need to be maintained across
the entire network. These are:
@cindex unify invariants
@itemize
@item The directory structure of all the sub-volumes must be identical.
@item A particular file can exist on only one of the sub-volumes. Phrased another way, a pathname such as @command{/home/calvin/homework.txt} is unique across the entire cluster.
@end itemize
@tex
\vfill
@end tex
@page
@center @image{unify,44pc,,,.pdf}
Looking at the second requirement, you might wonder how one can
accomplish storing redundant copies of a file, if no file can exist
multiple times. To answer, we must remember that these invariants are
from @emph{unify's perspective}. A translator such as replicate at a lower
level in the translator tree than unify may subvert this picture.
The first invariant might seem quite tedious to ensure. We shall see
later that this is not so, since unify's @emph{self-heal} mechanism
takes care of maintaining it.
The second invariant implies that unify needs some way to decide which file goes where.
Unify makes use of @emph{scheduler} modules for this purpose.
When a file needs to be created, unify's scheduler decides upon the
sub-volume to be used to store the file. There are many schedulers
available, each using a different algorithm and suitable for different
purposes.
The various schedulers are described in detail in the sections that follow.
@subsubsection ALU
@cindex alu (scheduler)
@example
option scheduler alu
@end example
ALU stands for ``Adaptive Least Usage''. It is the most advanced
scheduler available in GlusterFS. It balances the load across volumes
taking several factors into account. It adapts itself to changing I/O
patterns according to its configuration. When properly configured, it
can eliminate the need for regular tuning of the filesystem to keep
volume load nicely balanced.
The ALU scheduler is composed of multiple least-usage
sub-schedulers. Each sub-scheduler keeps track of a certain type of
load, for each of the sub-volumes, getting statistics from
the sub-volumes themselves. The sub-schedulers are these:
@itemize
@item disk-usage: The used and free disk space on the volume.
@item read-usage: The amount of reading done from this volume.
@item write-usage: The amount of writing done to this volume.
@item open-files-usage: The number of files currently open from this volume.
@item disk-speed-usage: The speed at which the disks are spinning. This is a constant value and therefore not very useful.
@end itemize
The ALU scheduler needs to know which of these sub-schedulers to use,
and in which order to evaluate them. This is done through the
@command{option alu.order} configuration directive.
Each sub-scheduler needs to know two things: when to kick in (the
entry-threshold), and how long to stay in control (the
exit-threshold). For example: when unifying three disks of 100GB,
keeping an exact balance of disk-usage is not necessary. Instead, there
could be a 1GB margin, which can be used to nicely balance other
factors, such as read-usage. The disk-usage scheduler can be told to
kick in only when a certain threshold of discrepancy is passed, such
as 1GB. When it assumes control under this condition, it will write
all subsequent data to the least-used volume. If it is doing so, it is
unwise to stop right after the values are below the entry-threshold
again, since that would make it very likely that the situation will
occur again very soon. Such a situation would cause the ALU to spend
most of its time disk-usage scheduling, which is unfair to the other
sub-schedulers. The exit-threshold therefore defines the amount of
data that needs to be written to the least-used disk, before control
is relinquished again.
In addition to the sub-schedulers, the ALU scheduler also has ``limits''
options. These can stop the creation of new files on a volume once
values drop below a certain threshold. For example, setting
@command{option alu.limits.min-free-disk 5GB} will stop the scheduling
of files to volumes that have less than 5GB of free disk space,
leaving the files on that disk some room to grow.
The actual values you assign to the thresholds for sub-schedulers and
limits depend on your situation. If you have fast-growing files,
you'll want to stop file-creation on a disk much earlier than when
hardly any of your files are growing. If you care less about
disk-usage balance than about read-usage balance, you'll want a bigger
disk-usage scheduler entry-threshold and a smaller read-usage
scheduler entry-threshold.
For thresholds defining a size, values specifying ``KB'', ``MB'', and ``GB''
are allowed. For example: @command{option alu.limits.min-free-disk 5GB}.
@cartouche
@table @code
@item alu.order <order> * ("disk-usage:write-usage:read-usage:open-files-usage:disk-speed")
@item alu.disk-usage.entry-threshold <size> (1GB)
@item alu.disk-usage.exit-threshold <size> (512MB)
@item alu.write-usage.entry-threshold <%> (25)
@item alu.write-usage.exit-threshold <%> (5)
@item alu.read-usage.entry-threshold <%> (25)
@item alu.read-usage.exit-threshold <%> (5)
@item alu.open-files-usage.entry-threshold <n> (1000)
@item alu.open-files-usage.exit-threshold <n> (100)
@item alu.limits.min-free-disk <%>
@item alu.limits.max-open-files <n>
@end table
@end cartouche
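For example, a unify volume using the ALU scheduler might be sketched as
follows (the thresholds are illustrative; the bricks and the namespace
volume are assumed to be defined earlier in the file):
@example
volume unify0
type cluster/unify
subvolumes brick1 brick2 brick3
option namespace brick-ns
option scheduler alu
option alu.order disk-usage:read-usage:open-files-usage
option alu.disk-usage.entry-threshold 2GB
option alu.disk-usage.exit-threshold 128MB
end-volume
@end example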
@subsubsection Round Robin (RR)
@cindex rr (scheduler)
@example
option scheduler rr
@end example
The Round-Robin (RR) scheduler creates files in a round-robin
fashion. Each client has its own round-robin loop. When your
files are mostly similar in size and I/O access pattern, this
scheduler is a good choice. The RR scheduler checks for free disk space
on the server before scheduling, so you can know when to add
another server node. The default value of min-free-disk is 5% and is
checked on file creation calls, with at least 10 seconds (by default)
elapsing between two checks.
Options:
@cartouche
@table @code
@item rr.limits.min-free-disk <%> (5)
Minimum free disk space a node must have for RR to schedule a file to it.
@item rr.refresh-interval <t> (10 seconds)
Time between two successive free disk space checks.
@end table
@end cartouche
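For instance, to schedule files round-robin while avoiding nearly-full
nodes, one might use (the values are illustrative):
@example
option scheduler rr
option rr.limits.min-free-disk 10
option rr.refresh-interval 5
@end example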
@subsubsection Random
@cindex random (scheduler)
@example
option scheduler random
@end example
The random scheduler schedules file creation randomly among its child nodes.
Like the round-robin scheduler, it also checks for a minimum amount of free disk
space before scheduling a file to a node.
@cartouche
@table @code
@item random.limits.min-free-disk <%> (5)
Minimum free disk space a node must have for random to schedule a file to it.
@item random.refresh-interval <t> (10 seconds)
Time between two successive free disk space checks.
@end table
@end cartouche
@subsubsection NUFA
@cindex nufa (scheduler)
@example
option scheduler nufa
@end example
It is common in many GlusterFS computing environments for all deployed
machines to act as both servers and clients. For example, a
research lab may have 40 workstations each with its own storage. All
of these workstations might act as servers exporting a volume as well
as clients accessing the entire cluster's storage. In such a
situation, it makes sense to store locally created files on the local
workstation itself (assuming files are accessed most by the
workstation that created them). The Non-Uniform File Allocation (@acronym{NUFA})
scheduler accomplishes that.
@acronym{NUFA} gives the local system first priority for file creation
over other nodes. If the local volume does not have more free disk space
than a specified amount (5% by default) then @acronym{NUFA} schedules files
among the other child volumes in a round-robin fashion.
@acronym{NUFA} is named after the similar strategy used for memory access,
@acronym{NUMA}@footnote{Non-Uniform Memory Access:
@indicateurl{http://en.wikipedia.org/wiki/Non-Uniform_Memory_Access}}.
@cartouche
@table @code
@item nufa.limits.min-free-disk <%> (5)
Minimum disk space that must be free (local or remote) for @acronym{NUFA} to schedule a
file to it.
@item nufa.refresh-interval <t> (10 seconds)
Time between two successive free disk space checks.
@item nufa.local-volume-name <volume>
The name of the volume corresponding to the local system. This volume must be
one of the children of the unify volume. This option is mandatory.
@end table
@end cartouche
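As a sketch, a workstation whose local storage is exported as the volume
@command{local-brick} might use (the volume names are illustrative):
@example
volume unified
type cluster/unify
subvolumes local-brick remote1 remote2
option namespace brick-ns
option scheduler nufa
option nufa.local-volume-name local-brick
end-volume
@end example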
@cindex namespace
@subsubsection Namespace
A @emph{namespace} volume is needed by unify because it provides:
@itemize
@item persistent inode numbers, and
@item a record of a file's existence even when the storage node holding it is down.
@end itemize
Namespace entries are zero-byte files that are simply `touched'; the
namespace is checked on every lookup.
@cartouche
@table @code
@item namespace <volume> *
Name of the namespace volume (which should be one of the unify volume's children).
@item self-heal [on|off] (on)
Enable/disable self-heal. Unless you know what you are doing, do not disable self-heal.
@end table
@end cartouche
@cindex self heal (unify)
@subsubsection Self Heal
@itemize
@item When a @command{lookup()}/@command{stat()} call is made on a directory
for the first time, a self-heal call is made, which checks the consistency of
its child nodes. If an entry is present on a storage node but not in the
namespace, that entry is created in the namespace, and vice versa. A
@command{writedir()} API was introduced for this purpose. Self-heal also
checks for permission and uid/gid consistency.
@item This check is also done when a server goes down and comes back up.
@item If one starts with an empty namespace export but has data in the
storage nodes, running @command{find . > /dev/null} or
@command{ls -lR > /dev/null} on the mountpoint will build the namespace in
one shot. Even otherwise, the namespace is built on demand when a file is
looked up for the first time.
@end itemize
NOTE: Some issues (kernel `Oops' messages) have been seen with fuse-2.6.3
when the namespace is deleted on the backend while glusterfs is running;
with fuse-2.6.5 this issue does not occur.
@node Replicate
@subsection Replicate (formerly AFR)
@cindex Replicate
@example
type cluster/replicate
@end example
Replicate provides @acronym{RAID}-1 like functionality for
GlusterFS. Replicate replicates files and directories across the
subvolumes. Hence if Replicate has four subvolumes, there will be
four copies of all files and directories. Replicate provides
high availability; that is, if one of the subvolumes goes down
(e.g., due to a server crash or network disconnection), Replicate will
still service requests using the redundant copies.
Replicate also provides self-heal functionality: when a
crashed server comes back up, outdated files and directories are
updated with the latest versions. Replicate uses extended
attributes of the backend file system to track the versioning of files
and directories and to provide the self-heal feature.
@example
volume replicate-example
type cluster/replicate
subvolumes brick1 brick2 brick3
end-volume
@end example
This sample configuration will replicate all directories and files on
brick1, brick2 and brick3.
All read operations happen from the first alive child. If all
three sub-volumes are up, reads are done from brick1; if brick1 is
down, reads are done from brick2. If a read() is in progress on
brick1 and brick1 goes down, replicate transparently falls back to
brick2.
The next release of GlusterFS will add the following features:
@itemize
@item Ability to specify the sub-volume from which read operations are to be done (this will help users who have one of the sub-volumes as a local storage volume).
@item Allow scheduling of read operations amongst the sub-volumes in a round-robin fashion.
@end itemize
The order of the subvolumes list should be the same across all
@emph{replicate} volumes, as it is used for locking purposes.
@cindex self heal (replicate)
@subsubsection Self Heal
Replicate has a self-heal feature, which updates outdated file and
directory copies with the most recent versions. For example, consider the
following config:
@example
volume replicate-example
type cluster/replicate
subvolumes brick1 brick2
end-volume
@end example
@subsubsection File self-heal
Now if we create a file foo.txt on replicate-example, the file will be created
on brick1 and brick2. The file will have two extended attributes associated
with it in the backend filesystem: trusted.afr.createtime and
trusted.afr.version. The trusted.afr.createtime xattr holds the creation
time (in seconds since the epoch) and trusted.afr.version is a number
that is incremented each time the file is modified. The increment
happens during close() (in case any write was done before the close).
If brick1 goes down and we edit foo.txt, the version on brick2 gets
incremented. When brick1 comes back up and we open() foo.txt, replicate
checks whether the two versions are the same. If they are not, the
outdated copy is replaced by the latest copy and its version is updated.
After the sync, the open() proceeds in the usual manner and the
application calling open() can continue its access to the file.
Suppose brick1 goes down, we delete foo.txt and create a file with the
same name again, i.e. foo.txt. When brick1 comes back up, there is a
chance that the version on brick1 is higher than the version on brick2;
this is where the createtime extended attribute helps in deciding which
copy is outdated. Hence we need to consider both createtime and version
to decide on the latest copy.
The version attribute is incremented during the close() call; it is not
incremented if no write() was done. If the fd passed to close() was
obtained from a create() call, the createtime extended attribute is also
set.
@subsubsection Directory self-heal
Suppose brick1 goes down, we delete foo.txt, and brick1 comes back up.
We should not recreate foo.txt on brick2; rather, we should delete
foo.txt on brick1. This situation is handled by keeping createtime and
version attributes on the directory, just as for files. When lookup() is
done on the directory, the createtime/version attributes of the copies
are compared to determine which files need to be deleted; those files
are deleted and the extended attributes of the outdated directory copy
are updated.
Each time a directory is modified (a file or a subdirectory is created
or deleted inside it) while one of the subvolumes is down, the
directory's version is incremented.
lookup() is a call initiated by the kernel on a file or directory
just before any access to that file or directory. In GlusterFS, by
default, lookup() will not be called again if it was already called
within the past second on that particular file or directory.
The extended attributes can be seen in the backend filesystem using
the @command{getfattr} command. (@command{getfattr -n trusted.afr.version <file>})
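For instance, on the backend export of one of the subvolumes (the path
is illustrative):
@example
$ getfattr -n trusted.afr.version /export/brick1/foo.txt
$ getfattr -n trusted.afr.createtime /export/brick1/foo.txt
@end example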
@cartouche
@table @code
@item debug [on|off] (off)
Enable debug-level logging for replicate.
@item self-heal [on|off] (on)
Enable or disable the self-heal feature described above.
@item replicate <pattern> (*:1)
Selectively replicate files matching @command{<pattern>} the given
number of times.
@item lock-node <child_volume> (first child is used by default)
The child volume to be used for locking.
@end table
@end cartouche
@node Stripe
@subsection Stripe
@cindex stripe (translator)
@example
type cluster/stripe
@end example
The stripe translator distributes the contents of a file over its
sub-volumes. It does this by creating, on each sub-volume, a file equal
in size to the total size of the original file, and then writing only a
part of the data to each sub-volume, leaving the rest of it empty.
These empty regions are called `holes' in Unix terminology. The holes
do not consume any disk space.
The diagram below makes this clear.
@center @image{stripe,44pc,,,.pdf}
You can configure stripe so that only filenames matching a pattern
are striped. You can also configure the size of the data to be stored
on each sub-volume.
@cartouche
@table @code
@item block-size <pattern>:<size> (*:0 no striping)
Distribute files matching @command{<pattern>} over the sub-volumes,
storing at least @command{<size>} on each sub-volume. For example,
@example
option block-size *.mpg:1M
@end example
distributes all files ending in @command{.mpg}, storing at least 1 MB on
each sub-volume.
Any number of @command{block-size} option lines may be present, specifying
different sizes for different file name patterns.
@end table
@end cartouche
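For instance, a minimal sketch of a stripe volume (volume names
illustrative) that stripes all @command{.mpg} files over three
sub-volumes:
@example
volume stripe-example
type cluster/stripe
option block-size *.mpg:1M
subvolumes brick1 brick2 brick3
end-volume
@end example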
@node Performance Translators
@section Performance Translators
@menu
* Read Ahead::
* Write Behind::
* IO Threads::
* IO Cache::
* Booster::
@end menu
@node Read Ahead
@subsection Read Ahead
@cindex read-ahead (translator)
@example
type performance/read-ahead
@end example
The read-ahead translator pre-fetches data in advance on every read.
This benefits applications that mostly process files in sequential order,
since the next block of data will already be available by the time the
application is done with the current one.
Additionally, the read-ahead translator also behaves as a read-aggregator.
Many small read operations are combined and issued as fewer, larger read
requests to the server.
Read-ahead deals in ``pages'' as the unit of data fetched. The page size
is configurable, as is the ``page count'', which is the number of pages
that are pre-fetched.
Read-ahead is best used with InfiniBand (using the ib-verbs transport).
On Fast Ethernet and Gigabit Ethernet networks,
GlusterFS can achieve the link-maximum throughput even without
read-ahead, making it quite superfluous.
Note that read-ahead only happens if the reads are perfectly
sequential. If your application accesses data in a random fashion,
using read-ahead might actually lead to a performance loss, since
read-ahead will pointlessly fetch pages which won't be used by the
application.
@cartouche
Options:
@table @code
@item page-size <n> (256KB)
The unit of data that is pre-fetched.
@item page-count <n> (2)
The number of pages that are pre-fetched.
@item force-atime-update [on|off|yes|no] (off|no)
Whether to force an access time (atime) update on the file on every read. Without
this, the atime will be slightly imprecise, as it will reflect the time when
the read-ahead translator read the data, not when the application actually read it.
@end table
@end cartouche
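A minimal client-side sketch (volume names and option values
illustrative):
@example
volume readahead-example
type performance/read-ahead
option page-size 256KB
option page-count 4
subvolumes client
end-volume
@end example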
@node Write Behind
@subsection Write Behind
@cindex write-behind (translator)
@example
type performance/write-behind
@end example
The write-behind translator improves the latency of a write operation.
It does this by relegating the write operation to the background and
returning to the application even as the write is in progress. Using the
write-behind translator, successive write requests can be pipelined.
This mode of write-behind operation is best used on the client side, to
enable decreased write latency for the application.
The write-behind translator can also aggregate write requests. If the
@command{aggregate-size} option is specified, successive writes of up to
that total size are accumulated and written in a single operation. This
mode of operation is best used on the server side, as it decreases the
disk's head movement when multiple files are written to in parallel.
The @command{aggregate-size} option has a default value of 128KB. Although
this works well for most users, you should experiment with different
values to determine the one that delivers maximum performance, since the
performance of write-behind depends on your interconnect, the amount of
RAM, and the workload.
@cartouche
@table @code
@item aggregate-size <n> (128KB)
Amount of data to accumulate before performing a write.
@item flush-behind [on|yes|off|no] (off|no)
If enabled, flush (close) requests return to the application
immediately and the pending writes are completed in the background.
@end table
@end cartouche
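A minimal sketch (volume names illustrative):
@example
volume writebehind-example
type performance/write-behind
option aggregate-size 128KB
option flush-behind on
subvolumes client
end-volume
@end example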
@node IO Threads
@subsection IO Threads
@cindex io-threads (translator)
@example
type performance/io-threads
@end example
The IO threads translator is intended to increase the responsiveness
of the server to metadata operations by doing file I/O (read, write)
in a background thread. Since the GlusterFS server is
single-threaded, using the IO threads translator can significantly
improve performance. This translator is best used on the server side,
loaded just below the server protocol translator.
IO threads operates by handing read and write requests over to separate
threads. The total number of threads in existence at a time is constant
and configurable.
@cartouche
@table @code
@item thread-count <n> (1)
Number of threads to use.
@end table
@end cartouche
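A minimal server-side sketch (volume names illustrative), loaded just
below the server protocol translator:
@example
volume iothreads-example
type performance/io-threads
option thread-count 4
subvolumes brick
end-volume
@end example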
@node IO Cache
@subsection IO Cache
@cindex io-cache (translator)
@example
type performance/io-cache
@end example
The IO cache translator caches data that has been read. This is useful
if many applications read the same data multiple times, and if reads
are much more frequent than writes (for example, IO caching may be
useful in a web hosting environment, where most clients will simply
read some files and only a few will write to them).
The IO cache translator reads data from its child in @command{page-size} chunks.
It caches data up to @command{cache-size} bytes. The cache is maintained as
a prioritized least-recently-used (@acronym{LRU}) list, with priorities determined
by user-specified patterns matched against filenames.
When the IO cache translator detects a write operation, the
cache for that file is flushed.
The IO cache translator periodically verifies the consistency of
cached data, using the modification times on the files. The verification timeout
is configurable.
@cartouche
@table @code
@item page-size <n> (128KB)
Size of a page.
@item cache-size <n> (32MB)
Total amount of data to be cached.
@item force-revalidate-timeout <n> (1)
Timeout to force a cache consistency verification, in seconds.
@item priority <pattern> (*:0)
Filename patterns listed in order of priority.
@end table
@end cartouche
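A minimal sketch (volume names, values, and the priority patterns are
illustrative):
@example
volume iocache-example
type performance/io-cache
option page-size 128KB
option cache-size 64MB
option priority *.html:2,*:1
subvolumes client
end-volume
@end example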
@node Booster
@subsection Booster
@cindex booster
@example
type performance/booster
@end example
The booster translator gives applications a faster path to communicate
read and write requests to GlusterFS. Normally, all requests to GlusterFS from
applications go through FUSE, as indicated in @ref{Filesystems in Userspace}.
Using the booster translator in conjunction with the GlusterFS booster shared
library, an application can bypass the FUSE path and send read/write requests
directly to the GlusterFS client process.
The booster mechanism consists of two parts: the booster translator,
and the booster shared library. The booster translator is meant to be
loaded on the client side, usually at the root of the translator tree.
The booster shared library should be @command{LD_PRELOAD}ed with the
application.
When loaded, the booster translator opens a Unix domain socket and
listens for read/write requests on it. The booster shared library
intercepts read and write system calls and sends the requests directly
to the GlusterFS process over the Unix domain socket, bypassing FUSE.
This leads to superior performance.
Once you've loaded the booster translator in your volume specification file, you
can start your application as:
@example
$ LD_PRELOAD=/usr/local/bin/glusterfs-booster.so your_app
@end example
The booster translator accepts no options.
@node Features Translators
@section Features Translators
@menu
* POSIX Locks::
* Fixed ID::
@end menu
@node POSIX Locks
@subsection POSIX Locks
@cindex record locking
@cindex fcntl
@cindex posix-locks (translator)
@example
type features/posix-locks
@end example
This translator provides storage independent POSIX record locking
support (@command{fcntl} locking). Typically you'll want to load this on the
server side, just above the @acronym{POSIX} storage translator. Using this
translator you can get both advisory locking and mandatory locking
support. It also handles @command{flock()} locks properly.
Caveat: Consider a file that does not have its mandatory locking bits
(+setgid, -group execution) turned on. Assume that this file is now
opened by a process on a client that has the write-behind xlator
loaded. The write-behind xlator does not cache anything for files
which have mandatory locking enabled, to avoid incoherence. Let's say
that mandatory locking is now enabled on this file through another
client. The former client will not know about this change, and
write-behind may erroneously report a write as being successful when
in fact it would fail due to the region it is writing to being locked.
There seems to be no easy way to fix this. To work around this
problem, it is recommended that you never enable the mandatory bits on
a file while it is open.
@cartouche
@table @code
@item mandatory [on|off] (on)
Turns mandatory locking on.
@end table
@end cartouche
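A minimal server-side sketch (volume names and the export path are
illustrative), with posix-locks loaded just above the @acronym{POSIX}
storage translator:
@example
volume brick
type storage/posix
option directory /export/data
end-volume

volume locks
type features/posix-locks
option mandatory on
subvolumes brick
end-volume
@end example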
@node Fixed ID
@subsection Fixed ID
@cindex fixed-id (translator)
@example
type features/fixed-id
@end example
The fixed ID translator makes all filesystem requests from the client
appear to come from a fixed, specified
@acronym{UID}/@acronym{GID}, regardless of which user actually
initiated the request.
@cartouche
@table @code
@item fixed-uid <n> [if not set, not used]
The @acronym{UID} to send to the server.
@item fixed-gid <n> [if not set, not used]
The @acronym{GID} to send to the server.
@end table
@end cartouche
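A minimal sketch (volume names and ID values illustrative):
@example
volume fixed-id-example
type features/fixed-id
option fixed-uid 1000
option fixed-gid 1000
subvolumes brick
end-volume
@end example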
@node Miscellaneous Translators
@section Miscellaneous Translators
@menu
* ROT-13::
* Trace::
@end menu
@node ROT-13
@subsection ROT-13
@cindex rot-13 (translator)
@example
type encryption/rot-13
@end example
@acronym{ROT-13} is a toy translator that can ``encrypt'' and ``decrypt'' file
contents using the @acronym{ROT-13} algorithm. @acronym{ROT-13} is a trivial
algorithm that rotates each letter by thirteen places. Thus, 'A' becomes 'N',
'B' becomes 'O', and 'Z' becomes 'M'.
It goes without saying that you shouldn't use this translator if you need
@emph{real} encryption (a future release of GlusterFS will have real encryption
translators).
@cartouche
@table @code
@item encrypt-write [on|off] (on)
Whether to encrypt on write
@item decrypt-read [on|off] (on)
Whether to decrypt on read
@end table
@end cartouche
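A minimal sketch (volume names illustrative):
@example
volume rot13-example
type encryption/rot-13
option encrypt-write on
option decrypt-read on
subvolumes brick
end-volume
@end example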
@node Trace
@subsection Trace
@cindex trace (translator)
@example
type debug/trace
@end example
The trace translator is intended for debugging purposes. When loaded, it
logs all the system calls received by the server or client (wherever
trace is loaded), their arguments, and the results. You must use a GlusterFS log
level of DEBUG (See @ref{Running GlusterFS}) for trace to work.
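For instance, to trace all calls passing through a client volume
(volume names illustrative), load trace just above it:
@example
volume trace-example
type debug/trace
subvolumes client
end-volume
@end example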
Sample trace output (lines have been wrapped for readability):
@cartouche
@example
2007-10-30 00:08:58 D [trace.c:1579:trace_opendir] trace: callid: 68
(*this=0x8059e40, loc=0x8091984 @{path=/iozone3_283, inode=0x8091f00@},
fd=0x8091d50)
2007-10-30 00:08:58 D [trace.c:630:trace_opendir_cbk] trace:
(*this=0x8059e40, op_ret=4, op_errno=1, fd=0x8091d50)
2007-10-30 00:08:58 D [trace.c:1602:trace_readdir] trace: callid: 69
(*this=0x8059e40, size=4096, offset=0 fd=0x8091d50)
2007-10-30 00:08:58 D [trace.c:215:trace_readdir_cbk] trace:
(*this=0x8059e40, op_ret=0, op_errno=0, count=4)
2007-10-30 00:08:58 D [trace.c:1624:trace_closedir] trace: callid: 71
(*this=0x8059e40, *fd=0x8091d50)
2007-10-30 00:08:58 D [trace.c:809:trace_closedir_cbk] trace:
(*this=0x8059e40, op_ret=0, op_errno=1)
@end example
@end cartouche
@node Usage Scenarios
@chapter Usage Scenarios
@section Advanced Striping
This section is based on the Advanced Striping tutorial written by
Anand Avati on the GlusterFS wiki
@footnote{http://gluster.org/docs/index.php/Mixing_Striped_and_Regular_Files}.
@subsection Mixed Storage Requirements
There are two ways of scheduling I/O: at the file level (using the
unify translator) and at the block level (using the stripe
translator). Striped I/O is good for files that are potentially large
and require high parallel throughput (for example, a single 400GB file
being accessed by hundreds or thousands of systems simultaneously and
randomly). For most cases, file level scheduling works best.
In the real world, it is desirable to mix file level and block level
scheduling on a single storage volume. Alternatively users can choose
to have two separate volumes and hence two mount points, but the
applications may demand a single storage system to host both.
This document explains how to mix file level scheduling with stripe.
@subsection Configuration Brief
This setup demonstrates how users can configure the unify translator
with an appropriate I/O scheduler for file level scheduling, and stripe
only matching patterns. This way, GlusterFS chooses the appropriate I/O
profile and knows how to efficiently handle both types of data.
A simple technique to achieve this effect is to create a stripe set of
unify and stripe blocks, where unify is the first sub-volume. Files
that do not match the stripe policy are passed on to the first (unify)
sub-volume and in turn scheduled across the cluster using its file
level I/O scheduler.
@image{advanced-stripe,44pc,,,.pdf}
@subsection Preparing GlusterFS Environment
Create the directories /export/for-unify, /export/for-stripe and
/export/for-namespace (as used in the spec files below) on all the
storage bricks.
Place the following server and client volume spec file under
/etc/glusterfs (or appropriate installed path) and replace the IP
addresses / access control fields to match your environment.
@cartouche
@example
## file: /etc/glusterfs/glusterfsd.vol
volume posix-unify
type storage/posix
option directory /export/for-unify
end-volume
volume posix-stripe
type storage/posix
option directory /export/for-stripe
end-volume
volume posix-namespace
type storage/posix
option directory /export/for-namespace
end-volume
volume server
type protocol/server
option transport-type tcp
option auth.addr.posix-unify.allow 192.168.1.*
option auth.addr.posix-stripe.allow 192.168.1.*
option auth.addr.posix-namespace.allow 192.168.1.*
subvolumes posix-unify posix-stripe posix-namespace
end-volume
@end example
@end cartouche
@cartouche
@example
## file: /etc/glusterfs/glusterfs.vol
volume client-namespace
type protocol/client
option transport-type tcp
option remote-host 192.168.1.1
option remote-subvolume posix-namespace
end-volume
volume client-unify-1
type protocol/client
option transport-type tcp
option remote-host 192.168.1.1
option remote-subvolume posix-unify
end-volume
volume client-unify-2
type protocol/client
option transport-type tcp
option remote-host 192.168.1.2
option remote-subvolume posix-unify
end-volume
volume client-unify-3
type protocol/client
option transport-type tcp
option remote-host 192.168.1.3
option remote-subvolume posix-unify
end-volume
volume client-unify-4
type protocol/client
option transport-type tcp
option remote-host 192.168.1.4
option remote-subvolume posix-unify
end-volume
volume client-stripe-1
type protocol/client
option transport-type tcp
option remote-host 192.168.1.1
option remote-subvolume posix-stripe
end-volume
volume client-stripe-2
type protocol/client
option transport-type tcp
option remote-host 192.168.1.2
option remote-subvolume posix-stripe
end-volume
volume client-stripe-3
type protocol/client
option transport-type tcp
option remote-host 192.168.1.3
option remote-subvolume posix-stripe
end-volume
volume client-stripe-4
type protocol/client
option transport-type tcp
option remote-host 192.168.1.4
option remote-subvolume posix-stripe
end-volume
volume unify
type cluster/unify
option namespace client-namespace
option scheduler rr
subvolumes client-unify-1 client-unify-2 client-unify-3 client-unify-4
end-volume
volume stripe
type cluster/stripe
option block-size *.img:2MB # All files ending with .img are striped with 2MB stripe block size.
subvolumes unify client-stripe-1 client-stripe-2 client-stripe-3 client-stripe-4
end-volume
@end example
@end cartouche
@subsection Bring up the Storage
Starting the GlusterFS server: if you have installed from a binary
package, you can start the service through the init.d startup script.
If not:
@example
[root@@server]# glusterfsd
@end example
Mounting GlusterFS Volumes:
@example
[root@@client]# glusterfs -s [BRICK-IP-ADDRESS] /mnt/cluster
@end example
@subsection Improving upon this Setup
@itemize
@item The InfiniBand Verbs RDMA transport is much faster than the
TCP/IP GigE transport.
@item Use of performance translators such as read-ahead, write-behind,
io-cache, io-threads, and booster is recommended.
@item Replace the round-robin (rr) scheduler with ALU to handle more
dynamic storage environments.
@end itemize
@node Troubleshooting
@chapter Troubleshooting
This chapter is a general troubleshooting guide to GlusterFS. It lists
common GlusterFS server and client error messages, debugging hints, and
concludes with the suggested procedure to report bugs in GlusterFS.
@section GlusterFS error messages
@subsection Server errors
@example
glusterfsd: FATAL: could not open specfile:
'/etc/glusterfs/glusterfsd.vol'
@end example
The GlusterFS server expects the volume specification file to be
at @command{/etc/glusterfs/glusterfsd.vol}. The example
specification file will be installed as
@command{/etc/glusterfs/glusterfsd.vol.sample}. You need to edit
it and rename it, or provide a different specification file using
the @command{--spec-file} command line option (See @ref{Server}).
@vskip 4ex
@example
gf_log_init: failed to open logfile "/usr/var/log/glusterfs/glusterfsd.log"
(Permission denied)
@end example
You don't have permission to create files in the
@command{/usr/var/log/glusterfs} directory. Make sure you are running
GlusterFS as root. Alternatively, specify a different path for the log
file using the @command{--log-file} option (See @ref{Server}).
@subsection Client errors
@example
fusermount: failed to access mountpoint /mnt:
Transport endpoint is not connected
@end example
A previous failed (or hung) mount of GlusterFS is preventing it from being
mounted again in the same location. The fix is to do:
@example
# umount /mnt
@end example
and try mounting again.
@vskip 4ex
@strong{``Transport endpoint is not connected''.}
If you get this error when you try a command such as @command{ls} or @command{cat},
it means the GlusterFS mount did not succeed. Try running GlusterFS in @command{DEBUG}
logging level and study the log messages to discover the cause.
@vskip 4ex
@strong{``Connect to server failed'', ``SERVER-ADDRESS: Connection refused''.}
The GlusterFS server is not running or has died. Check your network
connections and firewall settings. To check whether the server is
reachable, try:
@example
telnet IP-ADDRESS 6996
@end example
If the server is accessible, your @command{telnet} command should connect and
block. If not, you will see an error message such as @command{telnet: Unable to
connect to remote host: Connection refused}. 6996 is the default
GlusterFS port; if you have changed it, use the corresponding port
instead.
@vskip 4ex
@example
gf_log_init: failed to open logfile "/usr/var/log/glusterfs/glusterfs.log"
(Permission denied)
@end example
You don't have permission to create files in the
@command{/usr/var/log/glusterfs} directory. Make sure you are running
GlusterFS as root. Alternatively, specify a different path for the log
file using the @command{--log-file} option (See @ref{Client}).
@section FUSE error messages
@command{modprobe fuse} fails with: ``Unknown symbol in module, or unknown parameter''.
@cindex Redhat Enterprise Linux
If you are using fuse-2.6.x on Redhat Enterprise Linux Work Station 4
and Advanced Server 4 with 2.6.9-42.ELlargesmp, 2.6.9-42.ELsmp,
2.6.9-42.EL kernels and get this error while loading @acronym{FUSE} kernel
module, you need to apply the following patch.
For fuse-2.6.2:
@indicateurl{http://ftp.zresearch.com/pub/gluster/glusterfs/fuse/fuse-2.6.2-rhel-build.patch}
For fuse-2.6.3:
@indicateurl{http://ftp.zresearch.com/pub/gluster/glusterfs/fuse/fuse-2.6.3-rhel-build.patch}
@section AppArmour and GlusterFS
@cindex AppArmour
@cindex OpenSuSE
Under OpenSuSE GNU/Linux, the AppArmour security feature does not
allow GlusterFS to create temporary files or network socket
connections even while running as root. You will see error messages
like `Unable to open log file: Operation not permitted' or `Connection
refused'. Disabling AppArmour using YaST or properly configuring
AppArmour to recognize @command{glusterfsd} or @command{glusterfs}/@command{fusermount}
should solve the problem.
@section Reporting a bug
If you encounter a bug in GlusterFS, please follow the guidelines below
when reporting it to the mailing list. Be sure to report it! User
feedback is crucial to the health of the project and we value it
highly.
@subsection General instructions
When running GlusterFS in a non-production environment, be sure to
build it with the following command:
@example
$ make CFLAGS='-g -O0 -DDEBUG'
@end example
This includes debugging information, which is helpful for getting
backtraces (see below), and also disables optimization. Enabling
optimization can result in incorrect line numbers being reported to
gdb.
@subsection Volume specification files
Attach all relevant server and client spec files you were using when
you encountered the bug. Also tell us details of your setup, i.e., how
many clients and how many servers.
@subsection Log files
Set the loglevel of your client and server programs to @acronym{DEBUG} (by
passing the -L @acronym{DEBUG} option) and attach the log files with your bug
report. Obviously, if only the client is failing (for example), you
only need to send us the client log file.
@subsection Backtrace
If GlusterFS has encountered a segmentation fault or has crashed for
some other reason, include the backtrace with the bug report. You can
get the backtrace using the following procedure.
Run the GlusterFS client or server inside gdb.
@example
$ gdb ./glusterfs
(gdb) set args -f client.spec -N -l/path/to/log/file -LDEBUG /mnt/point
(gdb) run
@end example
Now when the process segfaults, you can get the backtrace by typing:
@example
(gdb) bt
@end example
If the GlusterFS process has crashed and dumped a core file (you can
find this in / if running as a daemon and in the current directory
otherwise), you can do:
@example
$ gdb /path/to/glusterfs /path/to/core.<pid>
@end example
and then get the backtrace.
If the GlusterFS server or client seems to be hung, then you can get
the backtrace by attaching gdb to the process. First get the @command{PID} of
the process (using ps), and then do:
@example
$ gdb ./glusterfs <pid>
@end example
Press Ctrl-C to interrupt the process and then generate the backtrace.
@subsection Reproducing the bug
If the bug is reproducible, please include the steps necessary to do
so. If the bug is not reproducible, send us the bug report anyway.
@subsection Other information
If you think it is relevant, also send us the version of @acronym{FUSE}
you're using, the kernel version, and your platform.
@node GNU Free Documentation Licence
@appendix GNU Free Documentation Licence
@include fdl.texi
@node Index
@unnumbered Index
@printindex cp
@bye