From 60bdca792b7e572b4d79382dada1c6b93bebdd0e Mon Sep 17 00:00:00 2001 From: Vijay Bellur Date: Wed, 3 Jul 2013 13:02:29 +0530 Subject: doc: Moved non-relevant documentation files to legacy Change-Id: I2d34e5a4e47cd03d301d9fd2525fb61ae997fcb8 BUG: 811311 Signed-off-by: Vijay Bellur Reviewed-on: http://review.gluster.org/5277 Tested-by: Gluster Build System --- doc/admin-guide/en-US/Administration_Guide.ent | 4 - doc/admin-guide/en-US/Administration_Guide.xml | 27 - doc/admin-guide/en-US/Author_Group.xml | 17 - doc/admin-guide/en-US/Book_Info.xml | 28 - doc/admin-guide/en-US/Chapter.xml | 33 - doc/admin-guide/en-US/Preface.xml | 24 - doc/admin-guide/en-US/Revision_History.xml | 27 - doc/admin-guide/en-US/admin_ACLs.xml | 206 -- doc/admin-guide/en-US/admin_Hadoop.xml | 244 -- doc/admin-guide/en-US/admin_UFO.xml | 1588 ------------ doc/admin-guide/en-US/admin_commandref.xml | 334 --- doc/admin-guide/en-US/admin_console.xml | 28 - doc/admin-guide/en-US/admin_directory_Quota.xml | 179 -- doc/admin-guide/en-US/admin_geo-replication.xml | 732 ------ doc/admin-guide/en-US/admin_managing_volumes.xml | 741 ------ .../en-US/admin_monitoring_workload.xml | 878 ------- doc/admin-guide/en-US/admin_setting_volumes.xml | 325 --- doc/admin-guide/en-US/admin_settingup_clients.xml | 511 ---- doc/admin-guide/en-US/admin_start_stop_daemon.xml | 56 - doc/admin-guide/en-US/admin_storage_pools.xml | 57 - doc/admin-guide/en-US/admin_troubleshooting.xml | 518 ---- doc/admin-guide/en-US/gfs_introduction.xml | 54 - doc/admin-guide/en-US/glossary.xml | 126 - doc/admin-guide/publican.cfg | 12 - doc/legacy/Makefile.am | 3 + doc/legacy/advanced-stripe.odg | Bin 0 -> 12648 bytes doc/legacy/advanced-stripe.pdf | Bin 0 -> 13382 bytes doc/legacy/colonO-icon.jpg | Bin 0 -> 779 bytes doc/legacy/docbook/Administration_Guide.ent | 4 + doc/legacy/docbook/Administration_Guide.xml | 27 + doc/legacy/docbook/Author_Group.xml | 17 + doc/legacy/docbook/Book_Info.xml | 28 + doc/legacy/docbook/Chapter.xml | 33 + doc/legacy/docbook/Preface.xml | 24 + doc/legacy/docbook/Revision_History.xml | 27 + doc/legacy/docbook/admin_ACLs.xml | 206 ++ doc/legacy/docbook/admin_Hadoop.xml | 244 ++ doc/legacy/docbook/admin_UFO.xml | 1588 ++++++++++++ doc/legacy/docbook/admin_commandref.xml | 334 +++ doc/legacy/docbook/admin_console.xml | 28 + doc/legacy/docbook/admin_directory_Quota.xml | 179 ++ doc/legacy/docbook/admin_geo-replication.xml | 732 ++++++ doc/legacy/docbook/admin_managing_volumes.xml | 741 ++++++ doc/legacy/docbook/admin_monitoring_workload.xml | 878 +++++++ doc/legacy/docbook/admin_setting_volumes.xml | 325 +++ doc/legacy/docbook/admin_settingup_clients.xml | 511 ++++ doc/legacy/docbook/admin_start_stop_daemon.xml | 56 + doc/legacy/docbook/admin_storage_pools.xml | 57 + doc/legacy/docbook/admin_troubleshooting.xml | 518 ++++ doc/legacy/docbook/gfs_introduction.xml | 54 + doc/legacy/docbook/glossary.xml | 126 + doc/legacy/docbook/publican.cfg | 12 + doc/legacy/fdl.texi | 454 ++++ doc/legacy/fuse.odg | Bin 0 -> 13190 bytes doc/legacy/fuse.pdf | Bin 0 -> 14948 bytes doc/legacy/ha.odg | Bin 0 -> 37290 bytes doc/legacy/ha.pdf | Bin 0 -> 19403 bytes doc/legacy/stripe.odg | Bin 0 -> 10188 bytes doc/legacy/stripe.pdf | Bin 0 -> 11941 bytes doc/legacy/unify.odg | Bin 0 -> 12955 bytes doc/legacy/unify.pdf | Bin 0 -> 18969 bytes doc/legacy/user-guide.info | 2697 ++++++++++++++++++++ doc/legacy/user-guide.pdf | Bin 0 -> 353986 bytes doc/legacy/user-guide.texi | 2246 ++++++++++++++++ doc/legacy/xlator.odg | Bin 0 -> 12169 bytes doc/legacy/xlator.pdf | 
Bin 0 -> 14358 bytes doc/user-guide/legacy/Makefile.am | 3 - doc/user-guide/legacy/advanced-stripe.odg | Bin 12648 -> 0 bytes doc/user-guide/legacy/advanced-stripe.pdf | Bin 13382 -> 0 bytes doc/user-guide/legacy/colonO-icon.jpg | Bin 779 -> 0 bytes doc/user-guide/legacy/fdl.texi | 454 ---- doc/user-guide/legacy/fuse.odg | Bin 13190 -> 0 bytes doc/user-guide/legacy/fuse.pdf | Bin 14948 -> 0 bytes doc/user-guide/legacy/ha.odg | Bin 37290 -> 0 bytes doc/user-guide/legacy/ha.pdf | Bin 19403 -> 0 bytes doc/user-guide/legacy/stripe.odg | Bin 10188 -> 0 bytes doc/user-guide/legacy/stripe.pdf | Bin 11941 -> 0 bytes doc/user-guide/legacy/unify.odg | Bin 12955 -> 0 bytes doc/user-guide/legacy/unify.pdf | Bin 18969 -> 0 bytes doc/user-guide/legacy/user-guide.info | 2697 -------------------- doc/user-guide/legacy/user-guide.pdf | Bin 353986 -> 0 bytes doc/user-guide/legacy/user-guide.texi | 2246 ---------------- doc/user-guide/legacy/xlator.odg | Bin 12169 -> 0 bytes doc/user-guide/legacy/xlator.pdf | Bin 14358 -> 0 bytes 84 files changed, 12149 insertions(+), 12149 deletions(-) delete mode 100644 doc/admin-guide/en-US/Administration_Guide.ent delete mode 100644 doc/admin-guide/en-US/Administration_Guide.xml delete mode 100644 doc/admin-guide/en-US/Author_Group.xml delete mode 100644 doc/admin-guide/en-US/Book_Info.xml delete mode 100644 doc/admin-guide/en-US/Chapter.xml delete mode 100644 doc/admin-guide/en-US/Preface.xml delete mode 100644 doc/admin-guide/en-US/Revision_History.xml delete mode 100644 doc/admin-guide/en-US/admin_ACLs.xml delete mode 100644 doc/admin-guide/en-US/admin_Hadoop.xml delete mode 100644 doc/admin-guide/en-US/admin_UFO.xml delete mode 100644 doc/admin-guide/en-US/admin_commandref.xml delete mode 100644 doc/admin-guide/en-US/admin_console.xml delete mode 100644 doc/admin-guide/en-US/admin_directory_Quota.xml delete mode 100644 doc/admin-guide/en-US/admin_geo-replication.xml delete mode 100644 doc/admin-guide/en-US/admin_managing_volumes.xml delete mode 100644 doc/admin-guide/en-US/admin_monitoring_workload.xml delete mode 100644 doc/admin-guide/en-US/admin_setting_volumes.xml delete mode 100644 doc/admin-guide/en-US/admin_settingup_clients.xml delete mode 100644 doc/admin-guide/en-US/admin_start_stop_daemon.xml delete mode 100644 doc/admin-guide/en-US/admin_storage_pools.xml delete mode 100644 doc/admin-guide/en-US/admin_troubleshooting.xml delete mode 100644 doc/admin-guide/en-US/gfs_introduction.xml delete mode 100644 doc/admin-guide/en-US/glossary.xml delete mode 100644 doc/admin-guide/publican.cfg create mode 100644 doc/legacy/Makefile.am create mode 100644 doc/legacy/advanced-stripe.odg create mode 100644 doc/legacy/advanced-stripe.pdf create mode 100644 doc/legacy/colonO-icon.jpg create mode 100644 doc/legacy/docbook/Administration_Guide.ent create mode 100644 doc/legacy/docbook/Administration_Guide.xml create mode 100644 doc/legacy/docbook/Author_Group.xml create mode 100644 doc/legacy/docbook/Book_Info.xml create mode 100644 doc/legacy/docbook/Chapter.xml create mode 100644 doc/legacy/docbook/Preface.xml create mode 100644 doc/legacy/docbook/Revision_History.xml create mode 100644 doc/legacy/docbook/admin_ACLs.xml create mode 100644 doc/legacy/docbook/admin_Hadoop.xml create mode 100644 doc/legacy/docbook/admin_UFO.xml create mode 100644 doc/legacy/docbook/admin_commandref.xml create mode 100644 doc/legacy/docbook/admin_console.xml create mode 100644 doc/legacy/docbook/admin_directory_Quota.xml create mode 100644 doc/legacy/docbook/admin_geo-replication.xml create 
mode 100644 doc/legacy/docbook/admin_managing_volumes.xml create mode 100644 doc/legacy/docbook/admin_monitoring_workload.xml create mode 100644 doc/legacy/docbook/admin_setting_volumes.xml create mode 100644 doc/legacy/docbook/admin_settingup_clients.xml create mode 100644 doc/legacy/docbook/admin_start_stop_daemon.xml create mode 100644 doc/legacy/docbook/admin_storage_pools.xml create mode 100644 doc/legacy/docbook/admin_troubleshooting.xml create mode 100644 doc/legacy/docbook/gfs_introduction.xml create mode 100644 doc/legacy/docbook/glossary.xml create mode 100644 doc/legacy/docbook/publican.cfg create mode 100644 doc/legacy/fdl.texi create mode 100644 doc/legacy/fuse.odg create mode 100644 doc/legacy/fuse.pdf create mode 100644 doc/legacy/ha.odg create mode 100644 doc/legacy/ha.pdf create mode 100644 doc/legacy/stripe.odg create mode 100644 doc/legacy/stripe.pdf create mode 100644 doc/legacy/unify.odg create mode 100644 doc/legacy/unify.pdf create mode 100644 doc/legacy/user-guide.info create mode 100644 doc/legacy/user-guide.pdf create mode 100644 doc/legacy/user-guide.texi create mode 100644 doc/legacy/xlator.odg create mode 100644 doc/legacy/xlator.pdf delete mode 100644 doc/user-guide/legacy/Makefile.am delete mode 100644 doc/user-guide/legacy/advanced-stripe.odg delete mode 100644 doc/user-guide/legacy/advanced-stripe.pdf delete mode 100644 doc/user-guide/legacy/colonO-icon.jpg delete mode 100644 doc/user-guide/legacy/fdl.texi delete mode 100644 doc/user-guide/legacy/fuse.odg delete mode 100644 doc/user-guide/legacy/fuse.pdf delete mode 100644 doc/user-guide/legacy/ha.odg delete mode 100644 doc/user-guide/legacy/ha.pdf delete mode 100644 doc/user-guide/legacy/stripe.odg delete mode 100644 doc/user-guide/legacy/stripe.pdf delete mode 100644 doc/user-guide/legacy/unify.odg delete mode 100644 doc/user-guide/legacy/unify.pdf delete mode 100644 doc/user-guide/legacy/user-guide.info delete mode 100644 doc/user-guide/legacy/user-guide.pdf delete mode 100644 doc/user-guide/legacy/user-guide.texi delete mode 100644 doc/user-guide/legacy/xlator.odg delete mode 100644 doc/user-guide/legacy/xlator.pdf (limited to 'doc') diff --git a/doc/admin-guide/en-US/Administration_Guide.ent b/doc/admin-guide/en-US/Administration_Guide.ent deleted file mode 100644 index 3381b2bfe..000000000 --- a/doc/admin-guide/en-US/Administration_Guide.ent +++ /dev/null @@ -1,4 +0,0 @@ - - - - diff --git a/doc/admin-guide/en-US/Administration_Guide.xml b/doc/admin-guide/en-US/Administration_Guide.xml deleted file mode 100644 index 483855b1a..000000000 --- a/doc/admin-guide/en-US/Administration_Guide.xml +++ /dev/null @@ -1,27 +0,0 @@ - - -%BOOK_ENTITIES; -]> - - - - - - - - - - - - - - - - - - - - - - diff --git a/doc/admin-guide/en-US/Author_Group.xml b/doc/admin-guide/en-US/Author_Group.xml deleted file mode 100644 index f3fa31740..000000000 --- a/doc/admin-guide/en-US/Author_Group.xml +++ /dev/null @@ -1,17 +0,0 @@ - - -%BOOK_ENTITIES; -]> - - - Divya - Muntimadugu - - Red Hat - Engineering Content Services - - divya@redhat.com - - - diff --git a/doc/admin-guide/en-US/Book_Info.xml b/doc/admin-guide/en-US/Book_Info.xml deleted file mode 100644 index 6be6a7816..000000000 --- a/doc/admin-guide/en-US/Book_Info.xml +++ /dev/null @@ -1,28 +0,0 @@ - - -%BOOK_ENTITIES; -]> - - Administration Guide - Using Gluster File System Beta 3 - Gluster File System - 3.3 - 1 - 1 - - - This guide describes Gluster File System (GlusterFS) and provides information on how to configure, operate, and manage GlusterFS. 
- - - - - - - - - - - - - diff --git a/doc/admin-guide/en-US/Chapter.xml b/doc/admin-guide/en-US/Chapter.xml deleted file mode 100644 index 4a1cef872..000000000 --- a/doc/admin-guide/en-US/Chapter.xml +++ /dev/null @@ -1,33 +0,0 @@ - - -%BOOK_ENTITIES; -]> - - Test Chapter - - This is a test paragraph - -
- Test Section 1 - - This is a test paragraph in a section - -
- -
- Test Section 2 - - This is a test paragraph in Section 2 - - - - listitem text - - - - -
- -
- diff --git a/doc/admin-guide/en-US/Preface.xml b/doc/admin-guide/en-US/Preface.xml deleted file mode 100644 index 320311906..000000000 --- a/doc/admin-guide/en-US/Preface.xml +++ /dev/null @@ -1,24 +0,0 @@ - - - -%BOOK_ENTITIES; -]> - - Preface - This guide describes how to configure, operate, and manage Gluster File System (GlusterFS). -
- Audience - This guide is intended for systems administrators interested in configuring and managing GlusterFS. - This guide assumes that you are familiar with the Linux operating system, file system concepts, GlusterFS concepts, and GlusterFS installation. -
-
- License - The License information is available at . -
- - - - - -
diff --git a/doc/admin-guide/en-US/Revision_History.xml b/doc/admin-guide/en-US/Revision_History.xml deleted file mode 100644 index 09320821f..000000000 --- a/doc/admin-guide/en-US/Revision_History.xml +++ /dev/null @@ -1,27 +0,0 @@ - - -%BOOK_ENTITIES; -]> - - Revision History - - - - 1-0 - Thu Apr 5 2012 - - Divya - Muntimadugu - divya@redhat.com - - - - Draft - - - - - - - diff --git a/doc/admin-guide/en-US/admin_ACLs.xml b/doc/admin-guide/en-US/admin_ACLs.xml deleted file mode 100644 index 156e52c17..000000000 --- a/doc/admin-guide/en-US/admin_ACLs.xml +++ /dev/null @@ -1,206 +0,0 @@ - - - - POSIX Access Control Lists - POSIX Access Control Lists (ACLs) allows you to assign different permissions for different users or -groups even though they do not correspond to the original owner or the owning group. - - For example: User john creates a file but does not want to allow anyone to do anything with this -file, except another user, antony (even though there are other users that belong to the group john). - - This means, in addition to the file owner, the file group, and others, additional users and groups can -be granted or denied access by using POSIX ACLs. - -
- Activating POSIX ACLs Support - To use POSIX ACLs for a file or directory, the partition of the file or directory must be mounted with -POSIX ACLs support. - -
- Activating POSIX ACLs Support on Server - To mount the backend export directories with POSIX ACLs support, use the following command: - # mount -o acl device-name partition - For example: - # mount -o acl /dev/sda1 /export1 - Alternatively, if the partition is listed in the /etc/fstab file, add the acl mount option to the partition's entry so that POSIX ACLs are enabled on every mount: - LABEL=/work /export1 ext3 rw,acl 1 4 -
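A quick sanity check, assuming the example brick above, is to confirm that the remount actually registered the acl option and that an ACL entry can be written on the brick:
# mount | grep /export1
# setfacl -m u:antony:r /export1
# getfacl /export1
The first command should list acl among the mount options; the setfacl/getfacl pair fails with "Operation not supported" if the file system was not remounted with ACL support.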
-
- Activating POSIX ACLs Support on Client - To mount glusterfs volumes with POSIX ACLs support, use the following command: - # mount -t glusterfs -o acl servername:volume-id mount-point - - For example: - # mount -t glusterfs -o acl 198.192.198.234:glustervolume /mnt/gluster - -
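To make the client mount persistent across reboots, a minimal sketch of an /etc/fstab entry, assuming the server and volume from the example above, would be:
198.192.198.234:/glustervolume  /mnt/gluster  glusterfs  defaults,acl,_netdev  0 0
The _netdev option delays the mount until networking is available; verify the entry with mount -a before relying on it at boot.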
-
-
- Setting POSIX ACLs - You can set two types of POSIX ACLs, that is, access ACLs and default ACLs. You can use -access ACLs to grant permission for a specific file or directory. You can use default ACLs only -on a directory but if a file inside that directory does not have an ACLs, it inherits the permissions of -the default ACLs of the directory. - - You can set ACLs for per user, per group, for users not in the user group for the file, and via the -effective right mask. - -
- Setting Access ACLs - You can apply access ACLs to grant permission for both files and directories. - - To set or modify Access ACLs - - You can set or modify access ACLs use the following command: - - # setfacl –m entry type file - The ACL entry types are the POSIX ACLs representations of owner, group, and other. - - Permissions must be a combination of the characters r (read), w (write), and x (execute). You must -specify the ACL entry in the following format and can specify multiple entry types separated by -commas. - - - - - - - - ACL Entry - Description - - - - - u:uid:<permission> - Sets the access ACLs for a user. You can specify user name or UID - - - g:gid:<permission> - Sets the access ACLs for a group. You can specify group name or GID. - - - m:<permission> - Sets the effective rights mask. The mask is the combination of all access permissions of the owning group and all of the user and group entries. - - - o:<permission> - Sets the access ACLs for users other than the ones in the group for the file. - - - - - If a file or directory already has an POSIX ACLs, and the setfacl command is used, the additional -permissions are added to the existing POSIX ACLs or the existing rule is modified. - - For example, to give read and write permissions to user antony: - - # setfacl -m u:antony:rw /mnt/gluster/data/testfile -
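Because setfacl accepts a comma-separated list, several of the entry types from the table can be applied in one call. A hypothetical example using user antony, a group named accounting, and a file under the earlier mount point:
# setfacl -m u:antony:rw,g:accounting:r,o:--- /mnt/gluster/data/testfile
# getfacl /mnt/gluster/data/testfile
getfacl shows the resulting entries, including the effective rights mask that setfacl recalculates automatically.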
-
- Setting Default ACLs - You can apply default ACLs only to directories. They determine the permissions that a file system object inherits from its parent directory when it is created. - To set default ACLs - - You can set default ACLs on a directory using the following command: - - # setfacl -d -m entry-type directory - - For example, to set the default ACLs for the /data directory to read for users not in the user group: - - # setfacl -d -m o::r /mnt/gluster/data - - An access ACL set for an individual file can override the default ACL permissions. - - - Effects of a Default ACL - The following are the ways in which the permissions of a directory's default ACLs are passed to the files and subdirectories in it: - - - - A subdirectory inherits the default ACLs of the parent directory both as its default ACLs and as its access ACLs. - - - - A file inherits the default ACLs as its access ACLs. - - - -
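The inheritance behaviour described above can be observed directly. The following sketch, reusing the example directory, sets a default entry, creates a file, and reads back the ACL the file inherited:
# setfacl -d -m u:antony:rwx /mnt/gluster/data
# touch /mnt/gluster/data/newfile
# getfacl /mnt/gluster/data/newfile
The new file carries a user:antony entry as its access ACL; its effective permissions are limited by the mask derived from the file-creation mode.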
-
-
- Retrieving POSIX ACLs - You can view the existing POSIX ACLs for a file or directory. - - To view existing POSIX ACLs - - - View the existing access ACLs of a file using the following command: - - # getfacl path/filename - - For example, to view the existing POSIX ACLs for sample.jpg - - # getfacl /mnt/gluster/data/test/sample.jpg -# owner: antony -# group: antony -user::rw- -group::rw- -other::r-- - - - View the default ACLs of a directory using the following command: - - # getfacl directory name - For example, to view the existing ACLs for /data/doc - - # getfacl /mnt/gluster/data/doc -# owner: antony -# group: antony -user::rw- -user:john:r-- -group::r-- -mask::r-- -other::r-- -default:user::rwx -default:user:antony:rwx -default:group::r-x -default:mask::rwx -default:other::r-x - - -
-
- Removing POSIX ACLs - To remove all the permissions for a user, groups, or others, use the following command: - - # setfacl -x ACL entry type file - For example, to remove all permissions from the user antony: - - # setfacl -x u:antony /mnt/gluster/data/test-file -
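setfacl also provides switches for removing ACLs wholesale rather than one entry at a time; a brief sketch using the same example paths:
# setfacl -b /mnt/gluster/data/test-file
# setfacl -k /mnt/gluster/data
The -b option strips all extended ACL entries and leaves only the base owner, group, and other permissions, while -k removes only the default ACL from a directory.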
-
- Samba and ACLs - If you are using Samba to access GlusterFS FUSE mount, then POSIX ACLs are enabled by default. -Samba has been compiled with the --with-acl-support option, so no special flags are required -when accessing or mounting a Samba share. - -
-
- NFS and ACLs - ACL configuration through NFS is currently not supported; that is, the setfacl and getfacl commands do not work over NFS mounts. However, ACL permissions set using the Gluster Native Client apply on NFS mounts. - -
-
diff --git a/doc/admin-guide/en-US/admin_Hadoop.xml b/doc/admin-guide/en-US/admin_Hadoop.xml deleted file mode 100644 index 08bac8961..000000000 --- a/doc/admin-guide/en-US/admin_Hadoop.xml +++ /dev/null @@ -1,244 +0,0 @@ - - -%BOOK_ENTITIES; -]> - - Managing Hadoop Compatible Storage - GlusterFS provides compatibility for Apache Hadoop and it uses the standard file system -APIs available in Hadoop to provide a new storage option for Hadoop deployments. Existing -MapReduce based applications can use GlusterFS seamlessly. This new functionality opens up data -within Hadoop deployments to any file-based or object-based application. - - -
- Architecture Overview - The following diagram illustrates Hadoop integration with GlusterFS: - - - - - - -
-
- Advantages - -The following are the advantages of Hadoop Compatible Storage with GlusterFS: - - - - - - Provides simultaneous file-based and object-based access within Hadoop. - - - - Eliminates the centralized metadata server. - - - - Provides compatibility with MapReduce applications and rewrite is not required. - - - - Provides a fault tolerant file system. - - - -
-
- Preparing to Install Hadoop Compatible Storage - This section provides information on pre-requisites and list of dependencies that will be installed -during installation of Hadoop compatible storage. - - -
- Pre-requisites - The following are the pre-requisites to install Hadoop Compatible -Storage : - - - - - Hadoop 0.20.2 is installed, configured, and is running on all the machines in the cluster. - - - - Java Runtime Environment - - - - Maven (mandatory only if you are building the plugin from the source) - - - - JDK (mandatory only if you are building the plugin from the source) - - - - getfattr -- command line utility - - -
-
-
- Installing, and Configuring Hadoop Compatible Storage - This section describes how to install and configure Hadoop Compatible Storage in your storage -environment and verify that it is functioning correctly. - - - - To install and configure Hadoop compatible storage: - - Download glusterfs-hadoop-0.20.2-0.1.x86_64.rpm file to each server on your cluster. You can download the file from . - - - - - To install Hadoop Compatible Storage on all servers in your cluster, run the following command: - - # rpm –ivh --nodeps glusterfs-hadoop-0.20.2-0.1.x86_64.rpm - - The following files will be extracted: - - - - /usr/local/lib/glusterfs-Hadoop-version-gluster_plugin_version.jar - - - /usr/local/lib/conf/core-site.xml - - - - - (Optional) To install Hadoop Compatible Storage in a different location, run the following -command: - - # rpm –ivh --nodeps –prefix /usr/local/glusterfs/hadoop glusterfs-hadoop- 0.20.2-0.1.x86_64.rpm - - - - Edit the conf/core-site.xml file. The following is the sample conf/core-site.xml file: - - <configuration> - <property> - <name>fs.glusterfs.impl</name> - <value>org.apache.hadoop.fs.glusterfs.Gluster FileSystem</value> -</property> - -<property> - <name>fs.default.name</name> - <value>glusterfs://fedora1:9000</value> -</property> - -<property> - <name>fs.glusterfs.volname</name> - <value>hadoopvol</value> -</property> - -<property> - <name>fs.glusterfs.mount</name> - <value>/mnt/glusterfs</value> -</property> - -<property> - <name>fs.glusterfs.server</name> - <value>fedora2</value> -</property> - -<property> - <name>quick.slave.io</name> - <value>Off</value> -</property> -</configuration> - - The following are the configurable fields: - - - - - - - - - Property Name - Default Value - Description - - - - - fs.default.name - glusterfs://fedora1:9000 - Any hostname in the cluster as the server and any port number. - - - fs.glusterfs.volname - hadoopvol - GlusterFS volume to mount. - - - fs.glusterfs.mount - /mnt/glusterfs - The directory used to fuse mount the volume. - - - fs.glusterfs.server - fedora2 - Any hostname or IP address on the cluster except the client/master. - - - quick.slave.io - Off - Performance tunable option. If this option is set to On, the plugin will try to perform I/O directly from the disk file system (like ext3 or ext4) the file resides on. Hence read performance will improve and job would run faster. - This option is not tested widely - - - - - - - - Create a soft link in Hadoop’s library and configuration directory for the downloaded files (in -Step 3) using the following commands: - - # ln -s <target location> <source location> - - For example, - - # ln –s /usr/local/lib/glusterfs-0.20.2-0.1.jar $HADOOP_HOME/lib/glusterfs-0.20.2-0.1.jar - - # ln –s /usr/local/lib/conf/core-site.xml $HADOOP_HOME/conf/core-site.xml - - - (Optional) You can run the following command on Hadoop master to build the plugin and deploy -it along with core-site.xml file, instead of repeating the above steps: - - # build-deploy-jar.py -d $HADOOP_HOME -c - - -
-
- Starting and Stopping the Hadoop MapReduce Daemon - To start and stop MapReduce daemon - - - To start MapReduce daemon manually, enter the following command: - - # $HADOOP_HOME/bin/start-mapred.sh - - - - To stop MapReduce daemon manually, enter the following command: - - # $HADOOP_HOME/bin/stop-mapred.sh - - - - You must start Hadoop MapReduce daemon on all servers. - - -
-
diff --git a/doc/admin-guide/en-US/admin_UFO.xml b/doc/admin-guide/en-US/admin_UFO.xml deleted file mode 100644 index 03be14dc9..000000000 --- a/doc/admin-guide/en-US/admin_UFO.xml +++ /dev/null @@ -1,1588 +0,0 @@ - - -%BOOK_ENTITIES; -]> - - Managing Unified File and Object Storage - Unified File and Object Storage (UFO) unifies NAS and object storage technology. It -provides a system for data storage that enables users to access the same data, both as an object and as a -file, thus simplifying management and controlling storage costs. - - - Unified File and Object Storage is built upon Openstack's Object Storage Swift. Open Stack Object Storage allows users to store and retrieve files and content through a simple Web Service (REST: Representational State Transfer) interface as objects and GlusterFS, allows users to store and retrieve files using Native Fuse and NFS mounts. It uses GlusterFS as a backend file system for Open Stack Swift. It also leverages on Open Stack Swift's web interface for storing and retrieving files over the web combined with GlusterFS features like scalability and high availability, replication, elastic volume management for data management at disk level. - Unified File and Object Storage technology enables enterprises to adopt and deploy -cloud storage solutions. It allows users to access and modify data as objects from a -REST interface along with the ability to access and modify files from NAS interfaces including NFS -and CIFS. In addition to decreasing cost and making it faster and easier to access object data, -it also delivers massive scalability, high availability and replication of object storage. -Infrastructure as a Service (IaaS) providers can utilize GlusterFS Unified File and Object Storage technology to enable their own cloud -storage service. Enterprises can use this technology to accelerate the process of preparing file-based -applications for the cloud and simplify new application development for cloud computing -environments. - - - OpenStack Object Storage is scalable object storage system and it is not a traditional file system. You will not be able to mount this system like traditional SAN or NAS -volumes and perform POSIX compliant operations. -
- Unified File and Object Storage Architecture - - - - - -
-
- Components of Object Storage - The major components of Object Storage are: - - Proxy Server - - - All REST requests to the UFO are routed through the Proxy Server. - - - - Objects and Containers - An object is the basic storage entity and any optional metadata that represents the data -you store. When you upload data, the data is stored as-is (with no compression or encryption). - - - A container is a storage compartment for your data and provides a way for you to organize -your data. Containers can be visualized as directories in a Linux system. Data must be stored in a container and hence objects are created within a container. - - - It implements objects as files and directories under the container. The object name is a '/' separated path and UFO maps it to directories until the last name in the path, which is marked as a file. With this approach, objects can be accessed as files and directories from native GlusterFS (FUSE) or NFS mounts by providing the '/' separated path. - Accounts and Account Servers - The OpenStack Object Storage system is designed to be used by many different storage -consumers. Each user is associated with one or more accounts and must identify themselves using an authentication system. While authenticating, users must provide the name of the account for which the authentication is requested. - - - UFO implements accounts as GlusterFS volumes. So, when a user is granted read/write permission on an account, it means that that user has access to all the data available on that GlusterFS volume. - - - - - - Authentication and Access Permissions - - - You must authenticate against an authentication service to receive OpenStack Object -Storage connection parameters and an authentication token. The token must be passed -in for all subsequent container or object operations. One authentication service that you -can use as a middleware example is called tempauth. - By default, each user has their own storage account and has full access to that -account. Users must authenticate with their credentials as described above, but once -authenticated they can manage containers and objects within that account. If a user wants to access the content from another account, they must have API access key or a session token provided by their authentication system. -
-
- Advantages of using GlusterFS Unified File and Object Storage - The following are the advantages of using GlusterFS UFO: - - - No limit on upload and download files sizes as compared to Open Stack Swift which limits the object size to 5GB. - - - A unified view of data across NAS and Object Storage technologies. - - - Using GlusterFS's UFO has other advantages like the following: - - - High availability - - - Scalability - - - Replication - - - Elastic Volume management - - - - -
-
- Preparing to Deploy Unified File and Object Storage - This section provides information on pre-requisites and list of dependencies that will be installed -during the installation of Unified File and Object Storage. - -
- Pre-requisites - GlusterFS Unified File and Object Storage needs user_xattr support from the underlying disk file system. Use the following command to enable user_xattr on the GlusterFS brick backend: - - # mount -o remount,user_xattr device-name - For example, - - # mount -o remount,user_xattr /dev/hda1 - -
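To keep user_xattr enabled across reboots, the option can also be added to the brick's /etc/fstab entry. A sketch assuming the example device above and a hypothetical /export1 mount point:
/dev/hda1  /export1  ext3  defaults,user_xattr  0 0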
-
- Dependencies - The following packages are installed on GlusterFS when you install Unified File and Object -Storage: - - - - - curl - - - - - - - - - - - - - - - - - memcached - - - openssl - - - xfsprogs - - - python2.6 - - - pyxattr - - - python-configobj - - - - python-setuptools - - - - - python-simplejson - - - - - python-webob - - - - - python-eventlet - - - - - python-greenlet - - - - - python-pastedeploy - - - - - python-netifaces - - - -
-
-
- Installing and Configuring Unified File and Object Storage - This section provides instructions on how to install and configure Unified File and Object Storage in your storage -environment. -
- Installing Unified File and Object Storage - To install Unified File and Object Storage: - - - Download rhel_install.sh install script from . - - - - Run - rhel_install.sh script using the following command: - - # sh rhel_install.sh - - - Download swift-1.4.5-1.noarch.rpm and swift-plugin-1.0.-1.el6.noarch.rpm files from . - - - Install swift-1.4.5-1.noarch.rpm and swift-plugin-1.0.-1.el6.noarch.rpm using the following commands: - # rpm -ivh swift-1.4.5-1.noarch.rpm - # rpm -ivh swift-plugin-1.0.-1.el6.noarch.rpm - - You must repeat the above steps on all the machines on which you want to install Unified File and Object Storage. If you install the Unified File and Object Storage on multiple servers, you can use a load balancer like pound, nginx, and so on to distribute the request across the machines. - - - -
-
- Adding Users - The authentication system allows the administrator to grant different levels of access to different users based on the requirement. The following are the types of user permissions: - - - admin user - - - - normal user - - - An admin user has read and write permissions on the account. By default, a normal user has no read or write permissions; a normal user can only authenticate itself to get an Auth-Token. Read and write permissions are granted to normal users through ACLs set by the admin users. - Add a new user by adding the following entry to the /etc/swift/proxy-server.conf file: - user_<account-name>_<user-name> = <password> [.admin] - For example, - user_test_tester = testing .admin - - - During installation, the installation script adds a few sample users to the proxy-server.conf file. It is highly recommended that you remove all the default sample user entries from the configuration file. - - - For more information on setting ACLs, see . -
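Putting this together, a hypothetical tempauth block for one account (GlusterFS volume) named myvol with an admin user and a normal user might look like the following; the account, user names, and passwords are illustrative only:
[filter:tempauth]
use = egg:swift#tempauth
user_myvol_admin = adminpass .admin
user_myvol_reader = readerpass
After editing proxy-server.conf, stop and start the servers (swift-init main stop; swift-init main start) so the new users take effect.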
-
- Configuring Proxy Server - The Proxy Server is responsible for connecting to the rest of the OpenStack Object Storage architecture. For each request, it looks up the location of the account, container, or object in the ring and route the request accordingly. The public API is also exposed through the proxy server. When objects are streamed to or from an object server, they are streamed directly through the proxy server to or from the user – the proxy server does not spool them. - - The configurable options pertaining to proxy server are stored in /etc/swift/proxy-server.conf. The following is the sample proxy-server.conf file: - [app:proxy-server] -use = egg:swift#proxy -allow_account_management=true -account_autocreate=true - -[filter:tempauth] -use = egg:swift#tempauth user_admin_admin=admin.admin.reseller_admin -user_test_tester=testing.admin -user_test2_tester2=testing2.admin -user_test_tester3=testing3 - -[filter:healthcheck] -use = egg:swift#healthcheck - -[filter:cache] -use = egg:swift#memcache - By default, GlusterFS's Unified File and Object Storage is configured to support HTTP protocol and uses temporary authentication to authenticate the HTTP requests. -
-
- Configuring Authentication System - Proxy server must be configured to authenticate using - tempauth - . -
-
- Configuring Proxy Server for HTTPS - By default, proxy server only handles HTTP request. To configure the proxy server to process HTTPS requests, perform the following steps: - - - Create self-signed cert for SSL using the following commands: - cd /etc/swift -openssl req -new -x509 -nodes -out cert.crt -keyout cert.key - - - Add the following lines to /etc/swift/proxy-server.conf under [DEFAULT] - bind_port = 443 - cert_file = /etc/swift/cert.crt - key_file = /etc/swift/cert.key - - - Restart the servers using the following commands: - swift-init main stop -swift-init main start - - - The following are the configurable options: - - - proxy-server.conf Default Options in the [DEFAULT] section - - - - - - - Option - Default - Description - - - - - bind_ip - 0.0.0.0 - IP Address for server to bind - - - bind_port - 80 - Port for server to bind - - - swift_dir - /etc/swift - Swift configuration directory - - - workers - 1 - Number of workers to fork - - - user - swift - swift user - - - cert_file - - Path to the ssl .crt - - - key_file - - Path to the ssl .key - - - -
- - proxy-server.conf Server Options in the [proxy-server] section - - - - - - - Option - Default - Description - - - - - use - - paste.deploy entry point for the container server. For most cases, this should be egg:swift#container. - - - log_name - proxy-server - Label used when logging - - - log_facility - LOG_LOCAL0 - Syslog log facility - - - log_level - INFO - Log level - - - log_headers - True - If True, log headers in each request - - - recheck_account_existence - 60 - Cache timeout in seconds to send memcached for account existence - - - recheck_container_existence - 60 - Cache timeout in seconds to send memcached for container existence - - - object_chunk_size - 65536 - Chunk size to read from object servers - - - client_chunk_size - 65536 - Chunk size to read from clients - - - memcache_servers - 127.0.0.1:11211 - Comma separated list of memcached servers ip:port - - - node_timeout - 10 - Request timeout to external services - - - client_timeout - 60 - Timeout to read one chunk from a client - - - conn_timeout - 0.5 - Connection timeout to external services - - - error_suppression_interval - 60 - Time in seconds that must elapse since the last error for a node to be considered no longer error limited - - - error_suppression_limit - 10 - Error count to consider a node error limited - - - allow_account_management - false - Whether account PUTs and DELETEs are even callable - - - -
-
-
- Configuring Object Server - The Object Server is a very simple blob storage server that can store, retrieve, and delete objects stored on local devices. Objects are stored as binary files on the file system with metadata stored in the file’s extended attributes (xattrs). This requires that the underlying file system choice for object servers support xattrs on files. - - - The configurable options pertaining Object Server are stored in the file /etc/swift/object-server/1.conf. The following is the sample object-server/1.conf file: - [DEFAULT] -devices = /srv/1/node -mount_check = false -bind_port = 6010 -user = root -log_facility = LOG_LOCAL2 - -[pipeline:main] -pipeline = gluster object-server - -[app:object-server] -use = egg:swift#object - -[filter:gluster] -use = egg:swift#gluster - -[object-replicator] -vm_test_mode = yes - -[object-updater] -[object-auditor] - The following are the configurable options: - - - object-server.conf Default Options in the [DEFAULT] section - - - - - - - Option - Default - Description - - - - - swift_dir - /etc/swift - Swift configuration directory - - - devices - /srv/node - Mount parent directory where devices are mounted - - - mount_check - true - Whether or not check if the devices are mounted to prevent accidentally writing to the root device - - - bind_ip - 0.0.0.0 - IP Address for server to bind - - - bind_port - 6000 - Port for server to bind - - - workers - 1 - Number of workers to fork - - - -
- - object-server.conf Server Options in the [object-server] section - - - - - - - Option - Default - Description - - - - - use - - paste.deploy entry point for the object server. For most cases, this should be egg:swift#object. - - - log_name - object-server - log name used when logging - - - log_facility - LOG_LOCAL0 - Syslog log facility - - - log_level - INFO - Logging level - - - log_requests - True - Whether or not to log each request - - - user - swift - swift user - - - node_timeout - 3 - Request timeout to external services - - - conn_timeout - 0.5 - Connection timeout to external services - - - network_chunk_size - 65536 - Size of chunks to read or write over the network - - - disk_chunk_size - 65536 - Size of chunks to read or write to disk - - - max_upload_time - 65536 - Maximum time allowed to upload an object - - - slow - 0 - If > 0, Minimum time in seconds for a PUT or DELETE request to complete - - - -
-
-
- Configuring Container Server - The Container Server’s primary job is to handle listings of objects. The listing is done by querying the GlusterFS mount point with path. This query returns a list of all files and directories present under that container. - - The configurable options pertaining to container server are stored in /etc/swift/container-server/1.conf file. The following is the sample container-server/1.conf file: - [DEFAULT] -devices = /srv/1/node -mount_check = false -bind_port = 6011 -user = root -log_facility = LOG_LOCAL2 - -[pipeline:main] -pipeline = gluster container-server - -[app:container-server] -use = egg:swift#container - -[filter:gluster] -use = egg:swift#gluster - -[container-replicator] -[container-updater] -[container-auditor] - The following are the configurable options: - - container-server.conf Default Options in the [DEFAULT] section - - - - - - - Option - Default - Description - - - - - swift_dir - /etc/swift - Swift configuration directory - - - devices - /srv/node - Mount parent directory where devices are mounted - - - mount_check - true - Whether or not check if the devices are mounted to prevent accidentally writing to the root device - - - bind_ip - 0.0.0.0 - IP Address for server to bind - - - bind_port - 6001 - Port for server to bind - - - workers - 1 - Number of workers to fork - - - user - swift - Swift user - - - -
- - container-server.conf Server Options in the [container-server] section - - - - - - - Option - Default - Description - - - - - use - - paste.deploy entry point for the container server. For most cases, this should be egg:swift#container. - - - log_name - container-server - Label used when logging - - - log_facility - LOG_LOCAL0 - Syslog log facility - - - log_level - INFO - Logging level - - - node_timeout - 3 - Request timeout to external services - - - conn_timeout - 0.5 - Connection timeout to external services - - - -
-
-
- Configuring Account Server - The Account Server is very similar to the Container Server, except that it is responsible for listing of containers rather than objects. In UFO, each gluster volume is an account. - - The configurable options pertaining to account server are stored in /etc/swift/account-server/1.conf file. The following is the sample account-server/1.conf file: - [DEFAULT] -devices = /srv/1/node -mount_check = false -bind_port = 6012 -user = root -log_facility = LOG_LOCAL2 - -[pipeline:main] -pipeline = gluster account-server - -[app:account-server] -use = egg:swift#account - -[filter:gluster] -use = egg:swift#gluster - -[account-replicator] -vm_test_mode = yes - -[account-auditor] -[account-reaper] - The following are the configurable options: - - account-server.conf Default Options in the [DEFAULT] section - - - - - - - Option - Default - Description - - - - - swift_dir - /etc/swift - Swift configuration directory - - - devices - /srv/node - mount parent directory where devices are mounted - - - mount_check - true - Whether or not check if the devices are mounted to prevent accidentally writing to the root device - - - bind_ip - 0.0.0.0 - IP Address for server to bind - - - bind_port - 6002 - Port for server to bind - - - workers - 1 - Number of workers to fork - - - user - swift - Swift user - - - -
- - account-server.conf Server Options in the [account-server] section - - - - - - - Option - Default - Description - - - - - use - - paste.deploy entry point for the container server. For most cases, this should be egg:swift#container. - - - log_name - account-server - Label used when logging - - - log_facility - LOG_LOCAL0 - Syslog log facility - - - log_level - INFO - Logging level - - - -
-
-
- Starting and Stopping Server - You must start the server manually when the system reboots and whenever you update or modify the configuration files. - - - To start the server, enter the following command: - # swift-init main start - - - To stop the server, enter the following command: - # swift-init main stop - - -
-
-
- Working with Unified File and Object Storage - This section describes the REST API for administering and managing Object Storage. All requests will -be directed to the host and URL described in the X-Storage-URL HTTP header obtained during -successful authentication. - -
- Configuring Authenticated Access - Authentication is the process of proving identity to the system. To use the REST interface, you must -obtain an authorization token using GET method and supply it with v1.0 as the path. - - Each REST request against the Object Storage system requires the addition of a specific authorization -token HTTP x-header, defined as X-Auth-Token. The storage URL and authentication token are -returned in the headers of the response. - - - - To authenticate, run the following command: - - GET auth/v1.0 HTTP/1.1 -Host: <auth URL> -X-Auth-User: <account name>:<user name> -X-Auth-Key: <user-Password> - For example, - - GET auth/v1.0 HTTP/1.1 -Host: auth.example.com -X-Auth-User: test:tester -X-Auth-Key: testing - -HTTP/1.1 200 OK -X-Storage-Url: https:/example.storage.com:443/v1/AUTH_test -X-Storage-Token: AUTH_tkde3ad38b087b49bbbac0494f7600a554 -X-Auth-Token: AUTH_tkde3ad38b087b49bbbac0494f7600a554 -Content-Length: 0 -Date: Wed, 10 jul 2011 06:11:51 GMT - To authenticate access using cURL (for the above example), run the following -command: - - curl -v -H 'X-Storage-User: test:tester' -H 'X-Storage-Pass:testing' -k -https://auth.example.com:443/auth/v1.0 - The X-Auth-Url has to be parsed and used in the connection and request line of all subsequent -requests to the server. In the example output, users connecting to server will send most -container/object requests with a host header of example.storage.com and the request line's version -and account as v1/AUTH_test. - - - - - - The authentication tokens are valid for a 24 hour period. - - -
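In practice the token and storage URL returned by the authentication call are captured once and reused for later requests. A minimal shell sketch against the example endpoint (hostname and credentials assumed from the example above):
RESPONSE=$(curl -s -i -k -H 'X-Storage-User: test:tester' -H 'X-Storage-Pass: testing' https://auth.example.com:443/auth/v1.0)
TOKEN=$(printf '%s' "$RESPONSE" | awk '/^X-Auth-Token:/ {print $2}' | tr -d '\r')
STORAGE_URL=$(printf '%s' "$RESPONSE" | awk '/^X-Storage-Url:/ {print $2}' | tr -d '\r')
curl -s -k -H "X-Auth-Token: $TOKEN" "$STORAGE_URL"
The last command lists the containers of the account, confirming that the captured token works.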
-
- Working with Accounts - This section describes the list of operations you can perform at the account level of the URL. - -
- Displaying Container Information - You can list the objects of a specific container, or all containers, as needed using GET command. You -can use the following optional parameters with GET request to refine the results: - - - - - - - - Parameter - Description - - - - - limit - Limits the number of results to at most n value. - - - marker - Returns object names greater in value than the specified marker. - - - format - Specify either json or xml to return the respective serialized response. - - - - - To display container information - - - List all the containers of an account using the following command: - - GET /<apiversion>/<account> HTTP/1.1 -Host: <storage URL> -X-Auth-Token: <authentication-token-key> - For example, - - GET /v1/AUTH_test HTTP/1.1 -Host: example.storage.com -X-Auth-Token: AUTH_tkd3ad38b087b49bbbac0494f7600a554 - -HTTP/1.1 200 Ok -Date: Wed, 13 Jul 2011 16:32:21 GMT -Server: Apache -Content-Type: text/plain; charset=UTF-8 -Content-Length: 39 - -songs -movies -documents -reports - - - To display container information using cURL (for the above example), run the following -command: - - curl -v -X GET -H 'X-Auth-Token: AUTH_tkde3ad38b087b49bbbac0494f7600a554' -https://example.storage.com:443/v1/AUTH_test -k -
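The optional parameters in the table are passed as a query string. For example, to fetch a JSON-serialized listing limited to two containers (token and host reused from the example above; quote the URL so the shell does not interpret the &):
curl -v -X GET -H 'X-Auth-Token: AUTH_tkde3ad38b087b49bbbac0494f7600a554' 'https://example.storage.com:443/v1/AUTH_test?format=json&limit=2' -k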
-
- Displaying Account Metadata Information - You can issue HEAD command to the storage service to view the number of containers and the total -bytes stored in the account. - - - - To display containers and storage used, run the following command: - - HEAD /<apiversion>/<account> HTTP/1.1 -Host: <storage URL> -X-Auth-Token: <authentication-token-key> - For example, - - HEAD /v1/AUTH_test HTTP/1.1 -Host: example.storage.com -X-Auth-Token: AUTH_tkd3ad38b087b49bbbac0494f7600a554 - -HTTP/1.1 204 No Content -Date: Wed, 13 Jul 2011 16:52:21 GMT -Server: Apache -X-Account-Container-Count: 4 -X-Account-Total-Bytes-Used: 394792 - To display account metadata information using cURL (for the above example), run the following -command: - - curl -v -X HEAD -H 'X-Auth-Token: -AUTH_tkde3ad38b087b49bbbac0494f7600a554' -https://example.storage.com:443/v1/AUTH_test -k - - -
-
-
- Working with Containers - This section describes the list of operations you can perform at the container level of the URL. - -
- Creating Containers - You can use the PUT command to create containers. Containers are the storage folders for your data. The URL encoded name must be less than 256 bytes and cannot contain a forward slash '/' character. - - - To create a container, run the following command: - - PUT /<apiversion>/<account>/<container>/ HTTP/1.1 -Host: <storage URL> -X-Auth-Token: <authentication-token-key> - For example, - - PUT /v1/AUTH_test/pictures/ HTTP/1.1 -Host: example.storage.com -X-Auth-Token: AUTH_tkd3ad38b087b49bbbac0494f7600a554 -HTTP/1.1 201 Created - -Date: Wed, 13 Jul 2011 17:32:21 GMT -Server: Apache -Content-Type: text/plain; charset=UTF-8 - To create a container using cURL (for the above example), run the following command: - - curl -v -X PUT -H 'X-Auth-Token: -AUTH_tkde3ad38b087b49bbbac0494f7600a554' -https://example.storage.com:443/v1/AUTH_test/pictures -k - The status code of 201 (Created) indicates that you have successfully created the container. If a container with the same name already exists, the status code 202 (Accepted) is displayed. - - -
-
- Displaying Objects of a Container - You can list the objects of a container using GET command. You can use the following optional -parameters with GET request to refine the results: - - - - - - - - Parameter - Description - - - - - limit - Limits the number of results to at most n value. - - - marker - Returns object names greater in value than the specified marker. - - - prefix - Displays the results limited to object names beginning with the substring x. beginning with the substring x. - - - path - Returns the object names nested in the pseudo path. - - - format - Specify either json or xml to return the respective serialized response. - - - delimiter - Returns all the object names nested in the container. - - - - - To display objects of a container - - - - List objects of a specific container using the following command: - - - - GET /<apiversion>/<account>/<container>[parm=value] HTTP/1.1 -Host: <storage URL> -X-Auth-Token: <authentication-token-key> - For example, - - GET /v1/AUTH_test/images HTTP/1.1 -Host: example.storage.com -X-Auth-Token: AUTH_tkd3ad38b087b49bbbac0494f7600a554 - -HTTP/1.1 200 Ok -Date: Wed, 13 Jul 2011 15:42:21 GMT -Server: Apache -Content-Type: text/plain; charset=UTF-8 -Content-Length: 139 - -sample file.jpg -test-file.pdf -You and Me.pdf -Puddle of Mudd.mp3 -Test Reports.doc - To display objects of a container using cURL (for the above example), run the following -command: - - curl -v -X GET-H 'X-Auth-Token: AUTH_tkde3ad38b087b49bbbac0494f7600a554' -https://example.storage.com:443/v1/AUTH_test/images -k -
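The same query-string pattern works for object listings. A sketch that restricts the listing of the images container to object names beginning with test and returns XML (token reused from the example above):
curl -v -X GET -H 'X-Auth-Token: AUTH_tkde3ad38b087b49bbbac0494f7600a554' 'https://example.storage.com:443/v1/AUTH_test/images?prefix=test&format=xml' -k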
-
- Displaying Container Metadata Information - You can issue HEAD command to the storage service to view the number of objects in a container and -the total bytes of all the objects stored in the container. - - - - To display list of objects and storage used, run the following command: - - HEAD /<apiversion>/<account>/<container> HTTP/1.1 -Host: <storage URL> -X-Auth-Token: <authentication-token-key> - For example, - HEAD /v1/AUTH_test/images HTTP/1.1 -Host: example.storage.com -X-Auth-Token: AUTH_tkd3ad38b087b49bbbac0494f7600a554 - -HTTP/1.1 204 No Content -Date: Wed, 13 Jul 2011 19:52:21 GMT -Server: Apache -X-Account-Object-Count: 8 -X-Container-Bytes-Used: 472 - To display list of objects and storage used in a container using cURL (for the above example), run -the following command: - - curl -v -X HEAD -H 'X-Auth-Token: -AUTH_tkde3ad38b087b49bbbac0494f7600a554' -https://example.storage.com:443/v1/AUTH_test/images -k - - -
-
- Deleting Container - You can use DELETE command to permanently delete containers. The container must be empty -before it can be deleted. - - You can issue HEAD command to determine if it contains any objects. - - - - To delete a container, run the following command: - - DELETE /<apiversion>/<account>/<container>/ HTTP/1.1 -Host: <storage URL> -X-Auth-Token: <authentication-token-key> - For example, - DELETE /v1/AUTH_test/pictures HTTP/1.1 -Host: example.storage.com -X-Auth-Token: AUTH_tkd3ad38b087b49bbbac0494f7600a554 - -HTTP/1.1 204 No Content -Date: Wed, 13 Jul 2011 17:52:21 GMT -Server: Apache -Content-Length: 0 -Content-Type: text/plain; charset=UTF-8 - To delete a container using cURL (for the above example), run the following command: - - curl -v -X DELETE -H 'X-Auth-Token: -AUTH_tkde3ad38b087b49bbbac0494f7600a554' -https://example.storage.com:443/v1/AUTH_test/pictures -k - The status code of 204 (No Content) indicates that you have successfully deleted the container. If -that container does not exist, the status code 404 (Not Found) is displayed, and if the container is -not empty, the status code 409 (Conflict) is displayed. - - - -
-
- Updating Container Metadata - You can update the metadata of container using POST operation, metadata keys should be prefixed -with 'x-container-meta'. - - - - To update the metadata of the object, run the following command: - - POST /<apiversion>/<account>/<container> HTTP/1.1 -Host: <storage URL> -X-Auth-Token: <Authentication-token-key> -X-Container-Meta-<key>: <new value> -X-Container-Meta-<key>: <new value> - For example, - - POST /v1/AUTH_test/images HTTP/1.1 -Host: example.storage.com -X-Auth-Token: AUTH_tkd3ad38b087b49bbbac0494f7600a554 -X-Container-Meta-Zoo: Lion -X-Container-Meta-Home: Dog - -HTTP/1.1 204 No Content -Date: Wed, 13 Jul 2011 20:52:21 GMT -Server: Apache -Content-Type: text/plain; charset=UTF-8 - To update the metadata of the object using cURL (for the above example), run the following -command: - - curl -v -X POST -H 'X-Auth-Token: -AUTH_tkde3ad38b087b49bbbac0494f7600a554' -https://example.storage.com:443/v1/AUTH_test/images -H ' X-Container-Meta-Zoo: Lion' -H 'X-Container-Meta-Home: Dog' -k - The status code of 204 (No Content) indicates the container's metadata is updated successfully. If -that object does not exist, the status code 404 (Not Found) is displayed. - - - -
-
- Setting ACLs on Container - You can set the container access control list by using POST command on container with x- container-read and x-container-write keys. - - The ACL format is [item[,item...]]. Each item can be a group name to give access to or a -referrer designation to grant or deny based on the HTTP Referer header. - - The referrer designation format is: .r:[-]value. - - The .r can also be .ref, .referer, or .referrer; though it will be shortened to.r for -decreased character count usage. The value can be * to specify any referrer host is allowed access. The leading minus sign (-) -indicates referrer hosts that should be denied access. - - Examples of valid ACLs: - - .r:* -.r:*,bobs_account,sues_account:sue -bobs_account,sues_account:sue - Examples of invalid ACLs: - .r: -.r:- - By default, allowing read access via .r will not allow listing objects in the container but allows -retrieving objects from the container. To turn on listings, use the .rlistings directive. Also, .r -designations are not allowed in headers whose names include the word write. - - For example, to set all the objects access rights to "public‟ inside the container using cURL (for the -above example), run the following command: - - curl -v -X POST -H 'X-Auth-Token: -AUTH_tkde3ad38b087b49bbbac0494f7600a554' -https://example.storage.com:443/v1/AUTH_test/images --H 'X-Container-Read: .r:*' -k -
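Write access is granted the same way through the X-Container-Write header. A hypothetical example that lets the tester2 user of the test2 account read and upload objects in the images container (tester2 must still authenticate against its own account to obtain a token):
curl -v -X POST -H 'X-Auth-Token: AUTH_tkde3ad38b087b49bbbac0494f7600a554' -H 'X-Container-Read: test2:tester2' -H 'X-Container-Write: test2:tester2' -k https://example.storage.com:443/v1/AUTH_test/images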
-
-
- Working with Objects - An object represents the data and any metadata for the files stored in the system. Through the REST -interface, metadata for an object can be included by adding custom HTTP headers to the request -and the data payload as the request body. Objects name should not exceed 1024 bytes after URL -encoding. - - This section describes the list of operations you can perform at the object level of the URL. - -
- Creating or Updating Object - You can use PUT command to write or update an object's content and metadata. - - You can verify the data integrity by including an MD5checksum for the object's data in the ETag -header. ETag header is optional and can be used to ensure that the object's contents are stored -successfully in the storage system. - - You can assign custom metadata to objects by including additional HTTP headers on the PUT request. -The objects created with custom metadata via HTTP headers are identified with theX-Object- Meta- prefix. - - - - To create or update an object, run the following command: - - PUT /<apiversion>/<account>/<container>/<object> HTTP/1.1 -Host: <storage URL> -X-Auth-Token: <authentication-token-key> -ETag: da1e100dc9e7becc810986e37875ae38 -Content-Length: 342909 -X-Object-Meta-PIN: 2343 - For example, - PUT /v1/AUTH_test/pictures/dog HTTP/1.1 -Host: example.storage.com -X-Auth-Token: AUTH_tkd3ad38b087b49bbbac0494f7600a554 -ETag: da1e100dc9e7becc810986e37875ae38 - -HTTP/1.1 201 Created -Date: Wed, 13 Jul 2011 18:32:21 GMT -Server: Apache -ETag: da1e100dc9e7becc810986e37875ae38 -Content-Length: 0 -Content-Type: text/plain; charset=UTF-8 - To create or update an object using cURL (for the above example), run the following command: - - curl -v -X PUT -H 'X-Auth-Token: -AUTH_tkde3ad38b087b49bbbac0494f7600a554' -https://example.storage.com:443/v1/AUTH_test/pictures/dog -H 'Content- -Length: 0' -k - The status code of 201 (Created) indicates that you have successfully created or updated the object. -If there is a missing content-Length or Content-Type header in the request, the status code of 412 -(Length Required) is displayed. (Optionally) If the MD5 checksum of the data written to the storage -system does not match the ETag value, the status code of 422 (Unprocessable Entity) is displayed. - - - -
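A convenient way to do this from the command line is to let cURL stream a local file and supply its MD5 checksum as the ETag. A sketch with a hypothetical local file dog.jpg (token and URL from the example above):
MD5=$(md5sum dog.jpg | awk '{print $1}')
curl -v -T dog.jpg -H 'X-Auth-Token: AUTH_tkde3ad38b087b49bbbac0494f7600a554' -H "ETag: $MD5" -H 'X-Object-Meta-PIN: 2343' -k https://example.storage.com:443/v1/AUTH_test/pictures/dog
If the checksum computed by the server does not match the supplied ETag, the upload is rejected as described above.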
- Chunked Transfer Encoding - You can upload data without knowing the size of the data to be uploaded. You can do this by -specifying an HTTP header of Transfer-Encoding: chunked and without using a Content-Length -header. - - You can use this feature while doing a DB dump, piping the output through gzip, and then piping the -data directly into Object Storage without having to buffer the data to disk to compute the file size. - - - - To create or update an object, run the following command: - - PUT /<apiversion>/<account>/<container>/<object> HTTP/1.1 -Host: <storage URL> -X-Auth-Token: <authentication-token-key> -Transfer-Encoding: chunked -X-Object-Meta-PIN: 2343 - For example, - - PUT /v1/AUTH_test/pictures/cat HTTP/1.1 -Host: example.storage.com -X-Auth-Token: AUTH_tkd3ad38b087b49bbbac0494f7600a554 -Transfer-Encoding: chunked -X-Object-Meta-PIN: 2343 -19 -A bunch of data broken up -D -into chunks. -0 - - - - -
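The DB-dump scenario mentioned above can be sketched with cURL: reading the upload body from stdin (-T -) together with the explicit Transfer-Encoding header sends the data chunked because the size is unknown. Database name, container, and object name here are illustrative only:
mysqldump mydb | gzip -c | curl -v -T - -H 'X-Auth-Token: AUTH_tkde3ad38b087b49bbbac0494f7600a554' -H 'Transfer-Encoding: chunked' -k https://example.storage.com:443/v1/AUTH_test/pictures/mydb-dump.gz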
-
-
- Copying Object - You can copy object from one container to another or add a new object and then add reference to -designate the source of the data from another container. - - To copy object from one container to another - - - To add a new object and designate the source of the data from another container, run the -following command: - - COPY /<apiversion>/<account>/<container>/<sourceobject> HTTP/1.1 -Host: <storage URL> -X-Auth-Token: < authentication-token-key> -Destination: /<container>/<destinationobject> - For example, - - COPY /v1/AUTH_test/images/dogs HTTP/1.1 -Host: example.storage.com -X-Auth-Token: AUTH_tkd3ad38b087b49bbbac0494f7600a554 -Destination: /photos/cats - -HTTP/1.1 201 Created -Date: Wed, 13 Jul 2011 18:32:21 GMT -Server: Apache -Content-Length: 0 -Content-Type: text/plain; charset=UTF-8 - To copy an object using cURL (for the above example), run the following command: - - curl -v -X COPY -H 'X-Auth-Token: -AUTH_tkde3ad38b087b49bbbac0494f7600a554' -H 'Destination: /photos/cats' -k https://example.storage.com:443/v1/AUTH_test/images/dogs - The status code of 201 (Created) indicates that you have successfully copied the object. If there is a -missing content-Length or Content-Type header in the request, the status code of 412 (Length -Required) is displayed. - - You can also use PUT command to copy object by using additional header X-Copy-From: container/obj. - - - - To use PUT command to copy an object, run the following command: - - PUT /v1/AUTH_test/photos/cats HTTP/1.1 -Host: example.storage.com -X-Auth-Token: AUTH_tkd3ad38b087b49bbbac0494f7600a554 -X-Copy-From: /images/dogs - -HTTP/1.1 201 Created -Date: Wed, 13 Jul 2011 18:32:21 GMT -Server: Apache -Content-Type: text/plain; charset=UTF-8 - To copy an object using cURL (for the above example), run the following command: - - curl -v -X PUT -H 'X-Auth-Token: AUTH_tkde3ad38b087b49bbbac0494f7600a554' --H 'X-Copy-From: /images/dogs' –k -https://example.storage.com:443/v1/AUTH_test/images/cats - The status code of 201 (Created) indicates that you have successfully copied the object. - - - -
-
- Displaying Object Information
- You can issue the GET command on an object to retrieve its data.
- To display the content of an object, run the following command:
- GET /<apiversion>/<account>/<container>/<object> HTTP/1.1
-Host: <storage URL>
-X-Auth-Token: <Authentication-token-key>
- For example,
- GET /v1/AUTH_test/images/cat HTTP/1.1
-Host: example.storage.com
-X-Auth-Token: AUTH_tkd3ad38b087b49bbbac0494f7600a554
-
-HTTP/1.1 200 OK
-Date: Wed, 13 Jul 2011 23:52:21 GMT
-Server: Apache
-Last-Modified: Thu, 14 Jul 2011 13:40:18 GMT
-ETag: 8a964ee2a5e88be344f36c22562a6486
-Content-Length: 534210
-[.........]
- To display the content of an object using cURL (for the above example), run the following command:
- curl -v -X GET -H 'X-Auth-Token: AUTH_tkde3ad38b087b49bbbac0494f7600a554' https://example.storage.com:443/v1/AUTH_test/images/cat -k
- The status code of 200 (OK) indicates that the object's data is displayed successfully. If the object does not exist, the status code 404 (Not Found) is displayed.
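- The object can also be saved to a local file instead of being written to the terminal by adding the cURL -o option (the local file name cat.jpg is illustrative):
- curl -v -X GET -H 'X-Auth-Token: AUTH_tkde3ad38b087b49bbbac0494f7600a554' https://example.storage.com:443/v1/AUTH_test/images/cat -k -o cat.jpg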
-
- Displaying Object Metadata
- You can issue the HEAD command on an object to view its metadata and other standard HTTP headers. Only the authentication token needs to be sent as a header.
- To display the metadata of the object, run the following command:
- HEAD /<apiversion>/<account>/<container>/<object> HTTP/1.1
-Host: <storage URL>
-X-Auth-Token: <Authentication-token-key>
- For example,
- HEAD /v1/AUTH_test/images/cat HTTP/1.1
-Host: example.storage.com
-X-Auth-Token: AUTH_tkd3ad38b087b49bbbac0494f7600a554
-
-HTTP/1.1 204 No Content
-Date: Wed, 13 Jul 2011 21:52:21 GMT
-Server: Apache
-Last-Modified: Thu, 14 Jul 2011 13:40:18 GMT
-ETag: 8a964ee2a5e88be344f36c22562a6486
-Content-Length: 512000
-Content-Type: text/plain; charset=UTF-8
-X-Object-Meta-House: Cat
-X-Object-Meta-Zoo: Cat
-X-Object-Meta-Home: Cat
-X-Object-Meta-Park: Cat
- To display the metadata of the object using cURL (for the above example), run the following command:
- curl -v -X HEAD -H 'X-Auth-Token: AUTH_tkde3ad38b087b49bbbac0494f7600a554' https://example.storage.com:443/v1/AUTH_test/images/cat -k
- The status code of 204 (No Content) indicates that the object's metadata is displayed successfully. If the object does not exist, the status code 404 (Not Found) is displayed.
-
- Updating Object Metadata
- You can issue the POST command on an object name to set or overwrite arbitrary key metadata. You cannot change the object's other headers, such as Content-Type and ETag, using the POST operation. The POST command deletes all existing custom metadata and replaces it with the new arbitrary key metadata.
- You must prefix the key names with X-Object-Meta-.
- To update the metadata of an object, run the following command:
- POST /<apiversion>/<account>/<container>/<object> HTTP/1.1
-Host: <storage URL>
-X-Auth-Token: <Authentication-token-key>
-X-Object-Meta-<key>: <new value>
-X-Object-Meta-<key>: <new value>
- For example,
- POST /v1/AUTH_test/images/cat HTTP/1.1
-Host: example.storage.com
-X-Auth-Token: AUTH_tkd3ad38b087b49bbbac0494f7600a554
-X-Object-Meta-Zoo: Lion
-X-Object-Meta-Home: Dog
-
-HTTP/1.1 202 Accepted
-Date: Wed, 13 Jul 2011 22:52:21 GMT
-Server: Apache
-Content-Length: 0
-Content-Type: text/plain; charset=UTF-8
- To update the metadata of an object using cURL (for the above example), run the following command:
- curl -v -X POST -H 'X-Auth-Token: AUTH_tkde3ad38b087b49bbbac0494f7600a554' https://example.storage.com:443/v1/AUTH_test/images/cat -H 'X-Object-Meta-Zoo: Lion' -H 'X-Object-Meta-Home: Dog' -k
- The status code of 202 (Accepted) indicates that you have successfully updated the object's metadata. If the object does not exist, the status code 404 (Not Found) is displayed.
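- Because POST replaces all existing custom metadata, resend every key you want to retain along with the keys you are changing. For example, to keep the Zoo and Home keys from the example above while also restoring the Park key shown in the HEAD example (the values are illustrative):
- curl -v -X POST -H 'X-Auth-Token: AUTH_tkde3ad38b087b49bbbac0494f7600a554' https://example.storage.com:443/v1/AUTH_test/images/cat -H 'X-Object-Meta-Zoo: Lion' -H 'X-Object-Meta-Home: Dog' -H 'X-Object-Meta-Park: Cat' -k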
-
- Deleting Object - You can use DELETE command to permanently delete the object. - - The DELETE command on an object will be processed immediately and any subsequent operations -like GET, HEAD, POST, or DELETE on the object will display 404 (Not Found) error. - - - - To delete an object, run the following command: - - DELETE /<apiversion>/<account>/<container>/<object> HTTP/1.1 -Host: <storage URL> -X-Auth-Token: <Authentication-token-key> - For example, - - DELETE /v1/AUTH_test/pictures/cat HTTP/1.1 -Host: example.storage.com -X-Auth-Token: AUTH_tkd3ad38b087b49bbbac0494f7600a554 - -HTTP/1.1 204 No Content -Date: Wed, 13 Jul 2011 20:52:21 GMT -Server: Apache -Content-Type: text/plain; charset=UTF-8 - To delete an object using cURL (for the above example), run the following command: - - curl -v -X DELETE -H 'X-Auth-Token: -AUTH_tkde3ad38b087b49bbbac0494f7600a554' -https://example.storage.com:443/v1/AUTH_test/pictures/cat -k - The status code of 204 (No Content) indicates that you have successfully deleted the object. If that -object does not exist, the status code 404 (Not Found) is displayed. - - - -
-
-
-
diff --git a/doc/admin-guide/en-US/admin_commandref.xml b/doc/admin-guide/en-US/admin_commandref.xml deleted file mode 100644 index 5e1560534..000000000 --- a/doc/admin-guide/en-US/admin_commandref.xml +++ /dev/null @@ -1,334 +0,0 @@ - - - - Command Reference - This section describes the available commands and includes the -following section: - - - - gluster Command - - Gluster Console Manager (command line interpreter) - - - - glusterd Daemon - - Gluster elastic volume management daemon - - - -
- gluster Command - NAME - - gluster - Gluster Console Manager (command line interpreter) - - SYNOPSIS - - To run the program and display the gluster prompt: - - gluster - - To specify a command directly: -gluster [COMMANDS] [OPTIONS] - - DESCRIPTION - - The Gluster Console Manager is a command line utility for elastic volume management. You can run -the gluster command on any export server. The command enables administrators to perform cloud -operations such as creating, expanding, shrinking, rebalancing, and migrating volumes without -needing to schedule server downtime. - - COMMANDS - - - - - - - - - Command - Description - - - - - - Volume - - - - volume info [all | VOLNAME] - Displays information about all volumes, or the specified volume. - - - volume create NEW-VOLNAME [stripe COUNT] [replica COUNT] [transport tcp | rdma | tcp,rdma] NEW-BRICK ... - Creates a new volume of the specified type using the specified bricks and transport type (the default transport type is tcp). - - - volume delete VOLNAME - Deletes the specified volume. - - - volume start VOLNAME - Starts the specified volume. - - - volume stop VOLNAME [force] - Stops the specified volume. - - - volume rename VOLNAME NEW-VOLNAME - Renames the specified volume. - - - volume help - Displays help for the volume command. - - - - Brick - - - - volume add-brick VOLNAME NEW-BRICK ... - Adds the specified brick to the specified volume. - - - volume replace-brick VOLNAME (BRICK NEW-BRICK) start | pause | abort | status - Replaces the specified brick. - - - volume remove-brick VOLNAME [(replica COUNT)|(stripe COUNT)] BRICK ... - Removes the specified brick from the specified volume. - - - - Rebalance - - - - volume rebalance VOLNAME start - Starts rebalancing the specified volume. - - - volume rebalance VOLNAME stop - Stops rebalancing the specified volume. - - - volume rebalance VOLNAME status - Displays the rebalance status of the specified volume. - - - - Log - - - - volume log filename VOLNAME [BRICK] DIRECTORY - Sets the log directory for the corresponding volume/brick. - - - volume log rotate VOLNAME [BRICK] - Rotates the log file for corresponding volume/brick. - - - volume log locate VOLNAME [BRICK] - Locates the log file for corresponding volume/brick. - - - - Peer - - - - peer probe HOSTNAME - Probes the specified peer. - - - peer detach HOSTNAME - Detaches the specified peer. - - - peer status - Displays the status of peers. - - - peer help - Displays help for the peer command. - - - - Geo-replication - - - - volume geo-replication MASTER SLAVE start - - Start geo-replication between the hosts specified by MASTER and SLAVE. You can specify a local master volume as :VOLNAME. - You can specify a local slave volume as :VOLUME and a local slave directory as /DIRECTORY/SUB-DIRECTORY. You can specify a remote slave volume as DOMAIN::VOLNAME and a remote slave directory as DOMAIN:/DIRECTORY/SUB-DIRECTORY. - - - - volume geo-replication MASTER SLAVE stop - - Stop geo-replication between the hosts specified by MASTER and SLAVE. You can specify a local master volume as :VOLNAME and a local master directory as /DIRECTORY/SUB-DIRECTORY. - You can specify a local slave volume as :VOLNAME and a local slave directory as /DIRECTORY/SUB-DIRECTORY. You can specify a remote slave volume as DOMAIN::VOLNAME and a remote slave directory as DOMAIN:/DIRECTORY/SUB-DIRECTORY. - - - - - volume geo-replication MASTER SLAVE config [options] - - Configure geo-replication options between the hosts specified by MASTER and SLAVE. 
- - - gluster-command COMMAND - The path where the gluster command is installed. - - - gluster-log-level LOGFILELEVEL - The log level for gluster processes. - - - log-file LOGFILE - The path to the geo-replication log file. - - - log-level LOGFILELEVEL - The log level for geo-replication. - - - remote-gsyncd COMMAND - The path where the gsyncd binary is installed on the remote machine. - - - ssh-command COMMAND - The ssh command to use to connect to the remote machine (the default is ssh). - - - rsync-command COMMAND - The rsync command to use for synchronizing the files (the default is rsync). - - - volume_id= UID - The command to delete the existing master UID for the intermediate/slave node. - - - timeout SECONDS - The timeout period. - - - sync-jobs N - The number of simultaneous files/directories that can be synchronized. - - - - ignore-deletes - If this option is set to 1, a file deleted on master will not trigger a delete operation on the slave. Hence, the slave will remain as a superset of the master and can be used to recover the master in case of crash and/or accidental delete. - - - - Other - - - - help - - Display the command options. - - - quit - - Exit the gluster command line interface. - - - - - FILES - - - /var/lib/glusterd/* - - SEE ALSO - fusermount(1), mount.glusterfs(8), glusterfs-volgen(8), glusterfs(8), glusterd(8) -
-
- glusterd Daemon - NAME - - glusterd - Gluster elastic volume management daemon - SYNOPSIS - - glusterd [OPTION...] - - DESCRIPTION - - The glusterd daemon is used for elastic volume management. The daemon must be run on all export servers. - - OPTIONS - - - - - - - - Option - Description - - - - - - Basic - - - - -l=LOGFILE, --log-file=LOGFILE - Files to use for logging (the default is /usr/local/var/log/glusterfs/glusterfs.log). - - - -L=LOGLEVEL, --log-level=LOGLEVEL - Logging severity. Valid options are TRACE, DEBUG, INFO, WARNING, ERROR and CRITICAL (the default is INFO). - - - --debug - Runs the program in debug mode. This option sets --no-daemon, --log-level to DEBUG, and --log-file to console. - - - -N, --no-daemon - Runs the program in the foreground. - - - - Miscellaneous - - - - -?, --help - Displays this help. - - - --usage - Displays a short usage message. - - - -V, --version - Prints the program version. - - - - - FILES - - - /var/lib/glusterd/* - - SEE ALSO - fusermount(1), mount.glusterfs(8), glusterfs-volgen(8), glusterfs(8), gluster(8) -
-
diff --git a/doc/admin-guide/en-US/admin_console.xml b/doc/admin-guide/en-US/admin_console.xml deleted file mode 100644 index ebf273935..000000000 --- a/doc/admin-guide/en-US/admin_console.xml +++ /dev/null @@ -1,28 +0,0 @@ - - - - Using the Gluster Console Manager – Command Line Utility - The Gluster Console Manager is a single command line utility that simplifies configuration and management of your storage environment. The Gluster Console Manager is similar to the LVM (Logical Volume Manager) CLI or ZFS Command Line Interface, but across multiple storage servers. You can use the Gluster Console Manager online, while volumes are mounted and active. Gluster automatically synchronizes volume configuration information across all Gluster servers. - Using the Gluster Console Manager, you can create new volumes, start volumes, and stop volumes, as required. You can also add bricks to volumes, remove bricks from existing volumes, as well as change translator settings, among other operations. - You can also use the commands to create scripts for automation, as well as use the commands as an API to allow integration with third-party applications. - Running the Gluster Console Manager - You can run the Gluster Console Manager on any GlusterFS server either by invoking the commands or by running the Gluster CLI in interactive mode. You can also use the gluster command remotely using SSH. - - - To run commands directly: - # gluster peer command - For example: - # gluster peer status - - - To run the Gluster Console Manager in interactive mode - # gluster - You can execute gluster commands from the Console Manager prompt: - gluster> command - For example, to view the status of the peer server: - # gluster - gluster > peer status - Display the status of the peer. - - - diff --git a/doc/admin-guide/en-US/admin_directory_Quota.xml b/doc/admin-guide/en-US/admin_directory_Quota.xml deleted file mode 100644 index 8a1012a6a..000000000 --- a/doc/admin-guide/en-US/admin_directory_Quota.xml +++ /dev/null @@ -1,179 +0,0 @@ - - - - Managing Directory Quota - Directory quotas in GlusterFS allow you to set limits on usage of disk space by directories or volumes. -The storage administrators can control the disk space utilization at the directory and/or volume -levels in GlusterFS by setting limits to allocatable disk space at any level in the volume and directory -hierarchy. This is particularly useful in cloud deployments to facilitate utility billing model. - - - For now, only Hard limit is supported. Here, the limit cannot be exceeded and attempts to use -more disk space or inodes beyond the set limit will be denied. - - - System administrators can also monitor the resource utilization to limit the storage for the users -depending on their role in the organization. - - You can set the quota at the following levels: - - - - Directory level – limits the usage at the directory level - - - - Volume level – limits the usage at the volume level - - - - - You can set the disk limit on the directory even if it is not created. The disk limit is enforced -immediately after creating that directory. For more information on setting disk limit, see . - - -
- Enabling Quota - You must enable Quota to set disk limits. - - To enable quota - - - - Enable the quota using the following command: - - # gluster volume quota VOLNAME enable - For example, to enable quota on test-volume: - - # gluster volume quota test-volume enable -Quota is enabled on /test-volume - - -
-
- Disabling Quota - You can disable Quota, if needed. - - To disable quota: - - - - Disable the quota using the following command: - - # gluster volume quota VOLNAME disable - For example, to disable quota translator on test-volume: - - # gluster volume quota test-volume disable -Quota translator is disabled on /test-volume - - -
-
- Setting or Replacing Disk Limit
- You can create new directories in your storage environment and set a disk limit on them, or set a disk limit on existing directories. The directory name should be relative to the volume, with the export directory/mount treated as "/".
- To set or replace a disk limit
- Set the disk limit using the following command:
- # gluster volume quota VOLNAME limit-usage /directory limit-value
- For example, to set a limit on the data directory of test-volume, where data is a directory under the export directory:
- # gluster volume quota test-volume limit-usage /data 10GB
-Usage limit has been set on /data
- In a multi-level directory hierarchy, the strictest disk limit will be considered for enforcement.
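- For example, a volume-level limit can be combined with a tighter directory-level limit, in which case the strictest limit applies to writes under /data (a sketch; the 100GB volume-level limit is chosen for illustration):
- # gluster volume quota test-volume limit-usage / 100GB
-Usage limit has been set on /
- # gluster volume quota test-volume limit-usage /data 10GB
-Usage limit has been set on /data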
-
- Displaying Disk Limit Information - You can display disk limit information on all the directories on which the limit is set. - - To display disk limit information - - - - Display disk limit information of all the directories on which limit is set, using the following -command: - - # gluster volume quota VOLNAME list - - For example, to see the set disks limit on test-volume: - - # gluster volume quota test-volume list - - - Path__________Limit______Set Size - -/Test/data 10 GB 6 GB -/Test/data1 10 GB 4 GB - - - Display disk limit information on a particular directory on which limit is set, using the following -command: - - # gluster volume quota VOLNAME list /directory name - - For example, to see the set limit on /data directory of test-volume: - # gluster volume quota test-volume list /data - -Path__________Limit______Set Size -/Test/data 10 GB 6 GB - - -
-
- Updating Memory Cache Size
- For performance reasons, quota caches directory sizes on the client. You can set a timeout that indicates how long directory sizes in the cache are considered valid, measured from the time they are populated.
- For example, if multiple clients are writing to a single directory, one of them might write until the quota limit is exceeded. The new size may not be reflected on another client until its cache entry becomes stale because of the timeout. Writes from that client during this period are allowed, even though they exceed the quota limit, because the cached size is not in sync with the actual size. When the timeout expires, the cached size is refreshed from the servers, the two are back in sync, and no further writes are allowed. A timeout of zero forces directory sizes to be fetched from the server for every operation that modifies file data, effectively disabling directory size caching on the client side.
- To update the memory cache size
- Update the memory cache size using the following command:
- # gluster volume set VOLNAME features.quota-timeout value
- For example, to set the cache timeout to 5 seconds on test-volume:
- # gluster volume set test-volume features.quota-timeout 5
-Set volume successful
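- For example, to enforce limits strictly at the cost of performance, set the timeout to zero so that every size-modifying operation fetches the directory size from the server:
- # gluster volume set test-volume features.quota-timeout 0
-Set volume successful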
-
- Removing Disk Limit - You can remove set disk limit, if you do not want quota anymore. - - To remove disk limit - - - Remove disk limit set on a particular directory using the following command: - - # gluster volume quota VOLNAME remove /directory name - - For example, to remove the disk limit on /data directory of test-volume: - - # gluster volume quota test-volume remove /data -Usage limit set on /data is removed - - -
-
diff --git a/doc/admin-guide/en-US/admin_geo-replication.xml b/doc/admin-guide/en-US/admin_geo-replication.xml deleted file mode 100644 index 279e9a62c..000000000 --- a/doc/admin-guide/en-US/admin_geo-replication.xml +++ /dev/null @@ -1,732 +0,0 @@ - - - - Managing Geo-replication - Geo-replication provides a continuous, asynchronous, and incremental replication service from one site to another over Local Area Networks (LANs), Wide Area Network (WANs), and across the Internet. - Geo-replication uses a master–slave model, whereby replication and mirroring occurs between the following partners: - - - Master – a GlusterFS volume - - - Slave – a slave which can be of the following types: - - - A local directory which can be represented as file URL like file:///path/to/dir. You can use shortened form, for example, /path/to/dir. - - - A GlusterFS Volume - Slave volume can be either a local volume like gluster://localhost:volname (shortened form - :volname) or a volume served by different host like gluster://host:volname (shortened form - host:volname). - - - - Both of the above types can be accessed remotely using SSH tunnel. To use SSH, add an SSH prefix to either a file URL or gluster type URL. For example, ssh://root@remote-host:/path/to/dir (shortened form - root@remote-host:/path/to/dir) or ssh://root@remote-host:gluster://localhost:volname (shortened from - root@remote-host::volname). - - - - This section introduces Geo-replication, illustrates the various deployment scenarios, and explains how to configure the system to provide replication and mirroring in your environment. -
- Replicated Volumes vs Geo-replication - The following table lists the difference between replicated volumes and geo-replication: - - - - - - - Replicated Volumes - Geo-replication - - - - - Mirrors data across clusters - Mirrors data across geographically distributed clusters - - - Provides high-availability - Ensures backing up of data for disaster recovery - - - Synchronous replication (each and every file operation is sent across all the bricks) - Asynchronous replication (checks for the changes in files periodically and syncs them on detecting differences) - - - - -
-
- Preparing to Deploy Geo-replication - This section provides an overview of the Geo-replication deployment scenarios, describes how you can check the minimum system requirements, and explores common deployment scenarios. - - - - - - - - - - - - - - - - - -
- Exploring Geo-replication Deployment Scenarios - Geo-replication provides an incremental replication service over Local Area Networks (LANs), Wide Area Network (WANs), and across the Internet. This section illustrates the most common deployment scenarios for Geo-replication, including the following: - - - Geo-replication over LAN - - - - Geo-replication over WAN - - - - Geo-replication over the Internet - - - Multi-site cascading Geo-replication - - - Geo-replication over LAN - You can configure Geo-replication to mirror data over a Local Area Network. - - - Geo-replication over LAN - - - - - - Geo-replication over WAN - You can configure Geo-replication to replicate data over a Wide Area Network. - - - - Geo-replication over WAN - - - - - - - Geo-replication over Internet - You can configure Geo-replication to mirror data over the Internet. - - - - Geo-replication over Internet - - - - - - - Multi-site cascading Geo-replication - You can configure Geo-replication to mirror data in a cascading fashion across multiple sites. - - - - Multi-site cascading Geo-replication - - - - - - -
-
- Geo-replication Deployment Overview - Deploying Geo-replication involves the following steps: - - - Verify that your environment matches the minimum system requirement. For more information, see . - - - Determine the appropriate deployment scenario. For more information, see . - - - Start Geo-replication on master and slave systems, as required. For more information, see . - - -
-
- Checking Geo-replication Minimum Requirements - Before deploying GlusterFS Geo-replication, verify that your systems match the minimum requirements. - The following table outlines the minimum requirements for both master and slave nodes within your environment: - - - - - - - - Component - Master - Slave - - - - - Operating System - GNU/Linux - GNU/Linux - - - Filesystem - GlusterFS 3.2 or higher - GlusterFS 3.2 or higher (GlusterFS needs to be installed, but does not need to be running), ext3, ext4, or XFS (any other POSIX compliant file system would work, but has not been tested extensively) - - - Python - Python 2.4 (with ctypes external module), or Python 2.5 (or higher) - Python 2.4 (with ctypes external module), or Python 2.5 (or higher) - - - Secure shell - OpenSSH version 4.0 (or higher) - SSH2-compliant daemon - - - Remote synchronization - rsync 3.0.7 or higher - rsync 3.0.7 or higher - - - FUSE - GlusterFS supported versions - GlusterFS supported versions - - - - -
-
- Setting Up the Environment for Geo-replication
- Time Synchronization
- The clocks of all servers hosting bricks of a geo-replication master volume must be kept uniform. It is recommended to set up an NTP (Network Time Protocol) service to keep the bricks synchronized and avoid problems caused by clock skew.
- For example, in a replicated volume where brick1 of the master is at 12.20 hrs and brick2 of the master is at 12.10 hrs, with a 10 minute time lag, changes made on brick2 during that period may go unnoticed while synchronizing files with the slave.
- For more information on setting up NTP, see .
- To set up Geo-replication for SSH
- Password-less login must be set up between the host machine (where the geo-replication start command will be issued) and the remote machine (where the slave process will be launched through SSH).
- On the node where geo-replication sessions are to be set up, run the following command:
- # ssh-keygen -f /var/lib/glusterd/geo-replication/secret.pem
- Press Enter twice to skip setting a passphrase.
- Run the following command on the master for each of the slave hosts:
- # ssh-copy-id -i /var/lib/glusterd/geo-replication/secret.pem.pub user@slavehost
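- Before starting a session, it can be worth confirming that key-based login works without a password prompt; a quick check such as the following can be used (user and slavehost are placeholders, as above):
- # ssh -i /var/lib/glusterd/geo-replication/secret.pem user@slavehost hostname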
-
- Setting Up the Environment for a Secure Geo-replication Slave
- You can configure a secure slave using SSH so that the master is granted only restricted access. With GlusterFS, you do not need to specify configuration parameters for the slave in the master-side configuration. For example, the master does not require the location of the rsync program on the slave, but the slave must ensure that rsync is in the PATH of the user that the master connects as over SSH. The only information that the master and slave have to negotiate is the slave-side user account, the slave resources that the master uses, and the master's public key. Secure access to the slave can be established using the following options:
- Restricting Remote Command Execution
- Using Mountbroker for Slaves
- Using IP based Access Control
- Backward Compatibility
- Your existing Geo-replication environment will work with GlusterFS, except for the following:
- The process of secure reconfiguration affects only the glusterfs instance on the slave. The changes are transparent to the master, except that you may have to change the SSH target to an unprivileged account on the slave.
- The following are some exceptions where this might not work:
- Geo-replication URLs which specify the slave resource when configuring the master include the following special characters: space, *, ?, [;
- The slave must have a running instance of glusterd, even if there is no gluster volume among the mounted slave resources (that is, file tree slaves are used exclusively).
- Restricting Remote Command Execution - If you restrict remote command execution, then the Slave audits commands -coming from the master and the commands related to the given -geo-replication session is allowed. The Slave also provides access only -to the files within the slave resource which can be read or manipulated -by the Master. - To restrict remote command execution: - - - Identify the location of the gsyncd helper utility on Slave. This utility is installed in PREFIX/libexec/glusterfs/gsyncd, where PREFIX is a compile-time parameter of glusterfs. For example, --prefix=PREFIX to the configure script with the following common values /usr, /usr/local, and /opt/glusterfs/glusterfs_version. - - - Ensure that command invoked from master to slave passed through the slave's gsyncd utility. - You can use either of the following two options: - - - Set gsyncd with an absolute path as the shell for the account -which the master connects through SSH. If you need to use a privileged -account, then set it up by creating a new user with UID 0. - - - Setup key authentication with command enforcement to gsyncd. You must prefix the copy of master's public key in the Slave account's authorized_keys file with the following command: - command=<path to gsyncd>. - For example, command="PREFIX/glusterfs/gsyncd" ssh-rsa AAAAB3Nza.... - - - - -
-
- Using Mountbroker for Slaves
- mountbroker is a new glusterd service. It allows an unprivileged process to own a GlusterFS mount by registering a label (and DSL (Domain-Specific Language) options) with glusterd through a glusterd volfile. Using the CLI, you can send a mount request to glusterd and receive an alias (symlink) of the mounted volume.
- The unprivileged slave agents use the mountbroker service of glusterd to set up an auxiliary gluster mount for the agent in a special environment, which ensures that the agent is only allowed access with special parameters that provide administrative-level access to the particular volume.
- To set up an auxiliary gluster mount for the agent:
- Create a new group. For example, geogroup.
- Create an unprivileged account. For example, geoaccount. Make it a member of geogroup.
- Create a new directory owned by root and with permissions 0711. For example, create a mountbroker-root directory at /var/mountbroker-root.
- Add the following options to the glusterd volfile, assuming the name of the slave gluster volume is slavevol:
- option mountbroker-root /var/mountbroker-root
- option mountbroker-geo-replication.geoaccount slavevol
- option geo-replication-log-group geogroup
- If you are unable to locate the glusterd volfile at /etc/glusterfs/glusterd.vol, you can create a volfile containing both the default configuration and the above options and place it at /etc/glusterfs/.
- A sample glusterd volfile along with default options:
- volume management
-    type mgmt/glusterd
-    option working-directory /var/lib/glusterd
-    option transport-type socket,rdma
-    option transport.socket.keepalive-time 10
-    option transport.socket.keepalive-interval 2
-    option transport.socket.read-fail-log off
-
-    option mountbroker-root /var/mountbroker-root
-    option mountbroker-geo-replication.geoaccount slavevol
-    option geo-replication-log-group geogroup
-end-volume
- If you host multiple slave volumes on the Slave, repeat step 2 for each of them and add the following options to the volfile:
- option mountbroker-geo-replication.geoaccount2 slavevol2
-option mountbroker-geo-replication.geoaccount3 slavevol3
- Set up the Master to access the Slave as geoaccount@Slave.
- You can add multiple slave volumes within the same account (geoaccount) by providing a comma-separated list (without spaces) as the argument of mountbroker-geo-replication.geoaccount. You can also have multiple options of the form mountbroker-geo-replication.*. It is recommended to use one service account per Master machine. For example, if there are multiple slave volumes on the Slave for the master machines Master1, Master2, and Master3, create a dedicated service user on the Slave for each of them by repeating step 2 (for example, geoaccount1, geoaccount2, and geoaccount3), and then add the following corresponding options to the volfile:
- option mountbroker-geo-replication.geoaccount1 slavevol11,slavevol12,slavevol13
- option mountbroker-geo-replication.geoaccount2 slavevol21,slavevol22
- option mountbroker-geo-replication.geoaccount3 slavevol31
-Now set up Master1 to ssh to geoaccount1@Slave, and so on.
- You must restart glusterd after making changes to the configuration for the updates to take effect.
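- The first three steps above map onto standard system commands; a minimal sketch using the example names geogroup, geoaccount, and /var/mountbroker-root:
- # groupadd geogroup
- # useradd -G geogroup geoaccount
- # mkdir /var/mountbroker-root
- # chown root:root /var/mountbroker-root
- # chmod 0711 /var/mountbroker-root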
-
- Using IP based Access Control
- You can use IP based access control to restrict access to slave resources by IP address. This method can be used for both gluster slaves and file tree slaves, but this section focuses on file tree slaves.
- To set access control based on IP address for file tree slaves:
- Set a general restriction for accessibility of file tree resources:
- # gluster volume geo-replication '/*' config allow-network ::1,127.0.0.1
- This refuses all requests for spawning slave agents except for requests initiated locally.
- If you want to lease the file tree at /data/slave-tree to the Master, enter the following command:
- # gluster volume geo-replication /data/slave-tree config allow-network MasterIP
- MasterIP is the IP address of the Master. The slave agent spawn request from the master will be accepted if it is executed against /data/slave-tree.
- If the Master-side network configuration does not enable the Slave to recognize the exact IP address of the Master, you can use CIDR notation to specify a subnet instead of a single IP address as MasterIP, or even a comma-separated list of CIDR subnets.
- If you want to extend IP based access control to gluster slaves, use the following command:
- # gluster volume geo-replication '*' config allow-network ::1,127.0.0.1
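- For example, to accept slave agent spawn requests for /data/slave-tree from any master in a management subnet, in addition to locally initiated requests (the 192.168.1.0/24 subnet is illustrative):
- # gluster volume geo-replication /data/slave-tree config allow-network 192.168.1.0/24,::1,127.0.0.1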
-
-
-
- Starting Geo-replication - This section describes how to configure and start Gluster Geo-replication in your storage environment, and verify that it is functioning correctly. - - - - - - - - - - - - - - - - - -
- Starting Geo-replication - To start Gluster Geo-replication - - - Start geo-replication between the hosts using the following command: - - # gluster volume geo-replication MASTER SLAVE start - - For example: - - # gluster volume geo-replication Volume1 example.com:/data/remote_dir start -Starting geo-replication session between Volume1 -example.com:/data/remote_dir has been successful - - You may need to configure the service before starting Gluster Geo-replication. For more information, see . - - - -
-
- Verifying Successful Deployment - You can use the gluster command to verify the status of Gluster Geo-replication in your environment. - To verify the status Gluster Geo-replication - - - Verify the status by issuing the following command on host: - # gluster volume geo-replication MASTER SLAVE status - - For example: - - # gluster volume geo-replication Volume1 example.com:/data/remote_dir status - - # gluster volume geo-replication Volume1 example.com:/data/remote_dir status - -MASTER SLAVE STATUS -______ ______________________________ ____________ -Volume1 root@example.com:/data/remote_dir Starting.... - - - -
-
- Displaying Geo-replication Status Information - You can display status information about a specific geo-replication master session, or a particular master-slave session, or all geo-replication sessions, as needed. - To display geo-replication status information - - - Display information of all geo-replication sessions using the following command: - # gluster volume geo-replication Volume1 example.com:/data/remote_dir status - -MASTER SLAVE STATUS -______ ______________________________ ____________ -Volume1 root@example.com:/data/remote_dir Starting.... - - - - - Display information of a particular master slave session using the following command: - - # gluster volume geo-replication MASTER SLAVE status - - For example, to display information of Volume1 and example.com:/data/remote_dir - - # gluster volume geo-replication Volume1 example.com:/data/remote_dir status - - The status of the geo-replication between Volume1 and example.com:/data/remote_dir is displayed. - - - Display information of all geo-replication sessions belonging to a master - # gluster volume geo-replication MASTER status - - For example, to display information of Volume1 - # gluster volume geo-replication Volume1 example.com:/data/remote_dir status - -MASTER SLAVE STATUS -______ ______________________________ ____________ -Volume1 ssh://example.com:gluster://127.0.0.1:remove_volume OK - -Volume1 ssh://example.com:file:///data/remote_dir OK - The status of a session could be one of the following four: - - - Starting: This is the initial phase of the Geo-replication session; it remains in this state for a minute, to make sure no abnormalities are present. - - - OK: The geo-replication session is in a stable state. - - - Faulty: The geo-replication session has witnessed some abnormality and the situation has to be investigated further. For further information, see section. - - - Corrupt: The monitor thread which is monitoring the geo-replication session has died. This situation should not occur normally, if it persists contact Red Hat Support. - - -
-
- Configuring Geo-replication - To configure Gluster Geo-replication - - - Use the following command at the Gluster command line: - - # gluster volume geo-replication MASTER SLAVE config [options] - - For more information about the options, see . - - For example: - - To view list of all option/value pair, use the following command: - - # gluster volume geo-replication Volume1 example.com:/data/remote_dir config - - - -
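- For example, to change individual options for the session between Volume1 and example.com:/data/remote_dir, pass the option name followed by its new value (a sketch; the values shown are illustrative, and the available option names are listed in the command reference):
- # gluster volume geo-replication Volume1 example.com:/data/remote_dir config sync-jobs 3
- # gluster volume geo-replication Volume1 example.com:/data/remote_dir config log-level DEBUG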
-
- Stopping Geo-replication - You can use the gluster command to stop Gluster Geo-replication (syncing of data from Master to Slave) in your environment. - To stop Gluster Geo-replication - - - Stop geo-replication between the hosts using the following command: - - # gluster volume geo-replication MASTER SLAVE stop - For example: - - # gluster volume geo-replication Volume1 example.com:/data/remote_dir stop -Stopping geo-replication session between Volume1 and -example.com:/data/remote_dir has been successful - See for more information about the gluster command. - - - -
-
-
- Restoring Data from the Slave - You can restore data from the slave to the master volume, whenever the master volume becomes faulty for reasons like hardware failure. - - The example in this section assumes that you are using the Master Volume (Volume1) with the following configuration: - - machine1# gluster volume info -Type: Distribute -Status: Started -Number of Bricks: 2 -Transport-type: tcp -Bricks: -Brick1: machine1:/export/dir16 -Brick2: machine2:/export/dir16 -Options Reconfigured: -geo-replication.indexing: on - The data is syncing from master volume (Volume1) to slave directory (example.com:/data/remote_dir). To view the status of this geo-replication session run the following command on Master: - # gluster volume geo-replication Volume1 root@example.com:/data/remote_dir status - -MASTER SLAVE STATUS -______ ______________________________ ____________ -Volume1 root@example.com:/data/remote_dir OK - Before Failure - - Assume that the Master volume had 100 files and was mounted at /mnt/gluster on one of the client machines (client). Run the following command on Client machine to view the list of files: - - client# ls /mnt/gluster | wc –l -100 - The slave directory (example.com) will have same data as in the master volume and same can be viewed by running the following command on slave: - - example.com# ls /data/remote_dir/ | wc –l -100 - After Failure - - If one of the bricks (machine2) fails, then the status of Geo-replication session is changed from "OK" to "Faulty". To view the status of this geo-replication session run the following command on Master: - - # gluster volume geo-replication Volume1 root@example.com:/data/remote_dir status - -MASTER SLAVE STATUS -______ ______________________________ ____________ -Volume1 root@example.com:/data/remote_dir Faulty - Machine2 is failed and now you can see discrepancy in number of files between master and slave. Few files will be missing from the master volume but they will be available only on slave as shown below. - - Run the following command on Client: - - client # ls /mnt/gluster | wc –l -52 - Run the following command on slave (example.com): - - Example.com# # ls /data/remote_dir/ | wc –l -100 - To restore data from the slave machine - - - Stop all Master's geo-replication sessions using the following command: - - # gluster volume geo-replication MASTER SLAVE stop - - For example: - - machine1# gluster volume geo-replication Volume1 -example.com:/data/remote_dir stop - -Stopping geo-replication session between Volume1 & -example.com:/data/remote_dir has been successful - - Repeat # gluster volume geo-replication MASTER SLAVE stop command on all active geo-replication sessions of master volume. 
- - - - Replace the faulty brick in the master by using the following command: - - # gluster volume replace-brick VOLNAME BRICK NEW-BRICK start - - For example: - - machine1# gluster volume replace-brick Volume1 machine2:/export/dir16 machine3:/export/dir16 start -Replace-brick started successfully - - - Commit the migration of data using the following command: - - # gluster volume replace-brick VOLNAME BRICK NEW-BRICK commit force - For example: - - machine1# gluster volume replace-brick Volume1 machine2:/export/dir16 machine3:/export/dir16 commit force -Replace-brick commit successful - - - Verify the migration of brick by viewing the volume info using the following command: - - # gluster volume info VOLNAME - For example: - - machine1# gluster volume info -Volume Name: Volume1 -Type: Distribute -Status: Started -Number of Bricks: 2 -Transport-type: tcp -Bricks: -Brick1: machine1:/export/dir16 -Brick2: machine3:/export/dir16 -Options Reconfigured: -geo-replication.indexing: on - - - Run rsync command manually to sync data from slave to master volume's client (mount point). - - For example: - - example.com# rsync -PavhS --xattrs --ignore-existing /data/remote_dir/ client:/mnt/gluster - Verify that the data is synced by using the following command: - - On master volume, run the following command: - - Client # ls | wc –l -100 - On the Slave run the following command: - - example.com# ls /data/remote_dir/ | wc –l -100 - Now Master volume and Slave directory is synced. - - - - Restart geo-replication session from master to slave using the following command: - - # gluster volume geo-replication MASTER SLAVE start - For example: - - machine1# gluster volume geo-replication Volume1 -example.com:/data/remote_dir start -Starting geo-replication session between Volume1 & -example.com:/data/remote_dir has been successful - - -
-
- Best Practices
- Manually Setting Time
- If you have to change the time on your bricks manually, you must set a uniform time on all bricks. This avoids the clock-skew issue described in . Setting the time backward corrupts the geo-replication index, so the recommended way to set the time manually is:
- Stop geo-replication between the master and slave using the following command:
- # gluster volume geo-replication MASTER SLAVE stop
- Stop the geo-replication indexing using the following command:
- # gluster volume set MASTER geo-replication.indexing off
- Set a uniform time on all bricks.
- Restart your geo-replication sessions by using the following command:
- # gluster volume geo-replication MASTER SLAVE start
- Running Geo-replication commands on one system
- It is advisable to run the geo-replication commands on one of the bricks in the trusted storage pool, because the log files for a geo-replication session are stored on the server where the geo-replication start command is issued. This makes it easier to locate the log files when required.
- Isolation
- Geo-replication slave operation is not sandboxed as of now and runs as a privileged service. For security reasons, it is advised that the administrator create a sandbox environment (dedicated machine, dedicated virtual machine, or a chroot/container type solution) to run the geo-replication slave in. An enhancement in this regard will be available in a follow-up minor release.
-
diff --git a/doc/admin-guide/en-US/admin_managing_volumes.xml b/doc/admin-guide/en-US/admin_managing_volumes.xml deleted file mode 100644 index 70c1fe0b9..000000000 --- a/doc/admin-guide/en-US/admin_managing_volumes.xml +++ /dev/null @@ -1,741 +0,0 @@ - - -%BOOK_ENTITIES; -]> - - Managing GlusterFS Volumes - This section describes how to perform common GlusterFS management operations, including the following: - - - - - - - - - - - - - - - - - - - - - - - - - - -
- Tuning Volume Options - You can tune volume options, as needed, while the cluster is online and available. - - Red Hat recommends you to set server.allow-insecure option to ON if there are too many bricks in each volume or if there are too many services which have already utilized all the privileged ports in the system. Turning this option ON allows ports to accept/reject messages from insecure ports. So, use this option only if your deployment requires it. - - To tune volume options - - - Tune volume options using the following command: - # gluster volume set VOLNAME OPTION PARAMETER - For example, to specify the performance cache size for test-volume: - # gluster volume set test-volume performance.cache-size 256MB -Set volume successful - The following table lists the Volume options along with its description and default value: - - The default options given here are subject to modification at any given time and may not be the same for all versions. - - - - - - - - - - Option - Description - Default Value - Available Options - - - - - auth.allow - IP addresses of the clients which should be allowed to access the volume. - * (allow all) - Valid IP address which includes wild card patterns including *, such as 192.168.1.* - - - auth.reject - IP addresses of the clients which should be denied to access the volume. - NONE (reject none) - Valid IP address which includes wild card patterns including *, such as 192.168.2.* - - - client.grace-timeout - Specifies the duration for the lock state to be maintained on the client after a network disconnection. - 10 - 10 - 1800 secs - - - cluster.self-heal-window-size - Specifies the maximum number of blocks per file on which self-heal would happen simultaneously. - 16 - 0 - 1025 blocks - - - cluster.data-self-heal-algorithm - Specifies the type of self-heal. If you set the option as "full", the entire file is copied from source to destinations. If the option is set to "diff" the file blocks that are not in sync are copied to destinations. Reset uses a heuristic model. If the file does not exist on one of the subvolumes, or a zero-byte file exists (created by entry self-heal) the entire content has to be copied anyway, so there is no benefit from using the "diff" algorithm. If the file size is about the same as page size, the entire file can be read and written with a few operations, which will be faster than "diff" which has to read checksums and then read and write. - reset - full | diff | reset - - - cluster.min-free-disk - Specifies the percentage of disk space that must be kept free. Might be useful for non-uniform bricks. - 10% - Percentage of required minimum free disk space - - - cluster.stripe-block-size - Specifies the size of the stripe unit that will be read from or written to. - 128 KB (for all files) - size in bytes - - - cluster.self-heal-daemon - Allows you to turn-off proactive self-heal on replicated volumes. - on - On | Off - - - cluster.ensure-durability - This option makes sure the data/metadata is durable across abrupt shutdown of the brick. - on - On | Off - - - diagnostics.brick-log-level - Changes the log-level of the bricks. - INFO - DEBUG|WARNING|ERROR|CRITICAL|NONE|TRACE - - - diagnostics.client-log-level - Changes the log-level of the clients. - INFO - DEBUG|WARNING|ERROR|CRITICAL|NONE|TRACE - - - diagnostics.latency-measurement - Statistics related to the latency of each operation would be tracked. - off - On | Off - - - diagnostics.dump-fd-stats - Statistics related to file-operations would be tracked. 
- off - On | Off - - - feature.read-only - Enables you to mount the entire volume as read-only for all the clients (including NFS clients) accessing it. - off - On | Off - - - features.lock-heal - Enables self-healing of locks when the network disconnects. - on - On | Off - - - features.quota-timeout - For performance reasons, quota caches the directory sizes on client. You can set timeout indicating the maximum duration of directory sizes in cache, from the time they are populated, during which they are considered valid. - 0 - 0 - 3600 secs - - - geo-replication.indexing - Use this option to automatically sync the changes in the filesystem from Master to Slave. - off - On | Off - - - network.frame-timeout - The time frame after which the operation has to be declared as dead, if the server does not respond for a particular operation. - 1800 (30 mins) - 1800 secs - - - network.ping-timeout - The time duration for which the client waits to check if the server is responsive. When a ping timeout happens, there is a network disconnect between the client and server. All resources held by server on behalf of the client get cleaned up. When a reconnection happens, all resources will need to be re-acquired before the client can resume its operations on the server. Additionally, the locks will be acquired and the lock tables updated. This reconnect is a very expensive operation and should be avoided. - - 42 Secs - 42 Secs - - - nfs.enable-ino32 - For 32-bit nfs clients or applications that do not support 64-bit inode numbers or large files, use this option from the CLI to make Gluster NFS return 32-bit inode numbers instead of 64-bit inode numbers. Applications that will benefit are those that were either: * Built 32-bit and run on 32-bit machines.* Built 32-bit on 64-bit systems.* Built 64-bit but use a library built 32-bit, especially relevant for python and perl scripts.Either of the conditions above can lead to application on Linux NFS clients failing with "Invalid argument" or "Value too large for defined data type" errors. - off - On | Off - - - nfs.volume-access - Set the access type for the specified sub-volume. - read-write - read-write|read-only - - - nfs.trusted-write - If there is an UNSTABLE write from the client, STABLE flag will be returned to force the client to not send a COMMIT request. In some environments, combined with a replicated GlusterFS setup, this option can improve write performance. This flag allows users to trust Gluster replication logic to sync data to the disks and recover when required. COMMIT requests if received will be handled in a default manner by fsyncing. STABLE writes are still handled in a sync manner. - off - On | Off - - - nfs.trusted-sync - All writes and COMMIT requests are treated as async. This implies that no write requests are guaranteed to be on server disks when the write reply is received at the NFS client. Trusted sync includes trusted-write behavior. - off - On | Off - - - nfs.export-dir - By default, all sub-volumes of NFS are exported as individual exports. Now, this option allows you to export only the specified subdirectory or subdirectories in the volume. This option can also be used in conjunction with nfs3.export-volumes option to restrict exports only to the subdirectories specified through this option. You must provide an absolute path. - Enabled for all sub directories. 
- Enable | Disable - - - nfs.export-volumes - Enable/Disable exporting entire volumes, instead if used in conjunction with nfs3.export-dir, can allow setting up only subdirectories as exports. - on - On | Off - - - nfs.rpc-auth-unix - Enable/Disable the AUTH_UNIX authentication type. This option is enabled by default for better interoperability. However, you can disable it if required. - on - On | Off - - - nfs.rpc-auth-null - Enable/Disable the AUTH_NULL authentication type. It is not recommended to change the default value for this option. - on - On | Off - - - nfs.rpc-auth-allow<IP- Addresses> - Allow a comma separated list of addresses and/or hostnames to connect to the server. By default, all clients are disallowed. This allows you to define a general rule for all exported volumes. - Reject All - IP address or Host name - - - nfs.rpc-auth-reject IP- Addresses - Reject a comma separated list of addresses and/or hostnames from connecting to the server. By default, all connections are disallowed. This allows you to define a general rule for all exported volumes. - Reject All - IP address or Host name - - - nfs.ports-insecure - Allow client connections from unprivileged ports. By default only privileged ports are allowed. This is a global setting in case insecure ports are to be enabled for all exports using a single option. - off - On | Off - - - nfs.addr-namelookup - Turn-off name lookup for incoming client connections using this option. In some setups, the name server can take too long to reply to DNS queries resulting in timeouts of mount requests. Use this option to turn off name lookups during address authentication. Note, turning this off will prevent you from using hostnames in rpc-auth.addr.* filters. - on - On | Off - - - nfs.register-with- portmap - For systems that need to run multiple NFS servers, you need to prevent more than one from registering with portmap service. Use this option to turn off portmap registration for Gluster NFS. - on - On | Off - - - nfs.port <PORT- NUMBER> - Use this option on systems that need Gluster NFS to be associated with a non-default port number. - 38465- 38467 - - - - nfs.disable - Turn-off volume being exported by NFS - off - On | Off - - - performance.write-behind-window-size - Size of the per-file write-behind buffer. - 1 MB - Write-behind cache size - - - performance.io-thread-count - The number of threads in IO threads translator. - 16 - 0 - 65 - - - performance.flush-behind - If this option is set ON, instructs write-behind translator to perform flush in background, by returning success (or any errors, if any of previous writes were failed) to application even before flush is sent to backend filesystem. - On - On | Off - - - performance.cache-max-file-size - Sets the maximum file size cached by the io-cache translator. Can use the normal size descriptors of KB, MB, GB,TB or PB (for example, 6GB). Maximum size uint64. - 2 ^ 64 -1 bytes - size in bytes - - - performance.cache-min-file-size - Sets the minimum file size cached by the io-cache translator. Values same as "max" above. - 0B - size in bytes - - - performance.cache-refresh-timeout - The cached data for a file will be retained till 'cache-refresh-timeout' seconds, after which data re-validation is performed. - 1 sec - 0 - 61 - - - performance.cache-size - Size of the read cache. - 32 MB - size in bytes - - - server.allow-insecure - Allow client connections from unprivileged ports. By default only privileged ports are allowed. 
This is a global setting in case insecure ports are to be enabled for all exports using a single option. - on - On | Off - - - server.grace-timeout - Specifies the duration for the lock state to be maintained on the server after a network disconnection. - 10 - 10 - 1800 secs - - - server.statedump-path - Location of the state dump file. - /tmp directory of the brick - New directory path - - - - - You can view the changed volume options using the # gluster volume info VOLNAME command. For more information, see . - - -
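- As another example, client access to a volume can be restricted to an address pattern with the auth.allow option from the table above, and the change can then be verified with volume info (the address pattern is illustrative):
- # gluster volume set test-volume auth.allow 192.168.1.*
-Set volume successful
- # gluster volume info test-volume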
-
- Expanding Volumes - You can expand volumes, as needed, while the cluster is online and available. For example, you might want to add a brick to a distributed volume, thereby increasing the distribution and adding to the capacity of the GlusterFS volume. - Similarly, you might want to add a group of bricks to a distributed replicated volume, increasing the capacity of the GlusterFS volume. - - When expanding distributed replicated and distributed striped volumes, you need to add a number of bricks that is a multiple of the replica or stripe count. For example, to expand a distributed replicated volume with a replica count of 2, you need to add bricks in multiples of 2 (such as 4, 6, 8, etc.). - - To expand a volume - - - On the first server in the cluster, probe the server to which you want to add the new brick using the following command: - # gluster peer probe HOSTNAME - For example: - # gluster peer probe server4 -Probe successful - - - Add the brick using the following command: - # gluster volume add-brick VOLNAME NEW-BRICK - For example: - # gluster volume add-brick test-volume server4:/exp4 -Add Brick successful - - - Check the volume information using the following command: - # gluster volume info - The command displays information similar to the following: - Volume Name: test-volume -Type: Distribute -Status: Started -Number of Bricks: 4 -Bricks: -Brick1: server1:/exp1 -Brick2: server2:/exp2 -Brick3: server3:/exp3 -Brick4: server4:/exp4 - - - Rebalance the volume to ensure that all files are distributed to the new brick. - You can use the rebalance command as described in . - - -
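- As noted above, distributed replicated volumes must grow in multiples of the replica count. For a replica count of 2, a hypothetical pair of new bricks can be added in a single command (server5 and server6 are placeholders):
- # gluster volume add-brick test-volume server5:/exp5 server6:/exp6
-Add Brick successful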
-
- Shrinking Volumes - You can shrink volumes, as needed, while the cluster is online and available. For example, you might need to remove a brick that has become inaccessible in a distributed volume due to hardware or network failure. - - Data residing on the brick that you are removing will no longer be accessible at the Gluster mount point. Note however that only the configuration information is removed - you can continue to access the data directly from the brick, as necessary. - - When shrinking distributed replicated and distributed striped volumes, you need to remove a number of bricks that is a multiple of the replica or stripe count. For example, to shrink a distributed striped volume with a stripe count of 2, you need to remove bricks in multiples of 2 (such as 4, 6, 8, etc.). In addition, the bricks you are trying to remove must be from the same sub-volume (the same replica or stripe set). - To shrink a volume - - - Remove the brick using the following command: - # gluster volume remove-brick VOLNAME BRICK start - For example, to remove server2:/exp2: - # gluster volume remove-brick test-volume server2:/exp2 - -Removing brick(s) can result in data loss. Do you want to Continue? (y/n) - - - Enter "y" to confirm the operation. The command displays the following message indicating that the remove brick operation is successfully started: - Remove Brick successful - - - (Optional) View the status of the remove brick operation using the following command: - # gluster volume remove-brick VOLNAME BRICK status - For example, to view the status of remove brick operation on server2:/exp2 brick: - # gluster volume remove-brick test-volume server2:/exp2 status - Node Rebalanced-files size scanned status - --------- ---------------- ---- ------- ----------- -617c923e-6450-4065-8e33-865e28d9428f 34 340 162 in progress - - - Check the volume information using the following command: - # gluster volume info - The command displays information similar to the following: - # gluster volume info -Volume Name: test-volume -Type: Distribute -Status: Started -Number of Bricks: 3 -Bricks: -Brick1: server1:/exp1 -Brick3: server3:/exp3 -Brick4: server4:/exp4 - - - Rebalance the volume to ensure that all files are distributed to the new brick. - You can use the rebalance command as described in . - - -
-
- Migrating Volumes
- You can migrate the data from one brick to another, as needed, while the cluster is online and available.
- To migrate a volume
-
-
- Make sure the new brick, server5 in this example, is successfully added to the cluster.
- For more information, see .
-
-
- Migrate the data from one brick to another using the following command:
- # gluster volume replace-brick VOLNAME BRICK NEW-BRICK start
- For example, to migrate the data in server3:/exp3 to server5:/exp5 in test-volume:
- # gluster volume replace-brick test-volume server3:/exp3 server5:/exp5 start
-Replace brick start operation successful
-
- You need to have the FUSE package installed on the server on which you are running the replace-brick command for the command to work.
-
-
-
- To pause the migration operation, if needed, use the following command:
- # gluster volume replace-brick VOLNAME BRICK NEW-BRICK pause
- For example, to pause the data migration from server3:/exp3 to server5:/exp5 in test-volume:
- # gluster volume replace-brick test-volume server3:/exp3 server5:/exp5 pause
-Replace brick pause operation successful
-
-
- To abort the migration operation, if needed, use the following command:
- # gluster volume replace-brick VOLNAME BRICK NEW-BRICK abort
- For example, to abort the data migration from server3:/exp3 to server5:/exp5 in test-volume:
- # gluster volume replace-brick test-volume server3:/exp3 server5:/exp5 abort
-Replace brick abort operation successful
-
-
- Check the status of the migration operation using the following command:
- # gluster volume replace-brick VOLNAME BRICK NEW-BRICK status
- For example, to check the data migration status from server3:/exp3 to server5:/exp5 in test-volume:
- # gluster volume replace-brick test-volume server3:/exp3 server5:/exp5 status
-Current File = /usr/src/linux-headers-2.6.31-14/block/Makefile
-Number of files migrated = 10567
-Migration complete
- The status command shows the current file being migrated along with the current total number of files migrated. After completion of migration, it displays Migration complete.
-
-
- Commit the migration of data from one brick to another using the following command:
- # gluster volume replace-brick VOLNAME BRICK NEW-BRICK commit
- For example, to commit the data migration from server3:/exp3 to server5:/exp5 in test-volume:
- # gluster volume replace-brick test-volume server3:/exp3 server5:/exp5 commit
-replace-brick commit successful
-
-
- Verify the migration of the brick by viewing the volume info using the following command:
- # gluster volume info VOLNAME
- For example, to check the volume information of the new brick server5:/exp5 in test-volume:
- # gluster volume info test-volume
-Volume Name: test-volume
-Type: Replicate
-Status: Started
-Number of Bricks: 4
-Transport-type: tcp
-Bricks:
-Brick1: server1:/exp1
-Brick2: server2:/exp2
-Brick3: server4:/exp4
-Brick4: server5:/exp5
-
- The new volume details are displayed.
- In the above example, the volume previously consisted of bricks 1, 2, 3, and 4; brick 3 has now been replaced by brick 5.
-
-
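- As a minimal, hedged check of the FUSE requirement noted above (package and module names can vary by distribution), you can confirm on the server running replace-brick that the fuse kernel module is loaded and, on RPM-based systems, that the glusterfs-fuse package is installed:
- # lsmod | grep fuse
- # rpm -q glusterfs-fuse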
-
- Rebalancing Volumes - After expanding or shrinking a volume (using the add-brick and remove-brick commands respectively), you need to rebalance the data among the servers. New directories created after expanding or shrinking of the volume will be evenly distributed automatically. For all the existing directories, the distribution can be fixed by rebalancing the layout and/or data. - This section describes how to rebalance GlusterFS volumes in your storage environment, using the following common scenarios: - - - Fix Layout - Fixes the layout changes so that the files can actually go to newly added nodes. For more information, see . - - - Fix Layout and Migrate Data - Rebalances volume by fixing the layout changes and migrating the existing data. For more information, see . - - -
- Rebalancing Volume to Fix Layout Changes
- Fixing the layout is necessary because the layout structure is static for a given directory. In a scenario where new bricks have been added to the existing volume, newly created files in existing directories will still be distributed only among the old bricks. The # gluster volume rebalance VOLNAME fix-layout start command will fix the layout information so that the files can also go to newly added nodes. When this command is issued, all the file stat information which is already cached will get revalidated.
- A fix-layout rebalance will only fix the layout changes and does not migrate data. If you want to migrate the existing data, use the # gluster volume rebalance VOLNAME start command to rebalance data among the servers.
- To rebalance a volume to fix layout changes
-
-
- Start the rebalance operation on any one of the servers using the following command:
- # gluster volume rebalance VOLNAME fix-layout start
- For example:
- # gluster volume rebalance test-volume fix-layout start
-Starting rebalance on volume test-volume has been successful
-
-
-
- Rebalancing Volume to Fix Layout and Migrate Data
- After expanding or shrinking a volume (using the add-brick and remove-brick commands respectively), you need to rebalance the data among the servers.
- To rebalance a volume to fix layout and migrate the existing data
-
-
- Start the rebalance operation on any one of the servers using the following command:
- # gluster volume rebalance VOLNAME start
- For example:
- # gluster volume rebalance test-volume start
-Starting rebalancing on volume test-volume has been successful
-
-
- Start the migration operation forcefully on any one of the servers using the following command:
- # gluster volume rebalance VOLNAME start force
- For example:
- # gluster volume rebalance test-volume start force
-Starting rebalancing on volume test-volume has been successful
-
-
-
- Displaying Status of Rebalance Operation - You can display the status information about rebalance volume operation, as needed. - To view status of rebalance volume - - - Check the status of the rebalance operation, using the following command: - # gluster volume rebalance VOLNAME status - For example: - # gluster volume rebalance test-volume status - Node Rebalanced-files size scanned status - --------- ---------------- ---- ------- ----------- -617c923e-6450-4065-8e33-865e28d9428f 416 1463 312 in progress - The time to complete the rebalance operation depends on the number of files on the volume along with the corresponding file sizes. Continue checking the rebalance status, verifying that the number of files rebalanced or total files scanned keeps increasing. - For example, running the status command again might display a result similar to the following: - # gluster volume rebalance test-volume status - Node Rebalanced-files size scanned status - --------- ---------------- ---- ------- ----------- -617c923e-6450-4065-8e33-865e28d9428f 498 1783 378 in progress - The rebalance status displays the following when the rebalance is complete: - # gluster volume rebalance test-volume status - Node Rebalanced-files size scanned status - --------- ---------------- ---- ------- ----------- -617c923e-6450-4065-8e33-865e28d9428f 502 1873 334 completed - - -
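- As a convenience sketch that is not part of the original procedure (it assumes the standard watch utility is available on the server), you can re-run the status command periodically instead of typing it by hand:
- # watch -n 60 gluster volume rebalance test-volume status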
-
- Stopping Rebalance Operation - You can stop the rebalance operation, as needed. - To stop rebalance - - - Stop the rebalance operation using the following command: - # gluster volume rebalance VOLNAME stop - For example: - # gluster volume rebalance test-volume stop - Node Rebalanced-files size scanned status - --------- ---------------- ---- ------- ----------- -617c923e-6450-4065-8e33-865e28d9428f 59 590 244 stopped -Stopped rebalance process on volume test-volume - - -
-
-
- Stopping Volumes - To stop a volume - - - Stop the volume using the following command: - - - # gluster volume stop VOLNAME - For example, to stop test-volume: - # gluster volume stop test-volume -Stopping volume will make its data inaccessible. Do you want to continue? (y/n) - - - - Enter y to confirm the operation. The output of the command displays the following: - - - Stopping volume test-volume has been successful - - -
-
- Deleting Volumes - To delete a volume - - - Delete the volume using the following command: - # gluster volume delete VOLNAME - For example, to delete test-volume: - # gluster volume delete test-volume -Deleting volume will erase all information about the volume. Do you want to continue? (y/n) - - - Enter y to confirm the operation. The command displays the following: - Deleting volume test-volume has been successful - - -
-
- Triggering Self-Heal on Replicate
- In the replicate module, you previously had to trigger a self-heal manually when a brick went offline and came back online, to bring all the replicas back in sync. Now the pro-active self-heal daemon runs in the background, diagnoses issues and automatically initiates self-healing every 10 minutes on the files which require healing.
- You can view the list of files that need healing, the list of files which are currently/previously healed, the list of files which are in split-brain state, and you can manually trigger self-heal on the entire volume or only on the files which need healing.
-
-
- Trigger self-heal only on the files which require healing:
- # gluster volume heal VOLNAME
- For example, to trigger self-heal on the files of test-volume which require healing:
- # gluster volume heal test-volume
-Heal operation on volume test-volume has been successful
-
-
- Trigger self-heal on all the files of a volume:
- # gluster volume heal VOLNAME full
- For example, to trigger self-heal on all the files of test-volume:
- # gluster volume heal test-volume full
-Heal operation on volume test-volume has been successful
-
-
- View the list of files that need healing:
- # gluster volume heal VOLNAME info
- For example, to view the list of files on test-volume that need healing:
- # gluster volume heal test-volume info
-Brick server1:/gfs/test-volume_0
-Number of entries: 0
-
-Brick server2:/gfs/test-volume_1
-Number of entries: 101
-/95.txt
-/32.txt
-/66.txt
-/35.txt
-/18.txt
-/26.txt
-/47.txt
-/55.txt
-/85.txt
-...
-
-
- View the list of files that are self-healed:
- # gluster volume heal VOLNAME info healed
- For example, to view the list of files on test-volume that are self-healed:
- # gluster volume heal test-volume info healed
-Brick server1:/gfs/test-volume_0
-Number of entries: 0
-
-Brick server2:/gfs/test-volume_1
-Number of entries: 69
-/99.txt
-/93.txt
-/76.txt
-/11.txt
-/27.txt
-/64.txt
-/80.txt
-/19.txt
-/41.txt
-/29.txt
-/37.txt
-/46.txt
-...
-
-
- View the list of files of a particular volume on which the self-heal failed:
- # gluster volume heal VOLNAME info failed
- For example, to view the list of files of test-volume that are not self-healed:
- # gluster volume heal test-volume info failed
-Brick server1:/gfs/test-volume_0
-Number of entries: 0
-
-Brick server2:/gfs/test-volume_3
-Number of entries: 72
-/90.txt
-/95.txt
-/77.txt
-/71.txt
-/87.txt
-/24.txt
-...
-
-
- View the list of files of a particular volume which are in split-brain state:
- # gluster volume heal VOLNAME info split-brain
- For example, to view the list of files of test-volume which are in split-brain state:
- # gluster volume heal test-volume info split-brain
-Brick server1:/gfs/test-volume_2
-Number of entries: 12
-/83.txt
-/28.txt
-/69.txt
-...
-
-Brick server2:/gfs/test-volume_2
-Number of entries: 12
-/83.txt
-/28.txt
-/69.txt
-...
-
-
-
diff --git a/doc/admin-guide/en-US/admin_monitoring_workload.xml b/doc/admin-guide/en-US/admin_monitoring_workload.xml deleted file mode 100644 index e85bc51d8..000000000 --- a/doc/admin-guide/en-US/admin_monitoring_workload.xml +++ /dev/null @@ -1,878 +0,0 @@ - - - - Monitoring your GlusterFS Workload - You can monitor the GlusterFS volumes on different parameters. Monitoring volumes helps in capacity planning and performance tuning tasks of the GlusterFS volume. Using these information, you can identify and troubleshoot issues. - You can use Volume Top and Profile commands to view the performance and identify bottlenecks/hotspots of each brick of a volume. This helps system administrators to get vital performance information whenever performance needs to be probed. - You can also perform statedump of the brick processes and nfs server process of a volume, and also view volume status and volume information. -
- Running GlusterFS Volume Profile Command - GlusterFS Volume Profile command provides an interface to get the per-brick I/O information for each File Operation (FOP) of a volume. The per brick information helps in identifying bottlenecks in the storage system. - - This section describes how to run GlusterFS Volume Profile command by performing the following operations: - - - - - - - - - - - - -
- Start Profiling
- You must start profiling to view the File Operation information for each brick.
-
- To start profiling:
-
-
- Start profiling using the following command:
-
-
-
- # gluster volume profile VOLNAME start
- For example, to start profiling on test-volume:
-
- # gluster volume profile test-volume start
-Profiling started on test-volume
- When profiling on the volume is started, the following additional options are displayed in the Volume Info:
-
- diagnostics.count-fop-hits: on
-
-diagnostics.latency-measurement: on
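- As a quick, hedged check that is not part of the original procedure, you can confirm that the two diagnostics options shown above are now set by filtering the volume information:
- # gluster volume info test-volume | grep diagnostics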
-
- Displaying the I/0 Information - You can view the I/O information of each brick. - - To display I/O information: - - - - Display the I/O information using the following command: - - - - # gluster volume profile VOLNAME info - - - For example, to see the I/O information on test-volume: - - - # gluster volume profile test-volume info -Brick: Test:/export/2 -Cumulative Stats: - -Block 1b+ 32b+ 64b+ -Size: - Read: 0 0 0 - Write: 908 28 8 - -Block 128b+ 256b+ 512b+ -Size: - Read: 0 6 4 - Write: 5 23 16 - -Block 1024b+ 2048b+ 4096b+ -Size: - Read: 0 52 17 - Write: 15 120 846 - -Block 8192b+ 16384b+ 32768b+ -Size: - Read: 52 8 34 - Write: 234 134 286 - -Block 65536b+ 131072b+ -Size: - Read: 118 622 - Write: 1341 594 - - -%-latency Avg- Min- Max- calls Fop - latency Latency Latency -___________________________________________________________ -4.82 1132.28 21.00 800970.00 4575 WRITE -5.70 156.47 9.00 665085.00 39163 READDIRP -11.35 315.02 9.00 1433947.00 38698 LOOKUP -11.88 1729.34 21.00 2569638.00 7382 FXATTROP -47.35 104235.02 2485.00 7789367.00 488 FSYNC - ------------------- - ------------------- - -Duration : 335 - -BytesRead : 94505058 - -BytesWritten : 195571980 -
-
- Stop Profiling - You can stop profiling the volume, if you do not need profiling information anymore. - - To stop profiling - - - - Stop profiling using the following command: - - # gluster volume profile VOLNAME stop - - For example, to stop profiling on test-volume: - # gluster volume profile test-volume stop - Profiling stopped on test-volume - - -
-
-
- Running GlusterFS Volume TOP Command
- GlusterFS Volume Top command allows you to view the glusterfs bricks’ performance metrics like
-read, write, file open calls, file read calls, file write calls, directory open calls, and directory read
-calls. The top command displays up to 100 results.
-
- This section describes how to run and view the results for the following GlusterFS Top commands:
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- Viewing Open fd Count and Maximum fd Count
- You can view the current open fd count (the list of files that are currently the most opened on the brick, along with their open call counts) and the maximum open fd count (the maximum number of files open at any given point of time since the servers have been up and running). If the brick name is not specified, then the open fd metrics of all the bricks belonging to
-the volume will be displayed.
-
- To view open fd count and maximum fd count:
-
-
- View open fd count and maximum fd count using the following command:
- # gluster volume top VOLNAME open [brick BRICK-NAME] [list-cnt cnt]
-
- For example, to view open fd count and maximum fd count on brick server:/export of test-volume and list the top 10 open calls:
-
- # gluster volume top test-volume open brick server:/export/ list-cnt 10
- Brick: server:/export/dir1
- Current open fd's: 34 Max open fd's: 209
- ==========Open file stats========
-
-open file name
-call count
-
-2 /clients/client0/~dmtmp/PARADOX/
- COURSES.DB
-
-11 /clients/client0/~dmtmp/PARADOX/
- ENROLL.DB
-
-11 /clients/client0/~dmtmp/PARADOX/
- STUDENTS.DB
-
-10 /clients/client0/~dmtmp/PWRPNT/
- TIPS.PPT
-
-10 /clients/client0/~dmtmp/PWRPNT/
- PCBENCHM.PPT
-
-9 /clients/client7/~dmtmp/PARADOX/
- STUDENTS.DB
-
-9 /clients/client1/~dmtmp/PARADOX/
- STUDENTS.DB
-
-9 /clients/client2/~dmtmp/PARADOX/
- STUDENTS.DB
-
-9 /clients/client0/~dmtmp/PARADOX/
- STUDENTS.DB
-
-9 /clients/client8/~dmtmp/PARADOX/
- STUDENTS.DB
-
-
-
- Viewing Highest File Read Calls - You can view highest read calls on each brick. If brick name is not specified, then by default, list of -100 files will be displayed. - - To view highest file Read calls: - - - - View highest file Read calls using the following command: - - # gluster volume top VOLNAME read [brick BRICK-NAME] [list-cnt cnt] - For example, to view highest Read calls on brick server:/export of test-volume: - - # gluster volume top test-volume read brick server:/export list-cnt 10 - Brick: server:/export/dir1 ==========Read file stats======== - -read filename -call count - -116 /clients/client0/~dmtmp/SEED/LARGE.FIL - -64 /clients/client0/~dmtmp/SEED/MEDIUM.FIL - -54 /clients/client2/~dmtmp/SEED/LARGE.FIL - -54 /clients/client6/~dmtmp/SEED/LARGE.FIL - -54 /clients/client5/~dmtmp/SEED/LARGE.FIL - -54 /clients/client0/~dmtmp/SEED/LARGE.FIL - -54 /clients/client3/~dmtmp/SEED/LARGE.FIL - -54 /clients/client4/~dmtmp/SEED/LARGE.FIL - -54 /clients/client9/~dmtmp/SEED/LARGE.FIL - -54 /clients/client8/~dmtmp/SEED/LARGE.FIL - - -
-
- Viewing Highest File Write Calls - You can view list of files which has highest file write calls on each brick. If brick name is not -specified, then by default, list of 100 files will be displayed. - - To view highest file Write calls: - - - - View highest file Write calls using the following command: - - # gluster volume top VOLNAME write [brick BRICK-NAME] [list-cnt cnt] - For example, to view highest Write calls on brick server:/export of test-volume: - - # gluster volume top test-volume write brick server:/export list-cnt 10 - Brick: server:/export/dir1 ==========Write file stats======== -write call count filename - -83 /clients/client0/~dmtmp/SEED/LARGE.FIL - -59 /clients/client7/~dmtmp/SEED/LARGE.FIL - -59 /clients/client1/~dmtmp/SEED/LARGE.FIL - -59 /clients/client2/~dmtmp/SEED/LARGE.FIL - -59 /clients/client0/~dmtmp/SEED/LARGE.FIL - -59 /clients/client8/~dmtmp/SEED/LARGE.FIL - -59 /clients/client5/~dmtmp/SEED/LARGE.FIL - -59 /clients/client4/~dmtmp/SEED/LARGE.FIL - -59 /clients/client6/~dmtmp/SEED/LARGE.FIL - -59 /clients/client3/~dmtmp/SEED/LARGE.FIL - - -
-
- Viewing Highest Open Calls on Directories - You can view list of files which has highest open calls on directories of each brick. If brick name is -not specified, then the metrics of all the bricks belonging to that volume will be displayed. - - To view list of open calls on each directory - - - View list of open calls on each directory using the following command: - - # gluster volume top VOLNAME opendir [brick BRICK-NAME] [list-cnt cnt] - For example, to view open calls on brick server:/export/ of test-volume: - - # gluster volume top test-volume opendir brick server:/export list-cnt 10 - Brick: server:/export/dir1 ==========Directory open stats======== - -Opendir count directory name - -1001 /clients/client0/~dmtmp - -454 /clients/client8/~dmtmp - -454 /clients/client2/~dmtmp - -454 /clients/client6/~dmtmp - -454 /clients/client5/~dmtmp - -454 /clients/client9/~dmtmp - -443 /clients/client0/~dmtmp/PARADOX - -408 /clients/client1/~dmtmp - -408 /clients/client7/~dmtmp - -402 /clients/client4/~dmtmp - - -
-
- Viewing Highest Read Calls on Directory - You can view list of files which has highest directory read calls on each brick. If brick name is not -specified, then the metrics of all the bricks belonging to that volume will be displayed. - - To view list of highest directory read calls on each brick - - - - View list of highest directory read calls on each brick using the following command: - - # gluster volume top VOLNAME readdir [brick BRICK-NAME] [list-cnt cnt] - For example, to view highest directory read calls on brick server:/export of test-volume: - # gluster volume top test-volume readdir brick server:/export list-cnt 10 - Brick: server:/export/dir1==========Directory readdirp stats======== - -readdirp count directory name - -1996 /clients/client0/~dmtmp - -1083 /clients/client0/~dmtmp/PARADOX - -904 /clients/client8/~dmtmp - -904 /clients/client2/~dmtmp - -904 /clients/client6/~dmtmp - -904 /clients/client5/~dmtmp - -904 /clients/client9/~dmtmp - -812 /clients/client1/~dmtmp - -812 /clients/client7/~dmtmp - -800 /clients/client4/~dmtmp - - - -
-
- Viewing List of Read Performance on each Brick - You can view the read throughput of files on each brick. If brick name is not specified, then the -metrics of all the bricks belonging to that volume will be displayed. The output will be the read -throughput. - - ==========Read throughput file stats======== - -read filename Time -through -put(MBp -s) - -2570.00 /clients/client0/~dmtmp/PWRPNT/ -2011-01-31 - TRIDOTS.POT 15:38:36.894610 -2570.00 /clients/client0/~dmtmp/PWRPNT/ -2011-01-31 - PCBENCHM.PPT 15:38:39.815310 -2383.00 /clients/client2/~dmtmp/SEED/ -2011-01-31 - MEDIUM.FIL 15:52:53.631499 - -2340.00 /clients/client0/~dmtmp/SEED/ -2011-01-31 - MEDIUM.FIL 15:38:36.926198 - -2299.00 /clients/client0/~dmtmp/SEED/ -2011-01-31 - LARGE.FIL 15:38:36.930445 - -2259.00 /clients/client0/~dmtmp/PARADOX/ -2011-01-31 - COURSES.X04 15:38:40.549919 - -2221.00 /clients/client0/~dmtmp/PARADOX/ -2011-01-31 - STUDENTS.VAL 15:52:53.298766 - -2221.00 /clients/client3/~dmtmp/SEED/ -2011-01-31 - COURSES.DB 15:39:11.776780 - -2184.00 /clients/client3/~dmtmp/SEED/ -2011-01-31 - MEDIUM.FIL 15:39:10.251764 - -2184.00 /clients/client5/~dmtmp/WORD/ -2011-01-31 - BASEMACH.DOC 15:39:09.336572 This command will initiate a dd for the specified count and block size and measures the -corresponding throughput. - - To view list of read performance on each brick - - - - View list of read performance on each brick using the following command: - - # gluster volume top VOLNAME read-perf [bs blk-size count count] [brick BRICK-NAME] [list-cnt cnt] - - For example, to view read performance on brick server:/export/ of test-volume, 256 block size -of count 1, and list count 10: - - # gluster volume top test-volume read-perf bs 256 count 1 brick server:/export/ list-cnt 10 - Brick: server:/export/dir1 256 bytes (256 B) copied, Throughput: 4.1 MB/s - ==========Read throughput file stats======== - -read filename Time -through -put(MBp -s) - -2912.00 /clients/client0/~dmtmp/PWRPNT/ -2011-01-31 - TRIDOTS.POT 15:38:36.896486 - -2570.00 /clients/client0/~dmtmp/PWRPNT/ -2011-01-31 - PCBENCHM.PPT 15:38:39.815310 - -2383.00 /clients/client2/~dmtmp/SEED/ -2011-01-31 - MEDIUM.FIL 15:52:53.631499 - -2340.00 /clients/client0/~dmtmp/SEED/ -2011-01-31 - MEDIUM.FIL 15:38:36.926198 - -2299.00 /clients/client0/~dmtmp/SEED/ -2011-01-31 - LARGE.FIL 15:38:36.930445 - -2259.00 /clients/client0/~dmtmp/PARADOX/ -2011-01-31 - COURSES.X04 15:38:40.549919 - -2221.00 /clients/client9/~dmtmp/PARADOX/ -2011-01-31 - STUDENTS.VAL 15:52:53.298766 - -2221.00 /clients/client8/~dmtmp/PARADOX/ -2011-01-31 - COURSES.DB 15:39:11.776780 - -2184.00 /clients/client3/~dmtmp/SEED/ -2011-01-31 - MEDIUM.FIL 15:39:10.251764 - -2184.00 /clients/client5/~dmtmp/WORD/ -2011-01-31 - BASEMACH.DOC 15:39:09.336572 - - - -
-
- Viewing List of Write Performance on each Brick - You can view list of write throughput of files on each brick. If brick name is not specified, then the -metrics of all the bricks belonging to that volume will be displayed. The output will be the write -throughput. - - This command will initiate a dd for the specified count and block size and measures the -corresponding throughput. -To view list of write performance on each brick: - - - - View list of write performance on each brick using the following command: - - # gluster volume top VOLNAME write-perf [bs blk-size count count] [brick BRICK-NAME] [list-cnt cnt] - For example, to view write performance on brick server:/export/ of test-volume, 256 block size -of count 1, and list count 10: - - # gluster volume top test-volume write-perf bs 256 count 1 brick server:/export/ list-cnt 10 - Brick: server:/export/dir1 - - 256 bytes (256 B) copied, Throughput: 2.8 MB/s ==========Write throughput file stats======== - -write filename Time -throughput -(MBps) - -1170.00 /clients/client0/~dmtmp/SEED/ -2011-01-31 - SMALL.FIL 15:39:09.171494 - -1008.00 /clients/client6/~dmtmp/SEED/ -2011-01-31 - LARGE.FIL 15:39:09.73189 - -949.00 /clients/client0/~dmtmp/SEED/ -2011-01-31 - MEDIUM.FIL 15:38:36.927426 - -936.00 /clients/client0/~dmtmp/SEED/ -2011-01-31 - LARGE.FIL 15:38:36.933177 -897.00 /clients/client5/~dmtmp/SEED/ -2011-01-31 - MEDIUM.FIL 15:39:09.33628 - -897.00 /clients/client6/~dmtmp/SEED/ -2011-01-31 - MEDIUM.FIL 15:39:09.27713 - -885.00 /clients/client0/~dmtmp/SEED/ -2011-01-31 - SMALL.FIL 15:38:36.924271 - -528.00 /clients/client5/~dmtmp/SEED/ -2011-01-31 - LARGE.FIL 15:39:09.81893 - -516.00 /clients/client6/~dmtmp/ACCESS/ -2011-01-31 - FASTENER.MDB 15:39:01.797317 - - - -
-
-
- Displaying Volume Information - You can display information about a specific volume, or all volumes, as needed. - To display volume information - - - Display information about a specific volume using the following command: - # gluster volume info VOLNAME - For example, to display information about test-volume: - # gluster volume info test-volume -Volume Name: test-volume -Type: Distribute -Status: Created -Number of Bricks: 4 -Bricks: -Brick1: server1:/exp1 -Brick2: server2:/exp2 -Brick3: server3:/exp3 -Brick4: server4:/exp4 - - - Display information about all volumes using the following command: - # gluster volume info all - # gluster volume info all - -Volume Name: test-volume -Type: Distribute -Status: Created -Number of Bricks: 4 -Bricks: -Brick1: server1:/exp1 -Brick2: server2:/exp2 -Brick3: server3:/exp3 -Brick4: server4:/exp4 - -Volume Name: mirror -Type: Distributed-Replicate -Status: Started -Number of Bricks: 2 X 2 = 4 -Bricks: -Brick1: server1:/brick1 -Brick2: server2:/brick2 -Brick3: server3:/brick3 -Brick4: server4:/brick4 - -Volume Name: Vol -Type: Distribute -Status: Started -Number of Bricks: 1 -Bricks: -Brick: server:/brick6 - - - - -
-
- Performing Statedump on a Volume
- Statedump is a mechanism through which you can get details of all internal variables and the state of the glusterfs process at the time of issuing the command. You can perform statedumps of the brick processes and the nfs server process of a volume using the statedump command. The following options can be used to determine what information is to be dumped:
-
-
- mem - Dumps the memory usage and memory pool details of the bricks.
-
-
- iobuf - Dumps iobuf details of the bricks.
-
-
- priv - Dumps private information of loaded translators.
-
-
- callpool - Dumps the pending calls of the volume.
-
-
- fd - Dumps the open fd tables of the volume.
-
-
- inode - Dumps the inode tables of the volume.
-
-
- To display volume statedump
-
-
- Display statedump of a volume or NFS server using the following command:
- # gluster volume statedump VOLNAME [nfs] [all|mem|iobuf|callpool|priv|fd|inode]
- For example, to display statedump of test-volume:
- # gluster volume statedump test-volume
-Volume statedump successful
- The statedump files are created on the brick servers in the /tmp directory or in the directory set using the server.statedump-path volume option. The naming convention of the dump file is <brick-path>.<brick-pid>.dump.
-
-
- By default, the output of the statedump is stored in the /tmp/<brickname.PID.dump> file on that particular server. Change the directory of the statedump file using the following command:
- # gluster volume set VOLNAME server.statedump-path path
- For example, to change the location of the statedump file of test-volume:
- # gluster volume set test-volume server.statedump-path /usr/local/var/log/glusterfs/dumps/
-Set volume successful
- You can view the changed path of the statedump file using the following command:
- # gluster volume info VOLNAME
-
-
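- As a hedged example of locating the dumps after changing the path as above (the exact file names depend on the brick path and process ID, so the name below is only a placeholder), standard shell tools are enough to find and read the newest dump:
- # ls -lt /usr/local/var/log/glusterfs/dumps/
- # less /usr/local/var/log/glusterfs/dumps/<dump-file-name>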
-
- Displaying Volume Status - You can display the status information about a specific volume, brick or all volumes, as needed. Status information can be used to understand the current status of the brick, nfs processes, and overall file system. Status information can also be used to monitor and debug the volume information. You can view status of the volume along with the following details: - - - detail - Displays additional information about the bricks. - - - clients - Displays the list of clients connected to the volume. - - - mem - Displays the memory usage and memory pool details of the bricks. - - - inode - Displays the inode tables of the volume. - - - fd - Displays the open fd (file descriptors) tables of the volume. - - - callpool - Displays the pending calls of the volume. - - - To display volume status - - - Display information about a specific volume using the following command: - # gluster volume status [all|VOLNAME [BRICKNAME]] [detail|clients|mem|inode|fd|callpool] - For example, to display information about test-volume: - # gluster volume status test-volume -STATUS OF VOLUME: test-volume -BRICK PORT ONLINE PID --------------------------------------------------------- -arch:/export/1 24009 Y 22445 --------------------------------------------------------- -arch:/export/2 24010 Y 22450 - - - Display information about all volumes using the following command: - # gluster volume status all - - # gluster volume status all -STATUS OF VOLUME: volume-test -BRICK PORT ONLINE PID --------------------------------------------------------- -arch:/export/4 24010 Y 22455 - -STATUS OF VOLUME: test-volume -BRICK PORT ONLINE PID --------------------------------------------------------- -arch:/export/1 24009 Y 22445 --------------------------------------------------------- -arch:/export/2 24010 Y 22450 - - - Display additional information about the bricks using the following command: - # gluster volume status VOLNAME detail - - For example, to display additional information about the bricks of test-volume: - # gluster volume status test-volume details -STATUS OF VOLUME: test-volume -------------------------------------------- -Brick : arch:/export/1 -Port : 24009 -Online : Y -Pid : 16977 -File System : rootfs -Device : rootfs -Mount Options : rw -Disk Space Free : 13.8GB -Total Disk Space : 46.5GB -Inode Size : N/A -Inode Count : N/A -Free Inodes : N/A - -Number of Bricks: 1 -Bricks: -Brick: server:/brick6 - - - Display the list of clients accessing the volumes using the following command: - # gluster volume status VOLNAME clients - - For example, to display the list of clients connected to test-volume: - # gluster volume status test-volume clients -Brick : arch:/export/1 -Clients connected : 2 -Hostname Bytes Read BytesWritten --------- --------- ------------ -127.0.0.1:1013 776 676 -127.0.0.1:1012 50440 51200 - - - Display the memory usage and memory pool details of the bricks using the following command: - # gluster volume status VOLNAME mem - - For example, to display the memory usage and memory pool details of the bricks of test-volume: - Memory status for volume : test-volume ----------------------------------------------- -Brick : arch:/export/1 -Mallinfo --------- -Arena : 434176 -Ordblks : 2 -Smblks : 0 -Hblks : 12 -Hblkhd : 40861696 -Usmblks : 0 -Fsmblks : 0 -Uordblks : 332416 -Fordblks : 101760 -Keepcost : 100400 - -Mempool Stats -------------- -Name HotCount ColdCount PaddedSizeof AllocCount MaxAlloc ----- -------- --------- ------------ ---------- -------- -test-volume-server:fd_t 0 
16384 92 57 5 -test-volume-server:dentry_t 59 965 84 59 59 -test-volume-server:inode_t 60 964 148 60 60 -test-volume-server:rpcsvc_request_t 0 525 6372 351 2 -glusterfs:struct saved_frame 0 4096 124 2 2 -glusterfs:struct rpc_req 0 4096 2236 2 2 -glusterfs:rpcsvc_request_t 1 524 6372 2 1 -glusterfs:call_stub_t 0 1024 1220 288 1 -glusterfs:call_stack_t 0 8192 2084 290 2 -glusterfs:call_frame_t 0 16384 172 1728 6 - - - Display the inode tables of the volume using the following command: - # gluster volume status VOLNAME inode - - For example, to display the inode tables of the test-volume: - # gluster volume status test-volume inode -inode tables for volume test-volume ----------------------------------------------- -Brick : arch:/export/1 -Active inodes: -GFID Lookups Ref IA type ----- ------- --- ------- -6f3fe173-e07a-4209-abb6-484091d75499 1 9 2 -370d35d7-657e-44dc-bac4-d6dd800ec3d3 1 1 2 - -LRU inodes: -GFID Lookups Ref IA type ----- ------- --- ------- -80f98abe-cdcf-4c1d-b917-ae564cf55763 1 0 1 -3a58973d-d549-4ea6-9977-9aa218f233de 1 0 1 -2ce0197d-87a9-451b-9094-9baa38121155 1 0 2 - - - Display the open fd tables of the volume using the following command: - # gluster volume status VOLNAME fd - - For example, to display the open fd tables of the test-volume: - # gluster volume status test-volume fd - -FD tables for volume test-volume ----------------------------------------------- -Brick : arch:/export/1 -Connection 1: -RefCount = 0 MaxFDs = 128 FirstFree = 4 -FD Entry PID RefCount Flags --------- --- -------- ----- -0 26311 1 2 -1 26310 3 2 -2 26310 1 2 -3 26311 3 2 - -Connection 2: -RefCount = 0 MaxFDs = 128 FirstFree = 0 -No open fds - -Connection 3: -RefCount = 0 MaxFDs = 128 FirstFree = 0 -No open fds - - - Display the pending calls of the volume using the following command: - # gluster volume status VOLNAME callpool - - Each call has a call stack containing call frames. - For example, to display the pending calls of test-volume: - # gluster volume status test-volume - -Pending calls for volume test-volume ----------------------------------------------- -Brick : arch:/export/1 -Pending calls: 2 -Call Stack1 - UID : 0 - GID : 0 - PID : 26338 - Unique : 192138 - Frames : 7 - Frame 1 - Ref Count = 1 - Translator = test-volume-server - Completed = No - Frame 2 - Ref Count = 0 - Translator = test-volume-posix - Completed = No - Parent = test-volume-access-control - Wind From = default_fsync - Wind To = FIRST_CHILD(this)->fops->fsync - Frame 3 - Ref Count = 1 - Translator = test-volume-access-control - Completed = No - Parent = repl-locks - Wind From = default_fsync - Wind To = FIRST_CHILD(this)->fops->fsync - Frame 4 - Ref Count = 1 - Translator = test-volume-locks - Completed = No - Parent = test-volume-io-threads - Wind From = iot_fsync_wrapper - Wind To = FIRST_CHILD (this)->fops->fsync - Frame 5 - Ref Count = 1 - Translator = test-volume-io-threads - Completed = No - Parent = test-volume-marker - Wind From = default_fsync - Wind To = FIRST_CHILD(this)->fops->fsync - Frame 6 - Ref Count = 1 - Translator = test-volume-marker - Completed = No - Parent = /export/1 - Wind From = io_stats_fsync - Wind To = FIRST_CHILD(this)->fops->fsync - Frame 7 - Ref Count = 1 - Translator = /export/1 - Completed = No - Parent = test-volume-server - Wind From = server_fsync_resume - Wind To = bound_xl->fops->fsync - - -
-
diff --git a/doc/admin-guide/en-US/admin_setting_volumes.xml b/doc/admin-guide/en-US/admin_setting_volumes.xml deleted file mode 100644 index 6a8468d5f..000000000 --- a/doc/admin-guide/en-US/admin_setting_volumes.xml +++ /dev/null @@ -1,325 +0,0 @@ - - -%BOOK_ENTITIES; -]> - - Setting up GlusterFS Server Volumes - A volume is a logical collection of bricks where each brick is an export directory on a server in the trusted storage pool. Most of the gluster management operations are performed on the volume. - To create a new volume in your storage environment, specify the bricks that comprise the volume. After you have created a new volume, you must start it before attempting to mount it. - - - Volumes of the following types can be created in your storage environment: - - - Distributed - Distributed volumes distributes files throughout the bricks in the volume. You can use distributed volumes where the requirement is to scale storage and the redundancy is either not important or is provided by other hardware/software layers. For more information, see . - - - Replicated – Replicated volumes replicates files across bricks in the volume. You can use replicated volumes in environments where high-availability and high-reliability are critical. For more information, see . - - - Striped – Striped volumes stripes data across bricks in the volume. For best results, you should use striped volumes only in high concurrency environments accessing very large files. For more information, see . - - - Distributed Striped - Distributed striped volumes stripe data across two or more nodes in the cluster. You should use distributed striped volumes where the requirement is to scale storage and in high concurrency environments accessing very large files is critical. For more information, see . - - - Distributed Replicated - Distributed replicated volumes distributes files across replicated bricks in the volume. You can use distributed replicated volumes in environments where the requirement is to scale storage and high-reliability is critical. Distributed replicated volumes also offer improved read performance in most environments. For more information, see . - - - Distributed Striped Replicated – Distributed striped replicated volumes distributes striped data across replicated bricks in the cluster. For best results, you should use distributed striped replicated volumes in highly concurrent environments where parallel access of very large files and performance is critical. In this release, configuration of this volume type is supported only for Map Reduce workloads. For more information, see . - - - - Striped Replicated – Striped replicated volumes stripes data across replicated bricks in the cluster. For best results, you should use striped replicated volumes in highly concurrent environments where there is parallel access of very large files and performance is critical. In this release, configuration of this volume type is supported only for Map Reduce workloads. For more -information, see . - - - - - To create a new volume - - - Create a new volume : - # gluster volume create NEW-VOLNAME [stripe COUNT | replica COUNT] [transport tcp | rdma | tcp, rdma] NEW-BRICK1 NEW-BRICK2 NEW-BRICK3... - For example, to create a volume called test-volume consisting of server3:/exp3 and server4:/exp4: - # gluster volume create test-volume server3:/exp3 server4:/exp4 -Creation of test-volume has been successful -Please start the volume to access data. - - -
- Creating Distributed Volumes
- In a distributed volume, files are spread randomly across the bricks in the volume. Use distributed volumes where you need to scale storage and redundancy is either not important or is provided by other hardware/software layers.
-
- Disk/server failure in distributed volumes can result in a serious loss of data because directory contents are spread randomly across the bricks in the volume.
-
- Illustration of a Distributed Volume - - - - - -
- To create a distributed volume - - - Create a trusted storage pool as described earlier in . - - - Create the distributed volume: - # gluster volume create NEW-VOLNAME [transport tcp | rdma | tcp,rdma] NEW-BRICK... - For example, to create a distributed volume with four storage servers using tcp: - # gluster volume create test-volume server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 -Creation of test-volume has been successful -Please start the volume to access data. - (Optional) You can display the volume information: - # gluster volume info -Volume Name: test-volume -Type: Distribute -Status: Created -Number of Bricks: 4 -Transport-type: tcp -Bricks: -Brick1: server1:/exp1 -Brick2: server2:/exp2 -Brick3: server3:/exp3 -Brick4: server4:/exp4 - For example, to create a distributed volume with four storage servers over InfiniBand: - # gluster volume create test-volume transport rdma server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 -Creation of test-volume has been successful -Please start the volume to access data. - If the transport type is not specified, tcp is used as the default. You can also set additional options if required, such as auth.allow or auth.reject. For more information, see - - Make sure you start your volumes before you try to mount them or else client operations after the mount will hang, see for details. - - - -
-
- Creating Replicated Volumes
- Replicated volumes create copies of files across multiple bricks in the volume. You can use replicated volumes in environments where high-availability and high-reliability are critical.
-
- The number of bricks should be equal to the replica count for a replicated volume.
-To protect against server and disk failures, it is recommended that the bricks of the volume are from different servers.
-
- Illustration of a Replicated Volume - - - - - -
- To create a replicated volume - - - Create a trusted storage pool as described earlier in . - - - Create the replicated volume: - # gluster volume create NEW-VOLNAME [replica COUNT] [transport tcp | rdma tcp,rdma] NEW-BRICK... - For example, to create a replicated volume with two storage servers: - # gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 -Creation of test-volume has been successful -Please start the volume to access data. - If the transport type is not specified, tcp is used as the default. You can also set additional options if required, such as auth.allow or auth.reject. For more information, see - - Make sure you start your volumes before you try to mount them or else client operations after the mount will hang, see for details. - - - -
-
- Creating Striped Volumes
- Striped volumes stripe data across bricks in the volume. For best results, you should use striped volumes only in high concurrency environments accessing very large files.
-
- The number of bricks should be equal to the stripe count for a striped volume.
-
- Illustration of a Striped Volume - - - - - -
- To create a striped volume - - - Create a trusted storage pool as described earlier in . - - - Create the striped volume: - # gluster volume create NEW-VOLNAME [stripe COUNT] [transport tcp | rdma | tcp,rdma] NEW-BRICK... - For example, to create a striped volume across two storage servers: - # gluster volume create test-volume stripe 2 transport tcp server1:/exp1 server2:/exp2 -Creation of test-volume has been successful -Please start the volume to access data. - If the transport type is not specified, tcp is used as the default. You can also set additional options if required, such as auth.allow or auth.reject. For more information, see - - Make sure you start your volumes before you try to mount them or else client operations after the mount will hang, see for details. - - - -
-
- Creating Distributed Striped Volumes
- Distributed striped volumes stripe files across two or more nodes in the cluster. For best results, you should use distributed striped volumes where the requirement is to scale storage and, in high concurrency environments, access to very large files is critical.
-
- The number of bricks should be a multiple of the stripe count for a distributed striped volume.
-
- Illustration of a Distributed Striped Volume - - - - - -
- To create a distributed striped volume - - - Create a trusted storage pool as described earlier in . - - - Create the distributed striped volume: - # gluster volume create NEW-VOLNAME [stripe COUNT] [transport tcp | rdma | tcp,rdma] NEW-BRICK... - For example, to create a distributed striped volume across eight storage servers: - # gluster volume create test-volume stripe 4 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6 server7:/exp7 server8:/exp8 -Creation of test-volume has been successful -Please start the volume to access data. - If the transport type is not specified, tcp is used as the default. You can also set additional options if required, such as auth.allow or auth.reject. For more information, see - - Make sure you start your volumes before you try to mount them or else client operations after the mount will hang, see for details. - - - -
-
- Creating Distributed Replicated Volumes - Distributes files across replicated bricks in the volume. You can use distributed replicated volumes in environments where the requirement is to scale storage and high-reliability is critical. Distributed replicated volumes also offer improved read performance in most environments. - - The number of bricks should be a multiple of the replica count for a distributed replicated volume. Also, the order in which bricks are specified has a great effect on data protection. Each replica_count consecutive bricks in the list you give will form a replica set, with all replica sets combined into a volume-wide distribute set. To make sure that replica-set members are not placed on the same node, list the first brick on every server, then the second brick on every server in the same order, and so on. - -
- Illustration of a Distributed Replicated Volume - - - - - -
- To create a distributed replicated volume - - - Create a trusted storage pool as described earlier in . - - - Create the distributed replicated volume: - # gluster volume create NEW-VOLNAME [replica COUNT] [transport tcp | rdma | tcp,rdma] NEW-BRICK... - For example, four node distributed (replicated) volume with a two-way mirror: - - # gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 -Creation of test-volume has been successful -Please start the volume to access data. - For example, to create a six node distributed (replicated) volume with a two-way mirror: - # gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6 -Creation of test-volume has been successful -Please start the volume to access data. - If the transport type is not specified, tcp is used as the default. You can also set additional options if required, such as auth.allow or auth.reject. For more information, see - - Make sure you start your volumes before you try to mount them or else client operations after the mount will hang, see for details. - - - -
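- To make the brick-ordering note above concrete, here is a hedged example with two hypothetical bricks per server: listing the first brick on every server before the second brick keeps each replica pair on different nodes, so the replica sets become (server1:/brick1, server2:/brick1) and (server1:/brick2, server2:/brick2):
- # gluster volume create test-volume replica 2 transport tcp server1:/brick1 server2:/brick1 server1:/brick2 server2:/brick2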
-
- Creating Distributed Striped Replicated Volumes
- Distributed striped replicated volumes distribute striped data across replicated bricks in the cluster. For best results, you should use distributed striped replicated volumes in highly concurrent environments where parallel access of very large files and performance is critical. In this release, configuration of this volume type is supported only for Map Reduce workloads.
-
- The number of bricks should be a multiple of the stripe count and the replica count for
-a distributed striped replicated volume.
-
-
- To create a distributed striped replicated volume
-
-
-
- Create a trusted storage pool as described earlier in .
-
-
- Create a distributed striped replicated volume using the following command:
- # gluster volume create NEW-VOLNAME [stripe COUNT] [replica COUNT] [transport tcp | rdma | tcp,rdma] NEW-BRICK...
- For example, to create a distributed striped replicated volume across eight storage servers:
-
- # gluster volume create test-volume stripe 2 replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6 server7:/exp7 server8:/exp8
-Creation of test-volume has been successful
-Please start the volume to access data.
- If the transport type is not specified, tcp is used as the default. You can also set additional options if required, such as auth.allow or auth.reject. For more information, see
-
- Make sure you start your volumes before you try to mount them or else client operations after the mount will hang; see for details.
-
-
-
- Creating Striped Replicated Volumes
- Striped replicated volumes stripe data across replicated bricks in the cluster. For best results, you should use striped replicated volumes in highly concurrent environments where there is parallel access of very large files and performance is critical. In this release, configuration of this volume type is supported only for Map Reduce workloads.
-
- The number of bricks should be a multiple of the replica count and the stripe count for a
-striped replicated volume.
-
-
- Illustration of a Striped Replicated Volume - - - - - -
- To create a striped replicated volume - - - - Create a trusted storage pool consisting of the storage servers that will comprise the volume. - For more information, see . - - - Create a striped replicated volume : - # gluster volume create NEW-VOLNAME [stripe COUNT] [replica COUNT] [transport tcp | rdma | tcp,rdma] NEW-BRICK... - For example, to create a striped replicated volume across four storage servers: - - - # gluster volume create test-volume stripe 2 replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 -Creation of test-volume has been successful -Please start the volume to access data. - To create a striped replicated volume across six storage servers: - - # gluster volume create test-volume stripe 3 replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6 -Creation of test-volume has been successful -Please start the volume to access data. - If the transport type is not specified, tcp is used as the default. You can also set additional options if required, such as auth.allow or auth.reject. For more information, see - - Make sure you start your volumes before you try to mount them or else client operations after the mount will hang, see for details. - - - -
-
- Starting Volumes - You must start your volumes before you try to mount them. - To start a volume - - - Start a volume: - # gluster volume start VOLNAME - For example, to start test-volume: - # gluster volume start test-volume -Starting test-volume has been successful - - -
-
diff --git a/doc/admin-guide/en-US/admin_settingup_clients.xml b/doc/admin-guide/en-US/admin_settingup_clients.xml deleted file mode 100644 index 22979acf4..000000000 --- a/doc/admin-guide/en-US/admin_settingup_clients.xml +++ /dev/null @@ -1,511 +0,0 @@ - - -%BOOK_ENTITIES; -]> - - Accessing Data - Setting Up GlusterFS Client - You can access gluster volumes in multiple ways. You can use Gluster Native Client method for high concurrency, performance and transparent failover in GNU/Linux clients. You can also use NFS v3 to access gluster volumes. Extensive testing has be done on GNU/Linux clients and NFS implementation in other operating system, such as FreeBSD, and Mac OS X, as well as Windows 7 (Professional and Up) and Windows Server 2003. Other NFS client implementations may work with gluster NFS server. - You can use CIFS to access volumes when using Microsoft Windows as well as SAMBA clients. For this access method, Samba packages need to be present on the client side. -
- Gluster Native Client - The Gluster Native Client is a FUSE-based client running in user space. Gluster Native Client is the recommended method for accessing volumes when high concurrency and high write performance is required. - This section introduces the Gluster Native Client and explains how to install the software on client machines. This section also describes how to mount volumes on clients (both manually and automatically) and how to verify that the volume has mounted successfully. -
- Installing the Gluster Native Client - Before you begin installing the Gluster Native Client, you need to verify that the FUSE module is loaded on the client and has access to the required modules as follows: - - - Add the FUSE loadable kernel module (LKM) to the Linux kernel: - # modprobe fuse - - - Verify that the FUSE module is loaded: - # dmesg | grep -i fuse - fuse init (API version 7.13) - - -
- Installing on Red Hat Package Manager (RPM) Distributions - To install Gluster Native Client on RPM distribution-based systems - - - Install required prerequisites on the client using the following command: - $ sudo yum -y install openssh-server wget fuse fuse-libs openib libibverbs - - - Ensure that TCP and UDP ports 24007 and 24008 are open on all Gluster servers. Apart from these ports, you need to open one port for each brick starting from port 24009. For example: if you have five bricks, you need to have ports 24009 to 24013 open. - You can use the following chains with iptables: - $ sudo iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 24007:24008 -j ACCEPT - $ sudo iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 24009:24014 -j ACCEPT - - If you already have iptable chains, make sure that the above ACCEPT rules precede the DROP rules. This can be achieved by providing a lower rule number than the DROP rule. - - - - Download the latest glusterfs, glusterfs-fuse, and glusterfs-rdma RPM files to each client. The glusterfs package contains the Gluster Native Client. The glusterfs-fuse package contains the FUSE translator required for mounting on client systems and the glusterfs-rdma packages contain OpenFabrics verbs RDMA module for Infiniband. - You can download the software at . - - - Install Gluster Native Client on the client. - $ sudo rpm -i glusterfs-3.3.0qa30-1.x86_64.rpm - $ sudo rpm -i glusterfs-fuse-3.3.0qa30-1.x86_64.rpm - $ sudo rpm -i glusterfs-rdma-3.3.0qa30-1.x86_64.rpm - - The RDMA module is only required when using Infiniband. - - - -
-
- Installing on Debian-based Distributions
- To install Gluster Native Client on Debian-based distributions
-
-
- Install OpenSSH Server on each client using the following command:
- $ sudo apt-get install openssh-server vim wget
-
-
- Download the latest GlusterFS .deb file and checksum to each client.
- You can download the software at .
-
-
- For each .deb file, get the checksum (using the following command) and compare it against the checksum for that file in the md5sum file.
-
-$ md5sum GlusterFS_DEB_file.deb
- The md5sum of the packages is available at:
-
-
- Uninstall GlusterFS v3.1 (or an earlier version) from the client using the following command:
-
- $ sudo dpkg -r glusterfs
- (Optional) Run $ sudo dpkg --purge glusterfs to purge the configuration files.
-
-
- Install Gluster Native Client on the client using the following command:
-
- $ sudo dpkg -i GlusterFS_DEB_file
- For example:
-
- $ sudo dpkg -i glusterfs-3.3.x.deb
-
-
- Ensure that TCP and UDP ports 24007 and 24008 are open on all Gluster servers. Apart from these ports, you need to open one port for each brick starting from port 24009. For example: if you have five bricks, you need to have ports 24009 to 24013 open.
-
- You can use the following chains with iptables:
-
- $ sudo iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 24007:24008 -j ACCEPT
- $ sudo iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 24009:24014 -j ACCEPT
-
- If you already have iptables chains, make sure that the above ACCEPT rules precede the DROP rules. This can be achieved by providing a lower rule number than the DROP rule.
-
-
-
- Performing a Source Installation
- To build and install Gluster Native Client from the source code
-
-
- Create a new directory using the following commands:
- # mkdir glusterfs
- # cd glusterfs
-
-
- Download the source code.
-
- You can download the source at .
-
-
- Extract the source code using the following command:
-
- # tar -xvzf SOURCE-FILE
-
-
- Run the configuration utility using the following command:
-
- # ./configure
- GlusterFS configure summary
- ==================
- FUSE client : yes
- Infiniband verbs : yes
- epoll IO multiplex : yes
- argp-standalone : no
- fusermount : no
- readline : yes
- The configuration summary shows the components that will be built with Gluster Native Client.
-
-
- Build the Gluster Native Client software using the following commands:
-
- # make
- # make install
-
-
- Verify that the correct version of Gluster Native Client is installed, using the following command:
-
- # glusterfs --version
-
-
-
-
- Mounting Volumes - After installing the Gluster Native Client, you need to mount Gluster volumes to access data. There are two methods you can choose: - - - - - - - - - After mounting a volume, you can test the mounted volume using the procedure described in . - - Server names selected during creation of Volumes should be resolvable in the client machine. You can use appropriate /etc/hosts entries or DNS server to resolve server names to IP addresses. - -
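- As a hedged illustration of the name-resolution note above (the addresses and host names are examples only), the /etc/hosts entries on the client might look like this:
-192.168.1.101 server1
-192.168.1.102 server2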
- Manually Mounting Volumes - To manually mount a Gluster volume - - - To mount a volume, use the following command: - - # mount -t glusterfs HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR - - For example: - - # mount -t glusterfs server1:/test-volume /mnt/glusterfs - - The server specified in the mount command is only used to fetch the gluster configuration volfile describing the volume name. Subsequently, the client will communicate directly with the servers mentioned in the volfile (which might not even include the one used for mount). - - - If you see a usage message like "Usage: mount.glusterfs", the mount point directory probably does not exist yet. Run "mkdir /mnt/glusterfs" before you attempt to run the mount command listed above. - - - - Mounting Options - You can specify the following options when using the mount -t glusterfs command. Note that you need to separate all options with commas. - - - backupvolfile-server=server-name - volfile-max-fetch-attempts=number of attempts - log-level=loglevel - - log-file=logfile - - transport=transport-type - - direct-io-mode=[enable|disable] - - - For example: - - # mount -t glusterfs -o backupvolfile-server=volfile_server2,volfile-max-fetch-attempts=2,log-level=WARNING,log-file=/var/log/gluster.log server1:/test-volume /mnt/glusterfs - If the backupvolfile-server option is added while mounting the fuse client and the first volfile server fails, then the server specified in backupvolfile-server is used as the volfile server to mount the client. - With the volfile-max-fetch-attempts=X option, specify the number of attempts made to fetch volume files while mounting a volume. This option is useful when you mount a server with multiple IP addresses or when round-robin DNS is configured for the server name. -
-
- Automatically Mounting Volumes - You can configure your system to automatically mount the Gluster volume each time your system starts. - The server specified in the mount command is only used to fetch the gluster configuration volfile describing the volume name. Subsequently, the client will communicate directly with the servers mentioned in the volfile (which might not even include the one used for mount). - To automatically mount a Gluster volume - - - To mount a volume, edit the /etc/fstab file and add the following line: - - HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR glusterfs defaults,_netdev 0 0 - For example: - - server1:/test-volume /mnt/glusterfs glusterfs defaults,_netdev 0 0 - - - Mounting Options - You can specify the following options when updating the /etc/fstab file. Note that you need to separate all options with commas. - - - log-level=loglevel - - log-file=logfile - - transport=transport-type - - direct-io-mode=[enable|disable] - - - For example: - - HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR glusterfs defaults,_netdev,log-level=WARNING,log-file=/var/log/gluster.log 0 0 -
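- To verify the new /etc/fstab entry without rebooting, you can ask mount to process the file and then check the mount table. This is only a convenience check, not part of the documented procedure.
- # mount -a
- # mount | grep glusterfs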
-
- Testing Mounted Volumes - To test mounted volumes - - - Use the following command: - - # mount - If the gluster volume was successfully mounted, the output of the mount command on the client will be similar to this example: - - - server1:/test-volume on /mnt/glusterfs type fuse.glusterfs (rw,allow_other,default_permissions,max_read=131072) - - - - - Use the following command: - - # df - - The output of the df command on the client will display the aggregated storage space from all the bricks in a volume, similar to this example: - - # df -h /mnt/glusterfs -Filesystem            Size  Used  Avail Use%  Mounted on -server1:/test-volume   28T   22T   5.4T  82%  /mnt/glusterfs - - - Change to the directory and list the contents by entering the following: - - # cd MOUNTDIR - # ls - - - For example, - # cd /mnt/glusterfs - # ls - -
-
-
-
- NFS - You can use NFS v3 to access gluster volumes. Extensive testing has been done on GNU/Linux clients; NFS implementations in other operating systems, such as FreeBSD and Mac OS X, as well as Windows 7 (Professional and Up), Windows Server 2003, and others, may work with the gluster NFS server implementation. - GlusterFS now includes network lock manager (NLM) v4. NLM enables applications on NFSv3 clients to do record locking on files on the NFS server. It is started automatically whenever the NFS server is run. - You must install the nfs-common package on both servers and clients (only for Debian-based distributions). - This section describes how to use NFS to mount Gluster volumes (both manually and automatically) and how to verify that the volume has been mounted successfully. -
- Using NFS to Mount Volumes - You can use either of the following methods to mount Gluster volumes: - - - - - - - - - Prerequisite: Install nfs-common package on both servers and clients (only for Debian-based distribution), using the following command: - $ sudo aptitude install nfs-common - After mounting a volume, you can test the mounted volume using the procedure described in . -
- Manually Mounting Volumes Using NFS - To manually mount a Gluster volume using NFS - - - To mount a volume, use the following command: - - # mount -t nfs -o vers=3 HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR - - For example: - # mount -t nfs -o vers=3 server1:/test-volume /mnt/glusterfs - - Gluster NFS server does not support UDP. If the NFS client you are using defaults to connecting using UDP, the following message appears: - - requested NFS version or transport protocol is not supported. - - To connect using TCP - - - Add the following option to the mount command: - - -o mountproto=tcp - For example: - - # mount -o mountproto=tcp -t nfs server1:/test-volume /mnt/glusterfs - - - To mount Gluster NFS server from a Solaris client - - - Use the following command: - - # mount -o proto=tcp,vers=3 nfs://HOSTNAME-OR-IPADDRESS:38467/VOLNAME MOUNTDIR - -For example: - # mount -o proto=tcp,vers=3 nfs://server1:38467/test-volume /mnt/glusterfs - - -
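- The vers and mountproto options shown above can also be combined in a single Linux mount command; the server and paths below reuse the example names from this section.
- # mount -t nfs -o vers=3,mountproto=tcp server1:/test-volume /mnt/glusterfs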
-
- Automatically Mounting Volumes Using NFS - You can configure your system to automatically mount Gluster volumes using NFS each time the system starts. - To automatically mount a Gluster volume using NFS - - - To mount a volume, edit the /etc/fstab file and add the following line: - HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR nfs defaults,_netdev,vers=3 0 0 - For example, - server1:/test-volume /mnt/glusterfs nfs defaults,_netdev,vers=3 0 0 - - Gluster NFS server does not support UDP. If the NFS client you are using defaults to connecting using UDP, the following message appears: - requested NFS version or transport protocol is not supported. - - - To connect using TCP - - - Add the following entry in /etc/fstab file : - HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR nfs defaults,_netdev,mountproto=tcp 0 0 - For example, - server1:/test-volume /mnt/glusterfs nfs defaults,_netdev,mountproto=tcp 0 0 - - - To automount NFS mounts - Gluster supports *nix standard method of automounting NFS mounts. Update the /etc/auto.master and /etc/auto.misc and restart the autofs service. After that, whenever a user or process attempts to access the directory it will be mounted in the background. -
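- As an illustration only (the /mnt/auto mount point, map file name, timeout, and map key are assumptions, not values mandated by Gluster), an autofs configuration for the example volume could look like the following. Add this line to /etc/auto.master:
- /mnt/auto  /etc/auto.misc  --timeout=60
- Add this line to /etc/auto.misc so that server1:/test-volume is mounted on /mnt/auto/gluster on first access:
- gluster  -fstype=nfs,vers=3  server1:/test-volume
- Then restart the autofs service:
- # service autofs restart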
-
- Testing Volumes Mounted Using NFS - You can confirm that Gluster directories are mounting successfully. - To test mounted volumes - - - Use the mount command by entering the following: - # mount - For example, the output of the mount command on the client will display an entry like the following: - server1:/test-volume on /mnt/glusterfs type nfs (rw,vers=3,addr=server1) - - - - - Use the df command by entering the following: - # df - For example, the output of df command on the client will display the aggregated storage space from all the bricks in a volume. - # df -h /mnt/glusterfs -Filesystem Size Used Avail Use% Mounted on -server1:/test-volume 28T 22T 5.4T 82% /mnt/glusterfs - - - Change to the directory and list the contents by entering the following: - # cd MOUNTDIR - # ls - For example, - - # cd /mnt/glusterfs - - # ls - - -
-
-
-
- CIFS - You can use CIFS to access volumes from Microsoft Windows as well as Samba clients. For this access method, Samba packages need to be present on the client side. You can export a glusterfs mount point as a Samba share, and then mount it using the CIFS protocol. - This section describes how to mount CIFS shares on Microsoft Windows-based clients (both manually and automatically) and how to verify that the volume has mounted successfully. - - CIFS access using the Mac OS X Finder is not supported; however, you can use the Mac OS X command line to access Gluster volumes using CIFS. -
- Using CIFS to Mount Volumes - You can use either of the following methods to mount Gluster volumes: - - - - - - - - - After mounting a volume, you can test the mounted volume using the procedure described in . - You can also use Samba for exporting Gluster Volumes through CIFS protocol. -
- Exporting Gluster Volumes Through Samba - We recommend using Samba to export Gluster volumes through the CIFS protocol. - To export volumes through the CIFS protocol - - - Mount a Gluster volume. For more information on mounting volumes, see . - - - Set up the Samba configuration to export the mount point of the Gluster volume. - For example, if a Gluster volume is mounted on /mnt/glusterfs, you must edit the smb.conf file to enable exporting this through CIFS. Open the smb.conf file in an editor and add the following lines for a simple configuration: - [glustertest] - - comment = For testing a Gluster volume exported through CIFS - - path = /mnt/glusterfs - - read only = no - - guest ok = yes - - - Save the changes and start the smb service using your system's init scripts (/etc/init.d/smb [re]start), as shown in the example below. - - To be able to mount from any server in the trusted storage pool, you must repeat these steps on each Gluster node. For more advanced configurations, see the Samba documentation. -
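- After editing smb.conf, a quick sanity check before restarting the service can catch syntax mistakes; testparm ships with standard Samba, though the init script name (smb or smbd) varies by distribution.
- # testparm -s | grep -A 4 glustertest
- # /etc/init.d/smb restart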
-
- Manually Mounting Volumes Using CIFS - You can manually mount Gluster volumes using CIFS on Microsoft Windows-based client machines. - To manually mount a Gluster volume using CIFS - - - Using Windows Explorer, choose Tools > Map Network Drive… from the menu. The Map Network Drive window appears. - - - Choose the drive letter using the Drive drop-down list. - - - Click Browse, select the volume to map to the network drive, and click OK. - - - Click Finish. - - - The network drive (mapped to the volume) appears in the Computer window. - Alternatively, to manually mount a Gluster volume using CIFS. - - - Click Start > Run and enter the following: - - \\SERVERNAME\VOLNAME - - For example: - - \\server1\test-volume - - - -
-
- Automatically Mounting Volumes Using CIFS - You can configure your system to automatically mount Gluster volumes using CIFS on Microsoft Windows-based clients each time the system starts. - To automatically mount a Gluster volume using CIFS - The network drive (mapped to the volume) appears in the Computer window and is reconnected each time the system starts. - - - Using Windows Explorer, choose Tools > Map Network Drive… from the menu. The Map Network Drive window appears. - - - Choose the drive letter using the Drive drop-down list. - - - Click Browse, select the volume to map to the network drive, and click OK. - - - Click the Reconnect at logon checkbox. - - - Click Finish. - - -
-
- Testing Volumes Mounted Using CIFS - You can confirm that Gluster directories are mounting successfully by navigating to the directory using Windows Explorer. -
-
-
-
diff --git a/doc/admin-guide/en-US/admin_start_stop_daemon.xml b/doc/admin-guide/en-US/admin_start_stop_daemon.xml deleted file mode 100644 index bdab0b8b6..000000000 --- a/doc/admin-guide/en-US/admin_start_stop_daemon.xml +++ /dev/null @@ -1,56 +0,0 @@ - - -%BOOK_ENTITIES; -]> - - Managing the glusterd Service - After installing GlusterFS, you must start glusterd service. The glusterd service serves as the Gluster elastic volume manager, overseeing glusterfs processes, and co-ordinating dynamic volume operations, such as adding and removing volumes across multiple storage servers non-disruptively. - This section describes how to start the glusterd service in the following ways: - - - - - - - - - - You must start glusterd on all GlusterFS servers. - -
- Starting and Stopping glusterd Manually - This section describes how to start and stop glusterd manually - - - To start glusterd manually, enter the following command: - # /etc/init.d/glusterd start - - - To stop glusterd manually, enter the following command: - # /etc/init.d/glusterd stop - - -
-
- Starting glusterd Automatically - This section describes how to configure the system to automatically start the glusterd service every time the system boots. Use the procedure below that matches your distribution.
- Red Hat-based Systems - To configure Red Hat-based systems to automatically start the glusterd service every time the system boots, enter the following from the command line: - # chkconfig glusterd on -
-
- Debian-based Systems - To configure Debian-based systems to automatically start the glusterd service every time the system boots, enter the following from the command line: - # update-rc.d glusterd defaults -
-
- Systems Other than Red Hat and Debian - To configure systems other than Red Hat or Debian to automatically start the glusterd service every time the system boots, add the following entry to the /etc/rc.local file: - # echo "glusterd" >> /etc/rc.local
-
-
diff --git a/doc/admin-guide/en-US/admin_storage_pools.xml b/doc/admin-guide/en-US/admin_storage_pools.xml deleted file mode 100644 index 87b6320bd..000000000 --- a/doc/admin-guide/en-US/admin_storage_pools.xml +++ /dev/null @@ -1,57 +0,0 @@ - - - - Setting up Trusted Storage Pools - Before you can configure a GlusterFS volume, you must create a trusted storage pool consisting of the storage servers that provide bricks to a volume. - A storage pool is a trusted network of storage servers. When you start the first server, the storage pool consists of that server alone. To add storage servers to the storage pool, use the probe command from a storage server that is already trusted. - - Do not self-probe the first server/localhost. - - The GlusterFS service must be running on all storage servers that you want to add to the storage pool. See for more information.
- Adding Servers to Trusted Storage Pool - To create a trusted storage pool, add servers to the trusted storage pool - - - The hostnames used to create the storage pool must be resolvable by DNS. - To add a server to the storage pool: - # gluster peer probe server - For example, to create a trusted storage pool of four servers, add three servers to the storage pool from server1: - # gluster peer probe server2 -Probe successful - -# gluster peer probe server3 -Probe successful - -# gluster peer probe server4 -Probe successful - - - - Verify the peer status from the first server using the following commands: - # gluster peer status -Number of Peers: 3 - -Hostname: server2 -Uuid: 5e987bda-16dd-43c2-835b-08b7d55e94e5 -State: Peer in Cluster (Connected) - -Hostname: server3 -Uuid: 1e0ca3aa-9ef7-4f66-8f15-cbc348f29ff7 -State: Peer in Cluster (Connected) - -Hostname: server4 -Uuid: 3e0caba-9df7-4f66-8e5d-cbc348f29ff7 -State: Peer in Cluster (Connected) - - -
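- When several servers need to be added, the probes can be looped from the first server; this is just a convenience sketch reusing the example hostnames above.
- # for peer in server2 server3 server4; do gluster peer probe $peer; done
- # gluster peer status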
-
- Removing Servers from the Trusted Storage Pool - To remove a server from the storage pool: - # gluster peer detach server - For example, to remove server4 from the trusted storage pool: - # gluster peer detach server4 -Detach successful -
-
diff --git a/doc/admin-guide/en-US/admin_troubleshooting.xml b/doc/admin-guide/en-US/admin_troubleshooting.xml deleted file mode 100644 index af1259ada..000000000 --- a/doc/admin-guide/en-US/admin_troubleshooting.xml +++ /dev/null @@ -1,518 +0,0 @@ - - - - - Troubleshooting GlusterFS - This section describes how to manage GlusterFS logs and most common troubleshooting scenarios -related to GlusterFS. - -
- Managing GlusterFS Logs - This section describes how to manage GlusterFS logs by performing the following operation: - - - - - Rotating Logs - - - -
- Rotating Logs - Administrators can rotate the log file in a volume, as needed. - - To rotate a log file - - - Rotate the log file using the following command: - - # gluster volume log rotate VOLNAME - For example, to rotate the log file on test-volume: - - # gluster volume log rotate test-volume -log rotate successful - - - When a log file is rotated, the contents of the current log file are moved to log-file-name.epoch-time-stamp. - - - -
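- To rotate the logs of every volume on a node in one pass, the volume names can be read from gluster volume info. This sketch assumes the "Volume Name:" lines of that output identify the volumes; adjust the parsing if your output differs.
- # for vol in $(gluster volume info | awk '/^Volume Name:/ {print $3}'); do gluster volume log rotate $vol; done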
-
-
- Troubleshooting Geo-replication - This section describes the most common troubleshooting scenarios related to GlusterFS Geo-replication. - -
- Locating Log Files - For every Geo-replication session, the following three log files are associated with it (four, if the slave is a gluster volume): - - - - Master-log-file - log file for the process which monitors the Master volume - - - - Slave-log-file - log file for the process which initiates the changes on the slave - - - - Master-gluster-log-file - log file for the maintenance mount point that the Geo-replication module uses to monitor the master volume - - - - Slave-gluster-log-file - the slave's counterpart of the Master-gluster-log-file - - - - Master Log File - - To get the Master-log-file for geo-replication, use the following command: - - gluster volume geo-replication MASTER SLAVE config log-file - - For example: - - # gluster volume geo-replication Volume1 example.com:/data/remote_dir config log-file - Slave Log File - To get the log file for Geo-replication on the slave (glusterd must be running on the slave machine), use the following commands: - - - - On the master, run the following command: - - # gluster volume geo-replication Volume1 example.com:/data/remote_dir config session-owner -5f6e5200-756f-11e0-a1f0-0800200c9a66 - This displays the session owner details. - - - - On the slave, run the following command: - - # gluster volume geo-replication /data/remote_dir config log-file /var/log/gluster/${session-owner}:remote-mirror.log - - - Substitute the session owner details (output of Step 1) into the output of Step 2 to get the location of the log file, as in the sketch below. - - /var/log/gluster/5f6e5200-756f-11e0-a1f0-0800200c9a66:remote-mirror.log - - - -
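- The two steps above can be combined from the master, assuming password-less SSH to the slave (which geo-replication already requires) and the log path pattern shown above; the volume and host names are the examples from this section.
- # owner=$(gluster volume geo-replication Volume1 example.com:/data/remote_dir config session-owner)
- # ssh example.com "ls -l /var/log/gluster/$owner:remote-mirror.log"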
-
- Rotating Geo-replication Logs - Administrators can rotate the log file of a particular master-slave session, as needed. -When you run geo-replication's log-rotate command, the log file -is backed up with the current timestamp suffixed to the file -name and signal is sent to gsyncd to start logging to a new -log file. - To rotate a geo-replication log file - - - Rotate log file for a particular master-slave session using the following command: - - # gluster volume geo-replication master slave log-rotate - - For example, to rotate the log file of master Volume1 and slave example.com:/data/remote_dir -: - - # gluster volume geo-replication Volume1 example.com:/data/remote_dir log rotate -log rotate successful - - - Rotate log file for all sessions for a master volume using the following command: - - # gluster volume geo-replication master log-rotate - - For example, to rotate the log file of master Volume1: - - # gluster volume geo-replication Volume1 log rotate -log rotate successful - - - Rotate log file for all sessions using the following command: - - # gluster volume geo-replication log-rotate - - For example, to rotate the log file for all sessions: - # gluster volume geo-replication log rotate -log rotate successful - - -
-
- Synchronization is not complete - Description: GlusterFS Geo-replication did not synchronize the data completely, but the geo-replication status is still displayed as OK. - - Solution: You can enforce a full sync of the data by erasing the index and restarting GlusterFS Geo-replication. After restarting, GlusterFS Geo-replication begins synchronizing all the data. All files are compared using checksums, which can be a lengthy, high-resource-utilization operation on large data sets. If the error situation persists, contact Red Hat Support. - - For more information about erasing the index, see . -
-
- Issues in Data Synchronization - Description: Geo-replication displays status as OK, but the files do not get synced; only directories and symlinks get synced, with the following error message in the log: - - [2011-05-02 13:42:13.467644] E [master:288:regjob] GMaster: failed to sync ./some_file - Solution: Geo-replication invokes rsync v3.0.0 or higher on the host and the remote machine. Verify that you have installed the required version. -
-
- Geo-replication status displays Faulty very often - Description: Geo-replication displays status as faulty very often with a backtrace similar to -the following: - - 2011-04-28 14:06:18.378859] E [syncdutils:131:log_raise_exception] <top>: FAIL: Traceback (most recent call last): File "/usr/local/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 152, in twraptf(*aa) File "/usr/local/libexec/glusterfs/python/syncdaemon/repce.py", line 118, in listen rid, exc, res = recv(self.inf) File "/usr/local/libexec/glusterfs/python/syncdaemon/repce.py", line 42, in recv return pickle.load(inf) EOFError - Solution: This error indicates that the RPC communication between the master gsyncd module and slave -gsyncd module is broken and this can happen for various reasons. Check if it satisfies all the following -pre-requisites: - - - - Password-less SSH is set up properly between the host and the remote machine. - - - - If FUSE is installed in the machine, because geo-replication module mounts the GlusterFS volume -using FUSE to sync data. - - - - If the Slave is a volume, check if that volume is started. - - - - If the Slave is a plain directory, verify if the directory has been created already with the -required permissions. - - - - If GlusterFS 3.2 or higher is not installed in the default location (in Master) and has been prefixed to be -installed in a custom location, configure the gluster-command for it to point to the exact -location. - - - - If GlusterFS 3.2 or higher is not installed in the default location (in slave) and has been prefixed to be -installed in a custom location, configure the remote-gsyncd-command for it to point to the -exact place where gsyncd is located. - - - -
-
- Intermediate Master goes to Faulty State - Description: In a cascading set-up, the intermediate master goes to faulty state with the following -log: - - raise RuntimeError ("aborting on uuid change from %s to %s" % \ RuntimeError: aborting on uuid change from af07e07c-427f-4586-ab9f- 4bf7d299be81 to de6b5040-8f4e-4575-8831-c4f55bd41154 - Solution: In a cascading set-up the Intermediate master is loyal to the original primary master. The -above log means that the geo-replication module has detected change in primary master. -If this is the desired behavior, delete the config option volume-id in the session initiated from the -intermediate master. - -
-
-
- Troubleshooting POSIX ACLs - This section describes the most common troubleshooting issues related to POSIX ACLs. - -
- setfacl command fails with “setfacl: <file or directory name>: Operation not supported” error - You may see this error when the backend file system on one of the servers is not mounted with the "-o acl" option. This can be confirmed by checking the server's log file for the error message "Posix access control list is not supported". - - Solution: Remount the backend file system with the "-o acl" option. For more information, see . -
-
-
- Troubleshooting Hadoop Compatible Storage - This section describes the most common troubleshooting issues related to Hadoop Compatible -Storage. - - -
- Time Sync - Running a MapReduce job may throw exceptions if the time is out of sync on the hosts in the cluster. - - - Solution: Sync the time on all hosts using the ntpd program. -
-
-
- Troubleshooting NFS - This section describes the most common troubleshooting issues related to NFS . - -
- mount command on NFS client fails with “RPC Error: Program not registered” - Start portmap or rpcbind service on the NFS server. - - This error is encountered when the server has not started correctly. - - On most Linux distributions this is fixed by starting portmap: - - $ /etc/init.d/portmap start - - On some distributions where portmap has been replaced by rpcbind, the following command is -required: - - $ /etc/init.d/rpcbind start - After starting portmap or rpcbind, gluster NFS server needs to be restarted. - -
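- Before retrying the mount, rpcinfo can confirm that the portmapper is answering and that the MOUNT and NFS programs are registered. Restarting glusterd is one way (an assumption, not the only one) to make the gluster NFS server re-register.
- # rpcinfo -p
- # /etc/init.d/glusterd restart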
-
- NFS server start-up fails with “Port is already in use” error in the log file." - Another Gluster NFS server is running on the same machine. - - This error can arise in case there is already a Gluster NFS server running on the same machine. -This situation can be confirmed from the log file, if the following error lines exist: - - [2010-05-26 23:40:49] E [rpc-socket.c:126:rpcsvc_socket_listen] rpc-socket: binding socket failed:Address already in use -[2010-05-26 23:40:49] E [rpc-socket.c:129:rpcsvc_socket_listen] rpc-socket: Port is already in use -[2010-05-26 23:40:49] E [rpcsvc.c:2636:rpcsvc_stage_program_register] rpc-service: could not create listening connection -[2010-05-26 23:40:49] E [rpcsvc.c:2675:rpcsvc_program_register] rpc-service: stage registration of program failed -[2010-05-26 23:40:49] E [rpcsvc.c:2695:rpcsvc_program_register] rpc-service: Program registration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465 -[2010-05-26 23:40:49] E [nfs.c:125:nfs_init_versions] nfs: Program init failed -[2010-05-26 23:40:49] C [nfs.c:531:notify] nfs: Failed to initialize protocols - To resolve this error one of the Gluster NFS servers will have to be shutdown. At this time, -Gluster NFS server does not support running multiple NFS servers on the same machine. - -
-
- mount command fails with “rpc.statd” related error message - If the mount command fails with the following error message: - - mount.nfs: rpc.statd is not running but is required for remote locking. mount.nfs: Either use '-o nolock' to keep locks local, or start statd. - Start rpc.statd. - For NFS clients to mount the NFS server, the rpc.statd service must be running on the clients. Start the rpc.statd service by running the following command: - - $ rpc.statd
-
- mount command takes too long to finish. - Start rpcbind service on the NFS client. - - The problem is that the rpcbind or portmap service is not running on the NFS client. The -resolution for this is to start either of these services by running the following command: - - $ /etc/init.d/portmap start - - On some distributions where portmap has been replaced by rpcbind, the following command is -required: - - $ /etc/init.d/rpcbind start -
-
- NFS server glusterfsd starts but initialization fails with “nfsrpc- service: portmap registration of program failed” error message in the log. - NFS start-up can succeed but the initialization of the NFS service can still fail preventing clients -from accessing the mount points. Such a situation can be confirmed from the following error -messages in the log file: - - [2010-05-26 23:33:47] E [rpcsvc.c:2598:rpcsvc_program_register_portmap] rpc-service: Could notregister with portmap -[2010-05-26 23:33:47] E [rpcsvc.c:2682:rpcsvc_program_register] rpc-service: portmap registration of program failed -[2010-05-26 23:33:47] E [rpcsvc.c:2695:rpcsvc_program_register] rpc-service: Program registration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465 -[2010-05-26 23:33:47] E [nfs.c:125:nfs_init_versions] nfs: Program init failed -[2010-05-26 23:33:47] C [nfs.c:531:notify] nfs: Failed to initialize protocols -[2010-05-26 23:33:49] E [rpcsvc.c:2614:rpcsvc_program_unregister_portmap] rpc-service: Could not unregister with portmap -[2010-05-26 23:33:49] E [rpcsvc.c:2731:rpcsvc_program_unregister] rpc-service: portmap unregistration of program failed -[2010-05-26 23:33:49] E [rpcsvc.c:2744:rpcsvc_program_unregister] rpc-service: Program unregistration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465 - - - Start portmap or rpcbind service on the NFS server. - - On most Linux distributions, portmap can be started using the following command: - - $ /etc/init.d/portmap start - On some distributions where portmap has been replaced by rpcbind, run the following command: - - $ /etc/init.d/rpcbind start - After starting portmap or rpcbind, gluster NFS server needs to be restarted. - - - - Stop another NFS server running on the same machine. - - Such an error is also seen when there is another NFS server running on the same machine but it is -not the Gluster NFS server. On Linux systems, this could be the kernel NFS server. Resolution -involves stopping the other NFS server or not running the Gluster NFS server on the machine. -Before stopping the kernel NFS server, ensure that no critical service depends on access to that -NFS server's exports. - - On Linux, kernel NFS servers can be stopped by using either of the following commands -depending on the distribution in use: - - $ /etc/init.d/nfs-kernel-server stop - - $ /etc/init.d/nfs stop - - - Restart Gluster NFS server. - - - -
-
- mount command fails with NFS server failed error. - mount command fails with the following error: - - mount: mount to NFS server '10.1.10.11' failed: timed out (retrying). - Perform one of the following to resolve this issue: - - - - Disable name lookup requests from the NFS server to a DNS server. - - The NFS server attempts to authenticate NFS clients by performing a reverse DNS lookup to match hostnames in the volume file with the client IP addresses. There can be a situation where the NFS server either is not able to connect to the DNS server or the DNS server is taking too long to respond to DNS requests. These delays can result in delayed replies from the NFS server to the NFS client, resulting in the timeout error seen above. - - The NFS server provides a work-around that disables DNS requests, instead relying only on the client IP addresses for authentication. The following option can be added for successful mounting in such situations: - - option rpc-auth.addr.namelookup off - - Note: Remember that disabling name lookup forces authentication of clients to use only IP addresses, and if the authentication rules in the volume file use hostnames, those authentication rules will fail and disallow mounting for those clients. - - - or - - - The NFS version used by the NFS client is other than version 3. - - Gluster NFS server supports version 3 of the NFS protocol. In recent Linux kernels, the default NFS version has been changed from 3 to 4. It is possible that the client machine is unable to connect to the Gluster NFS server because it is using version 4 messages which are not understood by the Gluster NFS server. The timeout can be resolved by forcing the NFS client to use version 3. The vers option to the mount command is used for this purpose: - - $ mount nfsserver:export -o vers=3 mount-point - - -
-
- showmount fails with clnt_create: RPC: Unable to receive - Check your firewall settings to open port 111 for portmap requests/replies, as well as the ports used for Gluster NFS server requests/replies. The Gluster NFS server operates over the following port numbers: 38465, 38466, and 38467, as in the sketch below. - - For more information, see . -
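- For an iptables-based firewall, rules along the following lines open the required ports; the chain and rule placement are assumptions, so adapt them to your firewall layout.
- # iptables -A INPUT -m state --state NEW -p tcp --dport 111 -j ACCEPT
- # iptables -A INPUT -m state --state NEW -p udp --dport 111 -j ACCEPT
- # iptables -A INPUT -m state --state NEW -p tcp --dport 38465:38467 -j ACCEPT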
-
- Application fails with "Invalid argument" or "Value too large for defined data type" error. - These two errors generally happen for 32-bit NFS clients or applications that do not support 64-bit inode numbers or large files. Use the following option from the CLI to make Gluster NFS return 32-bit inode numbers instead: -nfs.enable-ino32 <on|off> - - Applications that will benefit are those that were either: - - - - built 32-bit and run on 32-bit machines, such that they do not support large files by default - - - built 32-bit on 64-bit systems - - - - This option is disabled by default, so NFS returns 64-bit inode numbers by default. - - Applications that can be rebuilt from source should be rebuilt using the following flag with gcc: - -D_FILE_OFFSET_BITS=64 -
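- Assuming the option is applied per volume with gluster volume set, as other nfs.* options are (an assumption; check the command reference for your release), enabling it for the example volume looks like this:
- # gluster volume set test-volume nfs.enable-ino32 on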
-
-
- Troubleshooting File Locks - In GlusterFS 3.3 you can use statedump command to list the locks held on files. The statedump output also provides information on each lock with its range, basename, PID of the application holding the lock, and so on. You can analyze the output to know about the locks whose owner/application is no longer running or interested in that lock. After ensuring that the no application is using the file, you can clear the lock using the following clear lock command: - # gluster volume clear-locks VOLNAME path kind {blocked | granted | all}{inode [range] | entry [basename] | posix [range]} - For more information on performing statedump, see - To identify locked file and clear locks - - - Perform statedump on the volume to view the files that are locked using the following command: - # gluster volume statedump VOLNAME inode - For example, to display statedump of test-volume: - # gluster volume statedump test-volume -Volume statedump successful - The statedump files are created on the brick servers in the /tmp directory or in the directory set using server.statedump-path volume option. The naming convention of the dump file is <brick-path>.<brick-pid>.dump. - The following are the sample contents of the statedump file. It indicates that GlusterFS has entered into a state where there is an entry lock (entrylk) and an inode lock (inodelk). Ensure that those are stale locks and no resources own them. - [xlator.features.locks.vol-locks.inode] -path=/ -mandatory=0 -entrylk-count=1 -lock-dump.domain.domain=vol-replicate-0 -xlator.feature.locks.lock-dump.domain.entrylk.entrylk[0](ACTIVE)=type=ENTRYLK_WRLCK on basename=file1, pid = 714782904, owner=ffffff2a3c7f0000, transport=0x20e0670, , granted at Mon Feb 27 16:01:01 2012 - -conn.2.bound_xl./gfs/brick1.hashsize=14057 -conn.2.bound_xl./gfs/brick1.name=/gfs/brick1/inode -conn.2.bound_xl./gfs/brick1.lru_limit=16384 -conn.2.bound_xl./gfs/brick1.active_size=2 -conn.2.bound_xl./gfs/brick1.lru_size=0 -conn.2.bound_xl./gfs/brick1.purge_size=0 - -[conn.2.bound_xl./gfs/brick1.active.1] -gfid=538a3d4a-01b0-4d03-9dc9-843cd8704d07 -nlookup=1 -ref=2 -ia_type=1 -[xlator.features.locks.vol-locks.inode] -path=/file1 -mandatory=0 -inodelk-count=1 -lock-dump.domain.domain=vol-replicate-0 -inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=0, pid = 714787072, owner=00ffff2a3c7f0000, transport=0x20e0670, , granted at Mon Feb 27 16:01:01 2012 - - - Clear the lock using the following command: - # gluster volume clear-locks VOLNAME path kind granted entry basename - For example, to clear the entry lock on file1 of test-volume: - - # gluster volume clear-locks test-volume / kind granted entry file1 -Volume clear-locks successful -vol-locks: entry blocked locks=0 granted locks=1 - - - Clear the inode lock using the following command: - # gluster volume clear-locks VOLNAME path kind granted inode range - For example, to clear the inode lock on file1 of test-volume: - - # gluster volume clear-locks test-volume /file1 kind granted inode 0,0-0 -Volume clear-locks successful -vol-locks: inode blocked locks=0 granted locks=1 - You can perform statedump on test-volume again to verify that the above inode and entry locks are cleared. - - -
-
diff --git a/doc/admin-guide/en-US/gfs_introduction.xml b/doc/admin-guide/en-US/gfs_introduction.xml deleted file mode 100644 index 5fd887305..000000000 --- a/doc/admin-guide/en-US/gfs_introduction.xml +++ /dev/null @@ -1,54 +0,0 @@ - - - - Introducing Gluster File System - GlusterFS is an open source, clustered file system capable of scaling to several petabytes and handling thousands of clients. GlusterFS can be flexibly combined with commodity physical, virtual, and cloud resources to deliver highly available and performant enterprise storage at a fraction of the cost of traditional solutions. - GlusterFS clusters together storage building blocks over Infiniband RDMA and/or TCP/IP interconnect, aggregating disk and memory resources and managing data in a single global namespace. GlusterFS is based on a stackable user space design, delivering exceptional performance for diverse workloads. - -
- Virtualized Cloud Environments - - - Virtualized Cloud Environments - - - - - -
- GlusterFS is designed for today's high-performance, virtualized cloud environments. Unlike traditional data centers, cloud environments require multi-tenancy along with the ability to grow or shrink resources on demand. Enterprises can scale capacity, performance, and availability on demand, with no vendor lock-in, across on-premise, public cloud, and hybrid environments. - GlusterFS is in production at thousands of enterprises spanning media, healthcare, government, education, web 2.0, and financial services. The following table lists the commercial offerings and its documentation location: - - - - - - - - Product - Documentation Location - - - - - Red Hat Storage Software Appliance - - - - - - Red Hat Virtual Storage Appliance - - - - - - Red Hat Storage - - - - - - - -
diff --git a/doc/admin-guide/en-US/glossary.xml b/doc/admin-guide/en-US/glossary.xml deleted file mode 100644 index a8544b8cd..000000000 --- a/doc/admin-guide/en-US/glossary.xml +++ /dev/null @@ -1,126 +0,0 @@ - - - - Glossary - - - Brick - - A Brick is the GlusterFS basic unit of storage, represented by an export directory on a server in the trusted storage pool. A Brick is expressed by combining a server with an export directory in the following format: - SERVER:EXPORT - For example: - myhostname:/exports/myexportdir/ - - - - Cluster - - A cluster is a group of linked computers, working together closely thus in many respects forming a single computer. - - - - Distributed File System - - A file system that allows multiple clients to concurrently access data over a computer network. - - - - Filesystem - - A method of storing and organizing computer files and their data. Essentially, it organizes these files into a database for the storage, organization, manipulation, and retrieval by the computer's operating system. - Source: Wikipedia - - - - FUSE - - Filesystem in Userspace (FUSE) is a loadable kernel module for Unix-like computer operating systems that lets non-privileged users create their own file systems without editing kernel code. This is achieved by running file system code in user space while the FUSE module provides only a "bridge" to the actual kernel interfaces. - Source: Wikipedia - - - - Geo-Replication - - Geo-replication provides a continuous, asynchronous, and incremental replication service from site to another over Local Area Networks (LAN), Wide Area Network (WAN), and across the Internet. - - - - glusterd - - The Gluster management daemon that needs to run on all servers in the trusted storage pool. - - - - Metadata - - Metadata is data providing information about one or more other pieces of data. - - - - Namespace - - Namespace is an abstract container or environment created to hold a logical grouping of unique identifiers or symbols. Each Gluster volume exposes a single namespace as a POSIX mount point that contains every file in the cluster. - - - - Open Source - - Open source describes practices in production and development that promote access to the end product's source materials. Some consider open source a philosophy, others consider it a pragmatic methodology. - Before the term open source became widely adopted, developers and producers used a variety of phrases to describe the concept; open source gained hold with the rise of the Internet, and the attendant need for massive retooling of the computing source code. - Opening the source code enabled a self-enhancing diversity of production models, communication paths, and interactive communities. Subsequently, a new, three-word phrase "open source software" was born to describe the environment that the new copyright, licensing, domain, and consumer issues created. - Source: Wikipedia - - - - Petabyte - - A petabyte (derived from the SI prefix peta- ) is a unit of information equal to one quadrillion (short scale) bytes, or 1000 terabytes. The unit symbol for the petabyte is PB. The prefix peta- (P) indicates a power of 1000: - 1 PB = 1,000,000,000,000,000 B = 10005 B = 1015 B. - The term "pebibyte" (PiB), using a binary prefix, is used for the corresponding power of 1024. 
- Source: Wikipedia - - - - POSIX - - Portable Operating System Interface (for Unix) is the name of a family of related standards specified by the IEEE to define the application programming interface (API), along with shell and utilities interfaces for software compatible with variants of the Unix operating system. Gluster exports a fully POSIX compliant file system. - - - - RAID - - Redundant Array of Inexpensive Disks (RAID) is a technology that provides increased storage reliability through redundancy, combining multiple low-cost, less-reliable disk drives components into a logical unit where all drives in the array are interdependent. - - - - RRDNS - - Round Robin Domain Name Service (RRDNS) is a method to distribute load across application servers. RRDNS is implemented by creating multiple A records with the same name and different IP addresses in the zone file of a DNS server. - - - - Trusted Storage Pool - - A storage pool is a trusted network of storage servers. When you start the first server, the storage pool consists of that server alone. - - - - Userspace - - Applications running in user space don’t directly interact with hardware, instead using the kernel to moderate access. Userspace applications are generally more portable than applications in kernel space. Gluster is a user space application. - - - - Volfile - - Volfile is a configuration file used by glusterfs process. Volfile will be usually located at /var/lib/glusterd/vols/VOLNAME. - - - - Volume - - A volume is a logical collection of bricks. Most of the gluster management operations happen on the volume. - - - - diff --git a/doc/admin-guide/publican.cfg b/doc/admin-guide/publican.cfg deleted file mode 100644 index e42fa1b3d..000000000 --- a/doc/admin-guide/publican.cfg +++ /dev/null @@ -1,12 +0,0 @@ -# Config::Simple 4.59 -# Thu Apr 5 11:09:15 2012 - -xml_lang: "en-US" -type: Book -brand: Gluster_Brand -prod_url: http://www.gluster.org -doc_url: http://www.gluster.com/community/documentation/index.php/Main_Page -condition: gfs -show_remarks: 1 - - diff --git a/doc/legacy/Makefile.am b/doc/legacy/Makefile.am new file mode 100644 index 000000000..b2caabaa2 --- /dev/null +++ b/doc/legacy/Makefile.am @@ -0,0 +1,3 @@ +info_TEXINFOS = user-guide.texi +CLEANFILES = *~ +DISTCLEANFILES = .deps/*.P *.info *vti diff --git a/doc/legacy/advanced-stripe.odg b/doc/legacy/advanced-stripe.odg new file mode 100644 index 000000000..7686d7091 Binary files /dev/null and b/doc/legacy/advanced-stripe.odg differ diff --git a/doc/legacy/advanced-stripe.pdf b/doc/legacy/advanced-stripe.pdf new file mode 100644 index 000000000..ec8b03dcf Binary files /dev/null and b/doc/legacy/advanced-stripe.pdf differ diff --git a/doc/legacy/colonO-icon.jpg b/doc/legacy/colonO-icon.jpg new file mode 100644 index 000000000..3e66f7a27 Binary files /dev/null and b/doc/legacy/colonO-icon.jpg differ diff --git a/doc/legacy/docbook/Administration_Guide.ent b/doc/legacy/docbook/Administration_Guide.ent new file mode 100644 index 000000000..3381b2bfe --- /dev/null +++ b/doc/legacy/docbook/Administration_Guide.ent @@ -0,0 +1,4 @@ + + + + diff --git a/doc/legacy/docbook/Administration_Guide.xml b/doc/legacy/docbook/Administration_Guide.xml new file mode 100644 index 000000000..483855b1a --- /dev/null +++ b/doc/legacy/docbook/Administration_Guide.xml @@ -0,0 +1,27 @@ + + +%BOOK_ENTITIES; +]> + + + + + + + + + + + + + + + + + + + + + + diff --git a/doc/legacy/docbook/Author_Group.xml b/doc/legacy/docbook/Author_Group.xml new file mode 100644 index 000000000..f3fa31740 
--- /dev/null +++ b/doc/legacy/docbook/Author_Group.xml @@ -0,0 +1,17 @@ + + +%BOOK_ENTITIES; +]> + + + Divya + Muntimadugu + + Red Hat + Engineering Content Services + + divya@redhat.com + + + diff --git a/doc/legacy/docbook/Book_Info.xml b/doc/legacy/docbook/Book_Info.xml new file mode 100644 index 000000000..6be6a7816 --- /dev/null +++ b/doc/legacy/docbook/Book_Info.xml @@ -0,0 +1,28 @@ + + +%BOOK_ENTITIES; +]> + + Administration Guide + Using Gluster File System Beta 3 + Gluster File System + 3.3 + 1 + 1 + + + This guide describes Gluster File System (GlusterFS) and provides information on how to configure, operate, and manage GlusterFS. + + + + + + + + + + + + + diff --git a/doc/legacy/docbook/Chapter.xml b/doc/legacy/docbook/Chapter.xml new file mode 100644 index 000000000..4a1cef872 --- /dev/null +++ b/doc/legacy/docbook/Chapter.xml @@ -0,0 +1,33 @@ + + +%BOOK_ENTITIES; +]> + + Test Chapter + + This is a test paragraph + +
+ Test Section 1 + + This is a test paragraph in a section + +
+ +
+ Test Section 2 + + This is a test paragraph in Section 2 + + + + listitem text + + + + +
+ +
+ diff --git a/doc/legacy/docbook/Preface.xml b/doc/legacy/docbook/Preface.xml new file mode 100644 index 000000000..320311906 --- /dev/null +++ b/doc/legacy/docbook/Preface.xml @@ -0,0 +1,24 @@ + + + +%BOOK_ENTITIES; +]> + + Preface + This guide describes how to configure, operate, and manage Gluster File System (GlusterFS). +
+ Audience + This guide is intended for Systems Administrators interested in configuring and managing GlusterFS. + This guide assumes that you are familiar with the Linux operating system, concepts of File System, GlusterFS concepts, and GlusterFS Installation +
+
+ License + The License information is available at . +
+ + + + + +
diff --git a/doc/legacy/docbook/Revision_History.xml b/doc/legacy/docbook/Revision_History.xml new file mode 100644 index 000000000..09320821f --- /dev/null +++ b/doc/legacy/docbook/Revision_History.xml @@ -0,0 +1,27 @@ + + +%BOOK_ENTITIES; +]> + + Revision History + + + + 1-0 + Thu Apr 5 2012 + + Divya + Muntimadugu + divya@redhat.com + + + + Draft + + + + + + + diff --git a/doc/legacy/docbook/admin_ACLs.xml b/doc/legacy/docbook/admin_ACLs.xml new file mode 100644 index 000000000..156e52c17 --- /dev/null +++ b/doc/legacy/docbook/admin_ACLs.xml @@ -0,0 +1,206 @@ + + + + POSIX Access Control Lists + POSIX Access Control Lists (ACLs) allows you to assign different permissions for different users or +groups even though they do not correspond to the original owner or the owning group. + + For example: User john creates a file but does not want to allow anyone to do anything with this +file, except another user, antony (even though there are other users that belong to the group john). + + This means, in addition to the file owner, the file group, and others, additional users and groups can +be granted or denied access by using POSIX ACLs. + +
+ Activating POSIX ACLs Support + To use POSIX ACLs for a file or directory, the partition of the file or directory must be mounted with +POSIX ACLs support. + +
+ Activating POSIX ACLs Support on Server + To mount the backend export directories with POSIX ACLs support, use the following command: + + # mount -o acl device-name partition + + For example: + + # mount -o acl /dev/sda1 /export1 + Alternatively, if the partition is listed in the /etc/fstab file, add the following entry for the partition to include the POSIX ACLs option: + + LABEL=/work /export1 ext3 rw,acl 1 4
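+ To confirm that the export now has ACL support, check the mount table; an already-mounted file system can usually be remounted in place, although remount behaviour is a property of the underlying file system rather than of Gluster.
+ # mount | grep export1
+ # mount -o remount,acl /export1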
+
+ Activating POSIX ACLs Support on Client + To mount the glusterfs volumes with POSIX ACLs support, use the following command: + + # mount -t glusterfs -o acl servername:volume-id mount-point + + For example: + + # mount -t glusterfs -o acl 198.192.198.234:glustervolume /mnt/gluster +
+
+
+ Setting POSIX ACLs + You can set two types of POSIX ACLs: access ACLs and default ACLs. You can use access ACLs to grant permission for a specific file or directory. You can use default ACLs only on a directory, but if a file inside that directory does not have an ACL, it inherits the permissions of the default ACLs of the directory. + + You can set ACLs per user, per group, for users not in the user group for the file, and via the effective rights mask. +
+ Setting Access ACLs + You can apply access ACLs to grant permission for both files and directories. + + To set or modify Access ACLs + + You can set or modify access ACLs use the following command: + + # setfacl –m entry type file + The ACL entry types are the POSIX ACLs representations of owner, group, and other. + + Permissions must be a combination of the characters r (read), w (write), and x (execute). You must +specify the ACL entry in the following format and can specify multiple entry types separated by +commas. + + + + + + + + ACL Entry + Description + + + + + u:uid:<permission> + Sets the access ACLs for a user. You can specify user name or UID + + + g:gid:<permission> + Sets the access ACLs for a group. You can specify group name or GID. + + + m:<permission> + Sets the effective rights mask. The mask is the combination of all access permissions of the owning group and all of the user and group entries. + + + o:<permission> + Sets the access ACLs for users other than the ones in the group for the file. + + + + + If a file or directory already has an POSIX ACLs, and the setfacl command is used, the additional +permissions are added to the existing POSIX ACLs or the existing rule is modified. + + For example, to give read and write permissions to user antony: + + # setfacl -m u:antony:rw /mnt/gluster/data/testfile +
+
+ Setting Default ACLs + You can apply default ACLs only to directories. They determine the permissions that a file system object inherits from its parent directory when it is created. + + To set default ACLs + + You can set default ACLs for directories using the following command: + + # setfacl -d -m entry-type directory + + For example, to set the default ACLs for the /data directory to read for users not in the user group: + + # setfacl -d -m o::r /mnt/gluster/data + + An access ACL set for an individual file can override the default ACL permissions. + + + Effects of a Default ACL + The following are the ways in which the permissions of a directory's default ACLs are passed to the files and subdirectories in it: + + + + A subdirectory inherits the default ACLs of the parent directory both as its default ACLs and as its access ACLs. + + + + A file inherits the default ACLs as its access ACLs. + + +
+
+
+ Retrieving POSIX ACLs + You can view the existing POSIX ACLs for a file or directory. + + To view existing POSIX ACLs + + + View the existing access ACLs of a file using the following command: + + # getfacl path/filename + + For example, to view the existing POSIX ACLs for sample.jpg + + # getfacl /mnt/gluster/data/test/sample.jpg +# owner: antony +# group: antony +user::rw- +group::rw- +other::r-- + + + View the default ACLs of a directory using the following command: + + # getfacl directory name + For example, to view the existing ACLs for /data/doc + + # getfacl /mnt/gluster/data/doc +# owner: antony +# group: antony +user::rw- +user:john:r-- +group::r-- +mask::r-- +other::r-- +default:user::rwx +default:user:antony:rwx +default:group::r-x +default:mask::rwx +default:other::r-x + + +
+
+ Removing POSIX ACLs + To remove all the permissions for a user, groups, or others, use the following command: + + # setfacl -x ACL entry type file + For example, to remove all permissions from the user antony: + + # setfacl -x u:antony /mnt/gluster/data/test-file +
+
+ Samba and ACLs + If you are using Samba to access a GlusterFS FUSE mount, then POSIX ACLs are enabled by default. Samba is compiled with the --with-acl-support option, so no special flags are required when accessing or mounting a Samba share. +
+
+ NFS and ACLs + Currently, ACL configuration through NFS is not supported; that is, the setfacl and getfacl commands do not work over NFS. However, ACL permissions set using the Gluster Native Client apply on NFS mounts. +
+
diff --git a/doc/legacy/docbook/admin_Hadoop.xml b/doc/legacy/docbook/admin_Hadoop.xml new file mode 100644 index 000000000..08bac8961 --- /dev/null +++ b/doc/legacy/docbook/admin_Hadoop.xml @@ -0,0 +1,244 @@ + + +%BOOK_ENTITIES; +]> + + Managing Hadoop Compatible Storage + GlusterFS provides compatibility for Apache Hadoop and it uses the standard file system +APIs available in Hadoop to provide a new storage option for Hadoop deployments. Existing +MapReduce based applications can use GlusterFS seamlessly. This new functionality opens up data +within Hadoop deployments to any file-based or object-based application. + + +
+ Architecture Overview + The following diagram illustrates Hadoop integration with GlusterFS: + + + + + + +
+
+ Advantages + +The following are the advantages of Hadoop Compatible Storage with GlusterFS: + + + + + + Provides simultaneous file-based and object-based access within Hadoop. + + + + Eliminates the centralized metadata server. + + + + Provides compatibility with MapReduce applications and rewrite is not required. + + + + Provides a fault tolerant file system. + + + +
+
+ Preparing to Install Hadoop Compatible Storage + This section provides information on pre-requisites and list of dependencies that will be installed +during installation of Hadoop compatible storage. + + +
+ Pre-requisites + The following are the pre-requisites to install Hadoop Compatible +Storage : + + + + + Hadoop 0.20.2 is installed, configured, and is running on all the machines in the cluster. + + + + Java Runtime Environment + + + + Maven (mandatory only if you are building the plugin from the source) + + + + JDK (mandatory only if you are building the plugin from the source) + + + + getfattr +- command line utility + + +
+
+
+ Installing, and Configuring Hadoop Compatible Storage + This section describes how to install and configure Hadoop Compatible Storage in your storage +environment and verify that it is functioning correctly. + + + + To install and configure Hadoop compatible storage: + + Download glusterfs-hadoop-0.20.2-0.1.x86_64.rpm file to each server on your cluster. You can download the file from . + + + + + To install Hadoop Compatible Storage on all servers in your cluster, run the following command: + + # rpm –ivh --nodeps glusterfs-hadoop-0.20.2-0.1.x86_64.rpm + + The following files will be extracted: + + + + /usr/local/lib/glusterfs-Hadoop-version-gluster_plugin_version.jar + + + /usr/local/lib/conf/core-site.xml + + + + + (Optional) To install Hadoop Compatible Storage in a different location, run the following +command: + + # rpm –ivh --nodeps –prefix /usr/local/glusterfs/hadoop glusterfs-hadoop- 0.20.2-0.1.x86_64.rpm + + + + Edit the conf/core-site.xml file. The following is the sample conf/core-site.xml file: + + <configuration> + <property> + <name>fs.glusterfs.impl</name> + <value>org.apache.hadoop.fs.glusterfs.Gluster FileSystem</value> +</property> + +<property> + <name>fs.default.name</name> + <value>glusterfs://fedora1:9000</value> +</property> + +<property> + <name>fs.glusterfs.volname</name> + <value>hadoopvol</value> +</property> + +<property> + <name>fs.glusterfs.mount</name> + <value>/mnt/glusterfs</value> +</property> + +<property> + <name>fs.glusterfs.server</name> + <value>fedora2</value> +</property> + +<property> + <name>quick.slave.io</name> + <value>Off</value> +</property> +</configuration> + + The following are the configurable fields: + + + + + + + + + Property Name + Default Value + Description + + + + + fs.default.name + glusterfs://fedora1:9000 + Any hostname in the cluster as the server and any port number. + + + fs.glusterfs.volname + hadoopvol + GlusterFS volume to mount. + + + fs.glusterfs.mount + /mnt/glusterfs + The directory used to fuse mount the volume. + + + fs.glusterfs.server + fedora2 + Any hostname or IP address on the cluster except the client/master. + + + quick.slave.io + Off + Performance tunable option. If this option is set to On, the plugin will try to perform I/O directly from the disk file system (like ext3 or ext4) the file resides on. Hence read performance will improve and job would run faster. + This option is not tested widely + + + + + + + + Create a soft link in Hadoop’s library and configuration directory for the downloaded files (in +Step 3) using the following commands: + + # ln -s <target location> <source location> + + For example, + + # ln –s /usr/local/lib/glusterfs-0.20.2-0.1.jar $HADOOP_HOME/lib/glusterfs-0.20.2-0.1.jar + + # ln –s /usr/local/lib/conf/core-site.xml $HADOOP_HOME/conf/core-site.xml + + + (Optional) You can run the following command on Hadoop master to build the plugin and deploy +it along with core-site.xml file, instead of repeating the above steps: + + # build-deploy-jar.py -d $HADOOP_HOME -c + + +
+
+ Starting and Stopping the Hadoop MapReduce Daemon + To start and stop MapReduce daemon + + + To start MapReduce daemon manually, enter the following command: + + # $HADOOP_HOME/bin/start-mapred.sh + + + + To stop MapReduce daemon manually, enter the following command: + + # $HADOOP_HOME/bin/stop-mapred.sh + + + + You must start Hadoop MapReduce daemon on all servers. + + +
+
diff --git a/doc/legacy/docbook/admin_UFO.xml b/doc/legacy/docbook/admin_UFO.xml new file mode 100644 index 000000000..03be14dc9 --- /dev/null +++ b/doc/legacy/docbook/admin_UFO.xml @@ -0,0 +1,1588 @@ + + +%BOOK_ENTITIES; +]> + + Managing Unified File and Object Storage + Unified File and Object Storage (UFO) unifies NAS and object storage technology. It +provides a system for data storage that enables users to access the same data, both as an object and as a +file, thus simplifying management and controlling storage costs. + + + Unified File and Object Storage is built upon Openstack's Object Storage Swift. Open Stack Object Storage allows users to store and retrieve files and content through a simple Web Service (REST: Representational State Transfer) interface as objects and GlusterFS, allows users to store and retrieve files using Native Fuse and NFS mounts. It uses GlusterFS as a backend file system for Open Stack Swift. It also leverages on Open Stack Swift's web interface for storing and retrieving files over the web combined with GlusterFS features like scalability and high availability, replication, elastic volume management for data management at disk level. + Unified File and Object Storage technology enables enterprises to adopt and deploy +cloud storage solutions. It allows users to access and modify data as objects from a +REST interface along with the ability to access and modify files from NAS interfaces including NFS +and CIFS. In addition to decreasing cost and making it faster and easier to access object data, +it also delivers massive scalability, high availability and replication of object storage. +Infrastructure as a Service (IaaS) providers can utilize GlusterFS Unified File and Object Storage technology to enable their own cloud +storage service. Enterprises can use this technology to accelerate the process of preparing file-based +applications for the cloud and simplify new application development for cloud computing +environments. + + + OpenStack Object Storage is scalable object storage system and it is not a traditional file system. You will not be able to mount this system like traditional SAN or NAS +volumes and perform POSIX compliant operations. +
+ Unified File and Object Storage Architecture + + + + + +
+
+ Components of Object Storage + The major components of Object Storage are: + + Proxy Server + + + All REST requests to the UFO are routed through the Proxy Server. + + + + Objects and Containers + An object is the basic storage entity and any optional metadata that represents the data +you store. When you upload data, the data is stored as-is (with no compression or encryption). + + + A container is a storage compartment for your data and provides a way for you to organize +your data. Containers can be visualized as directories in a Linux system. Data must be stored in a container and hence objects are created within a container. + + + It implements objects as files and directories under the container. The object name is a '/' separated path and UFO maps it to directories until the last name in the path, which is marked as a file. With this approach, objects can be accessed as files and directories from native GlusterFS (FUSE) or NFS mounts by providing the '/' separated path. + Accounts and Account Servers + The OpenStack Object Storage system is designed to be used by many different storage +consumers. Each user is associated with one or more accounts and must identify themselves using an authentication system. While authenticating, users must provide the name of the account for which the authentication is requested. + + + UFO implements accounts as GlusterFS volumes. So, when a user is granted read/write permission on an account, it means that that user has access to all the data available on that GlusterFS volume. + + + + + + Authentication and Access Permissions + + + You must authenticate against an authentication service to receive OpenStack Object +Storage connection parameters and an authentication token. The token must be passed +in for all subsequent container or object operations. One authentication service that you +can use as a middleware example is called tempauth. + By default, each user has their own storage account and has full access to that +account. Users must authenticate with their credentials as described above, but once +authenticated they can manage containers and objects within that account. If a user wants to access the content from another account, they must have API access key or a session token provided by their authentication system. +
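+ As a concrete illustration of this mapping (a minimal sketch; the volume name, mount point, and object path are placeholders), an object uploaded as photos/2011/cat.jpg into the container images on the account (volume) test appears as an ordinary file on a native mount of that volume:
+ # mount -t glusterfs localhost:/test /mnt/test
+ # ls -l /mnt/test/images/photos/2011/cat.jpg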
+
+ Advantages of using GlusterFS Unified File and Object Storage + The following are the advantages of using GlusterFS UFO: + + + No limit on upload and download file sizes, unlike OpenStack Swift, which limits the object size to 5 GB. + + + A unified view of data across NAS and Object Storage technologies. + + + Using GlusterFS UFO also provides the following advantages: + + + High availability + + + Scalability + + + Replication + + + Elastic volume management + + + + +
+
+ Preparing to Deploy Unified File and Object Storage + This section provides information on pre-requisites and list of dependencies that will be installed +during the installation of Unified File and Object Storage. + +
+ Pre-requisites + GlusterFS's Unified File and Object Storage needs user_xattr support from the underlying disk file system. +Use the following command to enable user_xattr for the GlusterFS brick backend: + + # mount -o remount,user_xattr <device name> + For example, + + # mount -o remount,user_xattr /dev/hda1 + +
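+ To make the option persistent across reboots (a sketch; the device, mount point, and file system type are placeholders for your own brick layout), the same flag can be added to the brick's entry in /etc/fstab:
+ /dev/hda1   /export/brick1   ext3   defaults,user_xattr   0 0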
+
+ Dependencies + The following packages are installed on GlusterFS when you install Unified File and Object +Storage: + + + curl + + + memcached + + + openssl + + + xfsprogs + + + python2.6 + + + pyxattr + + + python-configobj + + + python-setuptools + + + python-simplejson + + + python-webob + + + python-eventlet + + + python-greenlet + + + python-pastedeploy + + + python-netifaces + + +
+
+
+ Installing and Configuring Unified File and Object Storage + This section provides instructions on how to install and configure Unified File and Object Storage in your storage +environment. +
+ Installing Unified File and Object Storage + To install Unified File and Object Storage: + + + Download rhel_install.sh install script from . + + + + Run + rhel_install.sh script using the following command: + + # sh rhel_install.sh + + + Download swift-1.4.5-1.noarch.rpm and swift-plugin-1.0.-1.el6.noarch.rpm files from . + + + Install swift-1.4.5-1.noarch.rpm and swift-plugin-1.0.-1.el6.noarch.rpm using the following commands: + # rpm -ivh swift-1.4.5-1.noarch.rpm + # rpm -ivh swift-plugin-1.0.-1.el6.noarch.rpm + + You must repeat the above steps on all the machines on which you want to install Unified File and Object Storage. If you install the Unified File and Object Storage on multiple servers, you can use a load balancer like pound, nginx, and so on to distribute the request across the machines. + + + +
+
+ Adding Users + The authentication system allows the administrator to grant different levels of access to different users based on their requirements. The following are the types of user permissions: + + + + admin user + + + + normal user + + + An admin user has read and write permissions on the account. By default, a normal user has no read or write permissions; a normal user can only authenticate itself to obtain an Auth-Token. Read and write permissions are granted to normal users through ACLs by the admin users. + Add a new user by adding the following entry in the /etc/swift/proxy-server.conf file: + user_<account-name>_<user-name> = <password> [.admin] + For example, + user_test_tester = testing .admin + + + During installation, the installation script adds a few sample users to the proxy-server.conf file. It is highly recommended that you remove all the default sample user entries from the configuration file. + + + For more information on setting ACLs, see . +
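+ For example, to grant an admin user and a normal user access to an account (volume) named vol1, entries such as the following could be added (a sketch; the account, user names, and passwords are placeholders), after which the proxy server must be restarted for the change to take effect:
+ user_vol1_voladmin = Adminpass .admin
+ user_vol1_voluser = Userpass
+ # swift-init main stop
+ # swift-init main start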
+
+ Configuring Proxy Server + The Proxy Server is responsible for connecting to the rest of the OpenStack Object Storage architecture. For each request, it looks up the location of the account, container, or object in the ring and routes the request accordingly. The public API is also exposed through the proxy server. When objects are streamed to or from an object server, they are streamed directly through the proxy server to or from the user – the proxy server does not spool them. + + The configurable options pertaining to the proxy server are stored in /etc/swift/proxy-server.conf. The following is a sample proxy-server.conf file: + [app:proxy-server] +use = egg:swift#proxy +allow_account_management=true +account_autocreate=true + +[filter:tempauth] +use = egg:swift#tempauth +user_admin_admin = admin .admin .reseller_admin +user_test_tester = testing .admin +user_test2_tester2 = testing2 .admin +user_test_tester3 = testing3 + +[filter:healthcheck] +use = egg:swift#healthcheck + +[filter:cache] +use = egg:swift#memcache + By default, GlusterFS's Unified File and Object Storage is configured to support the HTTP protocol and uses temporary authentication (tempauth) to authenticate HTTP requests. +
+
+ Configuring Authentication System + Proxy server must be configured to authenticate using + tempauth + . +
+
+ Configuring Proxy Server for HTTPS + By default, proxy server only handles HTTP request. To configure the proxy server to process HTTPS requests, perform the following steps: + + + Create self-signed cert for SSL using the following commands: + cd /etc/swift +openssl req -new -x509 -nodes -out cert.crt -keyout cert.key + + + Add the following lines to /etc/swift/proxy-server.conf under [DEFAULT] + bind_port = 443 + cert_file = /etc/swift/cert.crt + key_file = /etc/swift/cert.key + + + Restart the servers using the following commands: + swift-init main stop +swift-init main start + + + The following are the configurable options: + + + proxy-server.conf Default Options in the [DEFAULT] section + + + + + + + Option + Default + Description + + + + + bind_ip + 0.0.0.0 + IP Address for server to bind + + + bind_port + 80 + Port for server to bind + + + swift_dir + /etc/swift + Swift configuration directory + + + workers + 1 + Number of workers to fork + + + user + swift + swift user + + + cert_file + + Path to the ssl .crt + + + key_file + + Path to the ssl .key + + + +
+ + proxy-server.conf Server Options in the [proxy-server] section + + + + + + + Option + Default + Description + + + + + use + + paste.deploy entry point for the container server. For most cases, this should be egg:swift#container. + + + log_name + proxy-server + Label used when logging + + + log_facility + LOG_LOCAL0 + Syslog log facility + + + log_level + INFO + Log level + + + log_headers + True + If True, log headers in each request + + + recheck_account_existence + 60 + Cache timeout in seconds to send memcached for account existence + + + recheck_container_existence + 60 + Cache timeout in seconds to send memcached for container existence + + + object_chunk_size + 65536 + Chunk size to read from object servers + + + client_chunk_size + 65536 + Chunk size to read from clients + + + memcache_servers + 127.0.0.1:11211 + Comma separated list of memcached servers ip:port + + + node_timeout + 10 + Request timeout to external services + + + client_timeout + 60 + Timeout to read one chunk from a client + + + conn_timeout + 0.5 + Connection timeout to external services + + + error_suppression_interval + 60 + Time in seconds that must elapse since the last error for a node to be considered no longer error limited + + + error_suppression_limit + 10 + Error count to consider a node error limited + + + allow_account_management + false + Whether account PUTs and DELETEs are even callable + + + +
+
+
+ Configuring Object Server + The Object Server is a very simple blob storage server that can store, retrieve, and delete objects stored on local devices. Objects are stored as binary files on the file system with metadata stored in the file’s extended attributes (xattrs). This requires that the underlying file system choice for object servers support xattrs on files. + + + The configurable options pertaining Object Server are stored in the file /etc/swift/object-server/1.conf. The following is the sample object-server/1.conf file: + [DEFAULT] +devices = /srv/1/node +mount_check = false +bind_port = 6010 +user = root +log_facility = LOG_LOCAL2 + +[pipeline:main] +pipeline = gluster object-server + +[app:object-server] +use = egg:swift#object + +[filter:gluster] +use = egg:swift#gluster + +[object-replicator] +vm_test_mode = yes + +[object-updater] +[object-auditor] + The following are the configurable options: + + + object-server.conf Default Options in the [DEFAULT] section + + + + + + + Option + Default + Description + + + + + swift_dir + /etc/swift + Swift configuration directory + + + devices + /srv/node + Mount parent directory where devices are mounted + + + mount_check + true + Whether or not check if the devices are mounted to prevent accidentally writing to the root device + + + bind_ip + 0.0.0.0 + IP Address for server to bind + + + bind_port + 6000 + Port for server to bind + + + workers + 1 + Number of workers to fork + + + +
+ + object-server.conf Server Options in the [object-server] section + + + + + + + Option + Default + Description + + + + + use + + paste.deploy entry point for the object server. For most cases, this should be egg:swift#object. + + + log_name + object-server + log name used when logging + + + log_facility + LOG_LOCAL0 + Syslog log facility + + + log_level + INFO + Logging level + + + log_requests + True + Whether or not to log each request + + + user + swift + swift user + + + node_timeout + 3 + Request timeout to external services + + + conn_timeout + 0.5 + Connection timeout to external services + + + network_chunk_size + 65536 + Size of chunks to read or write over the network + + + disk_chunk_size + 65536 + Size of chunks to read or write to disk + + + max_upload_time + 65536 + Maximum time allowed to upload an object + + + slow + 0 + If > 0, Minimum time in seconds for a PUT or DELETE request to complete + + + +
+
+
+ Configuring Container Server + The Container Server’s primary job is to handle listings of objects. The listing is done by querying the GlusterFS mount point with path. This query returns a list of all files and directories present under that container. + + The configurable options pertaining to container server are stored in /etc/swift/container-server/1.conf file. The following is the sample container-server/1.conf file: + [DEFAULT] +devices = /srv/1/node +mount_check = false +bind_port = 6011 +user = root +log_facility = LOG_LOCAL2 + +[pipeline:main] +pipeline = gluster container-server + +[app:container-server] +use = egg:swift#container + +[filter:gluster] +use = egg:swift#gluster + +[container-replicator] +[container-updater] +[container-auditor] + The following are the configurable options: + + container-server.conf Default Options in the [DEFAULT] section + + + + + + + Option + Default + Description + + + + + swift_dir + /etc/swift + Swift configuration directory + + + devices + /srv/node + Mount parent directory where devices are mounted + + + mount_check + true + Whether or not check if the devices are mounted to prevent accidentally writing to the root device + + + bind_ip + 0.0.0.0 + IP Address for server to bind + + + bind_port + 6001 + Port for server to bind + + + workers + 1 + Number of workers to fork + + + user + swift + Swift user + + + +
+ + container-server.conf Server Options in the [container-server] section + + + + + + + Option + Default + Description + + + + + use + + paste.deploy entry point for the container server. For most cases, this should be egg:swift#container. + + + log_name + container-server + Label used when logging + + + log_facility + LOG_LOCAL0 + Syslog log facility + + + log_level + INFO + Logging level + + + node_timeout + 3 + Request timeout to external services + + + conn_timeout + 0.5 + Connection timeout to external services + + + +
+
+
+ Configuring Account Server + The Account Server is very similar to the Container Server, except that it is responsible for listing of containers rather than objects. In UFO, each gluster volume is an account. + + The configurable options pertaining to account server are stored in /etc/swift/account-server/1.conf file. The following is the sample account-server/1.conf file: + [DEFAULT] +devices = /srv/1/node +mount_check = false +bind_port = 6012 +user = root +log_facility = LOG_LOCAL2 + +[pipeline:main] +pipeline = gluster account-server + +[app:account-server] +use = egg:swift#account + +[filter:gluster] +use = egg:swift#gluster + +[account-replicator] +vm_test_mode = yes + +[account-auditor] +[account-reaper] + The following are the configurable options: + + account-server.conf Default Options in the [DEFAULT] section + + + + + + + Option + Default + Description + + + + + swift_dir + /etc/swift + Swift configuration directory + + + devices + /srv/node + mount parent directory where devices are mounted + + + mount_check + true + Whether or not check if the devices are mounted to prevent accidentally writing to the root device + + + bind_ip + 0.0.0.0 + IP Address for server to bind + + + bind_port + 6002 + Port for server to bind + + + workers + 1 + Number of workers to fork + + + user + swift + Swift user + + + +
+ + account-server.conf Server Options in the [account-server] section + + + + + + + Option + Default + Description + + + + + use + + paste.deploy entry point for the container server. For most cases, this should be egg:swift#container. + + + log_name + account-server + Label used when logging + + + log_facility + LOG_LOCAL0 + Syslog log facility + + + log_level + INFO + Logging level + + + +
+
+
+ Starting and Stopping Server + You must start the server manually when the system reboots and whenever you update or modify the configuration files. + + + To start the server, enter the following command: + # swift-init main start + + + To stop the server, enter the following command: + # swift-init main stop + + +
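+ A quick way to verify that the servers came back up (a minimal sketch; the URL and credentials assume the HTTPS proxy configuration and the sample test:tester user shown earlier) is to check the processes and issue an authentication request:
+ # ps -ef | grep swift
+ # curl -v -k -H 'X-Auth-User: test:tester' -H 'X-Auth-Key: testing' https://localhost:443/auth/v1.0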
+
+
+ Working with Unified File and Object Storage + This section describes the REST API for administering and managing Object Storage. All requests will +be directed to the host and URL described in the X-Storage-URL HTTP header obtained during +successful authentication. + +
+ Configuring Authenticated Access + Authentication is the process of proving identity to the system. To use the REST interface, you must +obtain an authorization token by issuing a GET request with v1.0 as the path. + + Each REST request against the Object Storage system requires the addition of a specific authorization +token HTTP x-header, defined as X-Auth-Token. The storage URL and authentication token are +returned in the headers of the response. + + + + To authenticate, run the following command: + + GET auth/v1.0 HTTP/1.1 +Host: <auth URL> +X-Auth-User: <account name>:<user name> +X-Auth-Key: <user-Password> + For example, + + GET auth/v1.0 HTTP/1.1 +Host: auth.example.com +X-Auth-User: test:tester +X-Auth-Key: testing + +HTTP/1.1 200 OK +X-Storage-Url: https://example.storage.com:443/v1/AUTH_test +X-Storage-Token: AUTH_tkde3ad38b087b49bbbac0494f7600a554 +X-Auth-Token: AUTH_tkde3ad38b087b49bbbac0494f7600a554 +Content-Length: 0 +Date: Wed, 10 Jul 2011 06:11:51 GMT + To authenticate access using cURL (for the above example), run the following +command: + + curl -v -H 'X-Storage-User: test:tester' -H 'X-Storage-Pass: testing' -k +https://auth.example.com:443/auth/v1.0 + The X-Storage-Url has to be parsed and used in the connection and request line of all subsequent +requests to the server. In the example output, clients connecting to the server will send most +container/object requests with a host header of example.storage.com and the request line's version +and account as v1/AUTH_test. + + + + + + The authentication tokens are valid for a 24-hour period. + + +
+
+ Working with Accounts + This section describes the list of operations you can perform at the account level of the URL. + +
+ Displaying Container Information + You can list the objects of a specific container, or all containers, as needed using GET command. You +can use the following optional parameters with GET request to refine the results: + + + + + + + + Parameter + Description + + + + + limit + Limits the number of results to at most n value. + + + marker + Returns object names greater in value than the specified marker. + + + format + Specify either json or xml to return the respective serialized response. + + + + + To display container information + + + List all the containers of an account using the following command: + + GET /<apiversion>/<account> HTTP/1.1 +Host: <storage URL> +X-Auth-Token: <authentication-token-key> + For example, + + GET /v1/AUTH_test HTTP/1.1 +Host: example.storage.com +X-Auth-Token: AUTH_tkd3ad38b087b49bbbac0494f7600a554 + +HTTP/1.1 200 Ok +Date: Wed, 13 Jul 2011 16:32:21 GMT +Server: Apache +Content-Type: text/plain; charset=UTF-8 +Content-Length: 39 + +songs +movies +documents +reports + + + To display container information using cURL (for the above example), run the following +command: + + curl -v -X GET -H 'X-Auth-Token: AUTH_tkde3ad38b087b49bbbac0494f7600a554' +https://example.storage.com:443/v1/AUTH_test -k +
+
+ Displaying Account Metadata Information + You can issue HEAD command to the storage service to view the number of containers and the total +bytes stored in the account. + + + + To display containers and storage used, run the following command: + + HEAD /<apiversion>/<account> HTTP/1.1 +Host: <storage URL> +X-Auth-Token: <authentication-token-key> + For example, + + HEAD /v1/AUTH_test HTTP/1.1 +Host: example.storage.com +X-Auth-Token: AUTH_tkd3ad38b087b49bbbac0494f7600a554 + +HTTP/1.1 204 No Content +Date: Wed, 13 Jul 2011 16:52:21 GMT +Server: Apache +X-Account-Container-Count: 4 +X-Account-Total-Bytes-Used: 394792 + To display account metadata information using cURL (for the above example), run the following +command: + + curl -v -X HEAD -H 'X-Auth-Token: +AUTH_tkde3ad38b087b49bbbac0494f7600a554' +https://example.storage.com:443/v1/AUTH_test -k + + +
+
+
+ Working with Containers + This section describes the list of operations you can perform at the container level of the URL. + +
+ Creating Containers + You can use the PUT command to create containers. Containers are the storage folders for your data. +The URL-encoded name must be less than 256 bytes and cannot contain a forward slash '/' character. + + + + To create a container, run the following command: + + PUT /<apiversion>/<account>/<container>/ HTTP/1.1 +Host: <storage URL> +X-Auth-Token: <authentication-token-key> + For example, + + PUT /v1/AUTH_test/pictures/ HTTP/1.1 +Host: example.storage.com +X-Auth-Token: AUTH_tkd3ad38b087b49bbbac0494f7600a554 +HTTP/1.1 201 Created + +Date: Wed, 13 Jul 2011 17:32:21 GMT +Server: Apache +Content-Type: text/plain; charset=UTF-8 + To create a container using cURL (for the above example), run the following command: + + curl -v -X PUT -H 'X-Auth-Token: +AUTH_tkde3ad38b087b49bbbac0494f7600a554' +https://example.storage.com:443/v1/AUTH_test/pictures -k + The status code of 201 (Created) indicates that you have successfully created the container. If a +container with the same name already exists, the status code 202 (Accepted) is displayed. + + + +
+
+ Displaying Objects of a Container + You can list the objects of a container using the GET command. You can use the following optional +parameters with the GET request to refine the results: + + + + + + + + Parameter + Description + + + + + limit + Limits the number of results to at most the specified value. + + + marker + Returns object names greater in value than the specified marker. + + + prefix + Limits the results to object names beginning with the specified prefix. + + + path + Returns the object names nested in the pseudo path. + + + format + Specify either json or xml to return the respective serialized response. + + + delimiter + Returns all the object names nested in the container. + + + + + To display objects of a container + + + + List objects of a specific container using the following command: + + + + GET /<apiversion>/<account>/<container>[parm=value] HTTP/1.1 +Host: <storage URL> +X-Auth-Token: <authentication-token-key> + For example, + + GET /v1/AUTH_test/images HTTP/1.1 +Host: example.storage.com +X-Auth-Token: AUTH_tkd3ad38b087b49bbbac0494f7600a554 + +HTTP/1.1 200 Ok +Date: Wed, 13 Jul 2011 15:42:21 GMT +Server: Apache +Content-Type: text/plain; charset=UTF-8 +Content-Length: 139 + +sample file.jpg +test-file.pdf +You and Me.pdf +Puddle of Mudd.mp3 +Test Reports.doc + To display objects of a container using cURL (for the above example), run the following +command: + + curl -v -X GET -H 'X-Auth-Token: AUTH_tkde3ad38b087b49bbbac0494f7600a554' +https://example.storage.com:443/v1/AUTH_test/images -k
+
+ Displaying Container Metadata Information + You can issue HEAD command to the storage service to view the number of objects in a container and +the total bytes of all the objects stored in the container. + + + + To display list of objects and storage used, run the following command: + + HEAD /<apiversion>/<account>/<container> HTTP/1.1 +Host: <storage URL> +X-Auth-Token: <authentication-token-key> + For example, + HEAD /v1/AUTH_test/images HTTP/1.1 +Host: example.storage.com +X-Auth-Token: AUTH_tkd3ad38b087b49bbbac0494f7600a554 + +HTTP/1.1 204 No Content +Date: Wed, 13 Jul 2011 19:52:21 GMT +Server: Apache +X-Account-Object-Count: 8 +X-Container-Bytes-Used: 472 + To display list of objects and storage used in a container using cURL (for the above example), run +the following command: + + curl -v -X HEAD -H 'X-Auth-Token: +AUTH_tkde3ad38b087b49bbbac0494f7600a554' +https://example.storage.com:443/v1/AUTH_test/images -k + + +
+
+ Deleting Container + You can use DELETE command to permanently delete containers. The container must be empty +before it can be deleted. + + You can issue HEAD command to determine if it contains any objects. + + + + To delete a container, run the following command: + + DELETE /<apiversion>/<account>/<container>/ HTTP/1.1 +Host: <storage URL> +X-Auth-Token: <authentication-token-key> + For example, + DELETE /v1/AUTH_test/pictures HTTP/1.1 +Host: example.storage.com +X-Auth-Token: AUTH_tkd3ad38b087b49bbbac0494f7600a554 + +HTTP/1.1 204 No Content +Date: Wed, 13 Jul 2011 17:52:21 GMT +Server: Apache +Content-Length: 0 +Content-Type: text/plain; charset=UTF-8 + To delete a container using cURL (for the above example), run the following command: + + curl -v -X DELETE -H 'X-Auth-Token: +AUTH_tkde3ad38b087b49bbbac0494f7600a554' +https://example.storage.com:443/v1/AUTH_test/pictures -k + The status code of 204 (No Content) indicates that you have successfully deleted the container. If +that container does not exist, the status code 404 (Not Found) is displayed, and if the container is +not empty, the status code 409 (Conflict) is displayed. + + + +
+
+ Updating Container Metadata + You can update the metadata of container using POST operation, metadata keys should be prefixed +with 'x-container-meta'. + + + + To update the metadata of the object, run the following command: + + POST /<apiversion>/<account>/<container> HTTP/1.1 +Host: <storage URL> +X-Auth-Token: <Authentication-token-key> +X-Container-Meta-<key>: <new value> +X-Container-Meta-<key>: <new value> + For example, + + POST /v1/AUTH_test/images HTTP/1.1 +Host: example.storage.com +X-Auth-Token: AUTH_tkd3ad38b087b49bbbac0494f7600a554 +X-Container-Meta-Zoo: Lion +X-Container-Meta-Home: Dog + +HTTP/1.1 204 No Content +Date: Wed, 13 Jul 2011 20:52:21 GMT +Server: Apache +Content-Type: text/plain; charset=UTF-8 + To update the metadata of the object using cURL (for the above example), run the following +command: + + curl -v -X POST -H 'X-Auth-Token: +AUTH_tkde3ad38b087b49bbbac0494f7600a554' +https://example.storage.com:443/v1/AUTH_test/images -H ' X-Container-Meta-Zoo: Lion' -H 'X-Container-Meta-Home: Dog' -k + The status code of 204 (No Content) indicates the container's metadata is updated successfully. If +that object does not exist, the status code 404 (Not Found) is displayed. + + + +
+
+ Setting ACLs on Container + You can set the container access control list by using the POST command on the container with the x-container-read and x-container-write keys. + + The ACL format is [item[,item...]]. Each item can be a group name to give access to, or a +referrer designation to grant or deny access based on the HTTP Referer header. + + The referrer designation format is: .r:[-]value. + + The .r can also be .ref, .referer, or .referrer; though it will be shortened to .r for +decreased character count usage. The value can be * to specify that any referrer host is allowed access. The leading minus sign (-) +indicates referrer hosts that should be denied access. + + Examples of valid ACLs: + + .r:* +.r:*,bobs_account,sues_account:sue +bobs_account,sues_account:sue + Examples of invalid ACLs: + .r: +.r:- + By default, allowing read access via .r will not allow listing objects in the container but allows +retrieving objects from the container. To turn on listings, use the .rlistings directive. Also, .r +designations are not allowed in headers whose names include the word write. + + For example, to make all the objects inside the container publicly readable using cURL (for the +above example), run the following command: + + curl -v -X POST -H 'X-Auth-Token: +AUTH_tkde3ad38b087b49bbbac0494f7600a554' +https://example.storage.com:443/v1/AUTH_test/images +-H 'X-Container-Read: .r:*' -k
+
+
+ Working with Objects + An object represents the data and any metadata for the files stored in the system. Through the REST +interface, metadata for an object can be included by adding custom HTTP headers to the request +and the data payload as the request body. Object names must not exceed 1024 bytes after URL +encoding. + + This section describes the list of operations you can perform at the object level of the URL. +
+ Creating or Updating Object + You can use the PUT command to write or update an object's content and metadata. + + You can verify the data integrity by including an MD5 checksum for the object's data in the ETag +header. The ETag header is optional and can be used to ensure that the object's contents are stored +successfully in the storage system. + + You can assign custom metadata to objects by including additional HTTP headers on the PUT request. +The objects created with custom metadata via HTTP headers are identified with the X-Object-Meta- prefix. + + + + To create or update an object, run the following command: + + PUT /<apiversion>/<account>/<container>/<object> HTTP/1.1 +Host: <storage URL> +X-Auth-Token: <authentication-token-key> +ETag: da1e100dc9e7becc810986e37875ae38 +Content-Length: 342909 +X-Object-Meta-PIN: 2343 + For example, + PUT /v1/AUTH_test/pictures/dog HTTP/1.1 +Host: example.storage.com +X-Auth-Token: AUTH_tkd3ad38b087b49bbbac0494f7600a554 +ETag: da1e100dc9e7becc810986e37875ae38 + +HTTP/1.1 201 Created +Date: Wed, 13 Jul 2011 18:32:21 GMT +Server: Apache +ETag: da1e100dc9e7becc810986e37875ae38 +Content-Length: 0 +Content-Type: text/plain; charset=UTF-8 + To create or update an object using cURL (for the above example), run the following command: + + curl -v -X PUT -H 'X-Auth-Token: +AUTH_tkde3ad38b087b49bbbac0494f7600a554' +https://example.storage.com:443/v1/AUTH_test/pictures/dog -H 'Content-Length: 0' -k + The status code of 201 (Created) indicates that you have successfully created or updated the object. +If there is a missing Content-Length or Content-Type header in the request, the status code of 412 +(Length Required) is displayed. (Optionally) If the MD5 checksum of the data written to the storage +system does not match the ETag value, the status code of 422 (Unprocessable Entity) is displayed. + + + +
+ Chunked Transfer Encoding + You can upload data without knowing the size of the data to be uploaded. You can do this by +specifying an HTTP header of Transfer-Encoding: chunked and without using a Content-Length +header. + + You can use this feature while doing a DB dump, piping the output through gzip, and then piping the +data directly into Object Storage without having to buffer the data to disk to compute the file size. + + + + To create or update an object, run the following command: + + PUT /<apiversion>/<account>/<container>/<object> HTTP/1.1 +Host: <storage URL> +X-Auth-Token: <authentication-token-key> +Transfer-Encoding: chunked +X-Object-Meta-PIN: 2343 + For example, + + PUT /v1/AUTH_test/pictures/cat HTTP/1.1 +Host: example.storage.com +X-Auth-Token: AUTH_tkd3ad38b087b49bbbac0494f7600a554 +Transfer-Encoding: chunked +X-Object-Meta-PIN: 2343 +19 +A bunch of data broken up +D +into chunks. +0 + + + + +
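+ The same kind of streaming upload can be driven from the shell with cURL (a sketch; the database name, container, and token are placeholders): when cURL reads the upload body from standard input it does not know the size in advance, so it uses chunked transfer encoding automatically.
+ # mysqldump testdb | gzip | curl -v -H 'X-Auth-Token: AUTH_tkde3ad38b087b49bbbac0494f7600a554' -H 'Transfer-Encoding: chunked' -T - https://example.storage.com:443/v1/AUTH_test/backups/testdb.sql.gz -k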
+
+
+ Copying Object + You can copy an object from one container to another, or add a new object and then add a reference to +designate the source of the data from another container. + + To copy an object from one container to another + + + To add a new object and designate the source of the data from another container, run the +following command: + + COPY /<apiversion>/<account>/<container>/<sourceobject> HTTP/1.1 +Host: <storage URL> +X-Auth-Token: <authentication-token-key> +Destination: /<container>/<destinationobject> + For example, + + COPY /v1/AUTH_test/images/dogs HTTP/1.1 +Host: example.storage.com +X-Auth-Token: AUTH_tkd3ad38b087b49bbbac0494f7600a554 +Destination: /photos/cats + +HTTP/1.1 201 Created +Date: Wed, 13 Jul 2011 18:32:21 GMT +Server: Apache +Content-Length: 0 +Content-Type: text/plain; charset=UTF-8 + To copy an object using cURL (for the above example), run the following command: + + curl -v -X COPY -H 'X-Auth-Token: +AUTH_tkde3ad38b087b49bbbac0494f7600a554' -H 'Destination: /photos/cats' -k https://example.storage.com:443/v1/AUTH_test/images/dogs + The status code of 201 (Created) indicates that you have successfully copied the object. If there is a +missing Content-Length or Content-Type header in the request, the status code of 412 (Length +Required) is displayed. + + You can also use the PUT command to copy an object by using the additional header X-Copy-From: container/obj. + + + + To use the PUT command to copy an object, run the following command: + + PUT /v1/AUTH_test/photos/cats HTTP/1.1 +Host: example.storage.com +X-Auth-Token: AUTH_tkd3ad38b087b49bbbac0494f7600a554 +X-Copy-From: /images/dogs + +HTTP/1.1 201 Created +Date: Wed, 13 Jul 2011 18:32:21 GMT +Server: Apache +Content-Type: text/plain; charset=UTF-8 + To copy an object using cURL (for the above example), run the following command: + + curl -v -X PUT -H 'X-Auth-Token: AUTH_tkde3ad38b087b49bbbac0494f7600a554' +-H 'X-Copy-From: /images/dogs' -k +https://example.storage.com:443/v1/AUTH_test/photos/cats + The status code of 201 (Created) indicates that you have successfully copied the object. + + + +
+
+ Displaying Object Information + You can issue the GET command on an object to view its data. + + + + To display the content of an object, run the following command: + GET /<apiversion>/<account>/<container>/<object> HTTP/1.1 +Host: <storage URL> +X-Auth-Token: <Authentication-token-key> + For example, + + GET /v1/AUTH_test/images/cat HTTP/1.1 +Host: example.storage.com +X-Auth-Token: AUTH_tkd3ad38b087b49bbbac0494f7600a554 + +HTTP/1.1 200 Ok +Date: Wed, 13 Jul 2011 23:52:21 GMT +Server: Apache +Last-Modified: Thu, 14 Jul 2011 13:40:18 GMT +ETag: 8a964ee2a5e88be344f36c22562a6486 +Content-Length: 534210 +[.........] + To display the content of an object using cURL (for the above example), run the following +command: + + curl -v -X GET -H 'X-Auth-Token: +AUTH_tkde3ad38b087b49bbbac0494f7600a554' +https://example.storage.com:443/v1/AUTH_test/images/cat -k + The status code of 200 (OK) indicates the object's data is displayed successfully. If that object does +not exist, the status code 404 (Not Found) is displayed. + + + +
+
+ Displaying Object Metadata + You can issue the HEAD command on an object to view the object metadata and other standard HTTP +headers. You must send only the authorization token as a header. + + + + To display the metadata of the object, run the following command: + + + + HEAD /<apiversion>/<account>/<container>/<object> HTTP/1.1 +Host: <storage URL> +X-Auth-Token: <Authentication-token-key> + For example, + + HEAD /v1/AUTH_test/images/cat HTTP/1.1 +Host: example.storage.com +X-Auth-Token: AUTH_tkd3ad38b087b49bbbac0494f7600a554 + +HTTP/1.1 204 No Content +Date: Wed, 13 Jul 2011 21:52:21 GMT +Server: Apache +Last-Modified: Thu, 14 Jul 2011 13:40:18 GMT +ETag: 8a964ee2a5e88be344f36c22562a6486 +Content-Length: 512000 +Content-Type: text/plain; charset=UTF-8 +X-Object-Meta-House: Cat +X-Object-Meta-Zoo: Cat +X-Object-Meta-Home: Cat +X-Object-Meta-Park: Cat + To display the metadata of the object using cURL (for the above example), run the following +command: + + curl -v -X HEAD -H 'X-Auth-Token: +AUTH_tkde3ad38b087b49bbbac0494f7600a554' +https://example.storage.com:443/v1/AUTH_test/images/cat -k + The status code of 204 (No Content) indicates the object's metadata is displayed successfully. If that +object does not exist, the status code 404 (Not Found) is displayed. +
+
+ Updating Object Metadata + You can issue the POST command on an object name to set or overwrite arbitrary key metadata. You +cannot change the object's other headers, such as Content-Type, ETag and others, using the POST +operation. The POST command will delete all the existing metadata and replace it with the new +arbitrary key metadata. + + You must prefix X-Object-Meta- to the key names. + + + + To update the metadata of an object, run the following command: + POST /<apiversion>/<account>/<container>/<object> HTTP/1.1 +Host: <storage URL> +X-Auth-Token: <Authentication-token-key> +X-Object-Meta-<key>: <new value> +X-Object-Meta-<key>: <new value> + + For example, + + POST /v1/AUTH_test/images/cat HTTP/1.1 +Host: example.storage.com +X-Auth-Token: AUTH_tkd3ad38b087b49bbbac0494f7600a554 +X-Object-Meta-Zoo: Lion +X-Object-Meta-Home: Dog + +HTTP/1.1 202 Accepted +Date: Wed, 13 Jul 2011 22:52:21 GMT +Server: Apache +Content-Length: 0 +Content-Type: text/plain; charset=UTF-8 + To update the metadata of an object using cURL (for the above example), run the following +command: + + curl -v -X POST -H 'X-Auth-Token: +AUTH_tkde3ad38b087b49bbbac0494f7600a554' +https://example.storage.com:443/v1/AUTH_test/images/cat -H 'X-Object-Meta-Zoo: Lion' -H 'X-Object-Meta-Home: Dog' -k + The status code of 202 (Accepted) indicates that you have successfully updated the object's +metadata. If that object does not exist, the status code 404 (Not Found) is displayed. + + + + +
+
+ Deleting Object + You can use DELETE command to permanently delete the object. + + The DELETE command on an object will be processed immediately and any subsequent operations +like GET, HEAD, POST, or DELETE on the object will display 404 (Not Found) error. + + + + To delete an object, run the following command: + + DELETE /<apiversion>/<account>/<container>/<object> HTTP/1.1 +Host: <storage URL> +X-Auth-Token: <Authentication-token-key> + For example, + + DELETE /v1/AUTH_test/pictures/cat HTTP/1.1 +Host: example.storage.com +X-Auth-Token: AUTH_tkd3ad38b087b49bbbac0494f7600a554 + +HTTP/1.1 204 No Content +Date: Wed, 13 Jul 2011 20:52:21 GMT +Server: Apache +Content-Type: text/plain; charset=UTF-8 + To delete an object using cURL (for the above example), run the following command: + + curl -v -X DELETE -H 'X-Auth-Token: +AUTH_tkde3ad38b087b49bbbac0494f7600a554' +https://example.storage.com:443/v1/AUTH_test/pictures/cat -k + The status code of 204 (No Content) indicates that you have successfully deleted the object. If that +object does not exist, the status code 404 (Not Found) is displayed. + + + +
+
+
+
diff --git a/doc/legacy/docbook/admin_commandref.xml b/doc/legacy/docbook/admin_commandref.xml new file mode 100644 index 000000000..5e1560534 --- /dev/null +++ b/doc/legacy/docbook/admin_commandref.xml @@ -0,0 +1,334 @@ + + + + Command Reference + This section describes the available commands and includes the +following section: + + + + gluster Command + + Gluster Console Manager (command line interpreter) + + + + glusterd Daemon + + Gluster elastic volume management daemon + + + +
+ gluster Command + NAME + + gluster - Gluster Console Manager (command line interpreter) + + SYNOPSIS + + To run the program and display the gluster prompt: + + gluster + + To specify a command directly: +gluster [COMMANDS] [OPTIONS] + + DESCRIPTION + + The Gluster Console Manager is a command line utility for elastic volume management. You can run +the gluster command on any export server. The command enables administrators to perform cloud +operations such as creating, expanding, shrinking, rebalancing, and migrating volumes without +needing to schedule server downtime. + + COMMANDS + + + + + + + + + Command + Description + + + + + + Volume + + + + volume info [all | VOLNAME] + Displays information about all volumes, or the specified volume. + + + volume create NEW-VOLNAME [stripe COUNT] [replica COUNT] [transport tcp | rdma | tcp,rdma] NEW-BRICK ... + Creates a new volume of the specified type using the specified bricks and transport type (the default transport type is tcp). + + + volume delete VOLNAME + Deletes the specified volume. + + + volume start VOLNAME + Starts the specified volume. + + + volume stop VOLNAME [force] + Stops the specified volume. + + + volume rename VOLNAME NEW-VOLNAME + Renames the specified volume. + + + volume help + Displays help for the volume command. + + + + Brick + + + + volume add-brick VOLNAME NEW-BRICK ... + Adds the specified brick to the specified volume. + + + volume replace-brick VOLNAME (BRICK NEW-BRICK) start | pause | abort | status + Replaces the specified brick. + + + volume remove-brick VOLNAME [(replica COUNT)|(stripe COUNT)] BRICK ... + Removes the specified brick from the specified volume. + + + + Rebalance + + + + volume rebalance VOLNAME start + Starts rebalancing the specified volume. + + + volume rebalance VOLNAME stop + Stops rebalancing the specified volume. + + + volume rebalance VOLNAME status + Displays the rebalance status of the specified volume. + + + + Log + + + + volume log filename VOLNAME [BRICK] DIRECTORY + Sets the log directory for the corresponding volume/brick. + + + volume log rotate VOLNAME [BRICK] + Rotates the log file for corresponding volume/brick. + + + volume log locate VOLNAME [BRICK] + Locates the log file for corresponding volume/brick. + + + + Peer + + + + peer probe HOSTNAME + Probes the specified peer. + + + peer detach HOSTNAME + Detaches the specified peer. + + + peer status + Displays the status of peers. + + + peer help + Displays help for the peer command. + + + + Geo-replication + + + + volume geo-replication MASTER SLAVE start + + Start geo-replication between the hosts specified by MASTER and SLAVE. You can specify a local master volume as :VOLNAME. + You can specify a local slave volume as :VOLUME and a local slave directory as /DIRECTORY/SUB-DIRECTORY. You can specify a remote slave volume as DOMAIN::VOLNAME and a remote slave directory as DOMAIN:/DIRECTORY/SUB-DIRECTORY. + + + + volume geo-replication MASTER SLAVE stop + + Stop geo-replication between the hosts specified by MASTER and SLAVE. You can specify a local master volume as :VOLNAME and a local master directory as /DIRECTORY/SUB-DIRECTORY. + You can specify a local slave volume as :VOLNAME and a local slave directory as /DIRECTORY/SUB-DIRECTORY. You can specify a remote slave volume as DOMAIN::VOLNAME and a remote slave directory as DOMAIN:/DIRECTORY/SUB-DIRECTORY. + + + + + volume geo-replication MASTER SLAVE config [options] + + Configure geo-replication options between the hosts specified by MASTER and SLAVE. 
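+ As a short end-to-end example of the peer and volume commands listed above (a sketch; the host names, brick paths, and volume name are placeholders):
+ # gluster peer probe server2
+ # gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2
+ # gluster volume start test-volume
+ # gluster volume info test-volume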
+ + + gluster-command COMMAND + The path where the gluster command is installed. + + + gluster-log-level LOGFILELEVEL + The log level for gluster processes. + + + log-file LOGFILE + The path to the geo-replication log file. + + + log-level LOGFILELEVEL + The log level for geo-replication. + + + remote-gsyncd COMMAND + The path where the gsyncd binary is installed on the remote machine. + + + ssh-command COMMAND + The ssh command to use to connect to the remote machine (the default is ssh). + + + rsync-command COMMAND + The rsync command to use for synchronizing the files (the default is rsync). + + + volume_id= UID + The command to delete the existing master UID for the intermediate/slave node. + + + timeout SECONDS + The timeout period. + + + sync-jobs N + The number of simultaneous files/directories that can be synchronized. + + + + ignore-deletes + If this option is set to 1, a file deleted on master will not trigger a delete operation on the slave. Hence, the slave will remain as a superset of the master and can be used to recover the master in case of crash and/or accidental delete. + + + + Other + + + + help + + Display the command options. + + + quit + + Exit the gluster command line interface. + + + + + FILES + + + /var/lib/glusterd/* + + SEE ALSO + fusermount(1), mount.glusterfs(8), glusterfs-volgen(8), glusterfs(8), glusterd(8) +
+
+ glusterd Daemon + NAME + + glusterd - Gluster elastic volume management daemon + SYNOPSIS + + glusterd [OPTION...] + + DESCRIPTION + + The glusterd daemon is used for elastic volume management. The daemon must be run on all export servers. + + OPTIONS + + + + + + + + Option + Description + + + + + + Basic + + + + -l=LOGFILE, --log-file=LOGFILE + Files to use for logging (the default is /usr/local/var/log/glusterfs/glusterfs.log). + + + -L=LOGLEVEL, --log-level=LOGLEVEL + Logging severity. Valid options are TRACE, DEBUG, INFO, WARNING, ERROR and CRITICAL (the default is INFO). + + + --debug + Runs the program in debug mode. This option sets --no-daemon, --log-level to DEBUG, and --log-file to console. + + + -N, --no-daemon + Runs the program in the foreground. + + + + Miscellaneous + + + + -?, --help + Displays this help. + + + --usage + Displays a short usage message. + + + -V, --version + Prints the program version. + + + + + FILES + + + /var/lib/glusterd/* + + SEE ALSO + fusermount(1), mount.glusterfs(8), glusterfs-volgen(8), glusterfs(8), gluster(8) +
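+ For example, to troubleshoot startup problems, glusterd can be run in the foreground with verbose logging using the options described above (the log file path is a placeholder):
+ # glusterd --debug
+ # glusterd -N --log-level=DEBUG --log-file=/var/log/glusterfs/glusterd-debug.log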
+
diff --git a/doc/legacy/docbook/admin_console.xml b/doc/legacy/docbook/admin_console.xml new file mode 100644 index 000000000..ebf273935 --- /dev/null +++ b/doc/legacy/docbook/admin_console.xml @@ -0,0 +1,28 @@ + + + + Using the Gluster Console Manager – Command Line Utility + The Gluster Console Manager is a single command line utility that simplifies configuration and management of your storage environment. The Gluster Console Manager is similar to the LVM (Logical Volume Manager) CLI or ZFS Command Line Interface, but across multiple storage servers. You can use the Gluster Console Manager online, while volumes are mounted and active. Gluster automatically synchronizes volume configuration information across all Gluster servers. + Using the Gluster Console Manager, you can create new volumes, start volumes, and stop volumes, as required. You can also add bricks to volumes, remove bricks from existing volumes, as well as change translator settings, among other operations. + You can also use the commands to create scripts for automation, as well as use the commands as an API to allow integration with third-party applications. + Running the Gluster Console Manager + You can run the Gluster Console Manager on any GlusterFS server either by invoking the commands or by running the Gluster CLI in interactive mode. You can also use the gluster command remotely using SSH. + + + To run commands directly: + # gluster peer command + For example: + # gluster peer status + + + To run the Gluster Console Manager in interactive mode + # gluster + You can execute gluster commands from the Console Manager prompt: + gluster> command + For example, to view the status of the peer server: + # gluster + gluster > peer status + Display the status of the peer. + + + diff --git a/doc/legacy/docbook/admin_directory_Quota.xml b/doc/legacy/docbook/admin_directory_Quota.xml new file mode 100644 index 000000000..8a1012a6a --- /dev/null +++ b/doc/legacy/docbook/admin_directory_Quota.xml @@ -0,0 +1,179 @@ + + + + Managing Directory Quota + Directory quotas in GlusterFS allow you to set limits on usage of disk space by directories or volumes. +The storage administrators can control the disk space utilization at the directory and/or volume +levels in GlusterFS by setting limits to allocatable disk space at any level in the volume and directory +hierarchy. This is particularly useful in cloud deployments to facilitate utility billing model. + + + For now, only Hard limit is supported. Here, the limit cannot be exceeded and attempts to use +more disk space or inodes beyond the set limit will be denied. + + + System administrators can also monitor the resource utilization to limit the storage for the users +depending on their role in the organization. + + You can set the quota at the following levels: + + + + Directory level – limits the usage at the directory level + + + + Volume level – limits the usage at the volume level + + + + + You can set the disk limit on the directory even if it is not created. The disk limit is enforced +immediately after creating that directory. For more information on setting disk limit, see . + + +
+ Enabling Quota + You must enable Quota to set disk limits. + + To enable quota + + + + Enable the quota using the following command: + + # gluster volume quota VOLNAME enable + For example, to enable quota on test-volume: + + # gluster volume quota test-volume enable +Quota is enabled on /test-volume + + +
+
+ Disabling Quota + You can disable Quota, if needed. + + To disable quota: + + + + Disable the quota using the following command: + + # gluster volume quota VOLNAME disable + For example, to disable quota translator on test-volume: + + # gluster volume quota test-volume disable +Quota translator is disabled on /test-volume + + +
+
+ Setting or Replacing Disk Limit + You can create new directories in your storage environment and set the disk limit, or set the disk limit for +existing directories. The directory name should be relative to the volume, with the export +directory/mount being treated as "/". + + To set or replace disk limit + + + + Set the disk limit using the following command: + + # gluster volume quota VOLNAME limit-usage /directory limit-value + For example, to set a limit on the data directory on test-volume, where data is a directory under the +export directory: + + # gluster volume quota test-volume limit-usage /data 10GB +Usage limit has been set on /data + + In a multi-level directory hierarchy, the strictest disk limit will be considered for enforcement. + + + + +
+
+ Displaying Disk Limit Information + You can display disk limit information on all the directories on which the limit is set. + + To display disk limit information + + + + Display disk limit information of all the directories on which limit is set, using the following +command: + + # gluster volume quota VOLNAME list + + For example, to see the set disks limit on test-volume: + + # gluster volume quota test-volume list + + + Path__________Limit______Set Size + +/Test/data 10 GB 6 GB +/Test/data1 10 GB 4 GB + + + Display disk limit information on a particular directory on which limit is set, using the following +command: + + # gluster volume quota VOLNAME list /directory name + + For example, to see the set limit on /data directory of test-volume: + # gluster volume quota test-volume list /data + +Path__________Limit______Set Size +/Test/data 10 GB 6 GB + + +
+
+ Updating Memory Cache Size + For performance reasons, quota caches the directory sizes on the client. You can set a timeout indicating +the maximum duration for which directory sizes in the cache are considered valid, from the time they are populated. + + For example: if there are multiple clients writing to a single directory, some +other client might write until the quota limit is exceeded. However, this new file size may not be +reflected on the client until the size entry in the cache becomes stale because of the timeout. If writes happen +on this client during this duration, they are allowed even though they would exceed the +quota limits, since the size in the cache is not in sync with the actual size. When the timeout expires, the size +in the cache is updated from the servers, the sizes are back in sync, and no further writes are allowed. A timeout +of zero forces fetching of directory sizes from the server for every operation that modifies file data +and effectively disables directory size caching on the client side. + + To update the memory cache size + + + + Update the memory cache size using the following command: + + # gluster volume set VOLNAME features.quota-timeout value + For example, to set the cache timeout to 5 seconds on test-volume: + + # gluster volume set test-volume features.quota-timeout 5 +Set volume successful + + +
+
+ Removing Disk Limit + You can remove set disk limit, if you do not want quota anymore. + + To remove disk limit + + + Remove disk limit set on a particular directory using the following command: + + # gluster volume quota VOLNAME remove /directory name + + For example, to remove the disk limit on /data directory of test-volume: + + # gluster volume quota test-volume remove /data +Usage limit set on /data is removed + + +
+
diff --git a/doc/legacy/docbook/admin_geo-replication.xml b/doc/legacy/docbook/admin_geo-replication.xml new file mode 100644 index 000000000..279e9a62c --- /dev/null +++ b/doc/legacy/docbook/admin_geo-replication.xml @@ -0,0 +1,732 @@ + + + + Managing Geo-replication + Geo-replication provides a continuous, asynchronous, and incremental replication service from one site to another over Local Area Networks (LANs), Wide Area Network (WANs), and across the Internet. + Geo-replication uses a master–slave model, whereby replication and mirroring occurs between the following partners: + + + Master – a GlusterFS volume + + + Slave – a slave which can be of the following types: + + + A local directory which can be represented as file URL like file:///path/to/dir. You can use shortened form, for example, /path/to/dir. + + + A GlusterFS Volume - Slave volume can be either a local volume like gluster://localhost:volname (shortened form - :volname) or a volume served by different host like gluster://host:volname (shortened form - host:volname). + + + + Both of the above types can be accessed remotely using SSH tunnel. To use SSH, add an SSH prefix to either a file URL or gluster type URL. For example, ssh://root@remote-host:/path/to/dir (shortened form - root@remote-host:/path/to/dir) or ssh://root@remote-host:gluster://localhost:volname (shortened from - root@remote-host::volname). + + + + This section introduces Geo-replication, illustrates the various deployment scenarios, and explains how to configure the system to provide replication and mirroring in your environment. +
+ Replicated Volumes vs Geo-replication + The following table lists the difference between replicated volumes and geo-replication: + + + + + + + Replicated Volumes + Geo-replication + + + + + Mirrors data across clusters + Mirrors data across geographically distributed clusters + + + Provides high-availability + Ensures backing up of data for disaster recovery + + + Synchronous replication (each and every file operation is sent across all the bricks) + Asynchronous replication (checks for the changes in files periodically and syncs them on detecting differences) + + + + +
+
+ Preparing to Deploy Geo-replication + This section provides an overview of the Geo-replication deployment scenarios, describes how you can check the minimum system requirements, and explores common deployment scenarios. + + + + + + + + + + + + + + + + + +
+ Exploring Geo-replication Deployment Scenarios + Geo-replication provides an incremental replication service over Local Area Networks (LANs), Wide Area Network (WANs), and across the Internet. This section illustrates the most common deployment scenarios for Geo-replication, including the following: + + + Geo-replication over LAN + + + + Geo-replication over WAN + + + + Geo-replication over the Internet + + + Multi-site cascading Geo-replication + + + Geo-replication over LAN + You can configure Geo-replication to mirror data over a Local Area Network. + + + Geo-replication over LAN + + + + + + Geo-replication over WAN + You can configure Geo-replication to replicate data over a Wide Area Network. + + + + Geo-replication over WAN + + + + + + + Geo-replication over Internet + You can configure Geo-replication to mirror data over the Internet. + + + + Geo-replication over Internet + + + + + + + Multi-site cascading Geo-replication + You can configure Geo-replication to mirror data in a cascading fashion across multiple sites. + + + + Multi-site cascading Geo-replication + + + + + + +
+
+ Geo-replication Deployment Overview
+ Deploying Geo-replication involves the following steps:
+
+
+ Verify that your environment matches the minimum system requirements. For more information, see .
+
+
+ Determine the appropriate deployment scenario. For more information, see .
+
+
+ Start Geo-replication on master and slave systems, as required. For more information, see .
+
+
+
+ Checking Geo-replication Minimum Requirements + Before deploying GlusterFS Geo-replication, verify that your systems match the minimum requirements. + The following table outlines the minimum requirements for both master and slave nodes within your environment: + + + + + + + + Component + Master + Slave + + + + + Operating System + GNU/Linux + GNU/Linux + + + Filesystem + GlusterFS 3.2 or higher + GlusterFS 3.2 or higher (GlusterFS needs to be installed, but does not need to be running), ext3, ext4, or XFS (any other POSIX compliant file system would work, but has not been tested extensively) + + + Python + Python 2.4 (with ctypes external module), or Python 2.5 (or higher) + Python 2.4 (with ctypes external module), or Python 2.5 (or higher) + + + Secure shell + OpenSSH version 4.0 (or higher) + SSH2-compliant daemon + + + Remote synchronization + rsync 3.0.7 or higher + rsync 3.0.7 or higher + + + FUSE + GlusterFS supported versions + GlusterFS supported versions + + + + +
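+ As a quick, illustrative way to check several of these requirements on a node (output formats vary by version and distribution):
+ # glusterfs --version
+ # python -V
+ # rsync --version
+ # ssh -V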
+
+ Setting Up the Environment for Geo-replication
+ Time Synchronization
+
+
+ On the bricks of a geo-replication master volume, the time on all servers must be uniform. It is recommended that you set up an NTP (Network Time Protocol) service to keep the bricks synchronized in time and avoid out-of-sync effects.
+ For example, in a replicated volume where brick1 of the master is at 12.20 hrs and brick2 of the master is at 12.10 hrs, with a time lag of 10 minutes, changes made on brick2 during this period may go unnoticed while synchronizing files with the Slave.
+ For more information on setting up NTP, see .
+
+
+ To set up Geo-replication for SSH
+ Password-less login has to be set up between the host machine (where the geo-replication start command will be issued) and the remote machine (where the slave process should be launched through SSH).
+
+
+ On the node where geo-replication sessions are to be set up, run the following command:
+ # ssh-keygen -f /var/lib/glusterd/geo-replication/secret.pem
+
+ Press Enter twice to skip setting a passphrase.
+
+
+
+ Run the following command on the master for each of the slave hosts:
+ # ssh-copy-id -i /var/lib/glusterd/geo-replication/secret.pem.pub user@slavehost
+
+
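+ For illustration only, on RPM-based distributions the NTP service mentioned under Time Synchronization can typically be enabled on each brick server as follows (package and service names vary by distribution):
+ # yum install ntp
+ # chkconfig ntpd on
+ # service ntpd start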
+
+ Setting Up the Environment for a Secure Geo-replication Slave
+ You can configure a secure slave using SSH so that the master is granted only restricted access. With GlusterFS, you do not need to specify configuration parameters regarding the slave in the master-side configuration. For example, the master does not require the location of the rsync program on the slave, but the slave must ensure that rsync is in the PATH of the user to which the master connects using SSH. The only information that the master and slave have to negotiate are the slave-side user account, the slave resources that the master uses, and the master's public key. Secure access to the slave can be established using the following options:
+
+
+ Restricting Remote Command Execution
+
+
+ Using Mountbroker for Slaves
+
+
+ Using IP based Access Control
+
+
+ Backward Compatibility
+ Your existing Geo-replication environment will work with GlusterFS, except for the following:
+
+
+ The process of secure reconfiguration affects only the glusterfs instance on the slave. The changes are transparent to the master, with the exception that you may have to change the SSH target to an unprivileged account on the slave.
+
+
+ The following are some exceptions where this might not work:
+
+
+ Geo-replication URLs which specify the slave resource when configuring the master include the following special characters: space, *, ?, [;
+
+
+ The slave must have a running instance of glusterd, even if there is no gluster volume among the mounted slave resources (that is, file tree slaves are used exclusively).
+
+
+
+
+ Restricting Remote Command Execution
+ If you restrict remote command execution, then the Slave audits commands coming from the master, and only the commands related to the given geo-replication session are allowed. The Slave also provides access only to the files within the slave resource which can be read or manipulated by the Master.
+ To restrict remote command execution:
+
+
+ Identify the location of the gsyncd helper utility on the Slave. This utility is installed in PREFIX/libexec/glusterfs/gsyncd, where PREFIX is a compile-time parameter of glusterfs (passed as --prefix=PREFIX to the configure script); common values are /usr, /usr/local, and /opt/glusterfs/glusterfs_version.
+
+
+ Ensure that commands invoked from the master on the slave are passed through the slave's gsyncd utility.
+ You can use either of the following two options:
+
+
+ Set gsyncd, with an absolute path, as the shell for the account to which the master connects through SSH. If you need to use a privileged account, then set it up by creating a new user with UID 0.
+
+
+ Set up key authentication with command enforcement to gsyncd. You must prefix the copy of the master's public key in the Slave account's authorized_keys file with the following command:
+ command=<path to gsyncd>.
+ For example, command="PREFIX/glusterfs/gsyncd" ssh-rsa AAAAB3Nza....
+
+
+
+
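+ For illustration, assuming glusterfs was configured with --prefix=/usr (so that gsyncd is at /usr/libexec/glusterfs/gsyncd), the resulting authorized_keys entry on the slave could look like the following; the key material and trailing comment are placeholders:
+ command="/usr/libexec/glusterfs/gsyncd" ssh-rsa AAAAB3Nza... root@master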
+
+ Using Mountbroker for Slaves
+ mountbroker is a new service of glusterd. This service allows an unprivileged process to own a GlusterFS mount by registering a label (and DSL (Domain-Specific Language) options) with glusterd through a glusterd volfile. Using the CLI, you can send a mount request to glusterd to receive an alias (symlink) of the mounted volume.
+ Upon a request from the agent, the unprivileged slave agents use the mountbroker service of glusterd to set up an auxiliary gluster mount for the agent in a special environment, which ensures that the agent is only allowed access with special parameters that provide administrative-level access to the particular volume.
+ To set up an auxiliary gluster mount for the agent (a consolidated command sketch follows this procedure):
+
+
+ Create a new group. For example, geogroup.
+
+
+ Create an unprivileged account. For example, geoaccount. Make it a member of geogroup.
+
+
+ Create a new directory owned by root and with permissions 0711. For example, create a mountbroker-root directory /var/mountbroker-root.
+
+
+ Add the following options to the glusterd volfile, assuming the name of the slave gluster volume is slavevol:
+ option mountbroker-root /var/mountbroker-root
+ option mountbroker-geo-replication.geoaccount slavevol
+ option geo-replication-log-group geogroup
+ If you are unable to locate the glusterd volfile at /etc/glusterfs/glusterd.vol, you can create a volfile containing both the default configuration and the above options and place it at /etc/glusterfs/.
+ A sample glusterd volfile along with default options:
+ volume management
+ type mgmt/glusterd
+ option working-directory /var/lib/glusterd
+ option transport-type socket,rdma
+ option transport.socket.keepalive-time 10
+ option transport.socket.keepalive-interval 2
+ option transport.socket.read-fail-log off
+
+ option mountbroker-root /var/mountbroker-root
+ option mountbroker-geo-replication.geoaccount slavevol
+ option geo-replication-log-group geogroup
+end-volume
+ If you host multiple slave volumes on the Slave, you can repeat step 2. for each of them and add the following options to the volfile:
+ option mountbroker-geo-replication.geoaccount2 slavevol2
+option mountbroker-geo-replication.geoaccount3 slavevol3
+
+
+ Set up the Master to access the Slave as geoaccount@Slave.
+ You can add multiple slave volumes within the same account (geoaccount) by providing a comma-separated list (without spaces) as the argument of mountbroker-geo-replication.geoaccount. You can also have multiple options of the form mountbroker-geo-replication.*. It is recommended to use one service account per Master machine. For example, if there are multiple slave volumes on the Slave for the master machines Master1, Master2, and Master3, then create a dedicated service user on the Slave for each of them by repeating step 2. (for example, geoaccount1, geoaccount2, and geoaccount3), and then add the following corresponding options to the volfile:
+
+ option mountbroker-geo-replication.geoaccount1 slavevol11,slavevol12,slavevol13
+ option mountbroker-geo-replication.geoaccount2 slavevol21,slavevol22
+ option mountbroker-geo-replication.geoaccount3 slavevol31
+
+Now set up Master1 to ssh to geoaccount1@Slave, etc.
+
+ You must restart glusterd after making changes in the configuration for the updates to take effect.
+
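+ A minimal command sketch of the group, account, and mountbroker-root directory creation described in the procedure above, assuming the example names geogroup, geoaccount, and /var/mountbroker-root:
+ # groupadd geogroup
+ # useradd -G geogroup geoaccount
+ # mkdir /var/mountbroker-root
+ # chown root:root /var/mountbroker-root
+ # chmod 0711 /var/mountbroker-root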
+
+ Using IP based Access Control
+ You can use the IP-based access control method to provide access control for slave resources based on IP address. You can use this method for both gluster slaves and file tree slaves, but in this section, we focus on file tree slaves using this method.
+ To set access control based on IP address for file tree slaves:
+
+
+ Set a general restriction for accessibility of file tree resources:
+
+ # gluster volume geo-replication '/*' config allow-network ::1,127.0.0.1
+ This will refuse all requests for spawning slave agents except for requests initiated locally.
+
+
+ If you want to lease the file tree at /data/slave-tree to the Master, enter the following command:
+ # gluster volume geo-replication /data/slave-tree config allow-network MasterIP
+ MasterIP is the IP address of the Master. The slave agent spawn request from the master will be accepted if it is executed at /data/slave-tree.
+
+
+ If the Master-side network configuration does not enable the Slave to recognize the exact IP address of the Master, you can use CIDR notation to specify a subnet instead of a single IP address as MasterIP, or even a comma-separated list of CIDR subnets.
+ If you want to extend IP based access control to gluster slaves, use the following command:
+ # gluster volume geo-replication '*' config allow-network ::1,127.0.0.1
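+ For example, to allow slave agent spawn requests from an entire subnet rather than a single address (the CIDR values below are placeholders, not recommendations):
+ # gluster volume geo-replication /data/slave-tree config allow-network 192.168.1.0/24,10.1.2.0/24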
+
+
+
+ Starting Geo-replication + This section describes how to configure and start Gluster Geo-replication in your storage environment, and verify that it is functioning correctly. + + + + + + + + + + + + + + + + + +
+ Starting Geo-replication + To start Gluster Geo-replication + + + Start geo-replication between the hosts using the following command: + + # gluster volume geo-replication MASTER SLAVE start + + For example: + + # gluster volume geo-replication Volume1 example.com:/data/remote_dir start +Starting geo-replication session between Volume1 +example.com:/data/remote_dir has been successful + + You may need to configure the service before starting Gluster Geo-replication. For more information, see . + + + +
+
+ Verifying Successful Deployment
+ You can use the gluster command to verify the status of Gluster Geo-replication in your environment.
+ To verify the status of Gluster Geo-replication
+
+
+ Verify the status by issuing the following command on the host:
+ # gluster volume geo-replication MASTER SLAVE status
+
+ For example:
+
+ # gluster volume geo-replication Volume1 example.com:/data/remote_dir status
+
+MASTER SLAVE STATUS
+______ ______________________________ ____________
+Volume1 root@example.com:/data/remote_dir Starting....
+
+
+
+ Displaying Geo-replication Status Information
+ You can display status information about a specific geo-replication master session, a particular master-slave session, or all geo-replication sessions, as needed.
+ To display geo-replication status information
+
+
+ Display information of all geo-replication sessions using the following command:
+ # gluster volume geo-replication status
+
+MASTER SLAVE STATUS
+______ ______________________________ ____________
+Volume1 root@example.com:/data/remote_dir Starting....
+
+
+
+
+ Display information of a particular master-slave session using the following command:
+
+ # gluster volume geo-replication MASTER SLAVE status
+
+ For example, to display information of Volume1 and example.com:/data/remote_dir
+
+ # gluster volume geo-replication Volume1 example.com:/data/remote_dir status
+
+ The status of the geo-replication between Volume1 and example.com:/data/remote_dir is displayed.
+
+
+ Display information of all geo-replication sessions belonging to a master
+ # gluster volume geo-replication MASTER status
+
+ For example, to display information of Volume1
+ # gluster volume geo-replication Volume1 status
+
+MASTER SLAVE STATUS
+______ ______________________________ ____________
+Volume1 ssh://example.com:gluster://127.0.0.1:remove_volume OK
+
+Volume1 ssh://example.com:file:///data/remote_dir OK
+ The status of a session could be one of the following four:
+
+
+ Starting: This is the initial phase of the Geo-replication session; it remains in this state for a minute, to make sure no abnormalities are present.
+
+
+ OK: The geo-replication session is in a stable state.
+
+
+ Faulty: The geo-replication session has witnessed some abnormality and the situation has to be investigated further. For further information, see section.
+
+
+ Corrupt: The monitor thread which is monitoring the geo-replication session has died. This situation should not occur normally; if it persists, contact Red Hat Support.
+
+
+
+ Configuring Geo-replication
+ To configure Gluster Geo-replication
+
+
+ Use the following command at the Gluster command line:
+
+ # gluster volume geo-replication MASTER SLAVE config [options]
+
+ For more information about the options, see .
+
+ For example:
+
+ To view the list of all option/value pairs, use the following command:
+
+ # gluster volume geo-replication Volume1 example.com:/data/remote_dir config
+
+
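+ To change a single option, append the option name and value to the same command; an illustrative sketch, assuming log-level is among the configuration options supported by your GlusterFS version:
+ # gluster volume geo-replication Volume1 example.com:/data/remote_dir config log-level DEBUG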
+
+ Stopping Geo-replication + You can use the gluster command to stop Gluster Geo-replication (syncing of data from Master to Slave) in your environment. + To stop Gluster Geo-replication + + + Stop geo-replication between the hosts using the following command: + + # gluster volume geo-replication MASTER SLAVE stop + For example: + + # gluster volume geo-replication Volume1 example.com:/data/remote_dir stop +Stopping geo-replication session between Volume1 and +example.com:/data/remote_dir has been successful + See for more information about the gluster command. + + + +
+
+
+ Restoring Data from the Slave
+ You can restore data from the slave to the master volume whenever the master volume becomes faulty for reasons such as hardware failure.
+
+ The example in this section assumes that you are using the Master Volume (Volume1) with the following configuration:
+
+ machine1# gluster volume info
+Type: Distribute
+Status: Started
+Number of Bricks: 2
+Transport-type: tcp
+Bricks:
+Brick1: machine1:/export/dir16
+Brick2: machine2:/export/dir16
+Options Reconfigured:
+geo-replication.indexing: on
+ The data is synced from the master volume (Volume1) to the slave directory (example.com:/data/remote_dir). To view the status of this geo-replication session, run the following command on the Master:
+ # gluster volume geo-replication Volume1 root@example.com:/data/remote_dir status
+
+MASTER SLAVE STATUS
+______ ______________________________ ____________
+Volume1 root@example.com:/data/remote_dir OK
+ Before Failure
+
+ Assume that the Master volume had 100 files and was mounted at /mnt/gluster on one of the client machines (client). Run the following command on the client machine to view the list of files:
+
+ client# ls /mnt/gluster | wc -l
+100
+ The slave directory (example.com) will have the same data as the master volume, which can be verified by running the following command on the slave:
+
+ example.com# ls /data/remote_dir/ | wc -l
+100
+ After Failure
+
+ If one of the bricks (machine2) fails, then the status of the Geo-replication session changes from "OK" to "Faulty". To view the status of this geo-replication session, run the following command on the Master:
+
+ # gluster volume geo-replication Volume1 root@example.com:/data/remote_dir status
+
+MASTER SLAVE STATUS
+______ ______________________________ ____________
+Volume1 root@example.com:/data/remote_dir Faulty
+ Machine2 has failed, and you can now see a discrepancy in the number of files between the master and the slave. A few files will be missing from the master volume, but they will still be available on the slave, as shown below.
+
+ Run the following command on the client:
+
+ client # ls /mnt/gluster | wc -l
+52
+ Run the following command on the slave (example.com):
+
+ example.com# ls /data/remote_dir/ | wc -l
+100
+ To restore data from the slave machine
+
+
+ Stop all of the Master's geo-replication sessions using the following command:
+
+ # gluster volume geo-replication MASTER SLAVE stop
+
+ For example:
+
+ machine1# gluster volume geo-replication Volume1
+example.com:/data/remote_dir stop
+
+Stopping geo-replication session between Volume1 &
+example.com:/data/remote_dir has been successful
+
+ Repeat the # gluster volume geo-replication MASTER SLAVE stop command on all active geo-replication sessions of the master volume.
+
+
+
+ Replace the faulty brick in the master by using the following command:
+
+ # gluster volume replace-brick VOLNAME BRICK NEW-BRICK start
+
+ For example:
+
+ machine1# gluster volume replace-brick Volume1 machine2:/export/dir16 machine3:/export/dir16 start
+Replace-brick started successfully
+
+
+ Commit the migration of data using the following command:
+
+ # gluster volume replace-brick VOLNAME BRICK NEW-BRICK commit force
+ For example:
+
+ machine1# gluster volume replace-brick Volume1 machine2:/export/dir16 machine3:/export/dir16 commit force
+Replace-brick commit successful
+
+
+ Verify the migration of the brick by viewing the volume info using the following command:
+
+ # gluster volume info VOLNAME
+ For example:
+
+ machine1# gluster volume info
+Volume Name: Volume1
+Type: Distribute
+Status: Started
+Number of Bricks: 2
+Transport-type: tcp
+Bricks:
+Brick1: machine1:/export/dir16
+Brick2: machine3:/export/dir16
+Options Reconfigured:
+geo-replication.indexing: on
+
+
+ Run the rsync command manually to sync data from the slave to the master volume's client (mount point).
+
+ For example:
+
+ example.com# rsync -PavhS --xattrs --ignore-existing /data/remote_dir/ client:/mnt/gluster
+ Verify that the data is synced by using the following command:
+
+ On the master volume, run the following command:
+
+ client # ls | wc -l
+100
+ On the Slave, run the following command:
+
+ example.com# ls /data/remote_dir/ | wc -l
+100
+ Now the Master volume and the Slave directory are in sync.
+
+
+
+ Restart the geo-replication session from master to slave using the following command:
+
+ # gluster volume geo-replication MASTER SLAVE start
+ For example:
+
+ machine1# gluster volume geo-replication Volume1
+example.com:/data/remote_dir start
+Starting geo-replication session between Volume1 &
+example.com:/data/remote_dir has been successful
+
+
+
+ Best Practices
+ Manually Setting Time
+ If you have to change the time on your bricks manually, then you must set uniform time on all bricks. This avoids the out-of-time sync issue described in . Setting time backward corrupts the geo-replication index, so the recommended way to set the time manually is:
+
+
+
+ Stop geo-replication between the master and slave using the following command:
+
+ # gluster volume geo-replication MASTER SLAVE stop
+
+
+
+ Stop the geo-replication indexing using the following command:
+
+ # gluster volume set MASTER geo-replication.indexing off
+
+
+ Set uniform time on all bricks (see the illustrative sketch after this section).
+
+
+ Restart your geo-replication sessions by using the following command:
+
+ # gluster volume geo-replication MASTER SLAVE start
+
+
+ Running Geo-replication commands in one system
+
+ It is advisable to run the geo-replication commands on one of the bricks in the trusted storage pool. This is because the log files for the geo-replication session are stored on the server where the geo-replication start command is initiated. Hence, it is easier to locate the log files when required.
+
+ Isolation
+ The geo-replication slave operation is not sandboxed as of now and runs as a privileged service. So, for security reasons, it is advised that the administrator create a sandbox environment (a dedicated machine, a dedicated virtual machine, or a chroot/container type solution) to run the geo-replication slave in. An enhancement in this regard will be available in a follow-up minor release.
+
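+ As an illustrative sketch only of the Manually Setting Time step above (the host names are placeholders, and ntpdate with a reachable NTP server is assumed), uniform time can be set on all bricks from a single node:
+ # for host in server1 server2 server3; do ssh $host ntpdate pool.ntp.org; done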
+
diff --git a/doc/legacy/docbook/admin_managing_volumes.xml b/doc/legacy/docbook/admin_managing_volumes.xml new file mode 100644 index 000000000..70c1fe0b9 --- /dev/null +++ b/doc/legacy/docbook/admin_managing_volumes.xml @@ -0,0 +1,741 @@ + + +%BOOK_ENTITIES; +]> + + Managing GlusterFS Volumes + This section describes how to perform common GlusterFS management operations, including the following: + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ Tuning Volume Options + You can tune volume options, as needed, while the cluster is online and available. + + Red Hat recommends you to set server.allow-insecure option to ON if there are too many bricks in each volume or if there are too many services which have already utilized all the privileged ports in the system. Turning this option ON allows ports to accept/reject messages from insecure ports. So, use this option only if your deployment requires it. + + To tune volume options + + + Tune volume options using the following command: + # gluster volume set VOLNAME OPTION PARAMETER + For example, to specify the performance cache size for test-volume: + # gluster volume set test-volume performance.cache-size 256MB +Set volume successful + The following table lists the Volume options along with its description and default value: + + The default options given here are subject to modification at any given time and may not be the same for all versions. + + + + + + + + + + Option + Description + Default Value + Available Options + + + + + auth.allow + IP addresses of the clients which should be allowed to access the volume. + * (allow all) + Valid IP address which includes wild card patterns including *, such as 192.168.1.* + + + auth.reject + IP addresses of the clients which should be denied to access the volume. + NONE (reject none) + Valid IP address which includes wild card patterns including *, such as 192.168.2.* + + + client.grace-timeout + Specifies the duration for the lock state to be maintained on the client after a network disconnection. + 10 + 10 - 1800 secs + + + cluster.self-heal-window-size + Specifies the maximum number of blocks per file on which self-heal would happen simultaneously. + 16 + 0 - 1025 blocks + + + cluster.data-self-heal-algorithm + Specifies the type of self-heal. If you set the option as "full", the entire file is copied from source to destinations. If the option is set to "diff" the file blocks that are not in sync are copied to destinations. Reset uses a heuristic model. If the file does not exist on one of the subvolumes, or a zero-byte file exists (created by entry self-heal) the entire content has to be copied anyway, so there is no benefit from using the "diff" algorithm. If the file size is about the same as page size, the entire file can be read and written with a few operations, which will be faster than "diff" which has to read checksums and then read and write. + reset + full | diff | reset + + + cluster.min-free-disk + Specifies the percentage of disk space that must be kept free. Might be useful for non-uniform bricks. + 10% + Percentage of required minimum free disk space + + + cluster.stripe-block-size + Specifies the size of the stripe unit that will be read from or written to. + 128 KB (for all files) + size in bytes + + + cluster.self-heal-daemon + Allows you to turn-off proactive self-heal on replicated volumes. + on + On | Off + + + cluster.ensure-durability + This option makes sure the data/metadata is durable across abrupt shutdown of the brick. + on + On | Off + + + diagnostics.brick-log-level + Changes the log-level of the bricks. + INFO + DEBUG|WARNING|ERROR|CRITICAL|NONE|TRACE + + + diagnostics.client-log-level + Changes the log-level of the clients. + INFO + DEBUG|WARNING|ERROR|CRITICAL|NONE|TRACE + + + diagnostics.latency-measurement + Statistics related to the latency of each operation would be tracked. + off + On | Off + + + diagnostics.dump-fd-stats + Statistics related to file-operations would be tracked. 
+ off + On | Off + + + feature.read-only + Enables you to mount the entire volume as read-only for all the clients (including NFS clients) accessing it. + off + On | Off + + + features.lock-heal + Enables self-healing of locks when the network disconnects. + on + On | Off + + + features.quota-timeout + For performance reasons, quota caches the directory sizes on client. You can set timeout indicating the maximum duration of directory sizes in cache, from the time they are populated, during which they are considered valid. + 0 + 0 - 3600 secs + + + geo-replication.indexing + Use this option to automatically sync the changes in the filesystem from Master to Slave. + off + On | Off + + + network.frame-timeout + The time frame after which the operation has to be declared as dead, if the server does not respond for a particular operation. + 1800 (30 mins) + 1800 secs + + + network.ping-timeout + The time duration for which the client waits to check if the server is responsive. When a ping timeout happens, there is a network disconnect between the client and server. All resources held by server on behalf of the client get cleaned up. When a reconnection happens, all resources will need to be re-acquired before the client can resume its operations on the server. Additionally, the locks will be acquired and the lock tables updated. This reconnect is a very expensive operation and should be avoided. + + 42 Secs + 42 Secs + + + nfs.enable-ino32 + For 32-bit nfs clients or applications that do not support 64-bit inode numbers or large files, use this option from the CLI to make Gluster NFS return 32-bit inode numbers instead of 64-bit inode numbers. Applications that will benefit are those that were either: * Built 32-bit and run on 32-bit machines.* Built 32-bit on 64-bit systems.* Built 64-bit but use a library built 32-bit, especially relevant for python and perl scripts.Either of the conditions above can lead to application on Linux NFS clients failing with "Invalid argument" or "Value too large for defined data type" errors. + off + On | Off + + + nfs.volume-access + Set the access type for the specified sub-volume. + read-write + read-write|read-only + + + nfs.trusted-write + If there is an UNSTABLE write from the client, STABLE flag will be returned to force the client to not send a COMMIT request. In some environments, combined with a replicated GlusterFS setup, this option can improve write performance. This flag allows users to trust Gluster replication logic to sync data to the disks and recover when required. COMMIT requests if received will be handled in a default manner by fsyncing. STABLE writes are still handled in a sync manner. + off + On | Off + + + nfs.trusted-sync + All writes and COMMIT requests are treated as async. This implies that no write requests are guaranteed to be on server disks when the write reply is received at the NFS client. Trusted sync includes trusted-write behavior. + off + On | Off + + + nfs.export-dir + By default, all sub-volumes of NFS are exported as individual exports. Now, this option allows you to export only the specified subdirectory or subdirectories in the volume. This option can also be used in conjunction with nfs3.export-volumes option to restrict exports only to the subdirectories specified through this option. You must provide an absolute path. + Enabled for all sub directories. 
+ Enable | Disable + + + nfs.export-volumes + Enable/Disable exporting entire volumes, instead if used in conjunction with nfs3.export-dir, can allow setting up only subdirectories as exports. + on + On | Off + + + nfs.rpc-auth-unix + Enable/Disable the AUTH_UNIX authentication type. This option is enabled by default for better interoperability. However, you can disable it if required. + on + On | Off + + + nfs.rpc-auth-null + Enable/Disable the AUTH_NULL authentication type. It is not recommended to change the default value for this option. + on + On | Off + + + nfs.rpc-auth-allow<IP- Addresses> + Allow a comma separated list of addresses and/or hostnames to connect to the server. By default, all clients are disallowed. This allows you to define a general rule for all exported volumes. + Reject All + IP address or Host name + + + nfs.rpc-auth-reject IP- Addresses + Reject a comma separated list of addresses and/or hostnames from connecting to the server. By default, all connections are disallowed. This allows you to define a general rule for all exported volumes. + Reject All + IP address or Host name + + + nfs.ports-insecure + Allow client connections from unprivileged ports. By default only privileged ports are allowed. This is a global setting in case insecure ports are to be enabled for all exports using a single option. + off + On | Off + + + nfs.addr-namelookup + Turn-off name lookup for incoming client connections using this option. In some setups, the name server can take too long to reply to DNS queries resulting in timeouts of mount requests. Use this option to turn off name lookups during address authentication. Note, turning this off will prevent you from using hostnames in rpc-auth.addr.* filters. + on + On | Off + + + nfs.register-with- portmap + For systems that need to run multiple NFS servers, you need to prevent more than one from registering with portmap service. Use this option to turn off portmap registration for Gluster NFS. + on + On | Off + + + nfs.port <PORT- NUMBER> + Use this option on systems that need Gluster NFS to be associated with a non-default port number. + 38465- 38467 + + + + nfs.disable + Turn-off volume being exported by NFS + off + On | Off + + + performance.write-behind-window-size + Size of the per-file write-behind buffer. + 1 MB + Write-behind cache size + + + performance.io-thread-count + The number of threads in IO threads translator. + 16 + 0 - 65 + + + performance.flush-behind + If this option is set ON, instructs write-behind translator to perform flush in background, by returning success (or any errors, if any of previous writes were failed) to application even before flush is sent to backend filesystem. + On + On | Off + + + performance.cache-max-file-size + Sets the maximum file size cached by the io-cache translator. Can use the normal size descriptors of KB, MB, GB,TB or PB (for example, 6GB). Maximum size uint64. + 2 ^ 64 -1 bytes + size in bytes + + + performance.cache-min-file-size + Sets the minimum file size cached by the io-cache translator. Values same as "max" above. + 0B + size in bytes + + + performance.cache-refresh-timeout + The cached data for a file will be retained till 'cache-refresh-timeout' seconds, after which data re-validation is performed. + 1 sec + 0 - 61 + + + performance.cache-size + Size of the read cache. + 32 MB + size in bytes + + + server.allow-insecure + Allow client connections from unprivileged ports. By default only privileged ports are allowed. 
This is a global setting in case insecure ports are to be enabled for all exports using a single option. + on + On | Off + + + server.grace-timeout + Specifies the duration for the lock state to be maintained on the server after a network disconnection. + 10 + 10 - 1800 secs + + + server.statedump-path + Location of the state dump file. + /tmp directory of the brick + New directory path + + + + + You can view the changed volume options using the # gluster volume info VOLNAME command. For more information, see . + + +
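+ For example, to restrict client access to a volume to a subnet using the auth.allow option listed above, and then confirm the change (the address pattern is illustrative):
+ # gluster volume set test-volume auth.allow 192.168.1.*
+ # gluster volume info test-volume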
+
+ Expanding Volumes + You can expand volumes, as needed, while the cluster is online and available. For example, you might want to add a brick to a distributed volume, thereby increasing the distribution and adding to the capacity of the GlusterFS volume. + Similarly, you might want to add a group of bricks to a distributed replicated volume, increasing the capacity of the GlusterFS volume. + + When expanding distributed replicated and distributed striped volumes, you need to add a number of bricks that is a multiple of the replica or stripe count. For example, to expand a distributed replicated volume with a replica count of 2, you need to add bricks in multiples of 2 (such as 4, 6, 8, etc.). + + To expand a volume + + + On the first server in the cluster, probe the server to which you want to add the new brick using the following command: + # gluster peer probe HOSTNAME + For example: + # gluster peer probe server4 +Probe successful + + + Add the brick using the following command: + # gluster volume add-brick VOLNAME NEW-BRICK + For example: + # gluster volume add-brick test-volume server4:/exp4 +Add Brick successful + + + Check the volume information using the following command: + # gluster volume info + The command displays information similar to the following: + Volume Name: test-volume +Type: Distribute +Status: Started +Number of Bricks: 4 +Bricks: +Brick1: server1:/exp1 +Brick2: server2:/exp2 +Brick3: server3:/exp3 +Brick4: server4:/exp4 + + + Rebalance the volume to ensure that all files are distributed to the new brick. + You can use the rebalance command as described in . + + +
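+ For instance, when expanding a distributed replicated volume with a replica count of 2, the bricks are added in pairs within a single command; an illustrative sketch with placeholder volume and host names:
+ # gluster volume add-brick rep-volume server5:/exp5 server6:/exp6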
+
+ Shrinking Volumes
+ You can shrink volumes, as needed, while the cluster is online and available. For example, you might need to remove a brick that has become inaccessible in a distributed volume due to hardware or network failure.
+
+ Data residing on the brick that you are removing will no longer be accessible at the Gluster mount point. Note however that only the configuration information is removed - you can continue to access the data directly from the brick, as necessary.
+
+ When shrinking distributed replicated and distributed striped volumes, you need to remove a number of bricks that is a multiple of the replica or stripe count. For example, to shrink a distributed striped volume with a stripe count of 2, you need to remove bricks in multiples of 2 (such as 4, 6, 8, etc.). In addition, the bricks you are trying to remove must be from the same sub-volume (the same replica or stripe set).
+ To shrink a volume
+
+
+ Remove the brick using the following command:
+ # gluster volume remove-brick VOLNAME BRICK start
+ For example, to remove server2:/exp2:
+ # gluster volume remove-brick test-volume server2:/exp2
+
+Removing brick(s) can result in data loss. Do you want to Continue? (y/n)
+
+
+ Enter "y" to confirm the operation. The command displays the following message indicating that the remove brick operation is successfully started:
+ Remove Brick successful
+
+
+ (Optional) View the status of the remove brick operation using the following command:
+ # gluster volume remove-brick VOLNAME BRICK status
+ For example, to view the status of the remove brick operation on the server2:/exp2 brick:
+ # gluster volume remove-brick test-volume server2:/exp2 status
+ Node Rebalanced-files size scanned status
+ --------- ---------------- ---- ------- -----------
+617c923e-6450-4065-8e33-865e28d9428f 34 340 162 in progress
+
+
+ Check the volume information using the following command:
+ # gluster volume info
+ The command displays information similar to the following:
+ # gluster volume info
+Volume Name: test-volume
+Type: Distribute
+Status: Started
+Number of Bricks: 3
+Bricks:
+Brick1: server1:/exp1
+Brick3: server3:/exp3
+Brick4: server4:/exp4
+
+
+ Rebalance the volume to ensure that all remaining files are distributed evenly among the remaining bricks.
+ You can use the rebalance command as described in .
+
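+ Depending on the GlusterFS version, a remove-brick operation started as above may also need to be committed once its status shows completed; an illustrative sketch of that final step:
+ # gluster volume remove-brick test-volume server2:/exp2 commit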
+
+ Migrating Volumes
+ You can migrate the data from one brick to another, as needed, while the cluster is online and available.
+ To migrate a volume
+
+
+ Make sure the new brick, server5 in this example, is successfully added to the cluster.
+ For more information, see .
+
+
+ Migrate the data from one brick to another using the following command:
+ # gluster volume replace-brick VOLNAME BRICK NEW-BRICK start
+ For example, to migrate the data in server3:/exp3 to server5:/exp5 in test-volume:
+ # gluster volume replace-brick test-volume server3:/exp3 server5:/exp5 start
+Replace brick start operation successful
+
+ You need to have the FUSE package installed on the server on which you are running the replace-brick command for the command to work.
+
+
+
+ To pause the migration operation, if needed, use the following command:
+ # gluster volume replace-brick VOLNAME BRICK NEW-BRICK pause
+ For example, to pause the data migration from server3:/exp3 to server5:/exp5 in test-volume:
+ # gluster volume replace-brick test-volume server3:/exp3 server5:/exp5 pause
+Replace brick pause operation successful
+
+
+ To abort the migration operation, if needed, use the following command:
+ # gluster volume replace-brick VOLNAME BRICK NEW-BRICK abort
+ For example, to abort the data migration from server3:/exp3 to server5:/exp5 in test-volume:
+ # gluster volume replace-brick test-volume server3:/exp3 server5:/exp5 abort
+Replace brick abort operation successful
+
+
+ Check the status of the migration operation using the following command:
+ # gluster volume replace-brick VOLNAME BRICK NEW-BRICK status
+ For example, to check the data migration status from server3:/exp3 to server5:/exp5 in test-volume:
+ # gluster volume replace-brick test-volume server3:/exp3 server5:/exp5 status
+Current File = /usr/src/linux-headers-2.6.31-14/block/Makefile
+Number of files migrated = 10567
+Migration complete
+ The status command shows the current file being migrated along with the current total number of files migrated. After completion of migration, it displays Migration complete.
+
+
+ Commit the migration of data from one brick to another using the following command:
+ # gluster volume replace-brick VOLNAME BRICK NEW-BRICK commit
+ For example, to commit the data migration from server3:/exp3 to server5:/exp5 in test-volume:
+ # gluster volume replace-brick test-volume server3:/exp3 server5:/exp5 commit
+replace-brick commit successful
+
+
+ Verify the migration of the brick by viewing the volume info using the following command:
+ # gluster volume info VOLNAME
+ For example, to check the volume information of the new brick server5:/exp5 in test-volume:
+ # gluster volume info test-volume
+Volume Name: testvolume
+Type: Replicate
+Status: Started
+Number of Bricks: 4
+Transport-type: tcp
+Bricks:
+Brick1: server1:/exp1
+Brick2: server2:/exp2
+Brick3: server4:/exp4
+Brick4: server5:/exp5
+
+The new volume details are displayed.
+ In the above example, there were previously bricks 1, 2, 3, and 4, and brick 3 has now been replaced by brick 5.
+
+
+ Rebalancing Volumes + After expanding or shrinking a volume (using the add-brick and remove-brick commands respectively), you need to rebalance the data among the servers. New directories created after expanding or shrinking of the volume will be evenly distributed automatically. For all the existing directories, the distribution can be fixed by rebalancing the layout and/or data. + This section describes how to rebalance GlusterFS volumes in your storage environment, using the following common scenarios: + + + Fix Layout - Fixes the layout changes so that the files can actually go to newly added nodes. For more information, see . + + + Fix Layout and Migrate Data - Rebalances volume by fixing the layout changes and migrating the existing data. For more information, see . + + +
+ Rebalancing Volume to Fix Layout Changes
+ Fixing the layout is necessary because the layout structure is static for a given directory. In a scenario where new bricks have been added to the existing volume, newly created files in existing directories will still be distributed only among the old bricks. The # gluster volume rebalance VOLNAME fix-layout start command will fix the layout information so that the files can also go to newly added nodes. When this command is issued, all the file stat information which is already cached will get revalidated.
+ A fix-layout rebalance will only fix the layout changes and does not migrate data. If you want to migrate the existing data, use the # gluster volume rebalance VOLNAME start command to rebalance data among the servers.
+ To rebalance a volume to fix layout changes
+
+
+ Start the rebalance operation on any one of the servers using the following command:
+ # gluster volume rebalance VOLNAME fix-layout start
+ For example:
+ # gluster volume rebalance test-volume fix-layout start
+Starting rebalance on volume test-volume has been successful
+
+
+ Rebalancing Volume to Fix Layout and Migrate Data
+ After expanding or shrinking a volume (using the add-brick and remove-brick commands respectively), you need to rebalance the data among the servers.
+ To rebalance a volume to fix layout and migrate the existing data
+
+
+ Start the rebalance operation on any one of the servers using the following command:
+ # gluster volume rebalance VOLNAME start
+ For example:
+ # gluster volume rebalance test-volume start
+Starting rebalancing on volume test-volume has been successful
+
+
+ Start the migration operation forcefully on any one of the servers using the following command:
+ # gluster volume rebalance VOLNAME start force
+ For example:
+ # gluster volume rebalance test-volume start force
+Starting rebalancing on volume test-volume has been successful
+
+
+ Displaying Status of Rebalance Operation + You can display the status information about rebalance volume operation, as needed. + To view status of rebalance volume + + + Check the status of the rebalance operation, using the following command: + # gluster volume rebalance VOLNAME status + For example: + # gluster volume rebalance test-volume status + Node Rebalanced-files size scanned status + --------- ---------------- ---- ------- ----------- +617c923e-6450-4065-8e33-865e28d9428f 416 1463 312 in progress + The time to complete the rebalance operation depends on the number of files on the volume along with the corresponding file sizes. Continue checking the rebalance status, verifying that the number of files rebalanced or total files scanned keeps increasing. + For example, running the status command again might display a result similar to the following: + # gluster volume rebalance test-volume status + Node Rebalanced-files size scanned status + --------- ---------------- ---- ------- ----------- +617c923e-6450-4065-8e33-865e28d9428f 498 1783 378 in progress + The rebalance status displays the following when the rebalance is complete: + # gluster volume rebalance test-volume status + Node Rebalanced-files size scanned status + --------- ---------------- ---- ------- ----------- +617c923e-6450-4065-8e33-865e28d9428f 502 1873 334 completed + + +
+
+ Stopping Rebalance Operation + You can stop the rebalance operation, as needed. + To stop rebalance + + + Stop the rebalance operation using the following command: + # gluster volume rebalance VOLNAME stop + For example: + # gluster volume rebalance test-volume stop + Node Rebalanced-files size scanned status + --------- ---------------- ---- ------- ----------- +617c923e-6450-4065-8e33-865e28d9428f 59 590 244 stopped +Stopped rebalance process on volume test-volume + + +
+
+
+ Stopping Volumes + To stop a volume + + + Stop the volume using the following command: + + + # gluster volume stop VOLNAME + For example, to stop test-volume: + # gluster volume stop test-volume +Stopping volume will make its data inaccessible. Do you want to continue? (y/n) + + + + Enter y to confirm the operation. The output of the command displays the following: + + + Stopping volume test-volume has been successful + + +
+
+ Deleting Volumes + To delete a volume + + + Delete the volume using the following command: + # gluster volume delete VOLNAME + For example, to delete test-volume: + # gluster volume delete test-volume +Deleting volume will erase all information about the volume. Do you want to continue? (y/n) + + + Enter y to confirm the operation. The command displays the following: + Deleting volume test-volume has been successful + + +
+
+ Triggering Self-Heal on Replicate
+ In the replicate module, previously you had to manually trigger a self-heal when a brick went offline and came back online, to bring all the replicas in sync. Now the pro-active self-heal daemon runs in the background, diagnoses issues, and automatically initiates self-healing every 10 minutes on the files which require healing.
+ You can view the list of files that need healing, the list of files which are currently or were previously healed, the list of files which are in split-brain state, and you can manually trigger self-heal on the entire volume or only on the files which need healing.
+
+
+ Trigger self-heal only on the files which require healing:
+ # gluster volume heal VOLNAME
+ For example, to trigger self-heal on the files of test-volume which require healing:
+ # gluster volume heal test-volume
+Heal operation on volume test-volume has been successful
+
+
+ Trigger self-heal on all the files of a volume:
+ # gluster volume heal VOLNAME full
+ For example, to trigger self-heal on all the files of test-volume:
+ # gluster volume heal test-volume full
+Heal operation on volume test-volume has been successful
+
+
+ View the list of files that need healing:
+ # gluster volume heal VOLNAME info
+ For example, to view the list of files on test-volume that need healing:
+ # gluster volume heal test-volume info
+Brick server1:/gfs/test-volume_0
+Number of entries: 0
+
+Brick server2:/gfs/test-volume_1
+Number of entries: 101
+/95.txt
+/32.txt
+/66.txt
+/35.txt
+/18.txt
+/26.txt
+/47.txt
+/55.txt
+/85.txt
+...
+
+
+ View the list of files that are self-healed:
+ # gluster volume heal VOLNAME info healed
+ For example, to view the list of files on test-volume that are self-healed:
+ # gluster volume heal test-volume info healed
+Brick server1:/gfs/test-volume_0
+Number of entries: 0
+
+Brick server2:/gfs/test-volume_1
+Number of entries: 69
+/99.txt
+/93.txt
+/76.txt
+/11.txt
+/27.txt
+/64.txt
+/80.txt
+/19.txt
+/41.txt
+/29.txt
+/37.txt
+/46.txt
+...
+
+
+ View the list of files of a particular volume on which the self-heal failed:
+ # gluster volume heal VOLNAME info failed
+ For example, to view the list of files of test-volume that are not self-healed:
+ # gluster volume heal test-volume info failed
+Brick server1:/gfs/test-volume_0
+Number of entries: 0
+
+Brick server2:/gfs/test-volume_3
+Number of entries: 72
+/90.txt
+/95.txt
+/77.txt
+/71.txt
+/87.txt
+/24.txt
+...
+
+
+ View the list of files of a particular volume which are in split-brain state:
+ # gluster volume heal VOLNAME info split-brain
+ For example, to view the list of files of test-volume which are in split-brain state:
+ # gluster volume heal test-volume info split-brain
+Brick server1:/gfs/test-volume_2
+Number of entries: 12
+/83.txt
+/28.txt
+/69.txt
+...
+
+Brick server2:/gfs/test-volume_2
+Number of entries: 12
+/83.txt
+/28.txt
+/69.txt
+...
+
+
diff --git a/doc/legacy/docbook/admin_monitoring_workload.xml b/doc/legacy/docbook/admin_monitoring_workload.xml
new file mode 100644
index 000000000..e85bc51d8
--- /dev/null
+++ b/doc/legacy/docbook/admin_monitoring_workload.xml
@@ -0,0 +1,878 @@
+
+
+
+ Monitoring your GlusterFS Workload
+ You can monitor GlusterFS volumes on different parameters. Monitoring volumes helps with capacity planning and performance tuning of the GlusterFS volume. Using this information, you can identify and troubleshoot issues.
+ You can use the Volume Top and Profile commands to view the performance of each brick of a volume and identify bottlenecks and hotspots. This helps system administrators to get vital performance information whenever performance needs to be probed.
+ You can also perform a statedump of the brick processes and the NFS server process of a volume, and view volume status and volume information.
+ Running GlusterFS Volume Profile Command
+ The GlusterFS Volume Profile command provides an interface to get the per-brick I/O information for each File Operation (FOP) of a volume. The per-brick information helps in identifying bottlenecks in the storage system.
+
+ This section describes how to run the GlusterFS Volume Profile command by performing the following operations:
+
+
+
+
+
+
+
+
+
+
+
+
+ Start Profiling
+ You must start profiling to view the File Operation information for each brick.
+
+ To start profiling:
+
+
+ Start profiling using the following command:
+
+
+
+ # gluster volume profile VOLNAME start
+ For example, to start profiling on test-volume:
+
+ # gluster volume profile test-volume start
+Profiling started on test-volume
+ When profiling on the volume is started, the following additional options are displayed in the Volume Info:
+
+ diagnostics.count-fop-hits: on
+
+diagnostics.latency-measurement: on
+
+ Displaying the I/0 Information + You can view the I/O information of each brick. + + To display I/O information: + + + + Display the I/O information using the following command: + + + + # gluster volume profile VOLNAME info + + + For example, to see the I/O information on test-volume: + + + # gluster volume profile test-volume info +Brick: Test:/export/2 +Cumulative Stats: + +Block 1b+ 32b+ 64b+ +Size: + Read: 0 0 0 + Write: 908 28 8 + +Block 128b+ 256b+ 512b+ +Size: + Read: 0 6 4 + Write: 5 23 16 + +Block 1024b+ 2048b+ 4096b+ +Size: + Read: 0 52 17 + Write: 15 120 846 + +Block 8192b+ 16384b+ 32768b+ +Size: + Read: 52 8 34 + Write: 234 134 286 + +Block 65536b+ 131072b+ +Size: + Read: 118 622 + Write: 1341 594 + + +%-latency Avg- Min- Max- calls Fop + latency Latency Latency +___________________________________________________________ +4.82 1132.28 21.00 800970.00 4575 WRITE +5.70 156.47 9.00 665085.00 39163 READDIRP +11.35 315.02 9.00 1433947.00 38698 LOOKUP +11.88 1729.34 21.00 2569638.00 7382 FXATTROP +47.35 104235.02 2485.00 7789367.00 488 FSYNC + +------------------ + +------------------ + +Duration : 335 + +BytesRead : 94505058 + +BytesWritten : 195571980 +
+
+ Stop Profiling + You can stop profiling the volume, if you do not need profiling information anymore. + + To stop profiling + + + + Stop profiling using the following command: + + # gluster volume profile VOLNAME stop + + For example, to stop profiling on test-volume: + # gluster volume profile test-volume stop + Profiling stopped on test-volume + + +
+
+
+ Running GlusterFS Volume TOP Command
+ The GlusterFS Volume Top command allows you to view the glusterfs bricks’ performance metrics, such as read, write, file open calls, file read calls, file write calls, directory open calls, and directory read calls. The top command displays up to 100 results.
+
+ This section describes how to run and view the results for the following GlusterFS Top commands:
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ Viewing Open fd Count and Maximum fd Count + You can view both current open fd count (list of files that are currently the most opened and the +count) on the brick and the maximum open fd count (count of files that are the currently open and +the count of maximum number of files opened at any given point of time, since the servers are up +and running). If the brick name is not specified, then open fd metrics of all the bricks belonging to +the volume will be displayed. + + To view open fd count and maximum fd count: + + + View open fd count and maximum fd count using the following command: + # gluster volume top VOLNAME open [brick BRICK-NAME] [list-cnt cnt] + + For example, to view open fd count and maximum fd count on brick server:/export of test-volume and list top 10 open calls: + + # gluster volume top test-volume open brick server:/export/ list-cnt 10 + Brick: server:/export/dir1 + Current open fd's: 34 Max open fd's: 209 ==========Open file stats======== + +open file name +call count + +2 /clients/client0/~dmtmp/PARADOX/ + COURSES.DB + +11 /clients/client0/~dmtmp/PARADOX/ + ENROLL.DB + +11 /clients/client0/~dmtmp/PARADOX/ + STUDENTS.DB + +10 /clients/client0/~dmtmp/PWRPNT/ + TIPS.PPT + +10 /clients/client0/~dmtmp/PWRPNT/ + PCBENCHM.PPT + +9 /clients/client7/~dmtmp/PARADOX/ + STUDENTS.DB + +9 /clients/client1/~dmtmp/PARADOX/ + STUDENTS.DB + +9 /clients/client2/~dmtmp/PARADOX/ + STUDENTS.DB + +9 /clients/client0/~dmtmp/PARADOX/ + STUDENTS.DB + +9 /clients/client8/~dmtmp/PARADOX/ + STUDENTS.DB + + +
+
+ Viewing Highest File Read Calls + You can view highest read calls on each brick. If brick name is not specified, then by default, list of +100 files will be displayed. + + To view highest file Read calls: + + + + View highest file Read calls using the following command: + + # gluster volume top VOLNAME read [brick BRICK-NAME] [list-cnt cnt] + For example, to view highest Read calls on brick server:/export of test-volume: + + # gluster volume top test-volume read brick server:/export list-cnt 10 + Brick: server:/export/dir1 ==========Read file stats======== + +read filename +call count + +116 /clients/client0/~dmtmp/SEED/LARGE.FIL + +64 /clients/client0/~dmtmp/SEED/MEDIUM.FIL + +54 /clients/client2/~dmtmp/SEED/LARGE.FIL + +54 /clients/client6/~dmtmp/SEED/LARGE.FIL + +54 /clients/client5/~dmtmp/SEED/LARGE.FIL + +54 /clients/client0/~dmtmp/SEED/LARGE.FIL + +54 /clients/client3/~dmtmp/SEED/LARGE.FIL + +54 /clients/client4/~dmtmp/SEED/LARGE.FIL + +54 /clients/client9/~dmtmp/SEED/LARGE.FIL + +54 /clients/client8/~dmtmp/SEED/LARGE.FIL + + +
+
+ Viewing Highest File Write Calls + You can view list of files which has highest file write calls on each brick. If brick name is not +specified, then by default, list of 100 files will be displayed. + + To view highest file Write calls: + + + + View highest file Write calls using the following command: + + # gluster volume top VOLNAME write [brick BRICK-NAME] [list-cnt cnt] + For example, to view highest Write calls on brick server:/export of test-volume: + + # gluster volume top test-volume write brick server:/export list-cnt 10 + Brick: server:/export/dir1 ==========Write file stats======== +write call count filename + +83 /clients/client0/~dmtmp/SEED/LARGE.FIL + +59 /clients/client7/~dmtmp/SEED/LARGE.FIL + +59 /clients/client1/~dmtmp/SEED/LARGE.FIL + +59 /clients/client2/~dmtmp/SEED/LARGE.FIL + +59 /clients/client0/~dmtmp/SEED/LARGE.FIL + +59 /clients/client8/~dmtmp/SEED/LARGE.FIL + +59 /clients/client5/~dmtmp/SEED/LARGE.FIL + +59 /clients/client4/~dmtmp/SEED/LARGE.FIL + +59 /clients/client6/~dmtmp/SEED/LARGE.FIL + +59 /clients/client3/~dmtmp/SEED/LARGE.FIL + + +
+
+ Viewing Highest Open Calls on Directories + You can view list of files which has highest open calls on directories of each brick. If brick name is +not specified, then the metrics of all the bricks belonging to that volume will be displayed. + + To view list of open calls on each directory + + + View list of open calls on each directory using the following command: + + # gluster volume top VOLNAME opendir [brick BRICK-NAME] [list-cnt cnt] + For example, to view open calls on brick server:/export/ of test-volume: + + # gluster volume top test-volume opendir brick server:/export list-cnt 10 + Brick: server:/export/dir1 ==========Directory open stats======== + +Opendir count directory name + +1001 /clients/client0/~dmtmp + +454 /clients/client8/~dmtmp + +454 /clients/client2/~dmtmp + +454 /clients/client6/~dmtmp + +454 /clients/client5/~dmtmp + +454 /clients/client9/~dmtmp + +443 /clients/client0/~dmtmp/PARADOX + +408 /clients/client1/~dmtmp + +408 /clients/client7/~dmtmp + +402 /clients/client4/~dmtmp + + +
+
+ Viewing Highest Read Calls on Directory + You can view list of files which has highest directory read calls on each brick. If brick name is not +specified, then the metrics of all the bricks belonging to that volume will be displayed. + + To view list of highest directory read calls on each brick + + + + View list of highest directory read calls on each brick using the following command: + + # gluster volume top VOLNAME readdir [brick BRICK-NAME] [list-cnt cnt] + For example, to view highest directory read calls on brick server:/export of test-volume: + # gluster volume top test-volume readdir brick server:/export list-cnt 10 + Brick: server:/export/dir1==========Directory readdirp stats======== + +readdirp count directory name + +1996 /clients/client0/~dmtmp + +1083 /clients/client0/~dmtmp/PARADOX + +904 /clients/client8/~dmtmp + +904 /clients/client2/~dmtmp + +904 /clients/client6/~dmtmp + +904 /clients/client5/~dmtmp + +904 /clients/client9/~dmtmp + +812 /clients/client1/~dmtmp + +812 /clients/client7/~dmtmp + +800 /clients/client4/~dmtmp + + + +
+
+ Viewing List of Read Performance on each Brick + You can view the read throughput of files on each brick. If brick name is not specified, then the +metrics of all the bricks belonging to that volume will be displayed. The output will be the read +throughput. + + ==========Read throughput file stats======== + +read filename Time +through +put(MBp +s) + +2570.00 /clients/client0/~dmtmp/PWRPNT/ -2011-01-31 + TRIDOTS.POT 15:38:36.894610 +2570.00 /clients/client0/~dmtmp/PWRPNT/ -2011-01-31 + PCBENCHM.PPT 15:38:39.815310 +2383.00 /clients/client2/~dmtmp/SEED/ -2011-01-31 + MEDIUM.FIL 15:52:53.631499 + +2340.00 /clients/client0/~dmtmp/SEED/ -2011-01-31 + MEDIUM.FIL 15:38:36.926198 + +2299.00 /clients/client0/~dmtmp/SEED/ -2011-01-31 + LARGE.FIL 15:38:36.930445 + +2259.00 /clients/client0/~dmtmp/PARADOX/ -2011-01-31 + COURSES.X04 15:38:40.549919 + +2221.00 /clients/client0/~dmtmp/PARADOX/ -2011-01-31 + STUDENTS.VAL 15:52:53.298766 + +2221.00 /clients/client3/~dmtmp/SEED/ -2011-01-31 + COURSES.DB 15:39:11.776780 + +2184.00 /clients/client3/~dmtmp/SEED/ -2011-01-31 + MEDIUM.FIL 15:39:10.251764 + +2184.00 /clients/client5/~dmtmp/WORD/ -2011-01-31 + BASEMACH.DOC 15:39:09.336572 This command will initiate a dd for the specified count and block size and measures the +corresponding throughput. + + To view list of read performance on each brick + + + + View list of read performance on each brick using the following command: + + # gluster volume top VOLNAME read-perf [bs blk-size count count] [brick BRICK-NAME] [list-cnt cnt] + + For example, to view read performance on brick server:/export/ of test-volume, 256 block size +of count 1, and list count 10: + + # gluster volume top test-volume read-perf bs 256 count 1 brick server:/export/ list-cnt 10 + Brick: server:/export/dir1 256 bytes (256 B) copied, Throughput: 4.1 MB/s + ==========Read throughput file stats======== + +read filename Time +through +put(MBp +s) + +2912.00 /clients/client0/~dmtmp/PWRPNT/ -2011-01-31 + TRIDOTS.POT 15:38:36.896486 + +2570.00 /clients/client0/~dmtmp/PWRPNT/ -2011-01-31 + PCBENCHM.PPT 15:38:39.815310 + +2383.00 /clients/client2/~dmtmp/SEED/ -2011-01-31 + MEDIUM.FIL 15:52:53.631499 + +2340.00 /clients/client0/~dmtmp/SEED/ -2011-01-31 + MEDIUM.FIL 15:38:36.926198 + +2299.00 /clients/client0/~dmtmp/SEED/ -2011-01-31 + LARGE.FIL 15:38:36.930445 + +2259.00 /clients/client0/~dmtmp/PARADOX/ -2011-01-31 + COURSES.X04 15:38:40.549919 + +2221.00 /clients/client9/~dmtmp/PARADOX/ -2011-01-31 + STUDENTS.VAL 15:52:53.298766 + +2221.00 /clients/client8/~dmtmp/PARADOX/ -2011-01-31 + COURSES.DB 15:39:11.776780 + +2184.00 /clients/client3/~dmtmp/SEED/ -2011-01-31 + MEDIUM.FIL 15:39:10.251764 + +2184.00 /clients/client5/~dmtmp/WORD/ -2011-01-31 + BASEMACH.DOC 15:39:09.336572 + + + +
+
+ Viewing List of Write Performance on each Brick + You can view list of write throughput of files on each brick. If brick name is not specified, then the +metrics of all the bricks belonging to that volume will be displayed. The output will be the write +throughput. + + This command will initiate a dd for the specified count and block size and measures the +corresponding throughput. +To view list of write performance on each brick: + + + + View list of write performance on each brick using the following command: + + # gluster volume top VOLNAME write-perf [bs blk-size count count] [brick BRICK-NAME] [list-cnt cnt] + For example, to view write performance on brick server:/export/ of test-volume, 256 block size +of count 1, and list count 10: + + # gluster volume top test-volume write-perf bs 256 count 1 brick server:/export/ list-cnt 10 + Brick: server:/export/dir1 + + 256 bytes (256 B) copied, Throughput: 2.8 MB/s ==========Write throughput file stats======== + +write filename Time +throughput +(MBps) + +1170.00 /clients/client0/~dmtmp/SEED/ -2011-01-31 + SMALL.FIL 15:39:09.171494 + +1008.00 /clients/client6/~dmtmp/SEED/ -2011-01-31 + LARGE.FIL 15:39:09.73189 + +949.00 /clients/client0/~dmtmp/SEED/ -2011-01-31 + MEDIUM.FIL 15:38:36.927426 + +936.00 /clients/client0/~dmtmp/SEED/ -2011-01-31 + LARGE.FIL 15:38:36.933177 +897.00 /clients/client5/~dmtmp/SEED/ -2011-01-31 + MEDIUM.FIL 15:39:09.33628 + +897.00 /clients/client6/~dmtmp/SEED/ -2011-01-31 + MEDIUM.FIL 15:39:09.27713 + +885.00 /clients/client0/~dmtmp/SEED/ -2011-01-31 + SMALL.FIL 15:38:36.924271 + +528.00 /clients/client5/~dmtmp/SEED/ -2011-01-31 + LARGE.FIL 15:39:09.81893 + +516.00 /clients/client6/~dmtmp/ACCESS/ -2011-01-31 + FASTENER.MDB 15:39:01.797317 + + + +
+
+
+ Displaying Volume Information + You can display information about a specific volume, or all volumes, as needed. + To display volume information + + + Display information about a specific volume using the following command: + # gluster volume info VOLNAME + For example, to display information about test-volume: + # gluster volume info test-volume +Volume Name: test-volume +Type: Distribute +Status: Created +Number of Bricks: 4 +Bricks: +Brick1: server1:/exp1 +Brick2: server2:/exp2 +Brick3: server3:/exp3 +Brick4: server4:/exp4 + + + Display information about all volumes using the following command: + # gluster volume info all + # gluster volume info all + +Volume Name: test-volume +Type: Distribute +Status: Created +Number of Bricks: 4 +Bricks: +Brick1: server1:/exp1 +Brick2: server2:/exp2 +Brick3: server3:/exp3 +Brick4: server4:/exp4 + +Volume Name: mirror +Type: Distributed-Replicate +Status: Started +Number of Bricks: 2 X 2 = 4 +Bricks: +Brick1: server1:/brick1 +Brick2: server2:/brick2 +Brick3: server3:/brick3 +Brick4: server4:/brick4 + +Volume Name: Vol +Type: Distribute +Status: Started +Number of Bricks: 1 +Bricks: +Brick: server:/brick6 + + + + +
+
+ Performing Statedump on a Volume
+ Statedump is a mechanism through which you can get details of all internal variables and the state of the glusterfs process at the time of issuing the command. You can perform statedumps of the brick processes and the NFS server process of a volume using the statedump command. The following options can be used to determine what information is to be dumped:
+ mem - Dumps the memory usage and memory pool details of the bricks.
+ iobuf - Dumps iobuf details of the bricks.
+ priv - Dumps private information of loaded translators.
+ callpool - Dumps the pending calls of the volume.
+ fd - Dumps the open fd tables of the volume.
+ inode - Dumps the inode tables of the volume.
+ To display volume statedump
+ Display the statedump of a volume or NFS server using the following command:
+ # gluster volume statedump VOLNAME [nfs] [all|mem|iobuf|callpool|priv|fd|inode]
+ For example, to display the statedump of test-volume:
+ # gluster volume statedump test-volume
+Volume statedump successful
+ The statedump files are created on the brick servers in the /tmp directory or in the directory set using the server.statedump-path volume option. The naming convention of the dump file is <brick-path>.<brick-pid>.dump.
+ By default, the output of the statedump is stored in the /tmp/<brickname.PID.dump> file on that particular server. Change the directory of the statedump file using the following command:
+ # gluster volume set VOLNAME server.statedump-path path
+ For example, to change the location of the statedump file of test-volume:
+ # gluster volume set test-volume server.statedump-path /usr/local/var/log/glusterfs/dumps/
+Set volume successful
+ You can view the changed path of the statedump file using the following command:
+ # gluster volume info VOLNAME
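+ As a quick check, you can list the generated dump files on a brick server after issuing the statedump; the file name shown below is hypothetical and depends on your brick path and brick PID:
+ # ls /tmp/*.dump
+ /tmp/export-dir1.22445.dump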
+
+ Displaying Volume Status + You can display the status information about a specific volume, brick or all volumes, as needed. Status information can be used to understand the current status of the brick, nfs processes, and overall file system. Status information can also be used to monitor and debug the volume information. You can view status of the volume along with the following details: + + + detail - Displays additional information about the bricks. + + + clients - Displays the list of clients connected to the volume. + + + mem - Displays the memory usage and memory pool details of the bricks. + + + inode - Displays the inode tables of the volume. + + + fd - Displays the open fd (file descriptors) tables of the volume. + + + callpool - Displays the pending calls of the volume. + + + To display volume status + + + Display information about a specific volume using the following command: + # gluster volume status [all|VOLNAME [BRICKNAME]] [detail|clients|mem|inode|fd|callpool] + For example, to display information about test-volume: + # gluster volume status test-volume +STATUS OF VOLUME: test-volume +BRICK PORT ONLINE PID +-------------------------------------------------------- +arch:/export/1 24009 Y 22445 +-------------------------------------------------------- +arch:/export/2 24010 Y 22450 + + + Display information about all volumes using the following command: + # gluster volume status all + + # gluster volume status all +STATUS OF VOLUME: volume-test +BRICK PORT ONLINE PID +-------------------------------------------------------- +arch:/export/4 24010 Y 22455 + +STATUS OF VOLUME: test-volume +BRICK PORT ONLINE PID +-------------------------------------------------------- +arch:/export/1 24009 Y 22445 +-------------------------------------------------------- +arch:/export/2 24010 Y 22450 + + + Display additional information about the bricks using the following command: + # gluster volume status VOLNAME detail + + For example, to display additional information about the bricks of test-volume: + # gluster volume status test-volume details +STATUS OF VOLUME: test-volume +------------------------------------------- +Brick : arch:/export/1 +Port : 24009 +Online : Y +Pid : 16977 +File System : rootfs +Device : rootfs +Mount Options : rw +Disk Space Free : 13.8GB +Total Disk Space : 46.5GB +Inode Size : N/A +Inode Count : N/A +Free Inodes : N/A + +Number of Bricks: 1 +Bricks: +Brick: server:/brick6 + + + Display the list of clients accessing the volumes using the following command: + # gluster volume status VOLNAME clients + + For example, to display the list of clients connected to test-volume: + # gluster volume status test-volume clients +Brick : arch:/export/1 +Clients connected : 2 +Hostname Bytes Read BytesWritten +-------- --------- ------------ +127.0.0.1:1013 776 676 +127.0.0.1:1012 50440 51200 + + + Display the memory usage and memory pool details of the bricks using the following command: + # gluster volume status VOLNAME mem + + For example, to display the memory usage and memory pool details of the bricks of test-volume: + Memory status for volume : test-volume +---------------------------------------------- +Brick : arch:/export/1 +Mallinfo +-------- +Arena : 434176 +Ordblks : 2 +Smblks : 0 +Hblks : 12 +Hblkhd : 40861696 +Usmblks : 0 +Fsmblks : 0 +Uordblks : 332416 +Fordblks : 101760 +Keepcost : 100400 + +Mempool Stats +------------- +Name HotCount ColdCount PaddedSizeof AllocCount MaxAlloc +---- -------- --------- ------------ ---------- -------- +test-volume-server:fd_t 0 
16384 92 57 5 +test-volume-server:dentry_t 59 965 84 59 59 +test-volume-server:inode_t 60 964 148 60 60 +test-volume-server:rpcsvc_request_t 0 525 6372 351 2 +glusterfs:struct saved_frame 0 4096 124 2 2 +glusterfs:struct rpc_req 0 4096 2236 2 2 +glusterfs:rpcsvc_request_t 1 524 6372 2 1 +glusterfs:call_stub_t 0 1024 1220 288 1 +glusterfs:call_stack_t 0 8192 2084 290 2 +glusterfs:call_frame_t 0 16384 172 1728 6 + + + Display the inode tables of the volume using the following command: + # gluster volume status VOLNAME inode + + For example, to display the inode tables of the test-volume: + # gluster volume status test-volume inode +inode tables for volume test-volume +---------------------------------------------- +Brick : arch:/export/1 +Active inodes: +GFID Lookups Ref IA type +---- ------- --- ------- +6f3fe173-e07a-4209-abb6-484091d75499 1 9 2 +370d35d7-657e-44dc-bac4-d6dd800ec3d3 1 1 2 + +LRU inodes: +GFID Lookups Ref IA type +---- ------- --- ------- +80f98abe-cdcf-4c1d-b917-ae564cf55763 1 0 1 +3a58973d-d549-4ea6-9977-9aa218f233de 1 0 1 +2ce0197d-87a9-451b-9094-9baa38121155 1 0 2 + + + Display the open fd tables of the volume using the following command: + # gluster volume status VOLNAME fd + + For example, to display the open fd tables of the test-volume: + # gluster volume status test-volume fd + +FD tables for volume test-volume +---------------------------------------------- +Brick : arch:/export/1 +Connection 1: +RefCount = 0 MaxFDs = 128 FirstFree = 4 +FD Entry PID RefCount Flags +-------- --- -------- ----- +0 26311 1 2 +1 26310 3 2 +2 26310 1 2 +3 26311 3 2 + +Connection 2: +RefCount = 0 MaxFDs = 128 FirstFree = 0 +No open fds + +Connection 3: +RefCount = 0 MaxFDs = 128 FirstFree = 0 +No open fds + + + Display the pending calls of the volume using the following command: + # gluster volume status VOLNAME callpool + + Each call has a call stack containing call frames. + For example, to display the pending calls of test-volume: + # gluster volume status test-volume + +Pending calls for volume test-volume +---------------------------------------------- +Brick : arch:/export/1 +Pending calls: 2 +Call Stack1 + UID : 0 + GID : 0 + PID : 26338 + Unique : 192138 + Frames : 7 + Frame 1 + Ref Count = 1 + Translator = test-volume-server + Completed = No + Frame 2 + Ref Count = 0 + Translator = test-volume-posix + Completed = No + Parent = test-volume-access-control + Wind From = default_fsync + Wind To = FIRST_CHILD(this)->fops->fsync + Frame 3 + Ref Count = 1 + Translator = test-volume-access-control + Completed = No + Parent = repl-locks + Wind From = default_fsync + Wind To = FIRST_CHILD(this)->fops->fsync + Frame 4 + Ref Count = 1 + Translator = test-volume-locks + Completed = No + Parent = test-volume-io-threads + Wind From = iot_fsync_wrapper + Wind To = FIRST_CHILD (this)->fops->fsync + Frame 5 + Ref Count = 1 + Translator = test-volume-io-threads + Completed = No + Parent = test-volume-marker + Wind From = default_fsync + Wind To = FIRST_CHILD(this)->fops->fsync + Frame 6 + Ref Count = 1 + Translator = test-volume-marker + Completed = No + Parent = /export/1 + Wind From = io_stats_fsync + Wind To = FIRST_CHILD(this)->fops->fsync + Frame 7 + Ref Count = 1 + Translator = /export/1 + Completed = No + Parent = test-volume-server + Wind From = server_fsync_resume + Wind To = bound_xl->fops->fsync + + +
+
diff --git a/doc/legacy/docbook/admin_setting_volumes.xml b/doc/legacy/docbook/admin_setting_volumes.xml
new file mode 100644
index 000000000..6a8468d5f
--- /dev/null
+++ b/doc/legacy/docbook/admin_setting_volumes.xml
@@ -0,0 +1,325 @@
+
+
+%BOOK_ENTITIES;
+]>
+
+ Setting up GlusterFS Server Volumes
+ A volume is a logical collection of bricks, where each brick is an export directory on a server in the trusted storage pool. Most of the gluster management operations are performed on the volume.
+ To create a new volume in your storage environment, specify the bricks that comprise the volume. After you have created a new volume, you must start it before attempting to mount it.
+ Volumes of the following types can be created in your storage environment:
+ Distributed - Distributed volumes distribute files throughout the bricks in the volume. You can use distributed volumes where the requirement is to scale storage and redundancy is either not important or is provided by other hardware/software layers. For more information, see .
+ Replicated – Replicated volumes replicate files across bricks in the volume. You can use replicated volumes in environments where high availability and high reliability are critical. For more information, see .
+ Striped – Striped volumes stripe data across bricks in the volume. For best results, you should use striped volumes only in high-concurrency environments accessing very large files. For more information, see .
+ Distributed Striped - Distributed striped volumes stripe data across two or more nodes in the cluster. You should use distributed striped volumes where the requirement is to scale storage and, in high-concurrency environments, access to very large files is critical. For more information, see .
+ Distributed Replicated - Distributed replicated volumes distribute files across replicated bricks in the volume. You can use distributed replicated volumes in environments where the requirement is to scale storage and high reliability is critical. Distributed replicated volumes also offer improved read performance in most environments. For more information, see .
+ Distributed Striped Replicated – Distributed striped replicated volumes distribute striped data across replicated bricks in the cluster. For best results, you should use distributed striped replicated volumes in highly concurrent environments where parallel access to very large files and performance is critical. In this release, configuration of this volume type is supported only for Map Reduce workloads. For more information, see .
+ Striped Replicated – Striped replicated volumes stripe data across replicated bricks in the cluster. For best results, you should use striped replicated volumes in highly concurrent environments where there is parallel access to very large files and performance is critical. In this release, configuration of this volume type is supported only for Map Reduce workloads. For more information, see .
+ To create a new volume
+ Create a new volume:
+ # gluster volume create NEW-VOLNAME [stripe COUNT | replica COUNT] [transport tcp | rdma | tcp,rdma] NEW-BRICK1 NEW-BRICK2 NEW-BRICK3...
+ For example, to create a volume called test-volume consisting of server3:/exp3 and server4:/exp4:
+ # gluster volume create test-volume server3:/exp3 server4:/exp4
+Creation of test-volume has been successful
+Please start the volume to access data.
+ Creating Distributed Volumes
+ In distributed volumes, files are spread randomly across the bricks in the volume. Use distributed volumes where you need to scale storage and redundancy is either not important or is provided by other hardware/software layers.
+ Disk/server failure in distributed volumes can result in a serious loss of data because directory contents are spread randomly across the bricks in the volume.
+ Illustration of a Distributed Volume + + + + + +
+ To create a distributed volume + + + Create a trusted storage pool as described earlier in . + + + Create the distributed volume: + # gluster volume create NEW-VOLNAME [transport tcp | rdma | tcp,rdma] NEW-BRICK... + For example, to create a distributed volume with four storage servers using tcp: + # gluster volume create test-volume server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 +Creation of test-volume has been successful +Please start the volume to access data. + (Optional) You can display the volume information: + # gluster volume info +Volume Name: test-volume +Type: Distribute +Status: Created +Number of Bricks: 4 +Transport-type: tcp +Bricks: +Brick1: server1:/exp1 +Brick2: server2:/exp2 +Brick3: server3:/exp3 +Brick4: server4:/exp4 + For example, to create a distributed volume with four storage servers over InfiniBand: + # gluster volume create test-volume transport rdma server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 +Creation of test-volume has been successful +Please start the volume to access data. + If the transport type is not specified, tcp is used as the default. You can also set additional options if required, such as auth.allow or auth.reject. For more information, see + + Make sure you start your volumes before you try to mount them or else client operations after the mount will hang, see for details. + + + +
+
+ Creating Replicated Volumes
+ Replicated volumes create copies of files across multiple bricks in the volume. You can use replicated volumes in environments where high availability and high reliability are critical.
+ The number of bricks should be equal to the replica count for a replicated volume. To protect against server and disk failures, it is recommended that the bricks of the volume are from different servers.
+ Illustration of a Replicated Volume + + + + + +
+ To create a replicated volume + + + Create a trusted storage pool as described earlier in . + + + Create the replicated volume: + # gluster volume create NEW-VOLNAME [replica COUNT] [transport tcp | rdma tcp,rdma] NEW-BRICK... + For example, to create a replicated volume with two storage servers: + # gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 +Creation of test-volume has been successful +Please start the volume to access data. + If the transport type is not specified, tcp is used as the default. You can also set additional options if required, such as auth.allow or auth.reject. For more information, see + + Make sure you start your volumes before you try to mount them or else client operations after the mount will hang, see for details. + + + +
+
+ Creating Striped Volumes
+ Striped volumes stripe data across bricks in the volume. For best results, you should use striped volumes only in high-concurrency environments accessing very large files.
+ The number of bricks should be equal to the stripe count for a striped volume.
+ Illustration of a Striped Volume + + + + + +
+ To create a striped volume + + + Create a trusted storage pool as described earlier in . + + + Create the striped volume: + # gluster volume create NEW-VOLNAME [stripe COUNT] [transport tcp | rdma | tcp,rdma] NEW-BRICK... + For example, to create a striped volume across two storage servers: + # gluster volume create test-volume stripe 2 transport tcp server1:/exp1 server2:/exp2 +Creation of test-volume has been successful +Please start the volume to access data. + If the transport type is not specified, tcp is used as the default. You can also set additional options if required, such as auth.allow or auth.reject. For more information, see + + Make sure you start your volumes before you try to mount them or else client operations after the mount will hang, see for details. + + + +
+
+ Creating Distributed Striped Volumes
+ Distributed striped volumes stripe files across two or more nodes in the cluster. For best results, you should use distributed striped volumes where the requirement is to scale storage and access to very large files in high-concurrency environments is critical.
+ The number of bricks should be a multiple of the stripe count for a distributed striped volume.
+ Illustration of a Distributed Striped Volume + + + + + +
+ To create a distributed striped volume + + + Create a trusted storage pool as described earlier in . + + + Create the distributed striped volume: + # gluster volume create NEW-VOLNAME [stripe COUNT] [transport tcp | rdma | tcp,rdma] NEW-BRICK... + For example, to create a distributed striped volume across eight storage servers: + # gluster volume create test-volume stripe 4 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6 server7:/exp7 server8:/exp8 +Creation of test-volume has been successful +Please start the volume to access data. + If the transport type is not specified, tcp is used as the default. You can also set additional options if required, such as auth.allow or auth.reject. For more information, see + + Make sure you start your volumes before you try to mount them or else client operations after the mount will hang, see for details. + + + +
+
+ Creating Distributed Replicated Volumes
+ Distributed replicated volumes distribute files across replicated bricks in the volume. You can use distributed replicated volumes in environments where the requirement is to scale storage and high reliability is critical. Distributed replicated volumes also offer improved read performance in most environments.
+ The number of bricks should be a multiple of the replica count for a distributed replicated volume. Also, the order in which bricks are specified has a great effect on data protection. Each replica_count consecutive bricks in the list you give will form a replica set, with all replica sets combined into a volume-wide distribute set. To make sure that replica-set members are not placed on the same node, list the first brick on every server, then the second brick on every server in the same order, and so on, as in the example below.
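+ As a sketch of the ordering rule above (the server and brick names are illustrative), the following command creates a 3 x 2 distributed replicated volume from three servers with two bricks each. Because the bricks are listed server by server for each brick slot, every pair of consecutive bricks lands on two different servers:
+ # gluster volume create dr-volume replica 2 transport tcp server1:/exp1 server2:/exp1 server3:/exp1 server1:/exp2 server2:/exp2 server3:/exp2
+ Here the replica sets are (server1:/exp1, server2:/exp1), (server3:/exp1, server1:/exp2), and (server2:/exp2, server3:/exp2), so no replica set keeps both of its copies on the same server.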
+ Illustration of a Distributed Replicated Volume + + + + + +
+ To create a distributed replicated volume + + + Create a trusted storage pool as described earlier in . + + + Create the distributed replicated volume: + # gluster volume create NEW-VOLNAME [replica COUNT] [transport tcp | rdma | tcp,rdma] NEW-BRICK... + For example, four node distributed (replicated) volume with a two-way mirror: + + # gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 +Creation of test-volume has been successful +Please start the volume to access data. + For example, to create a six node distributed (replicated) volume with a two-way mirror: + # gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6 +Creation of test-volume has been successful +Please start the volume to access data. + If the transport type is not specified, tcp is used as the default. You can also set additional options if required, such as auth.allow or auth.reject. For more information, see + + Make sure you start your volumes before you try to mount them or else client operations after the mount will hang, see for details. + + + +
+
+ Creating Distributed Striped Replicated Volumes
+ Distributed striped replicated volumes distribute striped data across replicated bricks in the cluster. For best results, you should use distributed striped replicated volumes in highly concurrent environments where parallel access to very large files and performance is critical. In this release, configuration of this volume type is supported only for Map Reduce workloads.
+ The number of bricks should be a multiple of the stripe count and the replica count for a distributed striped replicated volume.
+ To create a distributed striped replicated volume
+ Create a trusted storage pool as described earlier in .
+ Create a distributed striped replicated volume using the following command:
+ # gluster volume create NEW-VOLNAME [stripe COUNT] [replica COUNT] [transport tcp | rdma | tcp,rdma] NEW-BRICK...
+ For example, to create a distributed striped replicated volume across eight storage servers:
+ # gluster volume create test-volume stripe 2 replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6 server7:/exp7 server8:/exp8
+Creation of test-volume has been successful
+Please start the volume to access data.
+ If the transport type is not specified, tcp is used as the default. You can also set additional options if required, such as auth.allow or auth.reject. For more information, see
+ Make sure you start your volumes before you try to mount them or else client operations after the mount will hang; see for details.
+
+ Creating Striped Replicated Volumes
+ Striped replicated volumes stripe data across replicated bricks in the cluster. For best results, you should use striped replicated volumes in highly concurrent environments where there is parallel access to very large files and performance is critical. In this release, configuration of this volume type is supported only for Map Reduce workloads.
+ The number of bricks should be a multiple of the replica count and the stripe count for a striped replicated volume.
+ Illustration of a Striped Replicated Volume + + + + + +
+ To create a striped replicated volume + + + + Create a trusted storage pool consisting of the storage servers that will comprise the volume. + For more information, see . + + + Create a striped replicated volume : + # gluster volume create NEW-VOLNAME [stripe COUNT] [replica COUNT] [transport tcp | rdma | tcp,rdma] NEW-BRICK... + For example, to create a striped replicated volume across four storage servers: + + + # gluster volume create test-volume stripe 2 replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 +Creation of test-volume has been successful +Please start the volume to access data. + To create a striped replicated volume across six storage servers: + + # gluster volume create test-volume stripe 3 replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6 +Creation of test-volume has been successful +Please start the volume to access data. + If the transport type is not specified, tcp is used as the default. You can also set additional options if required, such as auth.allow or auth.reject. For more information, see + + Make sure you start your volumes before you try to mount them or else client operations after the mount will hang, see for details. + + + +
+
+ Starting Volumes + You must start your volumes before you try to mount them. + To start a volume + + + Start a volume: + # gluster volume start VOLNAME + For example, to start test-volume: + # gluster volume start test-volume +Starting test-volume has been successful + + +
+
diff --git a/doc/legacy/docbook/admin_settingup_clients.xml b/doc/legacy/docbook/admin_settingup_clients.xml
new file mode 100644
index 000000000..22979acf4
--- /dev/null
+++ b/doc/legacy/docbook/admin_settingup_clients.xml
@@ -0,0 +1,511 @@
+
+
+%BOOK_ENTITIES;
+]>
+
+ Accessing Data - Setting Up GlusterFS Client
+ You can access gluster volumes in multiple ways. You can use the Gluster Native Client method for high concurrency, performance, and transparent failover on GNU/Linux clients. You can also use NFS v3 to access gluster volumes. Extensive testing has been done on GNU/Linux clients and on NFS implementations in other operating systems, such as FreeBSD, Mac OS X, Windows 7 (Professional and Up), and Windows Server 2003. Other NFS client implementations may work with the gluster NFS server.
+ You can use CIFS to access volumes when using Microsoft Windows as well as SAMBA clients. For this access method, Samba packages need to be present on the client side.
+ Gluster Native Client
+ The Gluster Native Client is a FUSE-based client running in user space. Gluster Native Client is the recommended method for accessing volumes when high concurrency and high write performance are required.
+ This section introduces the Gluster Native Client and explains how to install the software on client machines. This section also describes how to mount volumes on clients (both manually and automatically) and how to verify that the volume has mounted successfully.
+ Installing the Gluster Native Client + Before you begin installing the Gluster Native Client, you need to verify that the FUSE module is loaded on the client and has access to the required modules as follows: + + + Add the FUSE loadable kernel module (LKM) to the Linux kernel: + # modprobe fuse + + + Verify that the FUSE module is loaded: + # dmesg | grep -i fuse + fuse init (API version 7.13) + + +
+ Installing on Red Hat Package Manager (RPM) Distributions + To install Gluster Native Client on RPM distribution-based systems + + + Install required prerequisites on the client using the following command: + $ sudo yum -y install openssh-server wget fuse fuse-libs openib libibverbs + + + Ensure that TCP and UDP ports 24007 and 24008 are open on all Gluster servers. Apart from these ports, you need to open one port for each brick starting from port 24009. For example: if you have five bricks, you need to have ports 24009 to 24013 open. + You can use the following chains with iptables: + $ sudo iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 24007:24008 -j ACCEPT + $ sudo iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 24009:24014 -j ACCEPT + + If you already have iptable chains, make sure that the above ACCEPT rules precede the DROP rules. This can be achieved by providing a lower rule number than the DROP rule. + + + + Download the latest glusterfs, glusterfs-fuse, and glusterfs-rdma RPM files to each client. The glusterfs package contains the Gluster Native Client. The glusterfs-fuse package contains the FUSE translator required for mounting on client systems and the glusterfs-rdma packages contain OpenFabrics verbs RDMA module for Infiniband. + You can download the software at . + + + Install Gluster Native Client on the client. + $ sudo rpm -i glusterfs-3.3.0qa30-1.x86_64.rpm + $ sudo rpm -i glusterfs-fuse-3.3.0qa30-1.x86_64.rpm + $ sudo rpm -i glusterfs-rdma-3.3.0qa30-1.x86_64.rpm + + The RDMA module is only required when using Infiniband. + + + +
+
+ Installing on Debian-based Distributions
+ To install Gluster Native Client on Debian-based distributions
+ Install OpenSSH Server on each client using the following command:
+ $ sudo apt-get install openssh-server vim wget
+ Download the latest GlusterFS .deb file and checksum to each client.
+ You can download the software at .
+ For each .deb file, get the checksum (using the following command) and compare it against the checksum for that file in the md5sum file.
+ $ md5sum GlusterFS_DEB_file.deb
+ The md5sum of the packages is available at:
+ Uninstall GlusterFS v3.1 (or an earlier version) from the client using the following command:
+ $ sudo dpkg -r glusterfs
+ (Optional) Run $ sudo dpkg --purge glusterfs to purge the configuration files.
+ Install Gluster Native Client on the client using the following command:
+ $ sudo dpkg -i GlusterFS_DEB_file
+ For example:
+ $ sudo dpkg -i glusterfs-3.3.x.deb
+ Ensure that TCP and UDP ports 24007 and 24008 are open on all Gluster servers. Apart from these ports, you need to open one port for each brick starting from port 24009. For example: if you have five bricks, you need to have ports 24009 to 24013 open.
+ You can use the following chains with iptables:
+ $ sudo iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 24007:24008 -j ACCEPT
+ $ sudo iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 24009:24014 -j ACCEPT
+ If you already have iptables chains, make sure that the above ACCEPT rules precede the DROP rules. This can be achieved by providing a lower rule number than the DROP rule.
+
+ Performing a Source Installation
+ To build and install Gluster Native Client from the source code
+ Create a new directory using the following commands:
+ # mkdir glusterfs
+ # cd glusterfs
+ Download the source code.
+ You can download the source at .
+ Extract the source code using the following command:
+ # tar -xvzf SOURCE-FILE
+ Run the configuration utility using the following command:
+ # ./configure
+ GlusterFS configure summary
+ ==================
+ FUSE client : yes
+ Infiniband verbs : yes
+ epoll IO multiplex : yes
+ argp-standalone : no
+ fusermount : no
+ readline : yes
+ The configuration summary shows the components that will be built with Gluster Native Client.
+ Build the Gluster Native Client software using the following commands:
+ # make
+ # make install
+ Verify that the correct version of Gluster Native Client is installed, using the following command:
+ # glusterfs --version
+
+
+ Mounting Volumes
+ After installing the Gluster Native Client, you need to mount Gluster volumes to access data. There are two methods you can choose:
+ After mounting a volume, you can test the mounted volume using the procedure described in .
+ Server names selected during the creation of volumes should be resolvable on the client machine. You can use appropriate /etc/hosts entries or a DNS server to resolve server names to IP addresses.
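+ For example, if DNS is not available, a minimal set of /etc/hosts entries on the client (the addresses below are placeholders) could look like this:
+ 192.168.1.101 server1
+ 192.168.1.102 server2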
+ Manually Mounting Volumes
+ To manually mount a Gluster volume
+ To mount a volume, use the following command:
+ # mount -t glusterfs HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR
+ For example:
+ # mount -t glusterfs server1:/test-volume /mnt/glusterfs
+ The server specified in the mount command is only used to fetch the gluster configuration volfile describing the volume name. Subsequently, the client will communicate directly with the servers mentioned in the volfile (which might not even include the one used for mount).
+ If you see a usage message like "Usage: mount.glusterfs", mount usually requires you to create a directory to be used as the mount point. Run "mkdir /mnt/glusterfs" before you attempt to run the mount command listed above.
+ Mounting Options
+ You can specify the following options when using the mount -t glusterfs command. Note that you need to separate all options with commas.
+ backupvolfile-server=server-name
+ volfile-max-fetch-attempts=number of attempts
+ log-level=loglevel
+ log-file=logfile
+ transport=transport-type
+ direct-io-mode=[enable|disable]
+ For example:
+ # mount -t glusterfs -o backupvolfile-server=volfile_server2 --volfile-max-fetch-attempts=2 log-level=WARNING,log-file=/var/log/gluster.log server1:/test-volume /mnt/glusterfs
+ If the backupvolfile-server option is added while mounting the fuse client and the first volfile server fails, the server specified in backupvolfile-server is used as the volfile server to mount the client.
+ Use the --volfile-max-fetch-attempts=X option to specify the number of attempts made to fetch volume files while mounting a volume. This option is useful when you mount a server with multiple IP addresses or when round-robin DNS is configured for the server name.
+
+ Automatically Mounting Volumes + You can configure your system to automatically mount the Gluster volume each time your system starts. + The server specified in the mount command is only used to fetch the gluster configuration volfile describing the volume name. Subsequently, the client will communicate directly with the servers mentioned in the volfile (which might not even include the one used for mount). + To automatically mount a Gluster volume + + + To mount a volume, edit the /etc/fstab file and add the following line: + + HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR glusterfs defaults,_netdev 0 0 + For example: + + server1:/test-volume /mnt/glusterfs glusterfs defaults,_netdev 0 0 + + + Mounting Options + You can specify the following options when updating the /etc/fstab file. Note that you need to separate all options with commas. + + + log-level=loglevel + + log-file=logfile + + transport=transport-type + + direct-io-mode=[enable|disable] + + + For example: + + HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR glusterfs defaults,_netdev,log-level=WARNING,log-file=/var/log/gluster.log 0 0 +
+
+ Testing Mounted Volumes
+ To test mounted volumes
+ Use the following command:
+ # mount
+ If the gluster volume was successfully mounted, the output of the mount command on the client will be similar to this example:
+ server1:/test-volume on /mnt/glusterfs type fuse.glusterfs (rw,allow_other,default_permissions,max_read=131072)
+ Use the following command:
+ # df
+ The output of the df command on the client will display the aggregated storage space from all the bricks in a volume, similar to this example:
+ # df -h /mnt/glusterfs
+ Filesystem Size Used Avail Use% Mounted on
+ server1:/test-volume 28T 22T 5.4T 82% /mnt/glusterfs
+ Change to the directory and list the contents by entering the following:
+ # cd MOUNTDIR
+ # ls
+ For example,
+ # cd /mnt/glusterfs
+ # ls
+
+
+
+ NFS
+ You can use NFS v3 to access gluster volumes. Extensive testing has been done on GNU/Linux clients; NFS implementations in other operating systems, such as FreeBSD, Mac OS X, Windows 7 (Professional and Up), Windows Server 2003, and others, may work with the gluster NFS server implementation.
+ GlusterFS now includes network lock manager (NLM) v4. NLM enables applications on NFSv3 clients to do record locking on files on the NFS server. It is started automatically whenever the NFS server is run.
+ You must install the nfs-common package on both servers and clients (only for Debian-based distributions).
+ This section describes how to use NFS to mount Gluster volumes (both manually and automatically) and how to verify that the volume has been mounted successfully.
+ Using NFS to Mount Volumes + You can use either of the following methods to mount Gluster volumes: + + + + + + + + + Prerequisite: Install nfs-common package on both servers and clients (only for Debian-based distribution), using the following command: + $ sudo aptitude install nfs-common + After mounting a volume, you can test the mounted volume using the procedure described in . +
+ Manually Mounting Volumes Using NFS + To manually mount a Gluster volume using NFS + + + To mount a volume, use the following command: + + # mount -t nfs -o vers=3 HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR + + For example: + # mount -t nfs -o vers=3 server1:/test-volume /mnt/glusterfs + + Gluster NFS server does not support UDP. If the NFS client you are using defaults to connecting using UDP, the following message appears: + + requested NFS version or transport protocol is not supported. + + To connect using TCP + + + Add the following option to the mount command: + + -o mountproto=tcp + For example: + + # mount -o mountproto=tcp -t nfs server1:/test-volume /mnt/glusterfs + + + To mount Gluster NFS server from a Solaris client + + + Use the following command: + + # mount -o proto=tcp,vers=3 nfs://HOSTNAME-OR-IPADDRESS:38467/VOLNAME MOUNTDIR + +For example: + # mount -o proto=tcp,vers=3 nfs://server1:38467/test-volume /mnt/glusterfs + + +
+
+ Automatically Mounting Volumes Using NFS + You can configure your system to automatically mount Gluster volumes using NFS each time the system starts. + To automatically mount a Gluster volume using NFS + + + To mount a volume, edit the /etc/fstab file and add the following line: + HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR nfs defaults,_netdev,vers=3 0 0 + For example, + server1:/test-volume /mnt/glusterfs nfs defaults,_netdev,vers=3 0 0 + + Gluster NFS server does not support UDP. If the NFS client you are using defaults to connecting using UDP, the following message appears: + requested NFS version or transport protocol is not supported. + + + To connect using TCP + + + Add the following entry in /etc/fstab file : + HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR nfs defaults,_netdev,mountproto=tcp 0 0 + For example, + server1:/test-volume /mnt/glusterfs nfs defaults,_netdev,mountproto=tcp 0 0 + + + To automount NFS mounts + Gluster supports *nix standard method of automounting NFS mounts. Update the /etc/auto.master and /etc/auto.misc and restart the autofs service. After that, whenever a user or process attempts to access the directory it will be mounted in the background. +
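+ The exact autofs configuration varies by distribution; the following is only a sketch, with an example mount point, map file, and server name. Add a map entry to /etc/auto.master:
+ /mnt/gluster /etc/auto.misc --timeout=60
+ Then describe the volume in /etc/auto.misc:
+ test-volume -fstype=nfs,vers=3,mountproto=tcp server1:/test-volume
+ After restarting the autofs service, accessing /mnt/gluster/test-volume triggers the NFS mount in the background.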
+
+ Testing Volumes Mounted Using NFS + You can confirm that Gluster directories are mounting successfully. + To test mounted volumes + + + Use the mount command by entering the following: + # mount + For example, the output of the mount command on the client will display an entry like the following: + server1:/test-volume on /mnt/glusterfs type nfs (rw,vers=3,addr=server1) + + + + + Use the df command by entering the following: + # df + For example, the output of df command on the client will display the aggregated storage space from all the bricks in a volume. + # df -h /mnt/glusterfs +Filesystem Size Used Avail Use% Mounted on +server1:/test-volume 28T 22T 5.4T 82% /mnt/glusterfs + + + Change to the directory and list the contents by entering the following: + # cd MOUNTDIR + # ls + For example, + + # cd /mnt/glusterfs + + # ls + + +
+
+
+
+ CIFS
+ You can use CIFS to access volumes when using Microsoft Windows as well as SAMBA clients. For this access method, Samba packages need to be present on the client side. You can export a glusterfs mount point as a Samba export and then mount it using the CIFS protocol.
+ This section describes how to mount CIFS shares on Microsoft Windows-based clients (both manually and automatically) and how to verify that the volume has mounted successfully.
+ CIFS access using the Mac OS X Finder is not supported; however, you can use the Mac OS X command line to access Gluster volumes using CIFS.
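+ For example, from the Mac OS X command line you could mount a Samba export of a Gluster volume with mount_smbfs (the server, share, and mount point below are placeholders, and the share must permit the credentials you supply):
+ $ mkdir ./gluster-share
+ $ mount_smbfs //guest@server1/glustertest ./gluster-share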
+ Using CIFS to Mount Volumes + You can use either of the following methods to mount Gluster volumes: + + + + + + + + + After mounting a volume, you can test the mounted volume using the procedure described in . + You can also use Samba for exporting Gluster Volumes through CIFS protocol. +
+ Exporting Gluster Volumes Through Samba
+ We recommend that you use Samba to export Gluster volumes through the CIFS protocol.
+ To export volumes through the CIFS protocol
+ Mount a Gluster volume. For more information on mounting volumes, see .
+ Set up the Samba configuration to export the mount point of the Gluster volume.
+ For example, if a Gluster volume is mounted on /mnt/glusterfs, you must edit the smb.conf file to enable exporting this through CIFS. Open the smb.conf file in an editor and add the following lines for a simple configuration:
+ [glustertest]
+ comment = For testing a Gluster volume exported through CIFS
+ path = /mnt/glusterfs
+ read only = no
+ guest ok = yes
+ Save the changes and start the smb service using your system's init scripts (/etc/init.d/smb [re]start).
+ To be able to mount from any server in the trusted storage pool, you must repeat these steps on each Gluster node. For more advanced configurations, see the Samba documentation.
+
+ Manually Mounting Volumes Using CIFS + You can manually mount Gluster volumes using CIFS on Microsoft Windows-based client machines. + To manually mount a Gluster volume using CIFS + + + Using Windows Explorer, choose Tools > Map Network Drive… from the menu. The Map Network Drive window appears. + + + Choose the drive letter using the Drive drop-down list. + + + Click Browse, select the volume to map to the network drive, and click OK. + + + Click Finish. + + + The network drive (mapped to the volume) appears in the Computer window. + Alternatively, to manually mount a Gluster volume using CIFS. + + + Click Start > Run and enter the following: + + \\SERVERNAME\VOLNAME + + For example: + + \\server1\test-volume + + + +
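+ The same mapping can also be done from the Windows command prompt with net use (the drive letter and names below are examples):
+ net use Z: \\server1\test-volume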
+
+ Automatically Mounting Volumes Using CIFS + You can configure your system to automatically mount Gluster volumes using CIFS on Microsoft Windows-based clients each time the system starts. + To automatically mount a Gluster volume using CIFS + The network drive (mapped to the volume) appears in the Computer window and is reconnected each time the system starts. + + + Using Windows Explorer, choose Tools > Map Network Drive… from the menu. The Map Network Drive window appears. + + + Choose the drive letter using the Drive drop-down list. + + + Click Browse, select the volume to map to the network drive, and click OK. + + + Click the Reconnect at logon checkbox. + + + Click Finish. + + +
+
+ Testing Volumes Mounted Using CIFS + You can confirm that Gluster directories are mounting successfully by navigating to the directory using Windows Explorer. +
+
+
+
diff --git a/doc/legacy/docbook/admin_start_stop_daemon.xml b/doc/legacy/docbook/admin_start_stop_daemon.xml new file mode 100644 index 000000000..bdab0b8b6 --- /dev/null +++ b/doc/legacy/docbook/admin_start_stop_daemon.xml @@ -0,0 +1,56 @@ + + +%BOOK_ENTITIES; +]> + + Managing the glusterd Service + After installing GlusterFS, you must start glusterd service. The glusterd service serves as the Gluster elastic volume manager, overseeing glusterfs processes, and co-ordinating dynamic volume operations, such as adding and removing volumes across multiple storage servers non-disruptively. + This section describes how to start the glusterd service in the following ways: + + + + + + + + + + You must start glusterd on all GlusterFS servers. + +
+ Starting and Stopping glusterd Manually + This section describes how to start and stop glusterd manually + + + To start glusterd manually, enter the following command: + # /etc/init.d/glusterd start + + + To stop glusterd manually, enter the following command: + # /etc/init.d/glusterd stop + + +
+
+ Starting glusterd Automatically + This section describes how to configure the system to automatically start the glusterd service every time the system boots. + To automatically start the glusterd service every time the system boots, enter the following from the command line: + # chkconfig glusterd on +
+ Red Hat-based Systems + To configure Red Hat-based systems to automatically start the glusterd service every time the system boots, enter the following from the command line: + # chkconfig glusterd on +
+
+ Debian-based Systems + To configure Debian-based systems to automatically start the glusterd service every time the system boots, enter the following from the command line: + # update-rc.d glusterd defaults +
+
+ Systems Other than Red Hat and Debian
+ To configure systems other than Red Hat or Debian to automatically start the glusterd service every time the system boots, add the following entry to the /etc/rc.local file:
+ # echo "glusterd" >> /etc/rc.local
+
+
diff --git a/doc/legacy/docbook/admin_storage_pools.xml b/doc/legacy/docbook/admin_storage_pools.xml
new file mode 100644
index 000000000..87b6320bd
--- /dev/null
+++ b/doc/legacy/docbook/admin_storage_pools.xml
@@ -0,0 +1,57 @@
+
+
+
+ Setting up Trusted Storage Pools
+ Before you can configure a GlusterFS volume, you must create a trusted storage pool consisting of the storage servers that provide bricks to a volume.
+ A storage pool is a trusted network of storage servers. When you start the first server, the storage pool consists of that server alone. To add additional storage servers to the storage pool, you can use the probe command from a storage server that is already trusted.
+ Do not self-probe the first server/localhost.
+ The GlusterFS service must be running on all storage servers that you want to add to the storage pool. See for more information.
+ Adding Servers to Trusted Storage Pool + To create a trusted storage pool, add servers to the trusted storage pool + + + The hostnames used to create the storage pool must be resolvable by DNS. + To add a server to the storage pool: + # gluster peer probe server + For example, to create a trusted storage pool of four servers, add three servers to the storage pool from server1: + # gluster peer probe server2 +Probe successful + +# gluster peer probe server3 +Probe successful + +# gluster peer probe server4 +Probe successful + + + + Verify the peer status from the first server using the following commands: + # gluster peer status +Number of Peers: 3 + +Hostname: server2 +Uuid: 5e987bda-16dd-43c2-835b-08b7d55e94e5 +State: Peer in Cluster (Connected) + +Hostname: server3 +Uuid: 1e0ca3aa-9ef7-4f66-8f15-cbc348f29ff7 +State: Peer in Cluster (Connected) + +Hostname: server4 +Uuid: 3e0caba-9df7-4f66-8e5d-cbc348f29ff7 +State: Peer in Cluster (Connected) + + +
+
+ Removing Servers from the Trusted Storage Pool + To remove a server from the storage pool: + # gluster peer detach server + For example, to remove server4 from the trusted storage pool: + # gluster peer detach server4 +Detach successful +
+
diff --git a/doc/legacy/docbook/admin_troubleshooting.xml b/doc/legacy/docbook/admin_troubleshooting.xml new file mode 100644 index 000000000..af1259ada --- /dev/null +++ b/doc/legacy/docbook/admin_troubleshooting.xml @@ -0,0 +1,518 @@ + + + + + Troubleshooting GlusterFS + This section describes how to manage GlusterFS logs and most common troubleshooting scenarios +related to GlusterFS. + +
+ Managing GlusterFS Logs + This section describes how to manage GlusterFS logs by performing the following operation: + + + + + Rotating Logs + + + +
+ Rotating Logs + Administrators can rotate the log file in a volume, as needed. + + To rotate a log file + + + Rotate the log file using the following command: + + # gluster volume log rotate VOLNAME + For example, to rotate the log file on test-volume: + + # gluster volume log rotate test-volume +log rotate successful + + + When a log file is rotated, the contents of the current log file are moved to log-file- +name.epoch-time-stamp. + + + + +
+
+
+ Troubleshooting Geo-replication + This section describes the most common troubleshooting scenarios related to GlusterFS Geo-replication. + +
+ Locating Log Files + For every Geo-replication session, the following three log files are associated to it (four, if the slave is a +gluster volume): + + + + Master-log-file - log file for the process which monitors the Master volume + + + + Slave-log-file - log file for process which initiates the changes in slave + + + + Master-gluster-log-file - log file for the maintenance mount point that Geo-replication module +uses to monitor the master volume + + + + Slave-gluster-log-file - is the slave's counterpart of it + + + + Master Log File + + To get the Master-log-file for geo-replication, use the following command: + + gluster volume geo-replication MASTER SLAVE config log-file + + For example: + + # gluster volume geo-replication Volume1 example.com:/data/remote_dir config log-file + Slave Log File + To get the log file for Geo-replication on slave (glusterd must be running on slave machine), use the +following commands: + + + + On master, run the following command: + + # gluster volume geo-replication Volume1 example.com:/data/remote_dir config session-owner 5f6e5200-756f-11e0-a1f0-0800200c9a66 + Displays the session owner details. + + + + On slave, run the following command: + + # gluster volume geo-replication /data/remote_dir config log-file /var/log/gluster/${session-owner}:remote-mirror.log + + + Replace the session owner details (output of Step 1) to the output of the Step 2 to get the +location of the log file. + + /var/log/gluster/5f6e5200-756f-11e0-a1f0-0800200c9a66:remote-mirror.log + + + +
+
+ Rotating Geo-replication Logs + Administrators can rotate the log file of a particular master-slave session, as needed. +When you run geo-replication's log-rotate command, the log file +is backed up with the current timestamp suffixed to the file +name and signal is sent to gsyncd to start logging to a new +log file. + To rotate a geo-replication log file + + + Rotate log file for a particular master-slave session using the following command: + + # gluster volume geo-replication master slave log-rotate + + For example, to rotate the log file of master Volume1 and slave example.com:/data/remote_dir +: + + # gluster volume geo-replication Volume1 example.com:/data/remote_dir log rotate +log rotate successful + + + Rotate log file for all sessions for a master volume using the following command: + + # gluster volume geo-replication master log-rotate + + For example, to rotate the log file of master Volume1: + + # gluster volume geo-replication Volume1 log rotate +log rotate successful + + + Rotate log file for all sessions using the following command: + + # gluster volume geo-replication log-rotate + + For example, to rotate the log file for all sessions: + # gluster volume geo-replication log rotate +log rotate successful + + +
+
+ Synchronization is not complete
+ Description: GlusterFS Geo-replication did not synchronize the data completely, but the geo-replication status still displays OK.
+ Solution: You can enforce a full sync of the data by erasing the index and restarting GlusterFS Geo-replication. After restarting, GlusterFS Geo-replication begins synchronizing all the data. All files are compared using checksum, which can be a lengthy and resource-intensive operation on large data sets. If the error situation persists, contact Red Hat Support.
+ For more information about erasing the index, see .
+
+ Issues in Data Synchronization
+ Description: Geo-replication displays its status as OK, but the files do not get synced; only directories and symlinks get synced, with the following error message in the log:
+ [2011-05-02 13:42:13.467644] E [master:288:regjob] GMaster: failed to sync ./some_file`
+ Solution: Geo-replication invokes rsync v3.0.0 or higher on the host and the remote machine. Verify that the required version is installed on both.
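+ A quick way to check the installed rsync version on each machine is shown below (the version in the sample output is only illustrative):
+ # rsync --version | head -1
+ rsync version 3.0.7 protocol version 30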
+
+ Geo-replication status displays Faulty very often
+ Description: Geo-replication displays its status as faulty very often, with a backtrace similar to the following:
+ 2011-04-28 14:06:18.378859] E [syncdutils:131:log_raise_exception] <top>: FAIL: Traceback (most recent call last): File "/usr/local/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 152, in twraptf(*aa) File "/usr/local/libexec/glusterfs/python/syncdaemon/repce.py", line 118, in listen rid, exc, res = recv(self.inf) File "/usr/local/libexec/glusterfs/python/syncdaemon/repce.py", line 42, in recv return pickle.load(inf) EOFError
+ Solution: This error indicates that the RPC communication between the master gsyncd module and the slave gsyncd module is broken, which can happen for various reasons. Check that all of the following prerequisites are satisfied:
+ Password-less SSH is set up properly between the host and the remote machine.
+ FUSE is installed on the machine, because the geo-replication module mounts the GlusterFS volume using FUSE to sync data.
+ If the slave is a volume, that volume is started.
+ If the slave is a plain directory, the directory has already been created with the required permissions.
+ If GlusterFS 3.2 or higher is not installed in the default location on the master and has been prefixed to a custom location, the gluster-command option is configured to point to the exact location.
+ If GlusterFS 3.2 or higher is not installed in the default location on the slave and has been prefixed to a custom location, the remote-gsyncd-command option is configured to point to the exact place where gsyncd is located.
+
+ Intermediate Master goes to Faulty State
+ Description: In a cascading set-up, the intermediate master goes to the faulty state with the following
+log:
+ 
+ raise RuntimeError ("aborting on uuid change from %s to %s" % \
+RuntimeError: aborting on uuid change from af07e07c-427f-4586-ab9f-4bf7d299be81 to de6b5040-8f4e-4575-8831-c4f55bd41154
+ Solution: In a cascading set-up, the intermediate master is loyal to the original primary master. The
+above log means that the geo-replication module has detected a change in the primary master.
+If this is the desired behavior, delete the config option volume-id in the session initiated from the
+intermediate master.
+ 
+
+
+ Troubleshooting POSIX ACLs + This section describes the most common troubleshooting issues related to POSIX ACLs. + +
+ setfacl command fails with “setfacl: <file or directory name>: Operation not supported” error
+ You may face this error when the backend file system on one of the servers is not mounted with
+the "-o acl" option. This can be confirmed by the following error message in the log file
+of that server: "Posix access control list is not supported".
+ 
+ Solution: Remount the backend file system with the "-o acl" option. For more information, see .
+ 
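+ For example, assuming the backend file system is mounted at /export/brick1 from /dev/sdb1 (both are
+placeholder names), it can be remounted with ACL support and /etc/fstab updated so the option persists
+across reboots:
+ # mount -o remount,acl /export/brick1
+ /dev/sdb1 /export/brick1 ext4 defaults,acl 0 0
+ The second line above is the corresponding /etc/fstab entry.
+ 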
+
+
+ Troubleshooting Hadoop Compatible Storage + This section describes the most common troubleshooting issues related to Hadoop Compatible +Storage. + + +
+ Time Sync
+ Running a MapReduce job may throw exceptions if the time is out of sync on the hosts in the cluster.
+ 
+ 
+ Solution: Sync the time on all hosts using the ntpd program.
+ 
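+ As a minimal sketch for distributions with Red Hat-style init scripts, the clocks can be synchronized
+once and ntpd enabled so they stay in sync; pool.ntp.org is only an example time server and the service
+commands may differ on other distributions:
+ # ntpdate pool.ntp.org
+ # /etc/init.d/ntpd start
+ # chkconfig ntpd on
+ 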
+
+
+ Troubleshooting NFS
+ This section describes the most common troubleshooting issues related to NFS.
+ 
+ mount command on NFS client fails with “RPC Error: Program not registered”
+ Start the portmap or rpcbind service on the NFS server.
+ 
+ This error is encountered when the server has not started correctly.
+ 
+ On most Linux distributions this is fixed by starting portmap:
+ 
+ $ /etc/init.d/portmap start
+ 
+ On some distributions where portmap has been replaced by rpcbind, the following command is
+required:
+ 
+ $ /etc/init.d/rpcbind start
+ After starting portmap or rpcbind, the Gluster NFS server needs to be restarted.
+ 
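+ Whether the registration succeeded can be confirmed with rpcinfo, which is part of the standard
+portmap/rpcbind tooling rather than Gluster; the Gluster NFS and MOUNT programs should appear in the
+listing once the Gluster NFS server has been restarted:
+ $ rpcinfo -p
+ 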
+
+ NFS server start-up fails with “Port is already in use” error in the log file
+ Another Gluster NFS server is running on the same machine.
+ 
+ This error can arise when there is already a Gluster NFS server running on the same machine.
+This situation can be confirmed from the log file, if the following error lines exist:
+ 
+ [2010-05-26 23:40:49] E [rpc-socket.c:126:rpcsvc_socket_listen] rpc-socket: binding socket failed:Address already in use
+[2010-05-26 23:40:49] E [rpc-socket.c:129:rpcsvc_socket_listen] rpc-socket: Port is already in use
+[2010-05-26 23:40:49] E [rpcsvc.c:2636:rpcsvc_stage_program_register] rpc-service: could not create listening connection
+[2010-05-26 23:40:49] E [rpcsvc.c:2675:rpcsvc_program_register] rpc-service: stage registration of program failed
+[2010-05-26 23:40:49] E [rpcsvc.c:2695:rpcsvc_program_register] rpc-service: Program registration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465
+[2010-05-26 23:40:49] E [nfs.c:125:nfs_init_versions] nfs: Program init failed
+[2010-05-26 23:40:49] C [nfs.c:531:notify] nfs: Failed to initialize protocols
+ To resolve this error, one of the Gluster NFS servers will have to be shut down. At this time,
+the Gluster NFS server does not support running multiple instances on the same machine.
+ 
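+ To see which process is holding one of the Gluster NFS ports before shutting anything down, a generic
+check such as the following can be used (38465 is the MOUNT port named in the log above; the flags are
+standard netstat options, not Gluster-specific):
+ # netstat -tlnp | grep 38465
+ 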
+
+ mount command fails with “rpc.statd” related error message
+ If the mount command fails with the following error message:
+ 
+ mount.nfs: rpc.statd is not running but is required for remote locking.
+mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
+ Start rpc.statd
+ For NFS clients to mount the NFS server, the rpc.statd service must be running on the clients.
+ Start the rpc.statd service by running the following command:
+ 
+ $ rpc.statd
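+ Whether rpc.statd is registered can then be confirmed with rpcinfo (standard RPC tooling); the status
+program should appear in the listing:
+ $ rpcinfo -p | grep status
+ 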
+
+ mount command takes too long to finish.
+ Start the rpcbind or portmap service on the NFS client.
+ 
+ The problem is that the rpcbind or portmap service is not running on the NFS client. The
+resolution is to start whichever of these services your distribution uses, by running the following command:
+ 
+ $ /etc/init.d/portmap start
+ 
+ On some distributions where portmap has been replaced by rpcbind, the following command is
+required:
+ 
+ $ /etc/init.d/rpcbind start
+
+ NFS server glusterfsd starts but initialization fails with “nfsrpc-service: portmap registration of program failed” error message in the log.
+ NFS start-up can succeed but the initialization of the NFS service can still fail, preventing clients
+from accessing the mount points. Such a situation can be confirmed from the following error
+messages in the log file:
+ 
+ [2010-05-26 23:33:47] E [rpcsvc.c:2598:rpcsvc_program_register_portmap] rpc-service: Could not register with portmap
+[2010-05-26 23:33:47] E [rpcsvc.c:2682:rpcsvc_program_register] rpc-service: portmap registration of program failed
+[2010-05-26 23:33:47] E [rpcsvc.c:2695:rpcsvc_program_register] rpc-service: Program registration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465
+[2010-05-26 23:33:47] E [nfs.c:125:nfs_init_versions] nfs: Program init failed
+[2010-05-26 23:33:47] C [nfs.c:531:notify] nfs: Failed to initialize protocols
+[2010-05-26 23:33:49] E [rpcsvc.c:2614:rpcsvc_program_unregister_portmap] rpc-service: Could not unregister with portmap
+[2010-05-26 23:33:49] E [rpcsvc.c:2731:rpcsvc_program_unregister] rpc-service: portmap unregistration of program failed
+[2010-05-26 23:33:49] E [rpcsvc.c:2744:rpcsvc_program_unregister] rpc-service: Program unregistration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465
+ 
+ 
+ Start the portmap or rpcbind service on the NFS server.
+ 
+ On most Linux distributions, portmap can be started using the following command:
+ 
+ $ /etc/init.d/portmap start
+ On some distributions where portmap has been replaced by rpcbind, run the following command:
+ 
+ $ /etc/init.d/rpcbind start
+ After starting portmap or rpcbind, the Gluster NFS server needs to be restarted.
+ 
+ 
+ 
+ Stop another NFS server running on the same machine.
+ 
+ Such an error is also seen when there is another NFS server running on the same machine, but it is
+not the Gluster NFS server. On Linux systems, this could be the kernel NFS server (a quick check is
+sketched after this list). Resolution involves stopping the other NFS server or not running the
+Gluster NFS server on the machine.
+Before stopping the kernel NFS server, ensure that no critical service depends on access to that
+NFS server's exports.
+ 
+ On Linux, kernel NFS servers can be stopped by using either of the following commands
+depending on the distribution in use:
+ 
+ $ /etc/init.d/nfs-kernel-server stop
+ 
+ $ /etc/init.d/nfs stop
+ 
+ 
+ Restart the Gluster NFS server.
+ 
+ 
+ 
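+ Before stopping anything, rpcinfo (standard RPC tooling) can show whether another NFS implementation
+already holds the registrations; if nfs and mountd entries are listed while the Gluster NFS server is
+not running, the kernel NFS server is the likely owner:
+ $ rpcinfo -p | grep -E 'nfs|mountd'
+ 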
+
+ mount command fails with NFS server failed error.
+ The mount command fails with the following error:
+ 
+ mount: mount to NFS server '10.1.10.11' failed: timed out (retrying).
+ Perform one of the following to resolve this issue:
+ 
+ 
+ 
+ Disable name lookup requests from the NFS server to a DNS server.
+ 
+ The NFS server attempts to authenticate NFS clients by performing a reverse DNS lookup to
+match hostnames in the volume file with the client IP addresses. There can be a situation where
+the NFS server either is not able to connect to the DNS server or the DNS server is taking too long
+to respond to DNS requests. These delays can result in delayed replies from the NFS server to the
+NFS client, resulting in the timeout error seen above.
+ 
+ The NFS server provides a work-around that disables DNS requests, instead relying only on the client
+IP addresses for authentication. The following option can be added for successful mounting in
+such situations (a CLI equivalent is sketched after this list):
+ 
+ option rpc-auth.addr.namelookup off
+ 
+ Note: Remember that disabling name lookup forces the NFS server to authenticate clients using only IP
+addresses; if the authentication rules in the volume file use hostnames, those authentication
+rules will fail and mounting will be disallowed for those clients.
+ 
+ 
+ or
+ 
+ 
+ The NFS version used by the NFS client is other than version 3.
+ 
+ The Gluster NFS server supports version 3 of the NFS protocol. In recent Linux kernels, the default NFS
+version has been changed from 3 to 4. It is possible that the client machine is unable to connect
+to the Gluster NFS server because it is using version 4 messages, which are not understood by
+the Gluster NFS server. The timeout can be resolved by forcing the NFS client to use version 3. The
+vers option to the mount command is used for this purpose:
+ 
+ $ mount nfsserver:export -o vers=3 mount-point
+ 
+ 
+ 
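+ On releases that manage the NFS translator through the gluster CLI, the same name-lookup behaviour is
+usually exposed as a volume option; the option name below is an assumption to verify against your
+version, and VOLNAME is a placeholder:
+ # gluster volume set VOLNAME nfs.addr-namelookup off
+ 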
+
+ showmount fails with clnt_create: RPC: Unable to receive
+ Check your firewall settings: port 111 must be open for portmap requests/replies, as well as the ports
+used for Gluster NFS server requests/replies. The Gluster NFS server operates over the following port numbers: 38465,
+38466, and 38467.
+ 
+ For more information, see .
+ 
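+ As an illustrative iptables sketch, the relevant ports can be opened as follows; rule ordering and the
+way rules are persisted differ per distribution, and the save command shown is specific to Red Hat-style
+systems:
+ # iptables -A INPUT -p tcp --dport 111 -j ACCEPT
+ # iptables -A INPUT -p udp --dport 111 -j ACCEPT
+ # iptables -A INPUT -p tcp --dport 38465:38467 -j ACCEPT
+ # service iptables save
+ 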
+
+ Application fails with "Invalid argument" or "Value too large for defined data type" error.
+ These two errors generally happen for 32-bit NFS clients, or applications that do not support 64-bit
+inode numbers or large files.
+Use the following option from the CLI to make Gluster NFS return 32-bit inode numbers instead
+(an example command is sketched at the end of this section):
+nfs.enable-ino32 <on|off>
+ 
+ Applications that will benefit are those that were either:
+ 
+ 
+ 
+ built 32-bit and run on 32-bit machines, such that they do not support large files by default
+ 
+ 
+ built 32-bit on 64-bit systems
+ 
+ 
+ 
+ This option is disabled by default, so NFS returns 64-bit inode numbers.
+ 
+ Applications that can be rebuilt from source should be rebuilt using the following
+flag with gcc:
+ -D_FILE_OFFSET_BITS=64
+ 
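+ For example, assuming the volume is managed through the gluster CLI, the option named above can be
+turned on for a volume called test-volume (a placeholder name) as follows:
+ # gluster volume set test-volume nfs.enable-ino32 on
+ 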
+
+
+ Troubleshooting File Locks
+ In GlusterFS 3.3 you can use the statedump command to list the locks held on files. The statedump output also provides information on each lock with its range, basename, PID of the application holding the lock, and so on. You can analyze the output to identify locks whose owning application is no longer running or no longer interested in the lock. After ensuring that no application is using the file, you can clear the lock using the following clear-locks command:
+ # gluster volume clear-locks VOLNAME path kind {blocked | granted | all} {inode [range] | entry [basename] | posix [range]}
+ For more information on performing statedump, see 
+ To identify locked files and clear locks
+ 
+ 
+ Perform statedump on the volume to view the files that are locked, using the following command:
+ # gluster volume statedump VOLNAME inode
+ For example, to display the statedump of test-volume:
+ # gluster volume statedump test-volume
+Volume statedump successful
+ The statedump files are created on the brick servers in the /tmp directory, or in the directory set using the server.statedump-path volume option. The naming convention of the dump file is <brick-path>.<brick-pid>.dump. (A search sketch for these files follows this procedure.)
+ The following is sample content from a statedump file. It indicates that GlusterFS has entered a state where there is an entry lock (entrylk) and an inode lock (inodelk). Ensure that these are stale locks and that no resources own them.
+ [xlator.features.locks.vol-locks.inode]
+path=/
+mandatory=0
+entrylk-count=1
+lock-dump.domain.domain=vol-replicate-0
+xlator.feature.locks.lock-dump.domain.entrylk.entrylk[0](ACTIVE)=type=ENTRYLK_WRLCK on basename=file1, pid = 714782904, owner=ffffff2a3c7f0000, transport=0x20e0670, , granted at Mon Feb 27 16:01:01 2012
+
+conn.2.bound_xl./gfs/brick1.hashsize=14057
+conn.2.bound_xl./gfs/brick1.name=/gfs/brick1/inode
+conn.2.bound_xl./gfs/brick1.lru_limit=16384
+conn.2.bound_xl./gfs/brick1.active_size=2
+conn.2.bound_xl./gfs/brick1.lru_size=0
+conn.2.bound_xl./gfs/brick1.purge_size=0
+
+[conn.2.bound_xl./gfs/brick1.active.1]
+gfid=538a3d4a-01b0-4d03-9dc9-843cd8704d07
+nlookup=1
+ref=2
+ia_type=1
+[xlator.features.locks.vol-locks.inode]
+path=/file1
+mandatory=0
+inodelk-count=1
+lock-dump.domain.domain=vol-replicate-0
+inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=0, pid = 714787072, owner=00ffff2a3c7f0000, transport=0x20e0670, , granted at Mon Feb 27 16:01:01 2012
+ 
+ 
+ Clear the entry lock using the following command:
+ # gluster volume clear-locks VOLNAME path kind granted entry basename
+ For example, to clear the entry lock on file1 of test-volume:
+ 
+ # gluster volume clear-locks test-volume / kind granted entry file1
+Volume clear-locks successful
+vol-locks: entry blocked locks=0 granted locks=1
+ 
+ 
+ Clear the inode lock using the following command:
+ # gluster volume clear-locks VOLNAME path kind granted inode range
+ For example, to clear the inode lock on file1 of test-volume:
+ 
+ # gluster volume clear-locks test-volume /file1 kind granted inode 0,0-0
+Volume clear-locks successful
+vol-locks: inode blocked locks=0 granted locks=1
+ You can perform statedump on test-volume again to verify that the above inode and entry locks are cleared.
+ 
+ 
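+ When the dump directory contains many files, a generic search such as the one below helps locate the
+granted entrylk and inodelk records to inspect before clearing them; the dump file name shown is a
+placeholder built from the naming convention above, not an exact value:
+ # ls /tmp/*.dump
+ # grep -E 'entrylk|inodelk' /tmp/gfs-brick1.3396.dump
+ 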
+
diff --git a/doc/legacy/docbook/gfs_introduction.xml b/doc/legacy/docbook/gfs_introduction.xml new file mode 100644 index 000000000..5fd887305 --- /dev/null +++ b/doc/legacy/docbook/gfs_introduction.xml @@ -0,0 +1,54 @@ + + + + Introducing Gluster File System + GlusterFS is an open source, clustered file system capable of scaling to several petabytes and handling thousands of clients. GlusterFS can be flexibly combined with commodity physical, virtual, and cloud resources to deliver highly available and performant enterprise storage at a fraction of the cost of traditional solutions. + GlusterFS clusters together storage building blocks over Infiniband RDMA and/or TCP/IP interconnect, aggregating disk and memory resources and managing data in a single global namespace. GlusterFS is based on a stackable user space design, delivering exceptional performance for diverse workloads. + +
+ Virtualized Cloud Environments
+ 
+ Figure: Virtualized Cloud Environments
+ 
+ GlusterFS is designed for today's high-performance, virtualized cloud environments. Unlike traditional data centers, cloud environments require multi-tenancy along with the ability to grow or shrink resources on demand. Enterprises can scale capacity, performance, and availability on demand, with no vendor lock-in, across on-premise, public cloud, and hybrid environments.
+ GlusterFS is in production at thousands of enterprises spanning media, healthcare, government, education, web 2.0, and financial services. The following table lists the commercial offerings and their documentation locations:
+ 
+ Product                                  Documentation Location
+ Red Hat Storage Software Appliance
+ Red Hat Virtual Storage Appliance
+ Red Hat Storage
+ 
+ 
diff --git a/doc/legacy/docbook/glossary.xml b/doc/legacy/docbook/glossary.xml new file mode 100644 index 000000000..a8544b8cd --- /dev/null +++ b/doc/legacy/docbook/glossary.xml @@ -0,0 +1,126 @@ + + + + Glossary + + + Brick + + A Brick is the GlusterFS basic unit of storage, represented by an export directory on a server in the trusted storage pool. A Brick is expressed by combining a server with an export directory in the following format: + SERVER:EXPORT + For example: + myhostname:/exports/myexportdir/ + + + + Cluster + + A cluster is a group of linked computers, working together closely thus in many respects forming a single computer. + + + + Distributed File System + + A file system that allows multiple clients to concurrently access data over a computer network. + + + + Filesystem + + A method of storing and organizing computer files and their data. Essentially, it organizes these files into a database for the storage, organization, manipulation, and retrieval by the computer's operating system. + Source: Wikipedia + + + + FUSE + + Filesystem in Userspace (FUSE) is a loadable kernel module for Unix-like computer operating systems that lets non-privileged users create their own file systems without editing kernel code. This is achieved by running file system code in user space while the FUSE module provides only a "bridge" to the actual kernel interfaces. + Source: Wikipedia + + + + Geo-Replication + + Geo-replication provides a continuous, asynchronous, and incremental replication service from site to another over Local Area Networks (LAN), Wide Area Network (WAN), and across the Internet. + + + + glusterd + + The Gluster management daemon that needs to run on all servers in the trusted storage pool. + + + + Metadata + + Metadata is data providing information about one or more other pieces of data. + + + + Namespace + + Namespace is an abstract container or environment created to hold a logical grouping of unique identifiers or symbols. Each Gluster volume exposes a single namespace as a POSIX mount point that contains every file in the cluster. + + + + Open Source + + Open source describes practices in production and development that promote access to the end product's source materials. Some consider open source a philosophy, others consider it a pragmatic methodology. + Before the term open source became widely adopted, developers and producers used a variety of phrases to describe the concept; open source gained hold with the rise of the Internet, and the attendant need for massive retooling of the computing source code. + Opening the source code enabled a self-enhancing diversity of production models, communication paths, and interactive communities. Subsequently, a new, three-word phrase "open source software" was born to describe the environment that the new copyright, licensing, domain, and consumer issues created. + Source: Wikipedia + + + + Petabyte + + A petabyte (derived from the SI prefix peta- ) is a unit of information equal to one quadrillion (short scale) bytes, or 1000 terabytes. The unit symbol for the petabyte is PB. The prefix peta- (P) indicates a power of 1000: + 1 PB = 1,000,000,000,000,000 B = 10005 B = 1015 B. + The term "pebibyte" (PiB), using a binary prefix, is used for the corresponding power of 1024. 
+ Source: Wikipedia + + + + POSIX + + Portable Operating System Interface (for Unix) is the name of a family of related standards specified by the IEEE to define the application programming interface (API), along with shell and utilities interfaces for software compatible with variants of the Unix operating system. Gluster exports a fully POSIX compliant file system. + + + + RAID + + Redundant Array of Inexpensive Disks (RAID) is a technology that provides increased storage reliability through redundancy, combining multiple low-cost, less-reliable disk drives components into a logical unit where all drives in the array are interdependent. + + + + RRDNS + + Round Robin Domain Name Service (RRDNS) is a method to distribute load across application servers. RRDNS is implemented by creating multiple A records with the same name and different IP addresses in the zone file of a DNS server. + + + + Trusted Storage Pool + + A storage pool is a trusted network of storage servers. When you start the first server, the storage pool consists of that server alone. + + + + Userspace + + Applications running in user space don’t directly interact with hardware, instead using the kernel to moderate access. Userspace applications are generally more portable than applications in kernel space. Gluster is a user space application. + + + + Volfile + + Volfile is a configuration file used by glusterfs process. Volfile will be usually located at /var/lib/glusterd/vols/VOLNAME. + + + + Volume + + A volume is a logical collection of bricks. Most of the gluster management operations happen on the volume. + + + + diff --git a/doc/legacy/docbook/publican.cfg b/doc/legacy/docbook/publican.cfg new file mode 100644 index 000000000..e42fa1b3d --- /dev/null +++ b/doc/legacy/docbook/publican.cfg @@ -0,0 +1,12 @@ +# Config::Simple 4.59 +# Thu Apr 5 11:09:15 2012 + +xml_lang: "en-US" +type: Book +brand: Gluster_Brand +prod_url: http://www.gluster.org +doc_url: http://www.gluster.com/community/documentation/index.php/Main_Page +condition: gfs +show_remarks: 1 + + diff --git a/doc/legacy/fdl.texi b/doc/legacy/fdl.texi new file mode 100644 index 000000000..e33c687cd --- /dev/null +++ b/doc/legacy/fdl.texi @@ -0,0 +1,454 @@ + +@c @node GNU Free Documentation License +@c @appendixsec GNU Free Documentation License + +@cindex FDL, GNU Free Documentation License +@center Version 1.2, November 2002 + +@display +Copyright @copyright{} 2000,2001,2002 Free Software Foundation, Inc. +59 Temple Place, Suite 330, Boston, MA 02111-1307, USA + +Everyone is permitted to copy and distribute verbatim copies +of this license document, but changing it is not allowed. +@end display + +@enumerate 0 +@item +PREAMBLE + +The purpose of this License is to make a manual, textbook, or other +functional and useful document @dfn{free} in the sense of freedom: to +assure everyone the effective freedom to copy and redistribute it, +with or without modifying it, either commercially or noncommercially. +Secondarily, this License preserves for the author and publisher a way +to get credit for their work, while not being considered responsible +for modifications made by others. + +This License is a kind of ``copyleft'', which means that derivative +works of the document must themselves be free in the same sense. It +complements the GNU General Public License, which is a copyleft +license designed for free software. 
+ +We have designed this License in order to use it for manuals for free +software, because free software needs free documentation: a free +program should come with manuals providing the same freedoms that the +software does. But this License is not limited to software manuals; +it can be used for any textual work, regardless of subject matter or +whether it is published as a printed book. We recommend this License +principally for works whose purpose is instruction or reference. + +@item +APPLICABILITY AND DEFINITIONS + +This License applies to any manual or other work, in any medium, that +contains a notice placed by the copyright holder saying it can be +distributed under the terms of this License. Such a notice grants a +world-wide, royalty-free license, unlimited in duration, to use that +work under the conditions stated herein. The ``Document'', below, +refers to any such manual or work. Any member of the public is a +licensee, and is addressed as ``you''. You accept the license if you +copy, modify or distribute the work in a way requiring permission +under copyright law. + +A ``Modified Version'' of the Document means any work containing the +Document or a portion of it, either copied verbatim, or with +modifications and/or translated into another language. + +A ``Secondary Section'' is a named appendix or a front-matter section +of the Document that deals exclusively with the relationship of the +publishers or authors of the Document to the Document's overall +subject (or to related matters) and contains nothing that could fall +directly within that overall subject. (Thus, if the Document is in +part a textbook of mathematics, a Secondary Section may not explain +any mathematics.) The relationship could be a matter of historical +connection with the subject or with related matters, or of legal, +commercial, philosophical, ethical or political position regarding +them. + +The ``Invariant Sections'' are certain Secondary Sections whose titles +are designated, as being those of Invariant Sections, in the notice +that says that the Document is released under this License. If a +section does not fit the above definition of Secondary then it is not +allowed to be designated as Invariant. The Document may contain zero +Invariant Sections. If the Document does not identify any Invariant +Sections then there are none. + +The ``Cover Texts'' are certain short passages of text that are listed, +as Front-Cover Texts or Back-Cover Texts, in the notice that says that +the Document is released under this License. A Front-Cover Text may +be at most 5 words, and a Back-Cover Text may be at most 25 words. + +A ``Transparent'' copy of the Document means a machine-readable copy, +represented in a format whose specification is available to the +general public, that is suitable for revising the document +straightforwardly with generic text editors or (for images composed of +pixels) generic paint programs or (for drawings) some widely available +drawing editor, and that is suitable for input to text formatters or +for automatic translation to a variety of formats suitable for input +to text formatters. A copy made in an otherwise Transparent file +format whose markup, or absence of markup, has been arranged to thwart +or discourage subsequent modification by readers is not Transparent. +An image format is not Transparent if used for any substantial amount +of text. A copy that is not ``Transparent'' is called ``Opaque''. 
+ +Examples of suitable formats for Transparent copies include plain +@sc{ascii} without markup, Texinfo input format, La@TeX{} input +format, @acronym{SGML} or @acronym{XML} using a publicly available +@acronym{DTD}, and standard-conforming simple @acronym{HTML}, +PostScript or @acronym{PDF} designed for human modification. Examples +of transparent image formats include @acronym{PNG}, @acronym{XCF} and +@acronym{JPG}. Opaque formats include proprietary formats that can be +read and edited only by proprietary word processors, @acronym{SGML} or +@acronym{XML} for which the @acronym{DTD} and/or processing tools are +not generally available, and the machine-generated @acronym{HTML}, +PostScript or @acronym{PDF} produced by some word processors for +output purposes only. + +The ``Title Page'' means, for a printed book, the title page itself, +plus such following pages as are needed to hold, legibly, the material +this License requires to appear in the title page. For works in +formats which do not have any title page as such, ``Title Page'' means +the text near the most prominent appearance of the work's title, +preceding the beginning of the body of the text. + +A section ``Entitled XYZ'' means a named subunit of the Document whose +title either is precisely XYZ or contains XYZ in parentheses following +text that translates XYZ in another language. (Here XYZ stands for a +specific section name mentioned below, such as ``Acknowledgements'', +``Dedications'', ``Endorsements'', or ``History''.) To ``Preserve the Title'' +of such a section when you modify the Document means that it remains a +section ``Entitled XYZ'' according to this definition. + +The Document may include Warranty Disclaimers next to the notice which +states that this License applies to the Document. These Warranty +Disclaimers are considered to be included by reference in this +License, but only as regards disclaiming warranties: any other +implication that these Warranty Disclaimers may have is void and has +no effect on the meaning of this License. + +@item +VERBATIM COPYING + +You may copy and distribute the Document in any medium, either +commercially or noncommercially, provided that this License, the +copyright notices, and the license notice saying this License applies +to the Document are reproduced in all copies, and that you add no other +conditions whatsoever to those of this License. You may not use +technical measures to obstruct or control the reading or further +copying of the copies you make or distribute. However, you may accept +compensation in exchange for copies. If you distribute a large enough +number of copies you must also follow the conditions in section 3. + +You may also lend copies, under the same conditions stated above, and +you may publicly display copies. + +@item +COPYING IN QUANTITY + +If you publish printed copies (or copies in media that commonly have +printed covers) of the Document, numbering more than 100, and the +Document's license notice requires Cover Texts, you must enclose the +copies in covers that carry, clearly and legibly, all these Cover +Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on +the back cover. Both covers must also clearly and legibly identify +you as the publisher of these copies. The front cover must present +the full title with all words of the title equally prominent and +visible. You may add other material on the covers in addition. 
+Copying with changes limited to the covers, as long as they preserve +the title of the Document and satisfy these conditions, can be treated +as verbatim copying in other respects. + +If the required texts for either cover are too voluminous to fit +legibly, you should put the first ones listed (as many as fit +reasonably) on the actual cover, and continue the rest onto adjacent +pages. + +If you publish or distribute Opaque copies of the Document numbering +more than 100, you must either include a machine-readable Transparent +copy along with each Opaque copy, or state in or with each Opaque copy +a computer-network location from which the general network-using +public has access to download using public-standard network protocols +a complete Transparent copy of the Document, free of added material. +If you use the latter option, you must take reasonably prudent steps, +when you begin distribution of Opaque copies in quantity, to ensure +that this Transparent copy will remain thus accessible at the stated +location until at least one year after the last time you distribute an +Opaque copy (directly or through your agents or retailers) of that +edition to the public. + +It is requested, but not required, that you contact the authors of the +Document well before redistributing any large number of copies, to give +them a chance to provide you with an updated version of the Document. + +@item +MODIFICATIONS + +You may copy and distribute a Modified Version of the Document under +the conditions of sections 2 and 3 above, provided that you release +the Modified Version under precisely this License, with the Modified +Version filling the role of the Document, thus licensing distribution +and modification of the Modified Version to whoever possesses a copy +of it. In addition, you must do these things in the Modified Version: + +@enumerate A +@item +Use in the Title Page (and on the covers, if any) a title distinct +from that of the Document, and from those of previous versions +(which should, if there were any, be listed in the History section +of the Document). You may use the same title as a previous version +if the original publisher of that version gives permission. + +@item +List on the Title Page, as authors, one or more persons or entities +responsible for authorship of the modifications in the Modified +Version, together with at least five of the principal authors of the +Document (all of its principal authors, if it has fewer than five), +unless they release you from this requirement. + +@item +State on the Title page the name of the publisher of the +Modified Version, as the publisher. + +@item +Preserve all the copyright notices of the Document. + +@item +Add an appropriate copyright notice for your modifications +adjacent to the other copyright notices. + +@item +Include, immediately after the copyright notices, a license notice +giving the public permission to use the Modified Version under the +terms of this License, in the form shown in the Addendum below. + +@item +Preserve in that license notice the full lists of Invariant Sections +and required Cover Texts given in the Document's license notice. + +@item +Include an unaltered copy of this License. + +@item +Preserve the section Entitled ``History'', Preserve its Title, and add +to it an item stating at least the title, year, new authors, and +publisher of the Modified Version as given on the Title Page. 
If +there is no section Entitled ``History'' in the Document, create one +stating the title, year, authors, and publisher of the Document as +given on its Title Page, then add an item describing the Modified +Version as stated in the previous sentence. + +@item +Preserve the network location, if any, given in the Document for +public access to a Transparent copy of the Document, and likewise +the network locations given in the Document for previous versions +it was based on. These may be placed in the ``History'' section. +You may omit a network location for a work that was published at +least four years before the Document itself, or if the original +publisher of the version it refers to gives permission. + +@item +For any section Entitled ``Acknowledgements'' or ``Dedications'', Preserve +the Title of the section, and preserve in the section all the +substance and tone of each of the contributor acknowledgements and/or +dedications given therein. + +@item +Preserve all the Invariant Sections of the Document, +unaltered in their text and in their titles. Section numbers +or the equivalent are not considered part of the section titles. + +@item +Delete any section Entitled ``Endorsements''. Such a section +may not be included in the Modified Version. + +@item +Do not retitle any existing section to be Entitled ``Endorsements'' or +to conflict in title with any Invariant Section. + +@item +Preserve any Warranty Disclaimers. +@end enumerate + +If the Modified Version includes new front-matter sections or +appendices that qualify as Secondary Sections and contain no material +copied from the Document, you may at your option designate some or all +of these sections as invariant. To do this, add their titles to the +list of Invariant Sections in the Modified Version's license notice. +These titles must be distinct from any other section titles. + +You may add a section Entitled ``Endorsements'', provided it contains +nothing but endorsements of your Modified Version by various +parties---for example, statements of peer review or that the text has +been approved by an organization as the authoritative definition of a +standard. + +You may add a passage of up to five words as a Front-Cover Text, and a +passage of up to 25 words as a Back-Cover Text, to the end of the list +of Cover Texts in the Modified Version. Only one passage of +Front-Cover Text and one of Back-Cover Text may be added by (or +through arrangements made by) any one entity. If the Document already +includes a cover text for the same cover, previously added by you or +by arrangement made by the same entity you are acting on behalf of, +you may not add another; but you may replace the old one, on explicit +permission from the previous publisher that added the old one. + +The author(s) and publisher(s) of the Document do not by this License +give permission to use their names for publicity for or to assert or +imply endorsement of any Modified Version. + +@item +COMBINING DOCUMENTS + +You may combine the Document with other documents released under this +License, under the terms defined in section 4 above for modified +versions, provided that you include in the combination all of the +Invariant Sections of all of the original documents, unmodified, and +list them all as Invariant Sections of your combined work in its +license notice, and that you preserve all their Warranty Disclaimers. + +The combined work need only contain one copy of this License, and +multiple identical Invariant Sections may be replaced with a single +copy. 
If there are multiple Invariant Sections with the same name but +different contents, make the title of each such section unique by +adding at the end of it, in parentheses, the name of the original +author or publisher of that section if known, or else a unique number. +Make the same adjustment to the section titles in the list of +Invariant Sections in the license notice of the combined work. + +In the combination, you must combine any sections Entitled ``History'' +in the various original documents, forming one section Entitled +``History''; likewise combine any sections Entitled ``Acknowledgements'', +and any sections Entitled ``Dedications''. You must delete all +sections Entitled ``Endorsements.'' + +@item +COLLECTIONS OF DOCUMENTS + +You may make a collection consisting of the Document and other documents +released under this License, and replace the individual copies of this +License in the various documents with a single copy that is included in +the collection, provided that you follow the rules of this License for +verbatim copying of each of the documents in all other respects. + +You may extract a single document from such a collection, and distribute +it individually under this License, provided you insert a copy of this +License into the extracted document, and follow this License in all +other respects regarding verbatim copying of that document. + +@item +AGGREGATION WITH INDEPENDENT WORKS + +A compilation of the Document or its derivatives with other separate +and independent documents or works, in or on a volume of a storage or +distribution medium, is called an ``aggregate'' if the copyright +resulting from the compilation is not used to limit the legal rights +of the compilation's users beyond what the individual works permit. +When the Document is included in an aggregate, this License does not +apply to the other works in the aggregate which are not themselves +derivative works of the Document. + +If the Cover Text requirement of section 3 is applicable to these +copies of the Document, then if the Document is less than one half of +the entire aggregate, the Document's Cover Texts may be placed on +covers that bracket the Document within the aggregate, or the +electronic equivalent of covers if the Document is in electronic form. +Otherwise they must appear on printed covers that bracket the whole +aggregate. + +@item +TRANSLATION + +Translation is considered a kind of modification, so you may +distribute translations of the Document under the terms of section 4. +Replacing Invariant Sections with translations requires special +permission from their copyright holders, but you may include +translations of some or all Invariant Sections in addition to the +original versions of these Invariant Sections. You may include a +translation of this License, and all the license notices in the +Document, and any Warranty Disclaimers, provided that you also include +the original English version of this License and the original versions +of those notices and disclaimers. In case of a disagreement between +the translation and the original version of this License or a notice +or disclaimer, the original version will prevail. + +If a section in the Document is Entitled ``Acknowledgements'', +``Dedications'', or ``History'', the requirement (section 4) to Preserve +its Title (section 1) will typically require changing the actual +title. + +@item +TERMINATION + +You may not copy, modify, sublicense, or distribute the Document except +as expressly provided for under this License. 
Any other attempt to +copy, modify, sublicense or distribute the Document is void, and will +automatically terminate your rights under this License. However, +parties who have received copies, or rights, from you under this +License will not have their licenses terminated so long as such +parties remain in full compliance. + +@item +FUTURE REVISIONS OF THIS LICENSE + +The Free Software Foundation may publish new, revised versions +of the GNU Free Documentation License from time to time. Such new +versions will be similar in spirit to the present version, but may +differ in detail to address new problems or concerns. See +@uref{http://www.gnu.org/copyleft/}. + +Each version of the License is given a distinguishing version number. +If the Document specifies that a particular numbered version of this +License ``or any later version'' applies to it, you have the option of +following the terms and conditions either of that specified version or +of any later version that has been published (not as a draft) by the +Free Software Foundation. If the Document does not specify a version +number of this License, you may choose any version ever published (not +as a draft) by the Free Software Foundation. +@end enumerate + +@page +@c @appendixsubsec ADDENDUM: How to use this License for your +@c documents +@subsection ADDENDUM: How to use this License for your documents + +To use this License in a document you have written, include a copy of +the License in the document and put the following copyright and +license notices just after the title page: + +@smallexample +@group + Copyright (C) @var{year} @var{your name}. + Permission is granted to copy, distribute and/or modify this document + under the terms of the GNU Free Documentation License, Version 1.2 + or any later version published by the Free Software Foundation; + with no Invariant Sections, no Front-Cover Texts, and no Back-Cover + Texts. A copy of the license is included in the section entitled ``GNU + Free Documentation License''. +@end group +@end smallexample + +If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, +replace the ``with...Texts.'' line with this: + +@smallexample +@group + with the Invariant Sections being @var{list their titles}, with + the Front-Cover Texts being @var{list}, and with the Back-Cover Texts + being @var{list}. +@end group +@end smallexample + +If you have Invariant Sections without Cover Texts, or some other +combination of the three, merge those two alternatives to suit the +situation. + +If your document contains nontrivial examples of program code, we +recommend releasing these examples in parallel under your choice of +free software license, such as the GNU General Public License, +to permit their use in free software. 
+ +@c Local Variables: +@c ispell-local-pdict: "ispell-dict" +@c End: + diff --git a/doc/legacy/fuse.odg b/doc/legacy/fuse.odg new file mode 100644 index 000000000..61bd103c7 Binary files /dev/null and b/doc/legacy/fuse.odg differ diff --git a/doc/legacy/fuse.pdf b/doc/legacy/fuse.pdf new file mode 100644 index 000000000..a7d13faff Binary files /dev/null and b/doc/legacy/fuse.pdf differ diff --git a/doc/legacy/ha.odg b/doc/legacy/ha.odg new file mode 100644 index 000000000..e4b8b72d0 Binary files /dev/null and b/doc/legacy/ha.odg differ diff --git a/doc/legacy/ha.pdf b/doc/legacy/ha.pdf new file mode 100644 index 000000000..e372c0ab0 Binary files /dev/null and b/doc/legacy/ha.pdf differ diff --git a/doc/legacy/stripe.odg b/doc/legacy/stripe.odg new file mode 100644 index 000000000..79441bf14 Binary files /dev/null and b/doc/legacy/stripe.odg differ diff --git a/doc/legacy/stripe.pdf b/doc/legacy/stripe.pdf new file mode 100644 index 000000000..b94446feb Binary files /dev/null and b/doc/legacy/stripe.pdf differ diff --git a/doc/legacy/unify.odg b/doc/legacy/unify.odg new file mode 100644 index 000000000..ccaa9bf16 Binary files /dev/null and b/doc/legacy/unify.odg differ diff --git a/doc/legacy/unify.pdf b/doc/legacy/unify.pdf new file mode 100644 index 000000000..c22027f66 Binary files /dev/null and b/doc/legacy/unify.pdf differ diff --git a/doc/legacy/user-guide.info b/doc/legacy/user-guide.info new file mode 100644 index 000000000..2bbadb351 --- /dev/null +++ b/doc/legacy/user-guide.info @@ -0,0 +1,2697 @@ +This is ../../../doc/user-guide/user-guide.info, produced by makeinfo version 4.13 from ../../../doc/user-guide/user-guide.texi. + +START-INFO-DIR-ENTRY +* GlusterFS: (user-guide). GlusterFS distributed filesystem user guide +END-INFO-DIR-ENTRY + + This is the user manual for GlusterFS 2.0. + + Copyright (c) 2007-2011 Gluster, Inc. Permission is granted to +copy, distribute and/or modify this document under the terms of the GNU +Free Documentation License, Version 1.2 or any later version published +by the Free Software Foundation; with no Invariant Sections, no +Front-Cover Texts, and no Back-Cover Texts. A copy of the license is +included in the chapter entitled "GNU Free Documentation License". + + +File: user-guide.info, Node: Top, Next: Acknowledgements, Up: (dir) + +GlusterFS 2.0 User Guide +************************ + +This is the user manual for GlusterFS 2.0. + + Copyright (c) 2007-2011 Gluster, Inc. Permission is granted to +copy, distribute and/or modify this document under the terms of the GNU +Free Documentation License, Version 1.2 or any later version published +by the Free Software Foundation; with no Invariant Sections, no +Front-Cover Texts, and no Back-Cover Texts. A copy of the license is +included in the chapter entitled "GNU Free Documentation License". 
+ +* Menu: + +* Acknowledgements:: +* Introduction:: +* Installation and Invocation:: +* Concepts:: +* Translators:: +* Usage Scenarios:: +* Troubleshooting:: +* GNU Free Documentation Licence:: +* Index:: + + --- The Detailed Node Listing --- + +Installation and Invocation + +* Pre requisites:: +* Getting GlusterFS:: +* Building:: +* Running GlusterFS:: +* A Tutorial Introduction:: + +Running GlusterFS + +* Server:: +* Client:: + +Concepts + +* Filesystems in Userspace:: +* Translator:: +* Volume specification file:: + +Translators + +* Storage Translators:: +* Client and Server Translators:: +* Clustering Translators:: +* Performance Translators:: +* Features Translators:: + +Storage Translators + +* POSIX:: + +Client and Server Translators + +* Transport modules:: +* Client protocol:: +* Server protocol:: + +Clustering Translators + +* Unify:: +* Replicate:: +* Stripe:: + +Performance Translators + +* Read Ahead:: +* Write Behind:: +* IO Threads:: +* IO Cache:: + +Features Translators + +* POSIX Locks:: +* Fixed ID:: + +Miscellaneous Translators + +* ROT-13:: +* Trace:: + + +File: user-guide.info, Node: Acknowledgements, Next: Introduction, Prev: Top, Up: Top + +Acknowledgements +**************** + +GlusterFS continues to be a wonderful and enriching experience for all +of us involved. + + GlusterFS development would not have been possible at this pace if +not for our enthusiastic users. People from around the world have +helped us with bug reports, performance numbers, and feature +suggestions. A huge thanks to them all. + + Matthew Paine - for RPMs & general enthu + + Leonardo Rodrigues de Mello - for DEBs + + Julian Perez & Adam D'Auria - for multi-server tutorial + + Paul England - for HA spec + + Brent Nelson - for many bug reports + + Jacques Mattheij - for Europe mirror. + + Patrick Negri - for TCP non-blocking connect. + http://gluster.org/core-team.php () + Gluster + + +File: user-guide.info, Node: Introduction, Next: Installation and Invocation, Prev: Acknowledgements, Up: Top + +1 Introduction +************** + +GlusterFS is a distributed filesystem. It works at the file level, not +block level. + + A network filesystem is one which allows us to access remote files. A +distributed filesystem is one that stores data on multiple machines and +makes them all appear to be a part of the same filesystem. + + Need for distributed filesystems + + * Scalability: A distributed filesystem allows us to store more data + than what can be stored on a single machine. + + * Redundancy: We might want to replicate crucial data on to several + machines. + + * Uniform access: One can mount a remote volume (for example your + home directory) from any machine and access the same data. + +1.1 Contacting us +================= + +You can reach us through the mailing list *gluster-devel* +(). + + You can also find many of the developers on IRC, on the `#gluster' +channel on Freenode (). + + The GlusterFS documentation wiki is also useful: + + + For commercial support, you can contact Gluster at: + + 3194 Winding Vista Common + Fremont, CA 94539 + USA. + + Phone: +1 (510) 354 6801 + Toll free: +1 (888) 813 6309 + Fax: +1 (510) 372 0604 + + You can also email us at . 
+ + +File: user-guide.info, Node: Installation and Invocation, Next: Concepts, Prev: Introduction, Up: Top + +2 Installation and Invocation +***************************** + +* Menu: + +* Pre requisites:: +* Getting GlusterFS:: +* Building:: +* Running GlusterFS:: +* A Tutorial Introduction:: + + +File: user-guide.info, Node: Pre requisites, Next: Getting GlusterFS, Up: Installation and Invocation + +2.1 Pre requisites +================== + +Before installing GlusterFS make sure you have the following components +installed. + +2.1.1 FUSE +---------- + +You'll need FUSE version 2.6.0 or higher to use GlusterFS. You can omit +installing FUSE if you want to build _only_ the server. Note that you +won't be able to mount a GlusterFS filesystem on a machine that does +not have FUSE installed. + + FUSE can be downloaded from: + + To get the best performance from GlusterFS, however, it is +recommended that you use our patched version of FUSE. See Patched FUSE +for details. + +2.1.2 Patched FUSE +------------------ + +The GlusterFS project maintains a patched version of FUSE meant to be +used with GlusterFS. The patches increase GlusterFS performance. It is +recommended that all users use the patched FUSE. + + The patched FUSE tarball can be downloaded from: + + + + The specific changes made to FUSE are: + + * The communication channel size between FUSE kernel module and + GlusterFS has been increased to 1MB, permitting large reads and + writes to be sent in bigger chunks. + + * The kernel's read-ahead boundry has been extended upto 1MB. + + * Block size returned in the `stat()'/`fstat()' calls tuned to 1MB, + to make cp and similar commands perform I/O using that block size. + + * `flock()' locking support has been added (although some rework in + GlusterFS is needed for perfect compliance). + +2.1.3 libibverbs (optional) +--------------------------- + +This is only needed if you want GlusterFS to use InfiniBand as the +interconnect mechanism between server and client. You can get it from: + + . + +2.1.4 Bison and Flex +-------------------- + +These should be already installed on most Linux systems. If not, use +your distribution's normal software installation procedures to install +them. Make sure you install the relevant developer packages also. + + +File: user-guide.info, Node: Getting GlusterFS, Next: Building, Prev: Pre requisites, Up: Installation and Invocation + +2.2 Getting GlusterFS +===================== + +There are many ways to get hold of GlusterFS. For a production +deployment, the recommended method is to download the latest release +tarball. Release tarballs are available at: +. + + If you want the bleeding edge development source, you can get them +from the GNU Arch(1) repository. First you must install GNU Arch +itself. Then register the GlusterFS archive by doing: + + $ tla register-archive http://arch.sv.gnu.org/archives/gluster + + Now you can check out the source itself: + + $ tla get -A gluster@sv.gnu.org glusterfs--mainline--3.0 + + ---------- Footnotes ---------- + + (1) + + +File: user-guide.info, Node: Building, Next: Running GlusterFS, Prev: Getting GlusterFS, Up: Installation and Invocation + +2.3 Building +============ + +You can skip this section if you're installing from RPMs or DEBs. + + GlusterFS uses the Autotools mechanism to build. As such, the +procedure is straight-forward. First, change into the GlusterFS source +directory. + + $ cd glusterfs- + + If you checked out the source from the Arch repository, you'll need +to run `./autogen.sh' first. 
Note that you'll need to have Autoconf and +Automake installed for this. + + Run `configure'. + + $ ./configure + + The configure script accepts the following options: + +`--disable-ibverbs' + Disable the InfiniBand transport mechanism. + +`--disable-fuse-client' + Disable the FUSE client. + +`--disable-server' + Disable building of the GlusterFS server. + +`--disable-bdb' + Disable building of Berkeley DB based storage translator. + +`--disable-mod_glusterfs' + Disable building of Apache/lighttpd glusterfs plugins. + +`--disable-epoll' + Use poll instead of epoll. + +`--disable-libglusterfsclient' + Disable building of libglusterfsclient + + + Build and install GlusterFS. + + # make install + + The binaries (`glusterfsd' and `glusterfs') will be by default +installed in `/usr/local/sbin/'. Translator, scheduler, and transport +shared libraries will be installed in +`/usr/local/lib/glusterfs//'. Sample volume specification +files will be in `/usr/local/etc/glusterfs/'. This document itself can +be found in `/usr/local/share/doc/glusterfs/'. If you passed the +`--prefix' argument to the configure script, then replace `/usr/local' +in the preceding paths with the prefix. + + +File: user-guide.info, Node: Running GlusterFS, Next: A Tutorial Introduction, Prev: Building, Up: Installation and Invocation + +2.4 Running GlusterFS +===================== + +* Menu: + +* Server:: +* Client:: + + +File: user-guide.info, Node: Server, Next: Client, Up: Running GlusterFS + +2.4.1 Server +------------ + +The GlusterFS server is necessary to export storage volumes to remote +clients (See *note Server protocol:: for more info). This section +documents the invocation of the GlusterFS server program and all the +command-line options accepted by it. + + Basic Options + +`-f, --volfile=' + Use the volume file as the volume specification. + +`-s, --volfile-server=' + Server to get volume file from. This option overrides -volfile + option. + +`-l, --log-file=' + Specify the path for the log file. + +`-L, --log-level=' + Set the log level for the server. Log level should be one of DEBUG, + WARNING, ERROR, CRITICAL, or NONE. + + Advanced Options + +`--debug' + Run in debug mode. This option sets -no-daemon, -log-level to + DEBUG and -log-file to console. + +`-N, --no-daemon' + Run glusterfsd as a foreground process. + +`-p, --pid-file=' + Path for the PID file. + +`--volfile-id=' + 'key' of the volfile to be fetched from server. + +`--volfile-server-port=' + Listening port number of volfile server. + +`--volfile-server-transport=[tcp|ib-verbs]' + Transport type to get volfile from server. [default: `tcp'] + +`--xlator-options=' + Add/override a translator option for a volume with specified value. + + Miscellaneous Options + +`-?, --help' + Show this help text. + +`--usage' + Display a short usage message. + +`-V, --version' + Show version information. + + +File: user-guide.info, Node: Client, Prev: Server, Up: Running GlusterFS + +2.4.2 Client +------------ + +The GlusterFS client process is necessary to access remote storage +volumes and mount them locally using FUSE. This section documents the +invocation of the client process and all its command-line arguments. + + # glusterfs [options] + + The `mountpoint' is the directory where you want the GlusterFS +filesystem to appear. Example: + + # glusterfs -f /usr/local/etc/glusterfs-client.vol /mnt + + The command-line options are detailed below. + + Basic Options + +`-f, --volfile=' + Use the volume file as the volume specification. 
+ +`-s, --volfile-server=' + Server to get volume file from. This option overrides -volfile + option. + +`-l, --log-file=' + Specify the path for the log file. + +`-L, --log-level=' + Set the log level for the server. Log level should be one of DEBUG, + WARNING, ERROR, CRITICAL, or NONE. + + Advanced Options + +`--debug' + Run in debug mode. This option sets -no-daemon, -log-level to + DEBUG and -log-file to console. + +`-N, --no-daemon' + Run `glusterfs' as a foreground process. + +`-p, --pid-file=' + Path for the PID file. + +`--volfile-id=' + 'key' of the volfile to be fetched from server. + +`--volfile-server-port=' + Listening port number of volfile server. + +`--volfile-server-transport=[tcp|ib-verbs]' + Transport type to get volfile from server. [default: `tcp'] + +`--xlator-options=' + Add/override a translator option for a volume with specified value. + +`--volume-name=' + Volume name in client spec to use. Defaults to the root volume. + + FUSE Options + +`--attribute-timeout=' + Attribute timeout for inodes in the kernel, in seconds. Defaults + to 1 second. + +`--disable-direct-io-mode' + Disable direct I/O mode in FUSE kernel module. + +`-e, --entry-timeout=' + Entry timeout for directory entries in the kernel, in seconds. + Defaults to 1 second. + + Missellaneous Options + +`-?, --help' + Show this help information. + +`-V, --version' + Show version information. + + +File: user-guide.info, Node: A Tutorial Introduction, Prev: Running GlusterFS, Up: Installation and Invocation + +2.5 A Tutorial Introduction +=========================== + +This section will show you how to quickly get GlusterFS up and running. +We'll configure GlusterFS as a simple network filesystem, with one +server and one client. In this mode of usage, GlusterFS can serve as a +replacement for NFS. + + We'll make use of two machines; call them _server_ and _client_ (If +you don't want to setup two machines, just run everything that follows +on the same machine). In the examples that follow, the shell prompts +will use these names to clarify the machine on which the command is +being run. For example, a command that should be run on the server will +be shown with the prompt: + + [root@server]# + + Our goal is to make a directory on the _server_ (say, `/export') +accessible to the _client_. + + First of all, get GlusterFS installed on both the machines, as +described in the previous sections. Make sure you have the FUSE kernel +module loaded. You can ensure this by running: + + [root@server]# modprobe fuse + + Before we can run the GlusterFS client or server programs, we need +to write two files called _volume specifications_ (equivalently refered +to as _volfiles_). The volfile describes the _translator tree_ on a +node. The next chapter will explain the concepts of `translator' and +`volume specification' in detail. For now, just assume that the volfile +is like an NFS `/etc/export' file. + + On the server, create a text file somewhere (we'll assume the path +`/tmp/glusterfsd.vol') with the following contents. + + volume colon-o + type storage/posix + option directory /export + end-volume + + volume server + type protocol/server + subvolumes colon-o + option transport-type tcp + option auth.addr.colon-o.allow * + end-volume + + A brief explanation of the file's contents. The first section +defines a storage volume, named "colon-o" (the volume names are +arbitrary), which exports the `/export' directory. 
The second section +defines options for the translator which will make the storage volume +accessible remotely. It specifies `colon-o' as a subvolume. This +defines the _translator tree_, about which more will be said in the +next chapter. The two options specify that the TCP protocol is to be +used (as opposed to InfiniBand, for example), and that access to the +storage volume is to be provided to clients with any IP address at all. +If you wanted to restrict access to this server to only your subnet for +example, you'd specify something like `192.168.1.*' in the second +option line. + + On the client machine, create the following text file (again, we'll +assume the path to be `/tmp/glusterfs-client.vol'). Replace +_server-ip-address_ with the IP address of your server machine. If you +are doing all this on a single machine, use `127.0.0.1'. + + volume client + type protocol/client + option transport-type tcp + option remote-host _server-ip-address_ + option remote-subvolume colon-o + end-volume + + Now we need to start both the server and client programs. To start +the server: + + [root@server]# glusterfsd -f /tmp/glusterfs-server.vol + + To start the client: + + [root@client]# glusterfs -f /tmp/glusterfs-client.vol /mnt/glusterfs + + You should now be able to see the files under the server's `/export' +directory in the `/mnt/glusterfs' directory on the client. That's it; +GlusterFS is now working as a network file system. + + +File: user-guide.info, Node: Concepts, Next: Translators, Prev: Installation and Invocation, Up: Top + +3 Concepts +********** + +* Menu: + +* Filesystems in Userspace:: +* Translator:: +* Volume specification file:: + + +File: user-guide.info, Node: Filesystems in Userspace, Next: Translator, Up: Concepts + +3.1 Filesystems in Userspace +============================ + +A filesystem is usually implemented in kernel space. Kernel space +development is much harder than userspace development. FUSE is a kernel +module/library that allows us to write a filesystem completely in +userspace. + + FUSE consists of a kernel module which interacts with the userspace +implementation using a device file `/dev/fuse'. When a process makes a +syscall on a FUSE filesystem, VFS hands the request to the FUSE module, +which writes the request to `/dev/fuse'. The userspace implementation +polls `/dev/fuse', and when a request arrives, processes it and writes +the result back to `/dev/fuse'. The kernel then reads from the device +file and returns the result to the user process. + + In case of GlusterFS, the userspace program is the GlusterFS client. +The control flow is shown in the diagram below. The GlusterFS client +services the request by sending it to the server, which in turn hands +it to the local POSIX filesystem. + + + Fig 1. Control flow in GlusterFS + + +File: user-guide.info, Node: Translator, Next: Volume specification file, Prev: Filesystems in Userspace, Up: Concepts + +3.2 Translator +============== + +The _translator_ is the most important concept in GlusterFS. In fact, +GlusterFS is nothing but a collection of translators working together, +forming a translator _tree_. + + The idea of a translator is perhaps best understood using an +analogy. Consider the VFS in the Linux kernel. The VFS abstracts the +various filesystem implementations (such as EXT3, ReiserFS, XFS, etc.) +supported by the kernel. When an application calls the kernel to +perform an operation on a file, the kernel passes the request on to the +appropriate filesystem implementation. 
+ + For example, let's say there are two partitions on a Linux machine: +`/', which is an EXT3 partition, and `/usr', which is a ReiserFS +partition. Now if an application wants to open a file called, say, +`/etc/fstab', then the kernel will internally pass the request to the +EXT3 implementation. If on the other hand, an application wants to +read a file called `/usr/src/linux/CREDITS', then the kernel will call +upon the ReiserFS implementation to do the job. + + The "filesystem implementation" objects are analogous to GlusterFS +translators. A GlusterFS translator implements all the filesystem +operations. Whereas in VFS there is a two-level tree (with the kernel +at the root and all the filesystem implementation as its children), in +GlusterFS there exists a more elaborate tree structure. + + We can now define translators more precisely. A GlusterFS translator +is a shared object (`.so') that implements every filesystem call. +GlusterFS translators can be arranged in an arbitrary tree structure +(subject to constraints imposed by the translators). When GlusterFS +receives a filesystem call, it passes it on to the translator at the +root of the translator tree. The root translator may in turn pass it on +to any or all of its children, and so on, until the leaf nodes are +reached. The result of a filesystem call is communicated in the reverse +fashion, from the leaf nodes up to the root node, and then on to the +application. + + So what might a translator tree look like? + + + Fig 2. A sample translator tree + + The diagram depicts three servers and one GlusterFS client. It is +important to note that conceptually, the translator tree spans machine +boundaries. Thus, the client machine in the diagram, `10.0.0.1', can +access the aggregated storage of the filesystems on the server machines +`10.0.0.2', `10.0.0.3', and `10.0.0.4'. The translator diagram will +make more sense once you've read the next chapter and understood the +functions of the various translators. + + +File: user-guide.info, Node: Volume specification file, Prev: Translator, Up: Concepts + +3.3 Volume specification file +============================= + +The volume specification file describes the translator tree for both the +server and client programs. + + A volume specification file is a sequence of volume definitions. +The syntax of a volume definition is explained below: + + *volume* _volume-name_ + *type* _translator-name_ + *option* _option-name_ _option-value_ + ... + *subvolumes* _subvolume1_ _subvolume2_ ... + *end-volume* + + ... + +_volume-name_ + An identifier for the volume. This is just a human-readable name, + and can contain any alphanumeric character. For instance, + "storage-1", "colon-o", or "forty-two". + +_translator-name_ + Name of one of the available translators. Example: + `protocol/client', `cluster/unify'. + +_option-name_ + Name of a valid option for the translator. + +_option-value_ + Value for the option. Everything following the "option" keyword to + the end of the line is considered the value; it is up to the + translator to parse it. + +_subvolume1_, _subvolume2_, ... + Volume names of sub-volumes. The sub-volumes must already have + been defined earlier in the file. + + There are a few rules you must follow when writing a volume +specification file: + + * Everything following a ``#'' is considered a comment and is + ignored. Blank lines are also ignored. + + * All names and keywords are case-sensitive. + + * The order of options inside a volume definition does not matter. 
+ + * An option value may not span multiple lines. + + * If an option is not specified, it will assume its default value. + + * A sub-volume must have already been defined before it can be + referenced. This means you have to write the specification file + "bottom-up", starting from the leaf nodes of the translator tree + and moving up to the root. + + A simple example volume specification file is shown below: + + # This is a comment line + volume client + type protocol/client + option transport-type tcp + option remote-host localhost # Also a comment + option remote-subvolume brick + # The subvolumes line may be absent + end-volume + + volume iot + type performance/io-threads + option thread-count 4 + subvolumes client + end-volume + + volume wb + type performance/write-behind + subvolumes iot + end-volume + + +File: user-guide.info, Node: Translators, Next: Usage Scenarios, Prev: Concepts, Up: Top + +4 Translators +************* + +* Menu: + +* Storage Translators:: +* Client and Server Translators:: +* Clustering Translators:: +* Performance Translators:: +* Features Translators:: +* Miscellaneous Translators:: + + This chapter documents all the available GlusterFS translators in +detail. Each translator section will show its name (for example, +`cluster/unify'), briefly describe its purpose and workings, and list +every option accepted by that translator and their meaning. + + +File: user-guide.info, Node: Storage Translators, Next: Client and Server Translators, Up: Translators + +4.1 Storage Translators +======================= + +The storage translators form the "backend" for GlusterFS. Currently, +the only available storage translator is the POSIX translator, which +stores files on a normal POSIX filesystem. A pleasant consequence of +this is that your data will still be accessible if GlusterFS crashes or +cannot be started. + + Other storage backends are planned for the future. One of the +possibilities is an Amazon S3 translator. Amazon S3 is an unlimited +online storage service accessible through a web services API. The S3 +translator will allow you to access the storage as a normal POSIX +filesystem. (1) + +* Menu: + +* POSIX:: +* BDB:: + + ---------- Footnotes ---------- + + (1) Some more discussion about this can be found at: + +http://developer.amazonwebservices.com/connect/message.jspa?messageID=52873 + + +File: user-guide.info, Node: POSIX, Next: BDB, Up: Storage Translators + +4.1.1 POSIX +----------- + + type storage/posix + + The `posix' translator uses a normal POSIX filesystem as its +"backend" to actually store files and directories. This can be any +filesystem that supports extended attributes (EXT3, ReiserFS, XFS, +...). Extended attributes are used by some translators to store +metadata, for example, by the replicate and stripe translators. See +*note Replicate:: and *note Stripe::, respectively for details. + +`directory ' + The directory on the local filesystem which is to be used for + storage. + + +File: user-guide.info, Node: BDB, Prev: POSIX, Up: Storage Translators + +4.1.2 BDB +--------- + + type storage/bdb + + The `BDB' translator uses a Berkeley DB database as its "backend" to +actually store files as key-value pair in the database and directories +as regular POSIX directories. Note that BDB does not provide extended +attribute support for regular files. Do not use BDB as storage +translator while using any translator that demands extended attributes +on "backend". 
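+
+   A minimal volume definition using the BDB translator might look
+like the following sketch; the directory path is only an example, and
+the options accepted by the translator are described below.
+
+     volume bdb-example
+       type storage/bdb
+       option directory /export/bdb
+       option mode persistent
+     end-volume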
+ +`directory ' + The directory on the local filesystem which is to be used for + storage. + +`mode [cache|persistent] (cache)' + When BDB is run in `cache' mode, recovery of back-end is not + completely guaranteed. `persistent' guarantees that BDB can + recover back-end from Berkeley DB even if GlusterFS crashes. + +`errfile ' + The path of the file to be used as `errfile' for Berkeley DB to + report detailed error messages, if any. Note that all the contents + of this file will be written by Berkeley DB, not GlusterFS. + +`logdir ' + + +File: user-guide.info, Node: Client and Server Translators, Next: Clustering Translators, Prev: Storage Translators, Up: Translators + +4.2 Client and Server Translators +================================= + +The client and server translator enable GlusterFS to export a +translator tree over the network or access a remote GlusterFS server. +These two translators implement GlusterFS's network protocol. + +* Menu: + +* Transport modules:: +* Client protocol:: +* Server protocol:: + + +File: user-guide.info, Node: Transport modules, Next: Client protocol, Up: Client and Server Translators + +4.2.1 Transport modules +----------------------- + +The client and server translators are capable of using any of the +pluggable transport modules. Currently available transport modules are +`tcp', which uses a TCP connection between client and server to +communicate; `ib-sdp', which uses a TCP connection over InfiniBand, and +`ibverbs', which uses high-speed InfiniBand connections. + + Each transport module comes in two different versions, one to be +used on the server side and the other on the client side. + +4.2.1.1 TCP +........... + +The TCP transport module uses a TCP/IP connection between the server +and the client. + + option transport-type tcp + + The TCP client module accepts the following options: + +`non-blocking-connect [no|off|on|yes] (on)' + Whether to make the connection attempt asynchronous. + +`remote-port (24007)' + Server port to connect to. + +`remote-host *' + Hostname or IP address of the server. If the host name resolves to + multiple IP addresses, all of them will be tried in a round-robin + fashion. This feature can be used to implement fail-over. + + The TCP server module accepts the following options: + +`bind-address
(0.0.0.0)' + The local interface on which the server should listen to requests. + Default is to listen on all interfaces. + +`listen-port (24007)' + The local port to listen on. + +4.2.1.2 IB-SDP +.............. + + option transport-type ib-sdp + + kernel implements socket interface for ib hardware. SDP is over +ib-verbs. This module accepts the same options as `tcp' + +4.2.1.3 ibverbs +............... + + option transport-type tcp + + InfiniBand is a scalable switched fabric interconnect mechanism +primarily used in high-performance computing. InfiniBand can deliver +data throughput of the order of 10 Gbit/s, with latencies of 4-5 ms. + + The `ib-verbs' transport accesses the InfiniBand hardware through +the "verbs" API, which is the lowest level of software access possible +and which gives the highest performance. On InfiniBand hardware, it is +always best to use `ib-verbs'. Use `ib-sdp' only if you cannot get +`ib-verbs' working for some reason. + + The `ib-verbs' client module accepts the following options: + +`non-blocking-connect [no|off|on|yes] (on)' + Whether to make the connection attempt asynchronous. + +`remote-port (24007)' + Server port to connect to. + +`remote-host *' + Hostname or IP address of the server. If the host name resolves to + multiple IP addresses, all of them will be tried in a round-robin + fashion. This feature can be used to implement fail-over. + + The `ib-verbs' server module accepts the following options: + +`bind-address
(0.0.0.0)' + The local interface on which the server should listen to requests. + Default is to listen on all interfaces. + +`listen-port (24007)' + The local port to listen on. + + The following options are common to both the client and server +modules: + + If you are familiar with InfiniBand jargon, the mode is used by +GlusterFS is "reliable connection-oriented channel transfer". + +`ib-verbs-work-request-send-count (64)' + Length of the send queue in datagrams. [Reason to + increase/decrease?] + +`ib-verbs-work-request-recv-count (64)' + Length of the receive queue in datagrams. [Reason to + increase/decrease?] + +`ib-verbs-work-request-send-size (128KB)' + Size of each datagram that is sent. [Reason to increase/decrease?] + +`ib-verbs-work-request-recv-size (128KB)' + Size of each datagram that is received. [Reason to + increase/decrease?] + +`ib-verbs-port (1)' + Port number for ib-verbs. + +`ib-verbs-mtu [256|512|1024|2048|4096] (2048)' + The Maximum Transmission Unit [Reason to increase/decrease?] + +`ib-verbs-device-name (first device in the list)' + InfiniBand device to be used. + + For maximum performance, you should ensure that the send/receive +counts on both the client and server are the same. + + ib-verbs is preferred over ib-sdp. + + +File: user-guide.info, Node: Client protocol, Next: Server protocol, Prev: Transport modules, Up: Client and Server Translators + +4.2.2 Client +------------ + + type procotol/client + + The client translator enables the GlusterFS client to access a +remote server's translator tree. + +`transport-type [tcp,ib-sdp,ib-verbs] (tcp)' + The transport type to use. You should use the client versions of + all the transport modules (`tcp', `ib-sdp', `ib-verbs'). + +`remote-subvolume *' + The name of the volume on the remote host to attach to. Note that + this is _not_ the name of the `protocol/server' volume on the + server. It should be any volume under the server. + +`transport-timeout (120- seconds)' + Inactivity timeout. If a reply is expected and no activity takes + place on the connection within this time, the transport connection + will be broken, and a new connection will be attempted. + + +File: user-guide.info, Node: Server protocol, Prev: Client protocol, Up: Client and Server Translators + +4.2.3 Server +------------ + + type protocol/server + + The server translator exports a translator tree and makes it +accessible to remote GlusterFS clients. + +`client-volume-filename (/glusterfs-client.vol)' + The volume specification file to use for the client. This is the + file the client will receive when it is invoked with the + `--server' option (*note Client::). + +`transport-type [tcp,ib-verbs,ib-sdp] (tcp)' + The transport to use. You should use the server versions of all + the transport modules (`tcp', `ib-sdp', `ib-verbs'). + +`auth.addr..allow ' + IP addresses of the clients that are allowed to attach to the + specified volume. This can be a wildcard. For example, a wildcard + of the form `192.168.*.*' allows any host in the `192.168.x.x' + subnet to connect to the server. + + + +File: user-guide.info, Node: Clustering Translators, Next: Performance Translators, Prev: Client and Server Translators, Up: Translators + +4.3 Clustering Translators +========================== + +The clustering translators are the most important GlusterFS +translators, since it is these that make GlusterFS a cluster +filesystem. 
These translators together enable GlusterFS to access an +arbitrarily large amount of storage, and provide RAID-like redundancy +and distribution over the entire cluster. + + There are three clustering translators: *unify*, *replicate*, and +*stripe*. The unify translator aggregates storage from many server +nodes. The replicate translator provides file replication. The stripe +translator allows a file to be spread across many server nodes. The +following sections look at each of these translators in detail. + +* Menu: + +* Unify:: +* Replicate:: +* Stripe:: + + +File: user-guide.info, Node: Unify, Next: Replicate, Up: Clustering Translators + +4.3.1 Unify +----------- + + type cluster/unify + + The unify translator presents a `unified' view of all its +sub-volumes. That is, it makes the union of all its sub-volumes appear +as a single volume. It is the unify translator that gives GlusterFS the +ability to access an arbitrarily large amount of storage. + + For unify to work correctly, certain invariants need to be +maintained across the entire network. These are: + + * The directory structure of all the sub-volumes must be identical. + + * A particular file can exist on only one of the sub-volumes. + Phrasing it in another way, a pathname such as + `/home/calvin/homework.txt') is unique across the entire cluster. + + + +Looking at the second requirement, you might wonder how one can +accomplish storing redundant copies of a file, if no file can exist +multiple times. To answer, we must remember that these invariants are +from _unify's perspective_. A translator such as replicate at a lower +level in the translator tree than unify may subvert this picture. + + The first invariant might seem quite tedious to ensure. We shall see +later that this is not so, since unify's _self-heal_ mechanism takes +care of maintaining it. + + The second invariant implies that unify needs some way to decide +which file goes where. Unify makes use of _scheduler_ modules for this +purpose. + + When a file needs to be created, unify's scheduler decides upon the +sub-volume to be used to store the file. There are many schedulers +available, each using a different algorithm and suitable for different +purposes. + + The various schedulers are described in detail in the sections that +follow. + +4.3.1.1 ALU +........... + + option scheduler alu + + ALU stands for "Adaptive Least Usage". It is the most advanced +scheduler available in GlusterFS. It balances the load across volumes +taking several factors in account. It adapts itself to changing I/O +patterns according to its configuration. When properly configured, it +can eliminate the need for regular tuning of the filesystem to keep +volume load nicely balanced. + + The ALU scheduler is composed of multiple least-usage +sub-schedulers. Each sub-scheduler keeps track of a certain type of +load, for each of the sub-volumes, getting statistics from the +sub-volumes themselves. The sub-schedulers are these: + + * disk-usage: The used and free disk space on the volume. + + * read-usage: The amount of reading done from this volume. + + * write-usage: The amount of writing done to this volume. + + * open-files-usage: The number of files currently open from this + volume. + + * disk-speed-usage: The speed at which the disks are spinning. This + is a constant value and therefore not very useful. + + The ALU scheduler needs to know which of these sub-schedulers to use, +and in which order to evaluate them. 
This is done through the `option +alu.order' configuration directive. + + Each sub-scheduler needs to know two things: when to kick in (the +entry-threshold), and how long to stay in control (the exit-threshold). +For example: when unifying three disks of 100GB, keeping an exact +balance of disk-usage is not necesary. Instead, there could be a 1GB +margin, which can be used to nicely balance other factors, such as +read-usage. The disk-usage scheduler can be told to kick in only when a +certain threshold of discrepancy is passed, such as 1GB. When it +assumes control under this condition, it will write all subsequent data +to the least-used volume. If it is doing so, it is unwise to stop right +after the values are below the entry-threshold again, since that would +make it very likely that the situation will occur again very soon. Such +a situation would cause the ALU to spend most of its time disk-usage +scheduling, which is unfair to the other sub-schedulers. The +exit-threshold therefore defines the amount of data that needs to be +written to the least-used disk, before control is relinquished again. + + In addition to the sub-schedulers, the ALU scheduler also has +"limits" options. These can stop the creation of new files on a volume +once values drop below a certain threshold. For example, setting +`option alu.limits.min-free-disk 5GB' will stop the scheduling of files +to volumes that have less than 5GB of free disk space, leaving the +files on that disk some room to grow. + + The actual values you assign to the thresholds for sub-schedulers and +limits depend on your situation. If you have fast-growing files, you'll +want to stop file-creation on a disk much earlier than when hardly any +of your files are growing. If you care less about disk-usage balance +than about read-usage balance, you'll want a bigger disk-usage +scheduler entry-threshold and a smaller read-usage scheduler +entry-threshold. + + For thresholds defining a size, values specifying "KB", "MB" and "GB" +are allowed. For example: `option alu.limits.min-free-disk 5GB'. + +`alu.order * ("disk-usage:write-usage:read-usage:open-files-usage:disk-speed")' + +`alu.disk-usage.entry-threshold (1GB)' + +`alu.disk-usage.exit-threshold (512MB)' + +`alu.write-usage.entry-threshold <%> (25)' + +`alu.write-usage.exit-threshold <%> (5)' + +`alu.read-usage.entry-threshold <%> (25)' + +`alu.read-usage.exit-threshold <%> (5)' + +`alu.open-files-usage.entry-threshold (1000)' + +`alu.open-files-usage.exit-threshold (100)' + +`alu.limits.min-free-disk <%>' + +`alu.limits.max-open-files ' + +4.3.1.2 Round Robin (RR) +........................ + + option scheduler rr + + Round-Robin (RR) scheduler creates files in a round-robin fashion. +Each client will have its own round-robin loop. When your files are +mostly similar in size and I/O access pattern, this scheduler is a good +choice. RR scheduler checks for free disk space on the server before +scheduling, so you can know when to add another server node. The +default value of min-free-disk is 5% and is checked on file creation +calls, with atleast 10 seconds (by default) elapsing between two checks. + + Options: +`rr.limits.min-free-disk <%> (5)' + Minimum free disk space a node must have for RR to schedule a file + to it. + +`rr.refresh-interval (10 seconds)' + Time between two successive free disk space checks. + +4.3.1.3 Random +.............. + + option scheduler random + + The random scheduler schedules file creation randomly among its +child nodes. 
Like the round-robin scheduler, it also checks for a +minimum amount of free disk space before scheduling a file to a node. + +`random.limits.min-free-disk <%> (5)' + Minimum free disk space a node must have for random to schedule a + file to it. + +`random.refresh-interval (10 seconds)' + Time between two successive free disk space checks. + +4.3.1.4 NUFA +............ + + option scheduler nufa + + It is common in many GlusterFS computing environments for all +deployed machines to act as both servers and clients. For example, a +research lab may have 40 workstations each with its own storage. All of +these workstations might act as servers exporting a volume as well as +clients accessing the entire cluster's storage. In such a situation, +it makes sense to store locally created files on the local workstation +itself (assuming files are accessed most by the workstation that +created them). The Non-Uniform File Allocation (NUFA) scheduler +accomplishes that. + + NUFA gives the local system first priority for file creation over +other nodes. If the local volume does not have more free disk space +than a specified amount (5% by default) then NUFA schedules files among +the other child volumes in a round-robin fashion. + + NUFA is named after the similar strategy used for memory access, +NUMA(1). + +`nufa.limits.min-free-disk <%> (5)' + Minimum disk space that must be free (local or remote) for NUFA to + schedule a file to it. + +`nufa.refresh-interval (10 seconds)' + Time between two successive free disk space checks. + +`nufa.local-volume-name ' + The name of the volume corresponding to the local system. This + volume must be one of the children of the unify volume. This + option is mandatory. + +4.3.1.5 Namespace +................. + +Namespace volume needed because: - persistent inode numbers. - file +exists even when node is down. + + namespace files are simply touched. on every lookup it is checked. + +`namespace *' + Name of the namespace volume (which should be one of the unify + volume's children). + +`self-heal [on|off] (on)' + Enable/disable self-heal. Unless you know what you are doing, do + not disable self-heal. + +4.3.1.6 Self Heal +................. + +* When a 'lookup()/stat()' call is made on directory for the first +time, a self-heal call is made, which checks for the consistancy of its +child nodes. If an entry is present in storage node, but not in +namespace, that entry is created in namespace, and vica-versa. There is +an writedir() API introduced which is used for the same. It also checks +for permissions, and uid/gid consistencies. + + * This check is also done when an server goes down and comes up. + + * If one starts with an empty namespace export, but has data in +storage nodes, a 'find .>/dev/null' or 'ls -lR >/dev/null' should help +to build namespace in one shot. Even otherwise, namespace is built on +demand when a file is looked up for the first time. + + NOTE: There are some issues (Kernel 'Oops' msgs) seen with +fuse-2.6.3, when someone deletes namespace in backend, when glusterfs is +running. But with fuse-2.6.5, this issue is not there. + + ---------- Footnotes ---------- + + (1) Non-Uniform Memory Access: + + + +File: user-guide.info, Node: Replicate, Next: Stripe, Prev: Unify, Up: Clustering Translators + +4.3.2 Replicate (formerly AFR) +------------------------------ + + type cluster/replicate + + Replicate provides RAID-1 like functionality for GlusterFS. +Replicate replicates files and directories across the subvolumes. 
Hence +if Replicate has four subvolumes, there will be four copies of all +files and directories. Replicate provides high-availability, i.e., in +case one of the subvolumes go down (e. g. server crash, network +disconnection) Replicate will still service the requests using the +redundant copies. + + Replicate also provides self-heal functionality, i.e., in case the +crashed servers come up, the outdated files and directories will be +updated with the latest versions. Replicate uses extended attributes of +the backend file system to track the versioning of files and +directories and provide the self-heal feature. + + volume replicate-example + type cluster/replicate + subvolumes brick1 brick2 brick3 + end-volume + + This sample configuration will replicate all directories and files on +brick1, brick2 and brick3. + + All the read operations happen from the first alive child. If all the +three sub-volumes are up, reads will be done from brick1; if brick1 is +down read will be done from brick2. In case read() was being done on +brick1 and it goes down, replicate transparently falls back to brick2. + + The next release of GlusterFS will add the following features: + * Ability to specify the sub-volume from which read operations are + to be done (this will help users who have one of the sub-volumes + as a local storage volume). + + * Allow scheduling of read operations amongst the sub-volumes in a + round-robin fashion. + + The order of the subvolumes list should be same across all the +'replicate's as they will be used for locking purposes. + +4.3.2.1 Self Heal +................. + +Replicate has self-heal feature, which updates the outdated file and +directory copies by the most recent versions. For example consider the +following config: + + volume replicate-example + type cluster/replicate + subvolumes brick1 brick2 + end-volume + +4.3.2.2 File self-heal +...................... + +Now if we create a file foo.txt on replicate-example, the file will be +created on brick1 and brick2. The file will have two extended +attributes associated with it in the backend filesystem. One is +trusted.afr.createtime and the other is trusted.afr.version. The +trusted.afr.createtime xattr has the create time (in terms of seconds +since epoch) and trusted.afr.version is a number that is incremented +each time a file is modified. This increment happens during close +(incase any write was done before close). + + If brick1 goes down, we edit foo.txt the version gets incremented. +Now the brick1 comes back up, when we open() on foo.txt replicate will +check if their versions are same. If they are not same, the outdated +copy is replaced by the latest copy and its version is updated. After +the sync the open() proceeds in the usual manner and the application +calling open() can continue on its access to the file. + + If brick1 goes down, we delete foo.txt and create a file with the +same name again i.e foo.txt. Now brick1 comes back up, clearly there is +a chance that the version on brick1 being more than the version on +brick2, this is where createtime extended attribute helps in deciding +which the outdated copy is. Hence we need to consider both createtime +and version to decide on the latest copy. + + The version attribute is incremented during the close() call. Version +will not be incremented in case there was no write() done. In case the +fd that the close() gets was got by create() call, we also create the +createtime extended attribute. + +4.3.2.3 Directory self-heal +........................... 
+ +Suppose brick1 goes down, we delete foo.txt, brick1 comes back up, now +we should not create foo.txt on brick2 but we should delete foo.txt on +brick1. We handle this situation by having the createtime and version +attribute on the directory similar to the file. when lookup() is done +on the directory, we compare the createtime/version attributes of the +copies and see which files needs to be deleted and delete those files +and update the extended attributes of the outdated directory copy. +Each time a directory is modified (a file or a subdirectory is created +or deleted inside the directory) and one of the subvols is down, we +increment the directory's version. + + lookup() is a call initiated by the kernel on a file or directory +just before any access to that file or directory. In glusterfs, by +default, lookup() will not be called in case it was called in the past +one second on that particular file or directory. + + The extended attributes can be seen in the backend filesystem using +the `getfattr' command. (`getfattr -n trusted.afr.version ') + +`debug [on|off] (off)' + +`self-heal [on|off] (on)' + +`replicate (*:1)' + +`lock-node (first child is used by default)' + + +File: user-guide.info, Node: Stripe, Prev: Replicate, Up: Clustering Translators + +4.3.3 Stripe +------------ + + type cluster/stripe + + The stripe translator distributes the contents of a file over its +sub-volumes. It does this by creating a file equal in size to the +total size of the file on each of its sub-volumes. It then writes only +a part of the file to each sub-volume, leaving the rest of it empty. +These empty regions are called `holes' in Unix terminology. The holes +do not consume any disk space. + + The diagram below makes this clear. + + + +You can configure stripe so that only filenames matching a pattern are +striped. You can also configure the size of the data to be stored on +each sub-volume. + +`block-size : (*:0 no striping)' + Distribute files matching `' over the sub-volumes, + storing at least `' on each sub-volume. For example, + + option block-size *.mpg:1M + + distributes all files ending in `.mpg', storing at least 1 MB on + each sub-volume. + + Any number of `block-size' option lines may be present, specifying + different sizes for different file name patterns. + + +File: user-guide.info, Node: Performance Translators, Next: Features Translators, Prev: Clustering Translators, Up: Translators + +4.4 Performance Translators +=========================== + +* Menu: + +* Read Ahead:: +* Write Behind:: +* IO Threads:: +* IO Cache:: +* Booster:: + + +File: user-guide.info, Node: Read Ahead, Next: Write Behind, Up: Performance Translators + +4.4.1 Read Ahead +---------------- + + type performance/read-ahead + + The read-ahead translator pre-fetches data in advance on every read. +This benefits applications that mostly process files in sequential +order, since the next block of data will already be available by the +time the application is done with the current one. + + Additionally, the read-ahead translator also behaves as a +read-aggregator. Many small read operations are combined and issued as +fewer, larger read requests to the server. + + Read-ahead deals in "pages" as the unit of data fetched. The page +size is configurable, as is the "page count", which is the number of +pages that are pre-fetched. + + Read-ahead is best used with InfiniBand (using the ib-verbs +transport). 
On FastEthernet and Gigabit Ethernet networks, GlusterFS +can achieve the link-maximum throughput even without read-ahead, making +it quite superflous. + + Note that read-ahead only happens if the reads are perfectly +sequential. If your application accesses data in a random fashion, +using read-ahead might actually lead to a performance loss, since +read-ahead will pointlessly fetch pages which won't be used by the +application. + + Options: +`page-size (256KB)' + The unit of data that is pre-fetched. + +`page-count (2)' + The number of pages that are pre-fetched. + +`force-atime-update [on|off|yes|no] (off|no)' + Whether to force an access time (atime) update on the file on + every read. Without this, the atime will be slightly imprecise, as + it will reflect the time when the read-ahead translator read the + data, not when the application actually read it. + + +File: user-guide.info, Node: Write Behind, Next: IO Threads, Prev: Read Ahead, Up: Performance Translators + +4.4.2 Write Behind +------------------ + + type performance/write-behind + + The write-behind translator improves the latency of a write +operation. It does this by relegating the write operation to the +background and returning to the application even as the write is in +progress. Using the write-behind translator, successive write requests +can be pipelined. This mode of write-behind operation is best used on +the client side, to enable decreased write latency for the application. + + The write-behind translator can also aggregate write requests. If the +`aggregate-size' option is specified, then successive writes upto that +size are accumulated and written in a single operation. This mode of +operation is best used on the server side, as this will decrease the +disk's head movement when multiple files are being written to in +parallel. + + The `aggregate-size' option has a default value of 128KB. Although +this works well for most users, you should always experiment with +different values to determine the one that will deliver maximum +performance. This is because the performance of write-behind depends on +your interconnect, size of RAM, and the work load. + +`aggregate-size (128KB)' + Amount of data to accumulate before doing a write + +`flush-behind [on|yes|off|no] (off|no)' + + +File: user-guide.info, Node: IO Threads, Next: IO Cache, Prev: Write Behind, Up: Performance Translators + +4.4.3 IO Threads +---------------- + + type performance/io-threads + + The IO threads translator is intended to increase the responsiveness +of the server to metadata operations by doing file I/O (read, write) in +a background thread. Since the GlusterFS server is single-threaded, +using the IO threads translator can significantly improve performance. +This translator is best used on the server side, loaded just below the +server protocol translator. + + IO threads operates by handing out read and write requests to a +separate thread. The total number of threads in existence at a time is +constant, and configurable. + +`thread-count (1)' + Number of threads to use. + + +File: user-guide.info, Node: IO Cache, Next: Booster, Prev: IO Threads, Up: Performance Translators + +4.4.4 IO Cache +-------------- + + type performance/io-cache + + The IO cache translator caches data that has been read. 
This is +useful if many applications read the same data multiple times, and if +reads are much more frequent than writes (for example, IO caching may be +useful in a web hosting environment, where most clients will simply +read some files and only a few will write to them). + + The IO cache translator reads data from its child in `page-size' +chunks. It caches data upto `cache-size' bytes. The cache is +maintained as a prioritized least-recently-used (LRU) list, with +priorities determined by user-specified patterns to match filenames. + + When the IO cache translator detects a write operation, the cache +for that file is flushed. + + The IO cache translator periodically verifies the consistency of +cached data, using the modification times on the files. The +verification timeout is configurable. + +`page-size (128KB)' + Size of a page. + +`cache-size (n) (32MB)' + Total amount of data to be cached. + +`force-revalidate-timeout (1)' + Timeout to force a cache consistency verification, in seconds. + +`priority (*:0)' + Filename patterns listed in order of priority. + + +File: user-guide.info, Node: Booster, Prev: IO Cache, Up: Performance Translators + +4.4.5 Booster +------------- + + type performance/booster + + The booster translator gives applications a faster path to +communicate read and write requests to GlusterFS. Normally, all +requests to GlusterFS from applications go through FUSE, as indicated +in *note Filesystems in Userspace::. Using the booster translator in +conjunction with the GlusterFS booster shared library, an application +can bypass the FUSE path and send read/write requests directly to the +GlusterFS client process. + + The booster mechanism consists of two parts: the booster translator, +and the booster shared library. The booster translator is meant to be +loaded on the client side, usually at the root of the translator tree. +The booster shared library should be `LD_PRELOAD'ed with the +application. + + The booster translator when loaded opens a Unix domain socket and +listens for read/write requests on it. The booster shared library +intercepts read and write system calls and sends the requests to the +GlusterFS process directly using the Unix domain socket, bypassing FUSE. +This leads to superior performance. + + Once you've loaded the booster translator in your volume +specification file, you can start your application as: + + $ LD_PRELOAD=/usr/local/bin/glusterfs-booster.so your_app + + The booster translator accepts no options. + + +File: user-guide.info, Node: Features Translators, Next: Miscellaneous Translators, Prev: Performance Translators, Up: Translators + +4.5 Features Translators +======================== + +* Menu: + +* POSIX Locks:: +* Fixed ID:: + + +File: user-guide.info, Node: POSIX Locks, Next: Fixed ID, Up: Features Translators + +4.5.1 POSIX Locks +----------------- + + type features/posix-locks + + This translator provides storage independent POSIX record locking +support (`fcntl' locking). Typically you'll want to load this on the +server side, just above the POSIX storage translator. Using this +translator you can get both advisory locking and mandatory locking +support. It also handles `flock()' locks properly. + + Caveat: Consider a file that does not have its mandatory locking bits +(+setgid, -group execution) turned on. Assume that this file is now +opened by a process on a client that has the write-behind xlator +loaded. 
The write-behind xlator does not cache anything for files which +have mandatory locking enabled, to avoid incoherence. Let's say that +mandatory locking is now enabled on this file through another client. +The former client will not know about this change, and write-behind may +erroneously report a write as being successful when in fact it would +fail due to the region it is writing to being locked. + + There seems to be no easy way to fix this. To work around this +problem, it is recommended that you never enable the mandatory bits on +a file while it is open. + +`mandatory [on|off] (on)' + Turns mandatory locking on. + + +File: user-guide.info, Node: Fixed ID, Prev: POSIX Locks, Up: Features Translators + +4.5.2 Fixed ID +-------------- + + type features/fixed-id + + The fixed ID translator makes all filesystem requests from the client +to appear to be coming from a fixed, specified UID/GID, regardless of +which user actually initiated the request. + +`fixed-uid [if not set, not used]' + The UID to send to the server + +`fixed-gid [if not set, not used]' + The GID to send to the server + + +File: user-guide.info, Node: Miscellaneous Translators, Prev: Features Translators, Up: Translators + +4.6 Miscellaneous Translators +============================= + +* Menu: + +* ROT-13:: +* Trace:: + + +File: user-guide.info, Node: ROT-13, Next: Trace, Up: Miscellaneous Translators + +4.6.1 ROT-13 +------------ + + type encryption/rot-13 + + ROT-13 is a toy translator that can "encrypt" and "decrypt" file +contents using the ROT-13 algorithm. ROT-13 is a trivial algorithm that +rotates each alphabet by thirteen places. Thus, 'A' becomes 'N', 'B' +becomes 'O', and 'Z' becomes 'M'. + + It goes without saying that you shouldn't use this translator if you +need _real_ encryption (a future release of GlusterFS will have real +encryption translators). + +`encrypt-write [on|off] (on)' + Whether to encrypt on write + +`decrypt-read [on|off] (on)' + Whether to decrypt on read + + +File: user-guide.info, Node: Trace, Prev: ROT-13, Up: Miscellaneous Translators + +4.6.2 Trace +----------- + + type debug/trace + + The trace translator is intended for debugging purposes. When +loaded, it logs all the system calls received by the server or client +(wherever trace is loaded), their arguments, and the results. You must +use a GlusterFS log level of DEBUG (See *note Running GlusterFS::) for +trace to work. + + Sample trace output (lines have been wrapped for readability): + 2007-10-30 00:08:58 D [trace.c:1579:trace_opendir] trace: callid: 68 + (*this=0x8059e40, loc=0x8091984 {path=/iozone3_283, inode=0x8091f00}, + fd=0x8091d50) + + 2007-10-30 00:08:58 D [trace.c:630:trace_opendir_cbk] trace: + (*this=0x8059e40, op_ret=4, op_errno=1, fd=0x8091d50) + + 2007-10-30 00:08:58 D [trace.c:1602:trace_readdir] trace: callid: 69 + (*this=0x8059e40, size=4096, offset=0 fd=0x8091d50) + + 2007-10-30 00:08:58 D [trace.c:215:trace_readdir_cbk] trace: + (*this=0x8059e40, op_ret=0, op_errno=0, count=4) + + 2007-10-30 00:08:58 D [trace.c:1624:trace_closedir] trace: callid: 71 + (*this=0x8059e40, *fd=0x8091d50) + + 2007-10-30 00:08:58 D [trace.c:809:trace_closedir_cbk] trace: + (*this=0x8059e40, op_ret=0, op_errno=1) + + +File: user-guide.info, Node: Usage Scenarios, Next: Troubleshooting, Prev: Translators, Up: Top + +5 Usage Scenarios +***************** + +5.1 Advanced Striping +===================== + +This section is based on the Advanced Striping tutorial written by +Anand Avati on the GlusterFS wiki (1). 
+
+5.1.1 Mixed Storage Requirements
+--------------------------------
+
+There are two ways of scheduling the I/O: at the file level (using the
+unify translator) and at the block level (using the stripe translator).
+Striped I/O is good for files that are potentially large and require
+high parallel throughput (for example, a single 400GB file being
+accessed by hundreds or thousands of systems simultaneously and
+randomly). For most cases, file level scheduling works best.
+
+   In the real world, it is desirable to mix file level and block level
+scheduling on a single storage volume. Alternatively, users can choose
+to have two separate volumes and hence two mount points, but the
+applications may demand a single storage system to host both.
+
+   This section explains how to mix file level scheduling with the
+stripe translator.
+
+5.1.2 Configuration Brief
+-------------------------
+
+This setup demonstrates how to configure the unify translator with an
+appropriate I/O scheduler for file level scheduling, and the stripe
+translator for only the matching file name patterns. This way,
+GlusterFS chooses the appropriate I/O profile for each file and can
+handle both types of data efficiently.
+
+   A simple technique to achieve this effect is to create a stripe set
+of unify and stripe volumes, where unify is the first sub-volume.
+Files that do not match the stripe policy are passed on to the unify
+sub-volume and are in turn scheduled across the cluster using its file
+level I/O scheduler.
+
+5.1.3 Preparing the GlusterFS Environment
+-----------------------------------------
+
+Create the directories /export/for-namespace, /export/for-unify and
+/export/for-stripe on all the storage bricks, matching the directories
+used in the volume specifications below.
+
+   Place the following server and client volume spec files under
+/etc/glusterfs (or the appropriate installed path) and replace the IP
+addresses and access control fields to match your environment.
+ + ## file: /etc/glusterfs/glusterfsd.vol + volume posix-unify + type storage/posix + option directory /export/for-unify + end-volume + + volume posix-stripe + type storage/posix + option directory /export/for-stripe + end-volume + + volume posix-namespace + type storage/posix + option directory /export/for-namespace + end-volume + + volume server + type protocol/server + option transport-type tcp + option auth.addr.posix-unify.allow 192.168.1.* + option auth.addr.posix-stripe.allow 192.168.1.* + option auth.addr.posix-namespace.allow 192.168.1.* + subvolumes posix-unify posix-stripe posix-namespace + end-volume + + ## file: /etc/glusterfs/glusterfs.vol + volume client-namespace + type protocol/client + option transport-type tcp + option remote-host 192.168.1.1 + option remote-subvolume posix-namespace + end-volume + + volume client-unify-1 + type protocol/client + option transport-type tcp + option remote-host 192.168.1.1 + option remote-subvolume posix-unify + end-volume + + volume client-unify-2 + type protocol/client + option transport-type tcp + option remote-host 192.168.1.2 + option remote-subvolume posix-unify + end-volume + + volume client-unify-3 + type protocol/client + option transport-type tcp + option remote-host 192.168.1.3 + option remote-subvolume posix-unify + end-volume + + volume client-unify-4 + type protocol/client + option transport-type tcp + option remote-host 192.168.1.4 + option remote-subvolume posix-unify + end-volume + + volume client-stripe-1 + type protocol/client + option transport-type tcp + option remote-host 192.168.1.1 + option remote-subvolume posix-stripe + end-volume + + volume client-stripe-2 + type protocol/client + option transport-type tcp + option remote-host 192.168.1.2 + option remote-subvolume posix-stripe + end-volume + + volume client-stripe-3 + type protocol/client + option transport-type tcp + option remote-host 192.168.1.3 + option remote-subvolume posix-stripe + end-volume + + volume client-stripe-4 + type protocol/client + option transport-type tcp + option remote-host 192.168.1.4 + option remote-subvolume posix-stripe + end-volume + + volume unify + type cluster/unify + option scheduler rr + subvolumes cluster-unify-1 cluster-unify-2 cluster-unify-3 cluster-unify-4 + end-volume + + volume stripe + type cluster/stripe + option block-size *.img:2MB # All files ending with .img are striped with 2MB stripe block size. + subvolumes unify cluster-stripe-1 cluster-stripe-2 cluster-stripe-3 cluster-stripe-4 + end-volume + + Bring up the Storage + + Starting GlusterFS Server: If you have installed through binary +package, you can start the service through init.d startup script. If +not: + + [root@server]# glusterfsd + + Mounting GlusterFS Volumes: + + [root@client]# glusterfs -s [BRICK-IP-ADDRESS] /mnt/cluster + + Improving upon this Setup + + Infiniband Verbs RDMA transport is much faster than TCP/IP GigE +transport. + + Use of performance translators such as read-ahead, write-behind, +io-cache, io-threads, booster is recommended. + + Replace round-robin (rr) scheduler with ALU to handle more dynamic +storage environments. + + ---------- Footnotes ---------- + + (1) +http://gluster.org/docs/index.php/Mixing_Striped_and_Regular_Files + + +File: user-guide.info, Node: Troubleshooting, Next: GNU Free Documentation Licence, Prev: Usage Scenarios, Up: Top + +6 Troubleshooting +***************** + +This chapter is a general troubleshooting guide to GlusterFS. 
It lists +common GlusterFS server and client error messages, debugging hints, and +concludes with the suggested procedure to report bugs in GlusterFS. + +6.1 GlusterFS error messages +============================ + +6.1.1 Server errors +------------------- + + glusterfsd: FATAL: could not open specfile: + '/etc/glusterfs/glusterfsd.vol' + + The GlusterFS server expects the volume specification file to be at +`/etc/glusterfs/glusterfsd.vol'. The example specification file will be +installed as `/etc/glusterfs/glusterfsd.vol.sample'. You need to edit +it and rename it, or provide a different specification file using the +`--spec-file' command line option (See *note Server::). + + gf_log_init: failed to open logfile "/usr/var/log/glusterfs/glusterfsd.log" + (Permission denied) + + You don't have permission to create files in the +`/usr/var/log/glusterfs' directory. Make sure you are running GlusterFS +as root. Alternatively, specify a different path for the log file using +the `--log-file' option (See *note Server::). + +6.1.2 Client errors +------------------- + + fusermount: failed to access mountpoint /mnt: + Transport endpoint is not connected + + A previous failed (or hung) mount of GlusterFS is preventing it from +being mounted again in the same location. The fix is to do: + + # umount /mnt + + and try mounting again. + + *"Transport endpoint is not connected".* + + If you get this error when you try a command such as `ls' or `cat', +it means the GlusterFS mount did not succeed. Try running GlusterFS in +`DEBUG' logging level and study the log messages to discover the cause. + + *"Connect to server failed", "SERVER-ADDRESS: Connection refused".* + + GluserFS Server is not running or dead. Check your network +connections and firewall settings. To check if the server is reachable, +try: + + telnet IP-ADDRESS 24007 + + If the server is accessible, your `telnet' command should connect and +block. If not you will see an error message such as `telnet: Unable to +connect to remote host: Connection refused'. 24007 is the default +GlusterFS port. If you have changed it, then use the corresponding port +instead. + + gf_log_init: failed to open logfile "/usr/var/log/glusterfs/glusterfs.log" + (Permission denied) + + You don't have permission to create files in the +`/usr/var/log/glusterfs' directory. Make sure you are running GlusterFS +as root. Alternatively, specify a different path for the log file using +the `--log-file' option (See *note Client::). + +6.2 FUSE error messages +======================= + +`modprobe fuse' fails with: "Unknown symbol in module, or unknown +parameter". + + If you are using fuse-2.6.x on Redhat Enterprise Linux Work Station 4 +and Advanced Server 4 with 2.6.9-42.ELlargesmp, 2.6.9-42.ELsmp, +2.6.9-42.EL kernels and get this error while loading FUSE kernel +module, you need to apply the following patch. + + For fuse-2.6.2: + + + + For fuse-2.6.3: + + + +6.3 AppArmour and GlusterFS +=========================== + +Under OpenSuSE GNU/Linux, the AppArmour security feature does not allow +GlusterFS to create temporary files or network socket connections even +while running as root. You will see error messages like `Unable to open +log file: Operation not permitted' or `Connection refused'. Disabling +AppArmour using YaST or properly configuring AppArmour to recognize +`glusterfsd' or `glusterfs'/`fusermount' should solve the problem. 
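+
+   As a quick way to confirm that AppArmour is the cause (rather than
+as a permanent fix), you can switch the relevant profiles to complain
+mode. This assumes the `apparmor-utils' package is installed and that
+profiles exist for these binaries; adjust the paths to match where the
+programs are installed on your system:
+
+   # aa-complain /usr/sbin/glusterfsd
+   # aa-complain /usr/bin/fusermount
+
+   If GlusterFS then works, write or adjust an AppArmour profile for
+`glusterfsd' and `fusermount' instead of leaving AppArmour disabled.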
+ +6.4 Reporting a bug +=================== + +If you encounter a bug in GlusterFS, please follow the below guidelines +when you report it to the mailing list. Be sure to report it! User +feedback is crucial to the health of the project and we value it highly. + +6.4.1 General instructions +-------------------------- + +When running GlusterFS in a non-production environment, be sure to +build it with the following command: + + $ make CFLAGS='-g -O0 -DDEBUG' + + This includes debugging information which will be helpful in getting +backtraces (see below) and also disable optimization. Enabling +optimization can result in incorrect line numbers being reported to gdb. + +6.4.2 Volume specification files +-------------------------------- + +Attach all relevant server and client spec files you were using when +you encountered the bug. Also tell us details of your setup, i.e., how +many clients and how many servers. + +6.4.3 Log files +--------------- + +Set the loglevel of your client and server programs to DEBUG (by +passing the -L DEBUG option) and attach the log files with your bug +report. Obviously, if only the client is failing (for example), you +only need to send us the client log file. + +6.4.4 Backtrace +--------------- + +If GlusterFS has encountered a segmentation fault or has crashed for +some other reason, include the backtrace with the bug report. You can +get the backtrace using the following procedure. + + Run the GlusterFS client or server inside gdb. + + $ gdb ./glusterfs + (gdb) set args -f client.spec -N -l/path/to/log/file -LDEBUG /mnt/point + (gdb) run + + Now when the process segfaults, you can get the backtrace by typing: + + (gdb) bt + + If the GlusterFS process has crashed and dumped a core file (you can +find this in / if running as a daemon and in the current directory +otherwise), you can do: + + $ gdb /path/to/glusterfs /path/to/core. + + and then get the backtrace. + + If the GlusterFS server or client seems to be hung, then you can get +the backtrace by attaching gdb to the process. First get the `PID' of +the process (using ps), and then do: + + $ gdb ./glusterfs + + Press Ctrl-C to interrupt the process and then generate the +backtrace. + +6.4.5 Reproducing the bug +------------------------- + +If the bug is reproducible, please include the steps necessary to do +so. If the bug is not reproducible, send us the bug report anyway. + +6.4.6 Other information +----------------------- + +If you think it is relevant, send us also the version of FUSE you're +using, the kernel version, platform. + + +File: user-guide.info, Node: GNU Free Documentation Licence, Next: Index, Prev: Troubleshooting, Up: Top + +Appendix A GNU Free Documentation Licence +***************************************** + + Version 1.2, November 2002 + + Copyright (C) 2000,2001,2002 Free Software Foundation, Inc. + 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA + + Everyone is permitted to copy and distribute verbatim copies + of this license document, but changing it is not allowed. + + 0. PREAMBLE + + The purpose of this License is to make a manual, textbook, or other + functional and useful document "free" in the sense of freedom: to + assure everyone the effective freedom to copy and redistribute it, + with or without modifying it, either commercially or + noncommercially. Secondarily, this License preserves for the + author and publisher a way to get credit for their work, while not + being considered responsible for modifications made by others. 
+ + This License is a kind of "copyleft", which means that derivative + works of the document must themselves be free in the same sense. + It complements the GNU General Public License, which is a copyleft + license designed for free software. + + We have designed this License in order to use it for manuals for + free software, because free software needs free documentation: a + free program should come with manuals providing the same freedoms + that the software does. But this License is not limited to + software manuals; it can be used for any textual work, regardless + of subject matter or whether it is published as a printed book. + We recommend this License principally for works whose purpose is + instruction or reference. + + 1. APPLICABILITY AND DEFINITIONS + + This License applies to any manual or other work, in any medium, + that contains a notice placed by the copyright holder saying it + can be distributed under the terms of this License. Such a notice + grants a world-wide, royalty-free license, unlimited in duration, + to use that work under the conditions stated herein. The + "Document", below, refers to any such manual or work. Any member + of the public is a licensee, and is addressed as "you". You + accept the license if you copy, modify or distribute the work in a + way requiring permission under copyright law. + + A "Modified Version" of the Document means any work containing the + Document or a portion of it, either copied verbatim, or with + modifications and/or translated into another language. + + A "Secondary Section" is a named appendix or a front-matter section + of the Document that deals exclusively with the relationship of the + publishers or authors of the Document to the Document's overall + subject (or to related matters) and contains nothing that could + fall directly within that overall subject. (Thus, if the Document + is in part a textbook of mathematics, a Secondary Section may not + explain any mathematics.) The relationship could be a matter of + historical connection with the subject or with related matters, or + of legal, commercial, philosophical, ethical or political position + regarding them. + + The "Invariant Sections" are certain Secondary Sections whose + titles are designated, as being those of Invariant Sections, in + the notice that says that the Document is released under this + License. If a section does not fit the above definition of + Secondary then it is not allowed to be designated as Invariant. + The Document may contain zero Invariant Sections. If the Document + does not identify any Invariant Sections then there are none. + + The "Cover Texts" are certain short passages of text that are + listed, as Front-Cover Texts or Back-Cover Texts, in the notice + that says that the Document is released under this License. A + Front-Cover Text may be at most 5 words, and a Back-Cover Text may + be at most 25 words. + + A "Transparent" copy of the Document means a machine-readable copy, + represented in a format whose specification is available to the + general public, that is suitable for revising the document + straightforwardly with generic text editors or (for images + composed of pixels) generic paint programs or (for drawings) some + widely available drawing editor, and that is suitable for input to + text formatters or for automatic translation to a variety of + formats suitable for input to text formatters. 
A copy made in an + otherwise Transparent file format whose markup, or absence of + markup, has been arranged to thwart or discourage subsequent + modification by readers is not Transparent. An image format is + not Transparent if used for any substantial amount of text. A + copy that is not "Transparent" is called "Opaque". + + Examples of suitable formats for Transparent copies include plain + ASCII without markup, Texinfo input format, LaTeX input format, + SGML or XML using a publicly available DTD, and + standard-conforming simple HTML, PostScript or PDF designed for + human modification. Examples of transparent image formats include + PNG, XCF and JPG. Opaque formats include proprietary formats that + can be read and edited only by proprietary word processors, SGML or + XML for which the DTD and/or processing tools are not generally + available, and the machine-generated HTML, PostScript or PDF + produced by some word processors for output purposes only. + + The "Title Page" means, for a printed book, the title page itself, + plus such following pages as are needed to hold, legibly, the + material this License requires to appear in the title page. For + works in formats which do not have any title page as such, "Title + Page" means the text near the most prominent appearance of the + work's title, preceding the beginning of the body of the text. + + A section "Entitled XYZ" means a named subunit of the Document + whose title either is precisely XYZ or contains XYZ in parentheses + following text that translates XYZ in another language. (Here XYZ + stands for a specific section name mentioned below, such as + "Acknowledgements", "Dedications", "Endorsements", or "History".) + To "Preserve the Title" of such a section when you modify the + Document means that it remains a section "Entitled XYZ" according + to this definition. + + The Document may include Warranty Disclaimers next to the notice + which states that this License applies to the Document. These + Warranty Disclaimers are considered to be included by reference in + this License, but only as regards disclaiming warranties: any other + implication that these Warranty Disclaimers may have is void and + has no effect on the meaning of this License. + + 2. VERBATIM COPYING + + You may copy and distribute the Document in any medium, either + commercially or noncommercially, provided that this License, the + copyright notices, and the license notice saying this License + applies to the Document are reproduced in all copies, and that you + add no other conditions whatsoever to those of this License. You + may not use technical measures to obstruct or control the reading + or further copying of the copies you make or distribute. However, + you may accept compensation in exchange for copies. If you + distribute a large enough number of copies you must also follow + the conditions in section 3. + + You may also lend copies, under the same conditions stated above, + and you may publicly display copies. + + 3. COPYING IN QUANTITY + + If you publish printed copies (or copies in media that commonly + have printed covers) of the Document, numbering more than 100, and + the Document's license notice requires Cover Texts, you must + enclose the copies in covers that carry, clearly and legibly, all + these Cover Texts: Front-Cover Texts on the front cover, and + Back-Cover Texts on the back cover. Both covers must also clearly + and legibly identify you as the publisher of these copies. 
The + front cover must present the full title with all words of the + title equally prominent and visible. You may add other material + on the covers in addition. Copying with changes limited to the + covers, as long as they preserve the title of the Document and + satisfy these conditions, can be treated as verbatim copying in + other respects. + + If the required texts for either cover are too voluminous to fit + legibly, you should put the first ones listed (as many as fit + reasonably) on the actual cover, and continue the rest onto + adjacent pages. + + If you publish or distribute Opaque copies of the Document + numbering more than 100, you must either include a + machine-readable Transparent copy along with each Opaque copy, or + state in or with each Opaque copy a computer-network location from + which the general network-using public has access to download + using public-standard network protocols a complete Transparent + copy of the Document, free of added material. If you use the + latter option, you must take reasonably prudent steps, when you + begin distribution of Opaque copies in quantity, to ensure that + this Transparent copy will remain thus accessible at the stated + location until at least one year after the last time you + distribute an Opaque copy (directly or through your agents or + retailers) of that edition to the public. + + It is requested, but not required, that you contact the authors of + the Document well before redistributing any large number of + copies, to give them a chance to provide you with an updated + version of the Document. + + 4. MODIFICATIONS + + You may copy and distribute a Modified Version of the Document + under the conditions of sections 2 and 3 above, provided that you + release the Modified Version under precisely this License, with + the Modified Version filling the role of the Document, thus + licensing distribution and modification of the Modified Version to + whoever possesses a copy of it. In addition, you must do these + things in the Modified Version: + + A. Use in the Title Page (and on the covers, if any) a title + distinct from that of the Document, and from those of + previous versions (which should, if there were any, be listed + in the History section of the Document). You may use the + same title as a previous version if the original publisher of + that version gives permission. + + B. List on the Title Page, as authors, one or more persons or + entities responsible for authorship of the modifications in + the Modified Version, together with at least five of the + principal authors of the Document (all of its principal + authors, if it has fewer than five), unless they release you + from this requirement. + + C. State on the Title page the name of the publisher of the + Modified Version, as the publisher. + + D. Preserve all the copyright notices of the Document. + + E. Add an appropriate copyright notice for your modifications + adjacent to the other copyright notices. + + F. Include, immediately after the copyright notices, a license + notice giving the public permission to use the Modified + Version under the terms of this License, in the form shown in + the Addendum below. + + G. Preserve in that license notice the full lists of Invariant + Sections and required Cover Texts given in the Document's + license notice. + + H. Include an unaltered copy of this License. + + I. 
Preserve the section Entitled "History", Preserve its Title, + and add to it an item stating at least the title, year, new + authors, and publisher of the Modified Version as given on + the Title Page. If there is no section Entitled "History" in + the Document, create one stating the title, year, authors, + and publisher of the Document as given on its Title Page, + then add an item describing the Modified Version as stated in + the previous sentence. + + J. Preserve the network location, if any, given in the Document + for public access to a Transparent copy of the Document, and + likewise the network locations given in the Document for + previous versions it was based on. These may be placed in + the "History" section. You may omit a network location for a + work that was published at least four years before the + Document itself, or if the original publisher of the version + it refers to gives permission. + + K. For any section Entitled "Acknowledgements" or "Dedications", + Preserve the Title of the section, and preserve in the + section all the substance and tone of each of the contributor + acknowledgements and/or dedications given therein. + + L. Preserve all the Invariant Sections of the Document, + unaltered in their text and in their titles. Section numbers + or the equivalent are not considered part of the section + titles. + + M. Delete any section Entitled "Endorsements". Such a section + may not be included in the Modified Version. + + N. Do not retitle any existing section to be Entitled + "Endorsements" or to conflict in title with any Invariant + Section. + + O. Preserve any Warranty Disclaimers. + + If the Modified Version includes new front-matter sections or + appendices that qualify as Secondary Sections and contain no + material copied from the Document, you may at your option + designate some or all of these sections as invariant. To do this, + add their titles to the list of Invariant Sections in the Modified + Version's license notice. These titles must be distinct from any + other section titles. + + You may add a section Entitled "Endorsements", provided it contains + nothing but endorsements of your Modified Version by various + parties--for example, statements of peer review or that the text + has been approved by an organization as the authoritative + definition of a standard. + + You may add a passage of up to five words as a Front-Cover Text, + and a passage of up to 25 words as a Back-Cover Text, to the end + of the list of Cover Texts in the Modified Version. Only one + passage of Front-Cover Text and one of Back-Cover Text may be + added by (or through arrangements made by) any one entity. If the + Document already includes a cover text for the same cover, + previously added by you or by arrangement made by the same entity + you are acting on behalf of, you may not add another; but you may + replace the old one, on explicit permission from the previous + publisher that added the old one. + + The author(s) and publisher(s) of the Document do not by this + License give permission to use their names for publicity for or to + assert or imply endorsement of any Modified Version. + + 5. 
COMBINING DOCUMENTS + + You may combine the Document with other documents released under + this License, under the terms defined in section 4 above for + modified versions, provided that you include in the combination + all of the Invariant Sections of all of the original documents, + unmodified, and list them all as Invariant Sections of your + combined work in its license notice, and that you preserve all + their Warranty Disclaimers. + + The combined work need only contain one copy of this License, and + multiple identical Invariant Sections may be replaced with a single + copy. If there are multiple Invariant Sections with the same name + but different contents, make the title of each such section unique + by adding at the end of it, in parentheses, the name of the + original author or publisher of that section if known, or else a + unique number. Make the same adjustment to the section titles in + the list of Invariant Sections in the license notice of the + combined work. + + In the combination, you must combine any sections Entitled + "History" in the various original documents, forming one section + Entitled "History"; likewise combine any sections Entitled + "Acknowledgements", and any sections Entitled "Dedications". You + must delete all sections Entitled "Endorsements." + + 6. COLLECTIONS OF DOCUMENTS + + You may make a collection consisting of the Document and other + documents released under this License, and replace the individual + copies of this License in the various documents with a single copy + that is included in the collection, provided that you follow the + rules of this License for verbatim copying of each of the + documents in all other respects. + + You may extract a single document from such a collection, and + distribute it individually under this License, provided you insert + a copy of this License into the extracted document, and follow + this License in all other respects regarding verbatim copying of + that document. + + 7. AGGREGATION WITH INDEPENDENT WORKS + + A compilation of the Document or its derivatives with other + separate and independent documents or works, in or on a volume of + a storage or distribution medium, is called an "aggregate" if the + copyright resulting from the compilation is not used to limit the + legal rights of the compilation's users beyond what the individual + works permit. When the Document is included in an aggregate, this + License does not apply to the other works in the aggregate which + are not themselves derivative works of the Document. + + If the Cover Text requirement of section 3 is applicable to these + copies of the Document, then if the Document is less than one half + of the entire aggregate, the Document's Cover Texts may be placed + on covers that bracket the Document within the aggregate, or the + electronic equivalent of covers if the Document is in electronic + form. Otherwise they must appear on printed covers that bracket + the whole aggregate. + + 8. TRANSLATION + + Translation is considered a kind of modification, so you may + distribute translations of the Document under the terms of section + 4. Replacing Invariant Sections with translations requires special + permission from their copyright holders, but you may include + translations of some or all Invariant Sections in addition to the + original versions of these Invariant Sections. 
You may include a + translation of this License, and all the license notices in the + Document, and any Warranty Disclaimers, provided that you also + include the original English version of this License and the + original versions of those notices and disclaimers. In case of a + disagreement between the translation and the original version of + this License or a notice or disclaimer, the original version will + prevail. + + If a section in the Document is Entitled "Acknowledgements", + "Dedications", or "History", the requirement (section 4) to + Preserve its Title (section 1) will typically require changing the + actual title. + + 9. TERMINATION + + You may not copy, modify, sublicense, or distribute the Document + except as expressly provided for under this License. Any other + attempt to copy, modify, sublicense or distribute the Document is + void, and will automatically terminate your rights under this + License. However, parties who have received copies, or rights, + from you under this License will not have their licenses + terminated so long as such parties remain in full compliance. + + 10. FUTURE REVISIONS OF THIS LICENSE + + The Free Software Foundation may publish new, revised versions of + the GNU Free Documentation License from time to time. Such new + versions will be similar in spirit to the present version, but may + differ in detail to address new problems or concerns. See + `http://www.gnu.org/copyleft/'. + + Each version of the License is given a distinguishing version + number. If the Document specifies that a particular numbered + version of this License "or any later version" applies to it, you + have the option of following the terms and conditions either of + that specified version or of any later version that has been + published (not as a draft) by the Free Software Foundation. If + the Document does not specify a version number of this License, + you may choose any version ever published (not as a draft) by the + Free Software Foundation. + +A.0.1 ADDENDUM: How to use this License for your documents +---------------------------------------------------------- + +To use this License in a document you have written, include a copy of +the License in the document and put the following copyright and license +notices just after the title page: + + Copyright (C) YEAR YOUR NAME. + Permission is granted to copy, distribute and/or modify this document + under the terms of the GNU Free Documentation License, Version 1.2 + or any later version published by the Free Software Foundation; + with no Invariant Sections, no Front-Cover Texts, and no Back-Cover + Texts. A copy of the license is included in the section entitled ``GNU + Free Documentation License''. + + If you have Invariant Sections, Front-Cover Texts and Back-Cover +Texts, replace the "with...Texts." line with this: + + with the Invariant Sections being LIST THEIR TITLES, with + the Front-Cover Texts being LIST, and with the Back-Cover Texts + being LIST. + + If you have Invariant Sections without Cover Texts, or some other +combination of the three, merge those two alternatives to suit the +situation. + + If your document contains nontrivial examples of program code, we +recommend releasing these examples in parallel under your choice of +free software license, such as the GNU General Public License, to +permit their use in free software. + + +File: user-guide.info, Node: Index, Prev: GNU Free Documentation Licence, Up: Top + +Index +***** + +[index] +* Menu: + +* alu (scheduler): Unify. 
(line 49) +* AppArmour: Troubleshooting. (line 96) +* arch: Getting GlusterFS. (line 6) +* booster: Booster. (line 6) +* commercial support: Introduction. (line 36) +* DNS round robin: Transport modules. (line 29) +* fcntl: POSIX Locks. (line 6) +* FDL, GNU Free Documentation License: GNU Free Documentation Licence. + (line 6) +* fixed-id (translator): Fixed ID. (line 6) +* GlusterFS client: Client. (line 6) +* GlusterFS mailing list: Introduction. (line 28) +* GlusterFS server: Server. (line 6) +* infiniband transport: Transport modules. (line 58) +* InfiniBand, installation: Pre requisites. (line 51) +* io-cache (translator): IO Cache. (line 6) +* io-threads (translator): IO Threads. (line 6) +* IRC channel, #gluster: Introduction. (line 31) +* libibverbs: Pre requisites. (line 51) +* namespace: Unify. (line 207) +* nufa (scheduler): Unify. (line 175) +* OpenSuSE: Troubleshooting. (line 96) +* posix-locks (translator): POSIX Locks. (line 6) +* random (scheduler): Unify. (line 159) +* read-ahead (translator): Read Ahead. (line 6) +* record locking: POSIX Locks. (line 6) +* Redhat Enterprise Linux: Troubleshooting. (line 78) +* Replicate: Replicate. (line 6) +* rot-13 (translator): ROT-13. (line 6) +* rr (scheduler): Unify. (line 138) +* scheduler (unify): Unify. (line 6) +* self heal (replicate): Replicate. (line 46) +* self heal (unify): Unify. (line 223) +* stripe (translator): Stripe. (line 6) +* trace (translator): Trace. (line 6) +* unify (translator): Unify. (line 6) +* unify invariants: Unify. (line 16) +* write-behind (translator): Write Behind. (line 6) +* Gluster, Inc.: Introduction. (line 36) + + + +Tag Table: +Node: Top704 +Node: Acknowledgements2304 +Node: Introduction3214 +Node: Installation and Invocation4649 +Node: Pre requisites4933 +Node: Getting GlusterFS7023 +Ref: Getting GlusterFS-Footnote-17809 +Node: Building7857 +Node: Running GlusterFS9559 +Node: Server9770 +Node: Client11358 +Node: A Tutorial Introduction13564 +Node: Concepts17101 +Node: Filesystems in Userspace17316 +Node: Translator18457 +Node: Volume specification file21160 +Node: Translators23632 +Node: Storage Translators24201 +Ref: Storage Translators-Footnote-125008 +Node: POSIX25142 +Node: BDB25765 +Node: Client and Server Translators26822 +Node: Transport modules27298 +Node: Client protocol31445 +Node: Server protocol32384 +Node: Clustering Translators33373 +Node: Unify34260 +Ref: Unify-Footnote-143859 +Node: Replicate43951 +Node: Stripe49006 +Node: Performance Translators50164 +Node: Read Ahead50438 +Node: Write Behind52170 +Node: IO Threads53579 +Node: IO Cache54367 +Node: Booster55691 +Node: Features Translators57105 +Node: POSIX Locks57333 +Node: Fixed ID58650 +Node: Miscellaneous Translators59136 +Node: ROT-1359334 +Node: Trace60013 +Node: Usage Scenarios61282 +Ref: Usage Scenarios-Footnote-167215 +Node: Troubleshooting67290 +Node: GNU Free Documentation Licence73638 +Node: Index96087 + +End Tag Table diff --git a/doc/legacy/user-guide.pdf b/doc/legacy/user-guide.pdf new file mode 100644 index 000000000..ed7bd2a99 Binary files /dev/null and b/doc/legacy/user-guide.pdf differ diff --git a/doc/legacy/user-guide.texi b/doc/legacy/user-guide.texi new file mode 100644 index 000000000..8e429853f --- /dev/null +++ b/doc/legacy/user-guide.texi @@ -0,0 +1,2246 @@ +\input texinfo +@setfilename user-guide.info +@settitle GlusterFS 2.0 User Guide +@afourpaper + +@direntry +* GlusterFS: (user-guide). 
GlusterFS distributed filesystem user guide +@end direntry + +@copying +This is the user manual for GlusterFS 2.0. + +Copyright @copyright{} 2007-2011 @email{@b{Gluster}} , Inc. Permission is granted to +copy, distribute and/or modify this document under the terms of the +@acronym{GNU} Free Documentation License, Version 1.2 or any later +version published by the Free Software Foundation; with no Invariant +Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the +license is included in the chapter entitled ``@acronym{GNU} Free +Documentation License''. +@end copying + +@titlepage +@title GlusterFS 2.0 User Guide [DRAFT] +@subtitle January 15, 2008 +@author http://gluster.org/core-team.php +@author @email{@b{Gluster}} +@page +@vskip 0pt plus 1filll +@insertcopying +@end titlepage + +@c Info stuff +@ifnottex +@node Top +@top GlusterFS 2.0 User Guide + +@insertcopying +@menu +* Acknowledgements:: +* Introduction:: +* Installation and Invocation:: +* Concepts:: +* Translators:: +* Usage Scenarios:: +* Troubleshooting:: +* GNU Free Documentation Licence:: +* Index:: + +@detailmenu + --- The Detailed Node Listing --- + +Installation and Invocation + +* Pre requisites:: +* Getting GlusterFS:: +* Building:: +* Running GlusterFS:: +* A Tutorial Introduction:: + +Running GlusterFS + +* Server:: +* Client:: + +Concepts + +* Filesystems in Userspace:: +* Translator:: +* Volume specification file:: + +Translators + +* Storage Translators:: +* Client and Server Translators:: +* Clustering Translators:: +* Performance Translators:: +* Features Translators:: + +Storage Translators + +* POSIX:: + +Client and Server Translators + +* Transport modules:: +* Client protocol:: +* Server protocol:: + +Clustering Translators + +* Unify:: +* Replicate:: +* Stripe:: + +Performance Translators + +* Read Ahead:: +* Write Behind:: +* IO Threads:: +* IO Cache:: + +Features Translators + +* POSIX Locks:: +* Fixed ID:: + +Miscellaneous Translators + +* ROT-13:: +* Trace:: + +@end detailmenu +@end menu + +@end ifnottex +@c Info stuff end + +@contents + +@node Acknowledgements +@unnumbered Acknowledgements +GlusterFS continues to be a wonderful and enriching experience for all +of us involved. + +GlusterFS development would not have been possible at this pace if +not for our enthusiastic users. People from around the world have +helped us with bug reports, performance numbers, and feature suggestions. +A huge thanks to them all. + +Matthew Paine - for RPMs & general enthu + +Leonardo Rodrigues de Mello - for DEBs + +Julian Perez & Adam D'Auria - for multi-server tutorial + +Paul England - for HA spec + +Brent Nelson - for many bug reports + +Jacques Mattheij - for Europe mirror. + +Patrick Negri - for TCP non-blocking connect. +@flushright +http://gluster.org/core-team.php (@email{list-hacking@@gluster.com}) +@email{@b{Gluster}} +@end flushright + +@node Introduction +@chapter Introduction + +GlusterFS is a distributed filesystem. It works at the file level, +not block level. + +A network filesystem is one which allows us to access remote files. A +distributed filesystem is one that stores data on multiple machines +and makes them all appear to be a part of the same filesystem. + +Need for distributed filesystems + +@itemize @bullet +@item Scalability: A distributed filesystem allows us to store more data than what can be stored on a single machine. + +@item Redundancy: We might want to replicate crucial data on to several machines. 
+
+@item Uniform access: One can mount a remote volume (for example, your home directory) from any machine and access the same data.
+@end itemize
+
+@section Contacting us
+You can reach us through the mailing list @strong{gluster-devel}
+(@email{gluster-devel@@nongnu.org}).
+@cindex GlusterFS mailing list
+
+You can also find many of the developers on @acronym{IRC}, on the @code{#gluster}
+channel on Freenode (@indicateurl{irc.freenode.net}).
+@cindex IRC channel, #gluster
+
+The GlusterFS documentation wiki is also useful: @*
+@indicateurl{http://gluster.org/docs/index.php/GlusterFS}
+
+For commercial support, you can contact @email{@b{Gluster}} at:
+@cindex commercial support
+@cindex Gluster, Inc.
+
+@display
+3194 Winding Vista Common
+Fremont, CA 94539
+USA.
+
+Phone: +1 (510) 354 6801
+Toll free: +1 (888) 813 6309
+Fax: +1 (510) 372 0604
+@end display
+
+You can also email us at @email{support@@gluster.com}.
+
+@node Installation and Invocation
+@chapter Installation and Invocation
+
+@menu
+* Pre requisites::
+* Getting GlusterFS::
+* Building::
+* Running GlusterFS::
+* A Tutorial Introduction::
+@end menu
+
+@node Pre requisites
+@section Pre requisites
+
+Before installing GlusterFS, make sure you have the
+following components installed.
+
+@subsection @acronym{FUSE}
+GlusterFS now has built-in support for the @acronym{FUSE} protocol.
+You need a kernel with @acronym{FUSE} support to mount GlusterFS.
+You do not need the @acronym{FUSE} package (library and utilities),
+but be aware of the following issues:
+
+@itemize
+@item If you want unprivileged users to be able to mount GlusterFS filesystems,
+you need a recent version of the @command{fusermount} utility. You already have
+it if you have @acronym{FUSE} version 2.7.0 or higher installed; if that's not
+the case, one will be compiled along with GlusterFS if you pass
+@command{--enable-fusermount} to the @command{configure} script. @item You
+need to ensure @acronym{FUSE} support is configured properly on your system. In
+detail:
+@itemize
+@item If your kernel has @acronym{FUSE} as a loadable module, make sure it's
+loaded.
+@item Create @command{/dev/fuse} (major 10, minor 229) either by means of udev
+rules or by hand.
+@item Optionally, if you want runtime control over your @acronym{FUSE} mounts,
+mount the fusectl auxiliary filesystem:
+
+@example
+# mount -t fusectl none /sys/fs/fuse/connections
+@end example
+@end itemize
+
+The @acronym{FUSE} packages shipped by the various distributions usually take care
+of these things, so the easiest way to get the above tasks handled is still
+installing the @acronym{FUSE} package(s).
+@end itemize
+
+To get the best performance from GlusterFS, it is recommended that you use
+our patched version of the @acronym{FUSE} kernel module. See Patched FUSE for details.
+
+@subsection Patched FUSE
+
+The GlusterFS project maintains a patched version of @acronym{FUSE} meant to be used
+with GlusterFS. The patches increase GlusterFS performance. It is recommended that
+all users use the patched @acronym{FUSE}.
+
+The patched @acronym{FUSE} tarball can be downloaded from:
+
+@indicateurl{ftp://ftp.gluster.com/pub/gluster/glusterfs/fuse/}
+
+The specific changes made to @acronym{FUSE} are:
+
+@itemize
+@item The communication channel size between @acronym{FUSE} kernel module and GlusterFS has been increased to 1MB, permitting large reads and writes to be sent in bigger chunks.
+
+@item The kernel's read-ahead boundary has been extended up to 1MB.
+
+@item Block size returned in the @command{stat()}/@command{fstat()} calls is tuned to 1MB, to make cp and similar commands perform I/O using that block size.
+
+@item @command{flock()} locking support has been added (although some rework in GlusterFS is needed for perfect compliance).
+@end itemize
+
+@subsection libibverbs (optional)
+@cindex InfiniBand, installation
+@cindex libibverbs
+This is only needed if you want GlusterFS to use InfiniBand as the
+interconnect mechanism between server and client. You can get it from:
+
+@indicateurl{http://www.openfabrics.org/downloads.htm}.
+
+@subsection Bison and Flex
+These should already be installed on most Linux systems. If not, use your distribution's
+normal software installation procedures to install them. Make sure you also install the
+relevant developer packages.
+
+@node Getting GlusterFS
+@section Getting GlusterFS
+@cindex arch
+There are many ways to get hold of GlusterFS. For a production deployment,
+the recommended method is to download the latest release tarball.
+Release tarballs are available at: @indicateurl{http://gluster.org/download.php}.
+
+If you want the bleeding edge development source, you can get it
+from the Git
+@footnote{@indicateurl{http://git-scm.com}}
+repository. First you must install Git itself. Then
+you can check out the source:
+
+@example
+$ git clone git://git.sv.gnu.org/gluster.git glusterfs
+@end example
+
+@node Building
+@section Building
+You can skip this section if you're installing from @acronym{RPM}s
+or @acronym{DEB}s.
+
+GlusterFS uses the Autotools mechanism to build. As such, the procedure
+is straightforward. First, change into the GlusterFS source directory.
+
+@example
+$ cd glusterfs-
+@end example
+
+If you checked out the source from the Git repository, you'll need
+to run @command{./autogen.sh} first. Note that you'll need to have
+Autoconf and Automake installed for this.
+
+Run @command{configure}.
+
+@example
+$ ./configure
+@end example
+
+The configure script accepts the following options:
+
+@cartouche
+@table @code
+
+@item --disable-ibverbs
+Disable the InfiniBand transport mechanism.
+
+@item --disable-fuse-client
+Disable the @acronym{FUSE} client.
+
+@item --disable-server
+Disable building of the GlusterFS server.
+
+@item --disable-bdb
+Disable building of the Berkeley DB based storage translator.
+
+@item --disable-mod_glusterfs
+Disable building of the Apache/lighttpd glusterfs plugins.
+
+@item --disable-epoll
+Use poll instead of epoll.
+
+@item --disable-libglusterfsclient
+Disable building of libglusterfsclient.
+
+@item --enable-fusermount
+Build fusermount.
+
+@end table
+@end cartouche
+
+Build and install GlusterFS.
+
+@example
+# make install
+@end example
+
+The binaries (@command{glusterfsd} and @command{glusterfs}) will be
+installed by default in @command{/usr/local/sbin/}. Translator,
+scheduler, and transport shared libraries will be installed in
+@command{/usr/local/lib/glusterfs//}. Sample volume
+specification files will be in @command{/usr/local/etc/glusterfs/}.
+This document itself can be found in
+@command{/usr/local/share/doc/glusterfs/}. If you passed the @command{--prefix}
+argument to the configure script, then replace @command{/usr/local} in the preceding
+paths with the prefix.
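+
+For reference, a complete build from a release tarball typically looks
+like the following. The version number and installation prefix are only
+placeholders; substitute the release you downloaded and the prefix you
+prefer:
+
+@example
+$ tar xzf glusterfs-VERSION.tar.gz
+$ cd glusterfs-VERSION
+$ ./configure --prefix=/usr/local
+$ make
+# make install
+@end example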
+ +@node Running GlusterFS +@section Running GlusterFS + +@menu +* Server:: +* Client:: +@end menu + +@node Server +@subsection Server +@cindex GlusterFS server + +The GlusterFS server is necessary to export storage volumes to remote clients +(See @ref{Server protocol} for more info). This section documents the invocation +of the GlusterFS server program and all the command-line options accepted by it. + +@cartouche +@table @code +Basic Options +@item -f, --volfile= + Use the volume file as the volume specification. + +@item -s, --volfile-server= + Server to get volume file from. This option overrides --volfile option. + +@item -l, --log-file= + Specify the path for the log file. + +@item -L, --log-level= + Set the log level for the server. Log level should be one of @acronym{DEBUG}, +@acronym{WARNING}, @acronym{ERROR}, @acronym{CRITICAL}, or @acronym{NONE}. + +Advanced Options +@item --debug + Run in debug mode. This option sets --no-daemon, --log-level to DEBUG and + --log-file to console. + +@item -N, --no-daemon + Run glusterfsd as a foreground process. + +@item -p, --pid-file= + Path for the @acronym{PID} file. + +@item --volfile-id= + 'key' of the volfile to be fetched from server. + +@item --volfile-server-port= + Listening port number of volfile server. + +@item --volfile-server-transport=[tcp|ib-verbs] + Transport type to get volfile from server. [default: @command{tcp}] + +@item --xlator-options= + Add/override a translator option for a volume with specified value. + +Miscellaneous Options +@item -?, --help + Show this help text. + +@item --usage + Display a short usage message. + +@item -V, --version + Show version information. +@end table +@end cartouche + +@node Client +@subsection Client +@cindex GlusterFS client + +The GlusterFS client process is necessary to access remote storage volumes and +mount them locally using @acronym{FUSE}. This section documents the invocation of the +client process and all its command-line arguments. + +@example + # glusterfs [options] +@end example + +The @command{mountpoint} is the directory where you want the GlusterFS +filesystem to appear. Example: + +@example + # glusterfs -f /usr/local/etc/glusterfs-client.vol /mnt +@end example + +The command-line options are detailed below. + +@tex +\vfill +@end tex +@page + +@cartouche +@table @code + +Basic Options +@item -f, --volfile= + Use the volume file as the volume specification. + +@item -s, --volfile-server= + Server to get volume file from. This option overrides --volfile option. + +@item -l, --log-file= + Specify the path for the log file. + +@item -L, --log-level= + Set the log level for the server. Log level should be one of @acronym{DEBUG}, +@acronym{WARNING}, @acronym{ERROR}, @acronym{CRITICAL}, or @acronym{NONE}. + +Advanced Options +@item --debug + Run in debug mode. This option sets --no-daemon, --log-level to DEBUG and + --log-file to console. + +@item -N, --no-daemon + Run @command{glusterfs} as a foreground process. + +@item -p, --pid-file= + Path for the @acronym{PID} file. + +@item --volfile-id= + 'key' of the volfile to be fetched from server. + +@item --volfile-server-port= + Listening port number of volfile server. + +@item --volfile-server-transport=[tcp|ib-verbs] + Transport type to get volfile from server. [default: @command{tcp}] + +@item --xlator-options= + Add/override a translator option for a volume with specified value. + +@item --volume-name= + Volume name in client spec to use. Defaults to the root volume. 
+
+@acronym{FUSE} Options
+@item --attribute-timeout=
+  Attribute timeout for inodes in the kernel, in seconds. Defaults to 1 second.
+
+@item --disable-direct-io-mode
+  Disable direct @acronym{I/O} mode in @acronym{FUSE} kernel module. This is set
+  automatically if kernel supports big writes (>= 2.6.26).
+
+@item -e, --entry-timeout=
+  Entry timeout for directory entries in the kernel, in seconds.
+  Defaults to 1 second.
+
+Miscellaneous Options
+@item -?, --help
+  Show this help information.
+
+@item -V, --version
+  Show version information.
+@end table
+@end cartouche
+
+@node A Tutorial Introduction
+@section A Tutorial Introduction
+
+This section will show you how to quickly get GlusterFS up and running. We'll
+configure GlusterFS as a simple network filesystem, with one server and one client.
+In this mode of usage, GlusterFS can serve as a replacement for NFS.
+
+We'll make use of two machines; call them @emph{server} and
+@emph{client} (if you don't want to set up two machines, just run
+everything that follows on the same machine). In the examples that
+follow, the shell prompts will use these names to clarify the machine
+on which the command is being run. For example, a command that should
+be run on the server will be shown with the prompt:
+
+@example
+[root@@server]#
+@end example
+
+Our goal is to make a directory on the @emph{server} (say, @command{/export})
+accessible to the @emph{client}.
+
+First of all, get GlusterFS installed on both the machines, as described in the
+previous sections. Make sure you have the @acronym{FUSE} kernel module loaded. You
+can ensure this by running:
+
+@example
+[root@@server]# modprobe fuse
+@end example
+
+Before we can run the GlusterFS client or server programs, we need to write
+two files called @emph{volume specifications} (equivalently referred to as @emph{volfiles}).
+The volfile describes the @emph{translator tree} on a node. The next chapter will
+explain the concepts of `translator' and `volume specification' in detail. For now,
+just assume that the volfile is like an NFS @command{/etc/exports} file.
+
+On the server, create a text file somewhere (we'll assume the path
+@command{/tmp/glusterfs-server.vol}) with the following contents.
+
+@cartouche
+@example
+volume colon-o
+  type storage/posix
+  option directory /export
+end-volume
+
+volume server
+  type protocol/server
+  subvolumes colon-o
+  option transport-type tcp
+  option auth.addr.colon-o.allow *
+end-volume
+@end example
+@end cartouche
+
+A brief explanation of the file's contents follows. The first section defines a storage
+volume, named ``colon-o'' (the volume names are arbitrary), which exports the
+@command{/export} directory. The second section defines options for the translator
+which will make the storage volume accessible remotely. It specifies @command{colon-o} as
+a subvolume. This defines the @emph{translator tree}, about which more will be said
+in the next chapter. The two options specify that the @acronym{TCP} protocol is to be
+used (as opposed to InfiniBand, for example), and that access to the storage volume
+is to be provided to clients with any @acronym{IP} address at all. If you wanted to
+restrict access to this server to only your subnet, for example, you'd specify
+something like @command{192.168.1.*} in the second option line.
+
+On the client machine, create the following text file (again, we'll assume
+the path to be @command{/tmp/glusterfs-client.vol}). Replace
+@emph{server-ip-address} with the @acronym{IP} address of your server machine.
If you +are doing all this on a single machine, use @command{127.0.0.1}. + +@cartouche +@example +volume client + type protocol/client + option transport-type tcp + option remote-host @emph{server-ip-address} + option remote-subvolume colon-o +end-volume +@end example +@end cartouche + +Now we need to start both the server and client programs. To start the server: + +@example +[root@@server]# glusterfsd -f /tmp/glusterfs-server.vol +@end example + +To start the client: + +@example +[root@@client]# glusterfs -f /tmp/glusterfs-client.vol /mnt/glusterfs +@end example + +You should now be able to see the files under the server's @command{/export} directory +in the @command{/mnt/glusterfs} directory on the client. That's it; GlusterFS is now +working as a network file system. + +@node Concepts +@chapter Concepts + +@menu +* Filesystems in Userspace:: +* Translator:: +* Volume specification file:: +@end menu + +@node Filesystems in Userspace +@section Filesystems in Userspace + +A filesystem is usually implemented in kernel space. Kernel space +development is much harder than userspace development. @acronym{FUSE} +is a kernel module/library that allows us to write a filesystem +completely in userspace. + +@acronym{FUSE} consists of a kernel module which interacts with the userspace +implementation using a device file @code{/dev/fuse}. When a process +makes a syscall on a @acronym{FUSE} filesystem, @acronym{VFS} hands the request to the +@acronym{FUSE} module, which writes the request to @code{/dev/fuse}. The +userspace implementation polls @code{/dev/fuse}, and when a request arrives, +processes it and writes the result back to @code{/dev/fuse}. The kernel then +reads from the device file and returns the result to the user process. + +In case of GlusterFS, the userspace program is the GlusterFS client. +The control flow is shown in the diagram below. The GlusterFS client +services the request by sending it to the server, which in turn +hands it to the local @acronym{POSIX} filesystem. + +@center @image{fuse,44pc,,,.pdf} +@center Fig 1. Control flow in GlusterFS + +@node Translator +@section Translator + +The @emph{translator} is the most important concept in GlusterFS. In +fact, GlusterFS is nothing but a collection of translators working +together, forming a translator @emph{tree}. + +The idea of a translator is perhaps best understood using an +analogy. Consider the @acronym{VFS} in the Linux kernel. The +@acronym{VFS} abstracts the various filesystem implementations (such +as @acronym{EXT3}, ReiserFS, @acronym{XFS}, etc.) supported by the +kernel. When an application calls the kernel to perform an operation +on a file, the kernel passes the request on to the appropriate +filesystem implementation. + +For example, let's say there are two partitions on a Linux machine: +@command{/}, which is an @acronym{EXT3} partition, and @command{/usr}, +which is a ReiserFS partition. Now if an application wants to open a +file called, say, @command{/etc/fstab}, then the kernel will +internally pass the request to the @acronym{EXT3} implementation. If +on the other hand, an application wants to read a file called +@command{/usr/src/linux/CREDITS}, then the kernel will call upon the +ReiserFS implementation to do the job. + +The ``filesystem implementation'' objects are analogous to GlusterFS +translators. A GlusterFS translator implements all the filesystem +operations. 
Whereas in @acronym{VFS} there is a two-level tree (with +the kernel at the root and all the filesystem implementation as its +children), in GlusterFS there exists a more elaborate tree structure. + +We can now define translators more precisely. A GlusterFS translator +is a shared object (@command{.so}) that implements every filesystem +call. GlusterFS translators can be arranged in an arbitrary tree +structure (subject to constraints imposed by the translators). When +GlusterFS receives a filesystem call, it passes it on to the +translator at the root of the translator tree. The root translator may +in turn pass it on to any or all of its children, and so on, until the +leaf nodes are reached. The result of a filesystem call is +communicated in the reverse fashion, from the leaf nodes up to the +root node, and then on to the application. + +So what might a translator tree look like? + +@tex +\vfill +@end tex +@page + +@center @image{xlator,44pc,,,.pdf} +@center Fig 2. A sample translator tree + +The diagram depicts three servers and one GlusterFS client. It is important +to note that conceptually, the translator tree spans machine boundaries. +Thus, the client machine in the diagram, @command{10.0.0.1}, can access +the aggregated storage of the filesystems on the server machines @command{10.0.0.2}, +@command{10.0.0.3}, and @command{10.0.0.4}. The translator diagram will make more +sense once you've read the next chapter and understood the functions of the +various translators. + +@node Volume specification file +@section Volume specification file +The volume specification file describes the translator tree for both the +server and client programs. + +A volume specification file is a sequence of volume definitions. +The syntax of a volume definition is explained below: + +@cartouche +@example +@strong{volume} @emph{volume-name} + @strong{type} @emph{translator-name} + @strong{option} @emph{option-name} @emph{option-value} + @dots{} + @strong{subvolumes} @emph{subvolume1} @emph{subvolume2} @dots{} +@strong{end-volume} +@end example + +@dots{} +@end cartouche + +@table @asis +@item @emph{volume-name} + An identifier for the volume. This is just a human-readable name, +and can contain any alphanumeric character. For instance, ``storage-1'', ``colon-o'', +or ``forty-two''. + +@item @emph{translator-name} + Name of one of the available translators. Example: @command{protocol/client}, +@command{cluster/unify}. + +@item @emph{option-name} + Name of a valid option for the translator. + +@item @emph{option-value} + Value for the option. Everything following the ``option'' keyword to the end of the +line is considered the value; it is up to the translator to parse it. + +@item @emph{subvolume1}, @emph{subvolume2}, @dots{} + Volume names of sub-volumes. The sub-volumes must already have been defined earlier +in the file. +@end table + +There are a few rules you must follow when writing a volume specification file: + +@itemize +@item Everything following a `@command{#}' is considered a comment and is ignored. Blank lines are also ignored. +@item All names and keywords are case-sensitive. +@item The order of options inside a volume definition does not matter. +@item An option value may not span multiple lines. +@item If an option is not specified, it will assume its default value. +@item A sub-volume must have already been defined before it can be referenced. This means you have to write the specification file ``bottom-up'', starting from the leaf nodes of the translator tree and moving up to the root. 
+@end itemize + +A simple example volume specification file is shown below: + +@cartouche +@example +# This is a comment line +volume client + type protocol/client + option transport-type tcp + option remote-host localhost # Also a comment + option remote-subvolume brick +# The subvolumes line may be absent +end-volume + +volume iot + type performance/io-threads + option thread-count 4 + subvolumes client +end-volume + +volume wb + type performance/write-behind + subvolumes iot +end-volume +@end example +@end cartouche + +@node Translators +@chapter Translators + +@menu +* Storage Translators:: +* Client and Server Translators:: +* Clustering Translators:: +* Performance Translators:: +* Features Translators:: +* Miscellaneous Translators:: +@end menu + +This chapter documents all the available GlusterFS translators in detail. +Each translator section will show its name (for example, @command{cluster/unify}), +briefly describe its purpose and workings, and list every option accepted by +that translator and their meaning. + +@node Storage Translators +@section Storage Translators + +The storage translators form the ``backend'' for GlusterFS. Currently, +the only available storage translator is the @acronym{POSIX} +translator, which stores files on a normal @acronym{POSIX} +filesystem. A pleasant consequence of this is that your data will +still be accessible if GlusterFS crashes or cannot be started. + +Other storage backends are planned for the future. One of the possibilities is an +Amazon S3 translator. Amazon S3 is an unlimited online storage service accessible +through a web services @acronym{API}. The S3 translator will allow you to access +the storage as a normal @acronym{POSIX} filesystem. +@footnote{Some more discussion about this can be found at: + +http://developer.amazonwebservices.com/connect/message.jspa?messageID=52873} + +@menu +* POSIX:: +* BDB:: +@end menu + +@node POSIX +@subsection POSIX +@example +type storage/posix +@end example + +The @command{posix} translator uses a normal @acronym{POSIX} +filesystem as its ``backend'' to actually store files and +directories. This can be any filesystem that supports extended +attributes (@acronym{EXT3}, ReiserFS, @acronym{XFS}, ...). Extended +attributes are used by some translators to store metadata, for +example, by the replicate and stripe translators. See +@ref{Replicate} and @ref{Stripe}, respectively for details. + +@cartouche +@table @code +@item directory +The directory on the local filesystem which is to be used for storage. +@end table +@end cartouche + +@node BDB +@subsection BDB +@example +type storage/bdb +@end example + +The @command{BDB} translator uses a @acronym{Berkeley DB} database as its +``backend'' to actually store files as key-value pair in the database and +directories as regular @acronym{POSIX} directories. Note that @acronym{BDB} +does not provide extended attribute support for regular files. Do not use +@acronym{BDB} as storage translator while using any translator that demands +extended attributes on ``backend''. + +@cartouche +@table @code +@item directory +The directory on the local filesystem which is to be used for storage. +@item mode [cache|persistent] (cache) +When @acronym{BDB} is run in @command{cache} mode, recovery of back-end is not completely +guaranteed. @command{persistent} guarantees that @acronym{BDB} can recover back-end from +@acronym{Berkeley DB} even if GlusterFS crashes. 
+@item errfile +The path of the file to be used as @command{errfile} for @acronym{Berkeley DB} to report +detailed error messages, if any. Note that all the contents of this file will be written +by @acronym{Berkeley DB}, not GlusterFS. +@item logdir + + +@end table +@end cartouche + +@node Client and Server Translators, Clustering Translators, Storage Translators, Translators +@section Client and Server Translators + +The client and server translator enable GlusterFS to export a +translator tree over the network or access a remote GlusterFS +server. These two translators implement GlusterFS's network protocol. + +@menu +* Transport modules:: +* Client protocol:: +* Server protocol:: +@end menu + +@node Transport modules +@subsection Transport modules +The client and server translators are capable of using any of the +pluggable transport modules. Currently available transport modules are +@command{tcp}, which uses a @acronym{TCP} connection between client +and server to communicate; @command{ib-sdp}, which uses a +@acronym{TCP} connection over InfiniBand, and @command{ibverbs}, which +uses high-speed InfiniBand connections. + +Each transport module comes in two different versions, one to be used on +the server side and the other on the client side. + +@subsubsection TCP + +The @acronym{TCP} transport module uses a @acronym{TCP/IP} connection between +the server and the client. + +@example + option transport-type tcp +@end example + +The @acronym{TCP} client module accepts the following options: + +@cartouche +@table @code +@item non-blocking-connect [no|off|on|yes] (on) +Whether to make the connection attempt asynchronous. +@item remote-port (24007) +Server port to connect to. +@cindex DNS round robin +@item remote-host * +Hostname or @acronym{IP} address of the server. If the host name resolves to +multiple IP addresses, all of them will be tried in a round-robin fashion. This +feature can be used to implement fail-over. +@end table +@end cartouche + +The @acronym{TCP} server module accepts the following options: + +@cartouche +@table @code +@item bind-address
(0.0.0.0)
+The local interface on which the server should listen to requests. Default is to
+listen on all interfaces.
+@item listen-port (24007)
+The local port to listen on.
+@end table
+@end cartouche
+
+@subsubsection IB-SDP
+@example
+  option transport-type ib-sdp
+@end example
+
+The kernel implements a socket interface for InfiniBand hardware; SDP runs
+over ib-verbs. This module accepts the same options as @command{tcp}.
+
+@subsubsection ib-verbs
+
+@example
+  option transport-type ib-verbs
+@end example
+
+@cindex infiniband transport
+
+InfiniBand is a scalable switched fabric interconnect mechanism
+primarily used in high-performance computing. InfiniBand can deliver
+data throughput of the order of 10 Gbit/s, with latencies of 4-5 ms.
+
+The @command{ib-verbs} transport accesses the InfiniBand hardware through
+the ``verbs'' @acronym{API}, which is the lowest level of software access possible
+and which gives the highest performance. On InfiniBand hardware, it is always
+best to use @command{ib-verbs}. Use @command{ib-sdp} only if you cannot get
+@command{ib-verbs} working for some reason.
+
+The @command{ib-verbs} client module accepts the following options:
+
+@cartouche
+@table @code
+@item non-blocking-connect [no|off|on|yes] (on)
+Whether to make the connection attempt asynchronous.
+@item remote-port (24007)
+Server port to connect to.
+@cindex DNS round robin
+@item remote-host *
+Hostname or @acronym{IP} address of the server. If the host name resolves to
+multiple IP addresses, all of them will be tried in a round-robin fashion. This
+feature can be used to implement fail-over.
+@end table
+@end cartouche
+
+The @command{ib-verbs} server module accepts the following options:
+
+@cartouche
+@table @code
+@item bind-address
(0.0.0.0)
+The local interface on which the server should listen to requests. Default is to
+listen on all interfaces.
+@item listen-port (24007)
+The local port to listen on.
+@end table
+@end cartouche
+
+The following options are common to both the client and server modules:
+
+If you are familiar with InfiniBand jargon,
+the mode used by GlusterFS is ``reliable connection-oriented channel transfer''.
+
+@cartouche
+@table @code
+@item ib-verbs-work-request-send-count (64)
+Length of the send queue in datagrams. [Reason to increase/decrease?]
+
+@item ib-verbs-work-request-recv-count (64)
+Length of the receive queue in datagrams. [Reason to increase/decrease?]
+
+@item ib-verbs-work-request-send-size (128KB)
+Size of each datagram that is sent. [Reason to increase/decrease?]
+
+@item ib-verbs-work-request-recv-size (128KB)
+Size of each datagram that is received. [Reason to increase/decrease?]
+
+@item ib-verbs-port (1)
+Port number for ib-verbs.
+
+@item ib-verbs-mtu [256|512|1024|2048|4096] (2048)
+The Maximum Transmission Unit. [Reason to increase/decrease?]
+
+@item ib-verbs-device-name (first device in the list)
+InfiniBand device to be used.
+@end table
+@end cartouche
+
+For maximum performance, you should ensure that the send/receive counts on both
+the client and server are the same.
+
+ib-verbs is preferred over ib-sdp.
+
+@node Client protocol
+@subsection Client
+@example
+type protocol/client
+@end example
+
+The client translator enables the GlusterFS client to access a remote server's
+translator tree.
+
+@cartouche
+@table @code
+
+@item transport-type [tcp,ib-sdp,ib-verbs] (tcp)
+The transport type to use. You should use the client versions of all the
+transport modules (@command{tcp}, @command{ib-sdp},
+@command{ib-verbs}).
+@item remote-subvolume *
+The name of the volume on the remote host to attach to. Note that
+this is @emph{not} the name of the @command{protocol/server} volume on the
+server. It should be any volume under the server.
+@item transport-timeout (120 seconds)
+Inactivity timeout. If a reply is expected and no activity takes place
+on the connection within this time, the transport connection will be
+broken, and a new connection will be attempted.
+@end table
+@end cartouche
+
+@node Server protocol
+@subsection Server
+@example
+type protocol/server
+@end example
+
+The server translator exports a translator tree and makes it accessible to
+remote GlusterFS clients.
+
+@cartouche
+@table @code
+@item client-volume-filename (/glusterfs-client.vol)
+The volume specification file to use for the client. This is the file the
+client will receive when it is invoked with the @command{--server} option
+(@ref{Client}).
+
+@item transport-type [tcp,ib-verbs,ib-sdp] (tcp)
+The transport to use. You should use the server versions of all the transport
+modules (@command{tcp}, @command{ib-sdp}, @command{ib-verbs}).
+
+@item auth.addr..allow
+IP addresses of the clients that are allowed to attach to the specified volume.
+This can be a wildcard. For example, a wildcard of the form @command{192.168.*.*}
+allows any host in the @command{192.168.x.x} subnet to connect to the server.
+
+@end table
+@end cartouche
+
+@node Clustering Translators
+@section Clustering Translators
+
+The clustering translators are the most important GlusterFS
+translators, since it is these that make GlusterFS a cluster
+filesystem.
These translators together enable GlusterFS to access an +arbitrarily large amount of storage, and provide @acronym{RAID}-like +redundancy and distribution over the entire cluster. + +There are three clustering translators: @strong{unify}, @strong{replicate}, +and @strong{stripe}. The unify translator aggregates storage from +many server nodes. The replicate translator provides file replication. The stripe +translator allows a file to be spread across many server nodes. The following sections +look at each of these translators in detail. + +@menu +* Unify:: +* Replicate:: +* Stripe:: +@end menu + +@node Unify +@subsection Unify +@cindex unify (translator) +@cindex scheduler (unify) +@example +type cluster/unify +@end example + +The unify translator presents a `unified' view of all its sub-volumes. That is, +it makes the union of all its sub-volumes appear as a single volume. It is the +unify translator that gives GlusterFS the ability to access an arbitrarily +large amount of storage. + +For unify to work correctly, certain invariants need to be maintained across +the entire network. These are: + +@cindex unify invariants +@itemize +@item The directory structure of all the sub-volumes must be identical. +@item A particular file can exist on only one of the sub-volumes. Phrasing it in another way, a pathname such as @command{/home/calvin/homework.txt}) is unique across the entire cluster. +@end itemize + +@tex +\vfill +@end tex +@page + +@center @image{unify,44pc,,,.pdf} + +Looking at the second requirement, you might wonder how one can +accomplish storing redundant copies of a file, if no file can exist +multiple times. To answer, we must remember that these invariants are +from @emph{unify's perspective}. A translator such as replicate at a lower +level in the translator tree than unify may subvert this picture. + +The first invariant might seem quite tedious to ensure. We shall see +later that this is not so, since unify's @emph{self-heal} mechanism +takes care of maintaining it. + +The second invariant implies that unify needs some way to decide which file goes where. +Unify makes use of @emph{scheduler} modules for this purpose. + +When a file needs to be created, unify's scheduler decides upon the +sub-volume to be used to store the file. There are many schedulers +available, each using a different algorithm and suitable for different +purposes. + +The various schedulers are described in detail in the sections that follow. + +@subsubsection ALU +@cindex alu (scheduler) + +@example + option scheduler alu +@end example + +ALU stands for "Adaptive Least Usage". It is the most advanced +scheduler available in GlusterFS. It balances the load across volumes +taking several factors in account. It adapts itself to changing I/O +patterns according to its configuration. When properly configured, it +can eliminate the need for regular tuning of the filesystem to keep +volume load nicely balanced. + +The ALU scheduler is composed of multiple least-usage +sub-schedulers. Each sub-scheduler keeps track of a certain type of +load, for each of the sub-volumes, getting statistics from +the sub-volumes themselves. The sub-schedulers are these: + +@itemize +@item disk-usage: The used and free disk space on the volume. + +@item read-usage: The amount of reading done from this volume. + +@item write-usage: The amount of writing done to this volume. + +@item open-files-usage: The number of files currently open from this volume. + +@item disk-speed-usage: The speed at which the disks are spinning. 
This is a constant value and therefore not very useful. +@end itemize + +The ALU scheduler needs to know which of these sub-schedulers to use, +and in which order to evaluate them. This is done through the +@command{option alu.order} configuration directive. + +Each sub-scheduler needs to know two things: when to kick in (the +entry-threshold), and how long to stay in control (the +exit-threshold). For example: when unifying three disks of 100GB, +keeping an exact balance of disk-usage is not necesary. Instead, there +could be a 1GB margin, which can be used to nicely balance other +factors, such as read-usage. The disk-usage scheduler can be told to +kick in only when a certain threshold of discrepancy is passed, such +as 1GB. When it assumes control under this condition, it will write +all subsequent data to the least-used volume. If it is doing so, it is +unwise to stop right after the values are below the entry-threshold +again, since that would make it very likely that the situation will +occur again very soon. Such a situation would cause the ALU to spend +most of its time disk-usage scheduling, which is unfair to the other +sub-schedulers. The exit-threshold therefore defines the amount of +data that needs to be written to the least-used disk, before control +is relinquished again. + +In addition to the sub-schedulers, the ALU scheduler also has "limits" +options. These can stop the creation of new files on a volume once +values drop below a certain threshold. For example, setting +@command{option alu.limits.min-free-disk 5GB} will stop the scheduling +of files to volumes that have less than 5GB of free disk space, +leaving the files on that disk some room to grow. + +The actual values you assign to the thresholds for sub-schedulers and +limits depend on your situation. If you have fast-growing files, +you'll want to stop file-creation on a disk much earlier than when +hardly any of your files are growing. If you care less about +disk-usage balance than about read-usage balance, you'll want a bigger +disk-usage scheduler entry-threshold and a smaller read-usage +scheduler entry-threshold. + +For thresholds defining a size, values specifying "KB", "MB" and "GB" +are allowed. For example: @command{option alu.limits.min-free-disk 5GB}. + +@cartouche +@table @code +@item alu.order * ("disk-usage:write-usage:read-usage:open-files-usage:disk-speed") +@item alu.disk-usage.entry-threshold (1GB) +@item alu.disk-usage.exit-threshold (512MB) +@item alu.write-usage.entry-threshold <%> (25) +@item alu.write-usage.exit-threshold <%> (5) +@item alu.read-usage.entry-threshold <%> (25) +@item alu.read-usage.exit-threshold <%> (5) +@item alu.open-files-usage.entry-threshold (1000) +@item alu.open-files-usage.exit-threshold (100) +@item alu.limits.min-free-disk <%> +@item alu.limits.max-open-files +@end table +@end cartouche + +@subsubsection Round Robin (RR) +@cindex rr (scheduler) + +@example + option scheduler rr +@end example + +Round-Robin (RR) scheduler creates files in a round-robin +fashion. Each client will have its own round-robin loop. When your +files are mostly similar in size and I/O access pattern, this +scheduler is a good choice. RR scheduler checks for free disk space +on the server before scheduling, so you can know when to add +another server node. The default value of min-free-disk is 5% and is +checked on file creation calls, with atleast 10 seconds (by default) +elapsing between two checks. 
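+
+For illustration, a minimal unify volume using the rr scheduler might be
+declared as follows; the volume names (@command{unify0}, @command{brick1},
+@command{brick2}, @command{brick-ns}) are placeholders rather than required
+names, and the namespace option is described in the Namespace section below.
+
+@example
+volume unify0
+  type cluster/unify
+  option scheduler rr
+  option namespace brick-ns
+  subvolumes brick1 brick2
+end-volume
+@end example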
+
+Options:
+@cartouche
+@table @code
+@item rr.limits.min-free-disk <%> (5)
+Minimum free disk space a node must have for RR to schedule a file to it.
+@item rr.refresh-interval (10 seconds)
+Time between two successive free disk space checks.
+@end table
+@end cartouche
+
+@subsubsection Random
+@cindex random (scheduler)
+
+@example
+ option scheduler random
+@end example
+
+The random scheduler schedules file creation randomly among its child nodes.
+Like the round-robin scheduler, it also checks for a minimum amount of free disk
+space before scheduling a file to a node.
+
+@cartouche
+@table @code
+@item random.limits.min-free-disk <%> (5)
+Minimum free disk space a node must have for random to schedule a file to it.
+@item random.refresh-interval (10 seconds)
+Time between two successive free disk space checks.
+@end table
+@end cartouche
+
+@subsubsection NUFA
+@cindex nufa (scheduler)
+
+@example
+ option scheduler nufa
+@end example
+
+It is common in many GlusterFS computing environments for all deployed
+machines to act as both servers and clients. For example, a
+research lab may have 40 workstations each with its own storage. All
+of these workstations might act as servers exporting a volume as well
+as clients accessing the entire cluster's storage. In such a
+situation, it makes sense to store locally created files on the local
+workstation itself (assuming files are accessed most by the
+workstation that created them). The Non-Uniform File Allocation (@acronym{NUFA})
+scheduler accomplishes that.
+
+@acronym{NUFA} gives the local system first priority for file creation
+over other nodes. If the local volume does not have more free disk space
+than a specified amount (5% by default) then @acronym{NUFA} schedules files
+among the other child volumes in a round-robin fashion.
+
+@acronym{NUFA} is named after the similar strategy used for memory access,
+@acronym{NUMA}@footnote{Non-Uniform Memory Access:
+@indicateurl{http://en.wikipedia.org/wiki/Non-Uniform_Memory_Access}}.
+
+@cartouche
+@table @code
+@item nufa.limits.min-free-disk <%> (5)
+Minimum disk space that must be free (local or remote) for @acronym{NUFA} to schedule a
+file to it.
+@item nufa.refresh-interval (10 seconds)
+Time between two successive free disk space checks.
+@item nufa.local-volume-name
+The name of the volume corresponding to the local system. This volume must be
+one of the children of the unify volume. This option is mandatory.
+@end table
+@end cartouche
+
+@cindex namespace
+@subsubsection Namespace
+Unify requires a namespace volume because:
+@itemize
+@item it provides persistent inode numbers, and
+@item a file's entry remains visible even when the node holding its data is down.
+@end itemize
+
+Files on the namespace volume are simply touched (created empty); the namespace
+entry is checked on every lookup.
+
+@cartouche
+@table @code
+@item namespace *
+Name of the namespace volume (which should be one of the unify volume's children).
+@item self-heal [on|off] (on)
+Enable/disable self-heal. Unless you know what you are doing, do not disable self-heal.
+@end table
+@end cartouche
+
+@cindex self heal (unify)
+@subsubsection Self Heal
+ * When a 'lookup()/stat()' call is made on a directory for the first
+time, a self-heal call is made, which checks the consistency of
+its child nodes. If an entry is present on a storage node but not in the
+namespace, that entry is created in the namespace, and vice versa. A
+writedir() API was introduced for this purpose. Self-heal also
+checks for permission and uid/gid consistency.
+
+ * This check is also done when a server goes down and comes back up.
+ + * If one starts with an empty namespace export, but has data in +storage nodes, a 'find .>/dev/null' or 'ls -lR >/dev/null' should help +to build namespace in one shot. Even otherwise, namespace is built on +demand when a file is looked up for the first time. + +NOTE: There are some issues (Kernel 'Oops' msgs) seen with fuse-2.6.3, +when someone deletes namespace in backend, when glusterfs is +running. But with fuse-2.6.5, this issue is not there. + +@node Replicate +@subsection Replicate (formerly AFR) +@cindex Replicate +@example +type cluster/replicate +@end example + +Replicate provides @acronym{RAID}-1 like functionality for +GlusterFS. Replicate replicates files and directories across the +subvolumes. Hence if Replicate has four subvolumes, there will be +four copies of all files and directories. Replicate provides +high-availability, i.e., in case one of the subvolumes go down +(e. g. server crash, network disconnection) Replicate will still +service the requests using the redundant copies. + +Replicate also provides self-heal functionality, i.e., in case the +crashed servers come up, the outdated files and directories will be +updated with the latest versions. Replicate uses extended +attributes of the backend file system to track the versioning of files +and directories and provide the self-heal feature. + +@example +volume replicate-example + type cluster/replicate + subvolumes brick1 brick2 brick3 +end-volume +@end example + +This sample configuration will replicate all directories and files on +brick1, brick2 and brick3. + +All the read operations happen from the first alive child. If all the +three sub-volumes are up, reads will be done from brick1; if brick1 is +down read will be done from brick2. In case read() was being done on +brick1 and it goes down, replicate transparently falls back to +brick2. + +The next release of GlusterFS will add the following features: +@itemize +@item Ability to specify the sub-volume from which read operations are to be done (this will help users who have one of the sub-volumes as a local storage volume). +@item Allow scheduling of read operations amongst the sub-volumes in a round-robin fashion. +@end itemize + +The order of the subvolumes list should be same across all the 'replicate's as +they will be used for locking purposes. + +@cindex self heal (replicate) +@subsubsection Self Heal +Replicate has self-heal feature, which updates the outdated file and +directory copies by the most recent versions. For example consider the +following config: + +@example +volume replicate-example + type cluster/replicate + subvolumes brick1 brick2 +end-volume +@end example + +@subsubsection File self-heal + +Now if we create a file foo.txt on replicate-example, the file will be created +on brick1 and brick2. The file will have two extended attributes associated +with it in the backend filesystem. One is trusted.afr.createtime and the +other is trusted.afr.version. The trusted.afr.createtime xattr has the +create time (in terms of seconds since epoch) and trusted.afr.version +is a number that is incremented each time a file is modified. This increment +happens during close (incase any write was done before close). + +If brick1 goes down, we edit foo.txt the version gets incremented. Now +the brick1 comes back up, when we open() on foo.txt replicate will check if +their versions are same. If they are not same, the outdated copy is +replaced by the latest copy and its version is updated. 
After the sync +the open() proceeds in the usual manner and the application calling open() +can continue on its access to the file. + +If brick1 goes down, we delete foo.txt and create a file with the same +name again i.e foo.txt. Now brick1 comes back up, clearly there is a +chance that the version on brick1 being more than the version on brick2, +this is where createtime extended attribute helps in deciding which +the outdated copy is. Hence we need to consider both createtime and +version to decide on the latest copy. + +The version attribute is incremented during the close() call. Version +will not be incremented in case there was no write() done. In case the +fd that the close() gets was got by create() call, we also create +the createtime extended attribute. + +@subsubsection Directory self-heal + +Suppose brick1 goes down, we delete foo.txt, brick1 comes back up, now +we should not create foo.txt on brick2 but we should delete foo.txt +on brick1. We handle this situation by having the createtime and version +attribute on the directory similar to the file. when lookup() is done +on the directory, we compare the createtime/version attributes of the +copies and see which files needs to be deleted and delete those files +and update the extended attributes of the outdated directory copy. +Each time a directory is modified (a file or a subdirectory is created +or deleted inside the directory) and one of the subvols is down, we +increment the directory's version. + +lookup() is a call initiated by the kernel on a file or directory +just before any access to that file or directory. In glusterfs, by +default, lookup() will not be called in case it was called in the +past one second on that particular file or directory. + +The extended attributes can be seen in the backend filesystem using +the @command{getfattr} command. (@command{getfattr -n trusted.afr.version }) + +@cartouche +@table @code +@item debug [on|off] (off) +@item self-heal [on|off] (on) +@item replicate (*:1) +@item lock-node (first child is used by default) +@end table +@end cartouche + +@node Stripe +@subsection Stripe +@cindex stripe (translator) +@example +type cluster/stripe +@end example + +The stripe translator distributes the contents of a file over its +sub-volumes. It does this by creating a file equal in size to the +total size of the file on each of its sub-volumes. It then writes only +a part of the file to each sub-volume, leaving the rest of it empty. +These empty regions are called `holes' in Unix terminology. The holes +do not consume any disk space. + +The diagram below makes this clear. + +@center @image{stripe,44pc,,,.pdf} + +You can configure stripe so that only filenames matching a pattern +are striped. You can also configure the size of the data to be stored +on each sub-volume. + +@cartouche +@table @code +@item block-size : (*:0 no striping) +Distribute files matching @command{} over the sub-volumes, +storing at least @command{} on each sub-volume. For example, + +@example + option block-size *.mpg:1M +@end example + +distributes all files ending in @command{.mpg}, storing at least 1 MB on +each sub-volume. + +Any number of @command{block-size} option lines may be present, specifying +different sizes for different file name patterns. 
+@end table +@end cartouche + +@node Performance Translators +@section Performance Translators + +@menu +* Read Ahead:: +* Write Behind:: +* IO Threads:: +* IO Cache:: +* Booster:: +@end menu + +@node Read Ahead +@subsection Read Ahead +@cindex read-ahead (translator) +@example +type performance/read-ahead +@end example + +The read-ahead translator pre-fetches data in advance on every read. +This benefits applications that mostly process files in sequential order, +since the next block of data will already be available by the time the +application is done with the current one. + +Additionally, the read-ahead translator also behaves as a read-aggregator. +Many small read operations are combined and issued as fewer, larger read +requests to the server. + +Read-ahead deals in ``pages'' as the unit of data fetched. The page size +is configurable, as is the ``page count'', which is the number of pages +that are pre-fetched. + +Read-ahead is best used with InfiniBand (using the ib-verbs transport). +On FastEthernet and Gigabit Ethernet networks, +GlusterFS can achieve the link-maximum throughput even without +read-ahead, making it quite superflous. + +Note that read-ahead only happens if the reads are perfectly +sequential. If your application accesses data in a random fashion, +using read-ahead might actually lead to a performance loss, since +read-ahead will pointlessly fetch pages which won't be used by the +application. + +@cartouche +Options: +@table @code +@item page-size (256KB) +The unit of data that is pre-fetched. +@item page-count (2) +The number of pages that are pre-fetched. +@item force-atime-update [on|off|yes|no] (off|no) +Whether to force an access time (atime) update on the file on every read. Without +this, the atime will be slightly imprecise, as it will reflect the time when +the read-ahead translator read the data, not when the application actually read it. +@end table +@end cartouche + +@node Write Behind +@subsection Write Behind +@cindex write-behind (translator) +@example +type performance/write-behind +@end example + +The write-behind translator improves the latency of a write operation. +It does this by relegating the write operation to the background and +returning to the application even as the write is in progress. Using the +write-behind translator, successive write requests can be pipelined. +This mode of write-behind operation is best used on the client side, to +enable decreased write latency for the application. + +The write-behind translator can also aggregate write requests. If the +@command{aggregate-size} option is specified, then successive writes upto that +size are accumulated and written in a single operation. This mode of operation +is best used on the server side, as this will decrease the disk's head movement +when multiple files are being written to in parallel. + +The @command{aggregate-size} option has a default value of 128KB. Although +this works well for most users, you should always experiment with different values +to determine the one that will deliver maximum performance. This is because the +performance of write-behind depends on your interconnect, size of RAM, and the +work load. 
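+
+As a sketch, a client-side write-behind volume wrapping an existing client
+volume might look like this; @command{wb0} and @command{client0} are
+placeholder names, and the aggregate-size shown is simply the documented
+default.
+
+@example
+volume wb0
+  type performance/write-behind
+  option aggregate-size 128KB
+  subvolumes client0
+end-volume
+@end example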
+ +@cartouche +@table @code +@item aggregate-size (128KB) +Amount of data to accumulate before doing a write +@item flush-behind [on|yes|off|no] (off|no) + +@end table +@end cartouche + +@node IO Threads +@subsection IO Threads +@cindex io-threads (translator) +@example +type performance/io-threads +@end example + +The IO threads translator is intended to increase the responsiveness +of the server to metadata operations by doing file I/O (read, write) +in a background thread. Since the GlusterFS server is +single-threaded, using the IO threads translator can significantly +improve performance. This translator is best used on the server side, +loaded just below the server protocol translator. + +IO threads operates by handing out read and write requests to a separate thread. +The total number of threads in existence at a time is constant, and configurable. + +@cartouche +@table @code +@item thread-count (1) +Number of threads to use. +@end table +@end cartouche + +@node IO Cache +@subsection IO Cache +@cindex io-cache (translator) +@example +type performance/io-cache +@end example + +The IO cache translator caches data that has been read. This is useful +if many applications read the same data multiple times, and if reads +are much more frequent than writes (for example, IO caching may be +useful in a web hosting environment, where most clients will simply +read some files and only a few will write to them). + +The IO cache translator reads data from its child in @command{page-size} chunks. +It caches data upto @command{cache-size} bytes. The cache is maintained as +a prioritized least-recently-used (@acronym{LRU}) list, with priorities determined +by user-specified patterns to match filenames. + +When the IO cache translator detects a write operation, the +cache for that file is flushed. + +The IO cache translator periodically verifies the consistency of +cached data, using the modification times on the files. The verification timeout +is configurable. + +@cartouche +@table @code +@item page-size (128KB) +Size of a page. +@item cache-size (n) (32MB) +Total amount of data to be cached. +@item force-revalidate-timeout (1) +Timeout to force a cache consistency verification, in seconds. +@item priority (*:0) +Filename patterns listed in order of priority. +@end table +@end cartouche + +@node Booster +@subsection Booster +@cindex booster +@example + type performance/booster +@end example + +The booster translator gives applications a faster path to communicate +read and write requests to GlusterFS. Normally, all requests to GlusterFS from +applications go through FUSE, as indicated in @ref{Filesystems in Userspace}. +Using the booster translator in conjunction with the GlusterFS booster shared +library, an application can bypass the FUSE path and send read/write requests +directly to the GlusterFS client process. + +The booster mechanism consists of two parts: the booster translator, +and the booster shared library. The booster translator is meant to be +loaded on the client side, usually at the root of the translator tree. +The booster shared library should be @command{LD_PRELOAD}ed with the +application. + +The booster translator when loaded opens a Unix domain socket and +listens for read/write requests on it. The booster shared library +intercepts read and write system calls and sends the requests to the +GlusterFS process directly using the Unix domain socket, bypassing FUSE. +This leads to superior performance. 
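+
+For illustration, the booster translator might be declared near the top of
+the client volume specification roughly as follows; @command{booster0} and
+@command{client0} are placeholder names, not a prescribed layout.
+
+@example
+volume booster0
+  type performance/booster
+  subvolumes client0
+end-volume
+@end example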
+ +Once you've loaded the booster translator in your volume specification file, you +can start your application as: + +@example + $ LD_PRELOAD=/usr/local/bin/glusterfs-booster.so your_app +@end example + +The booster translator accepts no options. + +@node Features Translators +@section Features Translators + +@menu +* POSIX Locks:: +* Fixed ID:: +@end menu + +@node POSIX Locks +@subsection POSIX Locks +@cindex record locking +@cindex fcntl +@cindex posix-locks (translator) +@example +type features/posix-locks +@end example + +This translator provides storage independent POSIX record locking +support (@command{fcntl} locking). Typically you'll want to load this on the +server side, just above the @acronym{POSIX} storage translator. Using this +translator you can get both advisory locking and mandatory locking +support. It also handles @command{flock()} locks properly. + +Caveat: Consider a file that does not have its mandatory locking bits +(+setgid, -group execution) turned on. Assume that this file is now +opened by a process on a client that has the write-behind xlator +loaded. The write-behind xlator does not cache anything for files +which have mandatory locking enabled, to avoid incoherence. Let's say +that mandatory locking is now enabled on this file through another +client. The former client will not know about this change, and +write-behind may erroneously report a write as being successful when +in fact it would fail due to the region it is writing to being locked. + +There seems to be no easy way to fix this. To work around this +problem, it is recommended that you never enable the mandatory bits on +a file while it is open. + +@cartouche +@table @code +@item mandatory [on|off] (on) +Turns mandatory locking on. +@end table +@end cartouche + +@node Fixed ID +@subsection Fixed ID +@cindex fixed-id (translator) +@example +type features/fixed-id +@end example + +The fixed ID translator makes all filesystem requests from the client +to appear to be coming from a fixed, specified +@acronym{UID}/@acronym{GID}, regardless of which user actually +initiated the request. + +@cartouche +@table @code +@item fixed-uid [if not set, not used] +The @acronym{UID} to send to the server +@item fixed-gid [if not set, not used] +The @acronym{GID} to send to the server +@end table +@end cartouche + +@node Miscellaneous Translators +@section Miscellaneous Translators + +@menu +* ROT-13:: +* Trace:: +@end menu + +@node ROT-13 +@subsection ROT-13 +@cindex rot-13 (translator) +@example +type encryption/rot-13 +@end example + +@acronym{ROT-13} is a toy translator that can ``encrypt'' and ``decrypt'' file +contents using the @acronym{ROT-13} algorithm. @acronym{ROT-13} is a trivial +algorithm that rotates each alphabet by thirteen places. Thus, 'A' becomes 'N', +'B' becomes 'O', and 'Z' becomes 'M'. + +It goes without saying that you shouldn't use this translator if you need +@emph{real} encryption (a future release of GlusterFS will have real encryption +translators). + +@cartouche +@table @code +@item encrypt-write [on|off] (on) +Whether to encrypt on write +@item decrypt-read [on|off] (on) +Whether to decrypt on read +@end table +@end cartouche + +@node Trace +@subsection Trace +@cindex trace (translator) +@example +type debug/trace +@end example + +The trace translator is intended for debugging purposes. When loaded, it +logs all the system calls received by the server or client (wherever +trace is loaded), their arguments, and the results. 
You must use a GlusterFS log
+level of DEBUG (See @ref{Running GlusterFS}) for trace to work.
+
+Sample trace output (lines have been wrapped for readability):
+@cartouche
+@example
+2007-10-30 00:08:58 D [trace.c:1579:trace_opendir] trace: callid: 68
+(*this=0x8059e40, loc=0x8091984 @{path=/iozone3_283, inode=0x8091f00@},
+ fd=0x8091d50)
+
+2007-10-30 00:08:58 D [trace.c:630:trace_opendir_cbk] trace:
+(*this=0x8059e40, op_ret=4, op_errno=1, fd=0x8091d50)
+
+2007-10-30 00:08:58 D [trace.c:1602:trace_readdir] trace: callid: 69
+(*this=0x8059e40, size=4096, offset=0 fd=0x8091d50)
+
+2007-10-30 00:08:58 D [trace.c:215:trace_readdir_cbk] trace:
+(*this=0x8059e40, op_ret=0, op_errno=0, count=4)
+
+2007-10-30 00:08:58 D [trace.c:1624:trace_closedir] trace: callid: 71
+(*this=0x8059e40, *fd=0x8091d50)
+
+2007-10-30 00:08:58 D [trace.c:809:trace_closedir_cbk] trace:
+(*this=0x8059e40, op_ret=0, op_errno=1)
+@end example
+@end cartouche
+
+@node Usage Scenarios
+@chapter Usage Scenarios
+
+@section Advanced Striping
+
+This section is based on the Advanced Striping tutorial written by
+Anand Avati on the GlusterFS wiki
+@footnote{http://gluster.org/docs/index.php/Mixing_Striped_and_Regular_Files}.
+
+@subsection Mixed Storage Requirements
+
+There are two ways of scheduling I/O: at the file level (using the unify
+translator) and at the block level (using the stripe translator). Striped I/O
+is good for files that are potentially large and require high parallel
+throughput (for example, a single file of 400GB being accessed by hundreds or
+thousands of systems simultaneously and randomly). For most cases, file-level
+scheduling works best.
+
+In the real world, it is desirable to mix file-level and block-level
+scheduling on a single storage volume. Alternatively, users can choose
+to have two separate volumes and hence two mount points, but the
+applications may demand a single storage system to host both.
+
+This document explains how to mix file-level scheduling with stripe.
+
+@subsection Configuration Brief
+
+This setup demonstrates how to configure the unify translator with an
+appropriate I/O scheduler for file-level scheduling, and stripe only for
+files matching certain patterns. This way, GlusterFS chooses the appropriate
+I/O profile and knows how to handle both types of data efficiently.
+
+A simple technique to achieve this effect is to create a stripe set of
+unify and stripe blocks, where unify is the first sub-volume. Files that
+do not match the stripe policy are passed on to the first (unify)
+sub-volume and are in turn scheduled across the cluster using its file-level
+I/O scheduler.
+
+@image{advanced-stripe,44pc,,,.pdf}
+
+@subsection Preparing the GlusterFS Environment
+
+Create the directories /export/for-namespace, /export/for-unify and
+/export/for-stripe on all the storage bricks.
+
+ Place the following server and client volume spec files under
+/etc/glusterfs (or the appropriate installed path) and replace the IP
+addresses / access control fields to match your environment.
+
+@cartouche
+@example
+ ## file: /etc/glusterfs/glusterfsd.vol
+ volume posix-unify
+   type storage/posix
+   option directory /export/for-unify
+ end-volume
+
+ volume posix-stripe
+   type storage/posix
+   option directory /export/for-stripe
+ end-volume
+
+ volume posix-namespace
+   type storage/posix
+   option directory /export/for-namespace
+ end-volume
+
+ volume server
+   type protocol/server
+   option transport-type tcp
+   option auth.addr.posix-unify.allow 192.168.1.*
+   option auth.addr.posix-stripe.allow 192.168.1.*
+   option auth.addr.posix-namespace.allow 192.168.1.*
+   subvolumes posix-unify posix-stripe posix-namespace
+ end-volume
+@end example
+@end cartouche
+
+@cartouche
+@example
+ ## file: /etc/glusterfs/glusterfs.vol
+ volume client-namespace
+   type protocol/client
+   option transport-type tcp
+   option remote-host 192.168.1.1
+   option remote-subvolume posix-namespace
+ end-volume
+
+ volume client-unify-1
+   type protocol/client
+   option transport-type tcp
+   option remote-host 192.168.1.1
+   option remote-subvolume posix-unify
+ end-volume
+
+ volume client-unify-2
+   type protocol/client
+   option transport-type tcp
+   option remote-host 192.168.1.2
+   option remote-subvolume posix-unify
+ end-volume
+
+ volume client-unify-3
+   type protocol/client
+   option transport-type tcp
+   option remote-host 192.168.1.3
+   option remote-subvolume posix-unify
+ end-volume
+
+ volume client-unify-4
+   type protocol/client
+   option transport-type tcp
+   option remote-host 192.168.1.4
+   option remote-subvolume posix-unify
+ end-volume
+
+ volume client-stripe-1
+   type protocol/client
+   option transport-type tcp
+   option remote-host 192.168.1.1
+   option remote-subvolume posix-stripe
+ end-volume
+
+ volume client-stripe-2
+   type protocol/client
+   option transport-type tcp
+   option remote-host 192.168.1.2
+   option remote-subvolume posix-stripe
+ end-volume
+
+ volume client-stripe-3
+   type protocol/client
+   option transport-type tcp
+   option remote-host 192.168.1.3
+   option remote-subvolume posix-stripe
+ end-volume
+
+ volume client-stripe-4
+   type protocol/client
+   option transport-type tcp
+   option remote-host 192.168.1.4
+   option remote-subvolume posix-stripe
+ end-volume
+
+ volume unify
+   type cluster/unify
+   option scheduler rr
+   option namespace client-namespace
+   subvolumes client-unify-1 client-unify-2 client-unify-3 client-unify-4
+ end-volume
+
+ volume stripe
+   type cluster/stripe
+   option block-size *.img:2MB # All files ending with .img are striped with 2MB stripe block size.
+   subvolumes unify client-stripe-1 client-stripe-2 client-stripe-3 client-stripe-4
+ end-volume
+@end example
+@end cartouche
+
+
+Bring up the Storage
+
+Starting GlusterFS Server: If you have installed from a binary
+package, you can start the service through the init.d startup script. If
+not:
+
+@example
+[root@@server]# glusterfsd
+@end example
+
+Mounting GlusterFS Volumes:
+
+@example
+[root@@client]# glusterfs -s [BRICK-IP-ADDRESS] /mnt/cluster
+@end example
+
+Improving upon this Setup
+
+The InfiniBand Verbs RDMA transport is much faster than the TCP/IP GigE
+transport.
+
+Use of performance translators such as read-ahead, write-behind,
+io-cache, io-threads, and booster is recommended.
+
+Replace the round-robin (rr) scheduler with ALU to handle more dynamic
+storage environments.
+
+@node Troubleshooting
+@chapter Troubleshooting
+
+This chapter is a general troubleshooting guide to GlusterFS. It lists
+common GlusterFS server and client error messages and debugging hints, and
+concludes with the suggested procedure for reporting bugs in GlusterFS.
+ +@section GlusterFS error messages + +@subsection Server errors + +@example +glusterfsd: FATAL: could not open specfile: +'/etc/glusterfs/glusterfsd.vol' +@end example + +The GlusterFS server expects the volume specification file to be +at @command{/etc/glusterfs/glusterfsd.vol}. The example +specification file will be installed as +@command{/etc/glusterfs/glusterfsd.vol.sample}. You need to edit +it and rename it, or provide a different specification file using +the @command{--spec-file} command line option (See @ref{Server}). + +@vskip 4ex + +@example +gf_log_init: failed to open logfile "/usr/var/log/glusterfs/glusterfsd.log" + (Permission denied) +@end example + +You don't have permission to create files in the +@command{/usr/var/log/glusterfs} directory. Make sure you are running +GlusterFS as root. Alternatively, specify a different path for the log +file using the @command{--log-file} option (See @ref{Server}). + +@subsection Client errors + +@example +fusermount: failed to access mountpoint /mnt: + Transport endpoint is not connected +@end example + +A previous failed (or hung) mount of GlusterFS is preventing it from being +mounted again in the same location. The fix is to do: + +@example +# umount /mnt +@end example + +and try mounting again. + +@vskip 4ex + +@strong{``Transport endpoint is not connected''.} + +If you get this error when you try a command such as @command{ls} or @command{cat}, +it means the GlusterFS mount did not succeed. Try running GlusterFS in @command{DEBUG} +logging level and study the log messages to discover the cause. + +@vskip 4ex + +@strong{``Connect to server failed'', ``SERVER-ADDRESS: Connection refused''.} + +GluserFS Server is not running or dead. Check your network +connections and firewall settings. To check if the server is reachable, +try: + +@example +telnet IP-ADDRESS 24007 +@end example + +If the server is accessible, your `telnet' command should connect and +block. If not you will see an error message such as @command{telnet: Unable to +connect to remote host: Connection refused}. 24007 is the default +GlusterFS port. If you have changed it, then use the corresponding +port instead. + +@vskip 4ex + +@example +gf_log_init: failed to open logfile "/usr/var/log/glusterfs/glusterfs.log" + (Permission denied) +@end example + +You don't have permission to create files in the +@command{/usr/var/log/glusterfs} directory. Make sure you are running +GlusterFS as root. Alternatively, specify a different path for the log +file using the @command{--log-file} option (See @ref{Client}). + +@section FUSE error messages +@command{modprobe fuse} fails with: ``Unknown symbol in module, or unknown parameter''. +@cindex Redhat Enterprise Linux + +If you are using fuse-2.6.x on Redhat Enterprise Linux Work Station 4 +and Advanced Server 4 with 2.6.9-42.ELlargesmp, 2.6.9-42.ELsmp, +2.6.9-42.EL kernels and get this error while loading @acronym{FUSE} kernel +module, you need to apply the following patch. + +For fuse-2.6.2: + +@indicateurl{http://ftp.gluster.com/pub/gluster/glusterfs/fuse/fuse-2.6.2-rhel-build.patch} + +For fuse-2.6.3: + +@indicateurl{http://ftp.gluster.com/pub/gluster/glusterfs/fuse/fuse-2.6.3-rhel-build.patch} + +@section AppArmour and GlusterFS +@cindex AppArmour +@cindex OpenSuSE +Under OpenSuSE GNU/Linux, the AppArmour security feature does not +allow GlusterFS to create temporary files or network socket +connections even while running as root. 
You will see error messages +like `Unable to open log file: Operation not permitted' or `Connection +refused'. Disabling AppArmour using YaST or properly configuring +AppArmour to recognize @command{glusterfsd} or @command{glusterfs}/@command{fusermount} +should solve the problem. + +@section Reporting a bug + +If you encounter a bug in GlusterFS, please follow the below +guidelines when you report it to the mailing list. Be sure to report +it! User feedback is crucial to the health of the project and we value +it highly. + +@subsection General instructions + +When running GlusterFS in a non-production environment, be sure to +build it with the following command: + +@example + $ make CFLAGS='-g -O0 -DDEBUG' +@end example + +This includes debugging information which will be helpful in getting +backtraces (see below) and also disable optimization. Enabling +optimization can result in incorrect line numbers being reported to +gdb. + +@subsection Volume specification files + +Attach all relevant server and client spec files you were using when +you encountered the bug. Also tell us details of your setup, i.e., how +many clients and how many servers. + +@subsection Log files + +Set the loglevel of your client and server programs to @acronym{DEBUG} (by +passing the -L @acronym{DEBUG} option) and attach the log files with your bug +report. Obviously, if only the client is failing (for example), you +only need to send us the client log file. + +@subsection Backtrace + +If GlusterFS has encountered a segmentation fault or has crashed for +some other reason, include the backtrace with the bug report. You can +get the backtrace using the following procedure. + +Run the GlusterFS client or server inside gdb. + +@example + $ gdb ./glusterfs + (gdb) set args -f client.spec -N -l/path/to/log/file -LDEBUG /mnt/point + (gdb) run +@end example + +Now when the process segfaults, you can get the backtrace by typing: + +@example + (gdb) bt +@end example + +If the GlusterFS process has crashed and dumped a core file (you can +find this in / if running as a daemon and in the current directory +otherwise), you can do: + +@example + $ gdb /path/to/glusterfs /path/to/core. +@end example + +and then get the backtrace. + +If the GlusterFS server or client seems to be hung, then you can get +the backtrace by attaching gdb to the process. First get the @command{PID} of +the process (using ps), and then do: + +@example + $ gdb ./glusterfs +@end example + +Press Ctrl-C to interrupt the process and then generate the backtrace. + +@subsection Reproducing the bug + +If the bug is reproducible, please include the steps necessary to do +so. If the bug is not reproducible, send us the bug report anyway. + +@subsection Other information + +If you think it is relevant, send us also the version of @acronym{FUSE} you're +using, the kernel version, platform. 
+ +@node GNU Free Documentation Licence +@appendix GNU Free Documentation Licence +@include fdl.texi + +@node Index +@unnumbered Index +@printindex cp + +@bye diff --git a/doc/legacy/xlator.odg b/doc/legacy/xlator.odg new file mode 100644 index 000000000..179a65f6e Binary files /dev/null and b/doc/legacy/xlator.odg differ diff --git a/doc/legacy/xlator.pdf b/doc/legacy/xlator.pdf new file mode 100644 index 000000000..a07e14d67 Binary files /dev/null and b/doc/legacy/xlator.pdf differ diff --git a/doc/user-guide/legacy/Makefile.am b/doc/user-guide/legacy/Makefile.am deleted file mode 100644 index b2caabaa2..000000000 --- a/doc/user-guide/legacy/Makefile.am +++ /dev/null @@ -1,3 +0,0 @@ -info_TEXINFOS = user-guide.texi -CLEANFILES = *~ -DISTCLEANFILES = .deps/*.P *.info *vti diff --git a/doc/user-guide/legacy/advanced-stripe.odg b/doc/user-guide/legacy/advanced-stripe.odg deleted file mode 100644 index 7686d7091..000000000 Binary files a/doc/user-guide/legacy/advanced-stripe.odg and /dev/null differ diff --git a/doc/user-guide/legacy/advanced-stripe.pdf b/doc/user-guide/legacy/advanced-stripe.pdf deleted file mode 100644 index ec8b03dcf..000000000 Binary files a/doc/user-guide/legacy/advanced-stripe.pdf and /dev/null differ diff --git a/doc/user-guide/legacy/colonO-icon.jpg b/doc/user-guide/legacy/colonO-icon.jpg deleted file mode 100644 index 3e66f7a27..000000000 Binary files a/doc/user-guide/legacy/colonO-icon.jpg and /dev/null differ diff --git a/doc/user-guide/legacy/fdl.texi b/doc/user-guide/legacy/fdl.texi deleted file mode 100644 index e33c687cd..000000000 --- a/doc/user-guide/legacy/fdl.texi +++ /dev/null @@ -1,454 +0,0 @@ - -@c @node GNU Free Documentation License -@c @appendixsec GNU Free Documentation License - -@cindex FDL, GNU Free Documentation License -@center Version 1.2, November 2002 - -@display -Copyright @copyright{} 2000,2001,2002 Free Software Foundation, Inc. -59 Temple Place, Suite 330, Boston, MA 02111-1307, USA - -Everyone is permitted to copy and distribute verbatim copies -of this license document, but changing it is not allowed. -@end display - -@enumerate 0 -@item -PREAMBLE - -The purpose of this License is to make a manual, textbook, or other -functional and useful document @dfn{free} in the sense of freedom: to -assure everyone the effective freedom to copy and redistribute it, -with or without modifying it, either commercially or noncommercially. -Secondarily, this License preserves for the author and publisher a way -to get credit for their work, while not being considered responsible -for modifications made by others. - -This License is a kind of ``copyleft'', which means that derivative -works of the document must themselves be free in the same sense. It -complements the GNU General Public License, which is a copyleft -license designed for free software. - -We have designed this License in order to use it for manuals for free -software, because free software needs free documentation: a free -program should come with manuals providing the same freedoms that the -software does. But this License is not limited to software manuals; -it can be used for any textual work, regardless of subject matter or -whether it is published as a printed book. We recommend this License -principally for works whose purpose is instruction or reference. - -@item -APPLICABILITY AND DEFINITIONS - -This License applies to any manual or other work, in any medium, that -contains a notice placed by the copyright holder saying it can be -distributed under the terms of this License. 
Such a notice grants a -world-wide, royalty-free license, unlimited in duration, to use that -work under the conditions stated herein. The ``Document'', below, -refers to any such manual or work. Any member of the public is a -licensee, and is addressed as ``you''. You accept the license if you -copy, modify or distribute the work in a way requiring permission -under copyright law. - -A ``Modified Version'' of the Document means any work containing the -Document or a portion of it, either copied verbatim, or with -modifications and/or translated into another language. - -A ``Secondary Section'' is a named appendix or a front-matter section -of the Document that deals exclusively with the relationship of the -publishers or authors of the Document to the Document's overall -subject (or to related matters) and contains nothing that could fall -directly within that overall subject. (Thus, if the Document is in -part a textbook of mathematics, a Secondary Section may not explain -any mathematics.) The relationship could be a matter of historical -connection with the subject or with related matters, or of legal, -commercial, philosophical, ethical or political position regarding -them. - -The ``Invariant Sections'' are certain Secondary Sections whose titles -are designated, as being those of Invariant Sections, in the notice -that says that the Document is released under this License. If a -section does not fit the above definition of Secondary then it is not -allowed to be designated as Invariant. The Document may contain zero -Invariant Sections. If the Document does not identify any Invariant -Sections then there are none. - -The ``Cover Texts'' are certain short passages of text that are listed, -as Front-Cover Texts or Back-Cover Texts, in the notice that says that -the Document is released under this License. A Front-Cover Text may -be at most 5 words, and a Back-Cover Text may be at most 25 words. - -A ``Transparent'' copy of the Document means a machine-readable copy, -represented in a format whose specification is available to the -general public, that is suitable for revising the document -straightforwardly with generic text editors or (for images composed of -pixels) generic paint programs or (for drawings) some widely available -drawing editor, and that is suitable for input to text formatters or -for automatic translation to a variety of formats suitable for input -to text formatters. A copy made in an otherwise Transparent file -format whose markup, or absence of markup, has been arranged to thwart -or discourage subsequent modification by readers is not Transparent. -An image format is not Transparent if used for any substantial amount -of text. A copy that is not ``Transparent'' is called ``Opaque''. - -Examples of suitable formats for Transparent copies include plain -@sc{ascii} without markup, Texinfo input format, La@TeX{} input -format, @acronym{SGML} or @acronym{XML} using a publicly available -@acronym{DTD}, and standard-conforming simple @acronym{HTML}, -PostScript or @acronym{PDF} designed for human modification. Examples -of transparent image formats include @acronym{PNG}, @acronym{XCF} and -@acronym{JPG}. Opaque formats include proprietary formats that can be -read and edited only by proprietary word processors, @acronym{SGML} or -@acronym{XML} for which the @acronym{DTD} and/or processing tools are -not generally available, and the machine-generated @acronym{HTML}, -PostScript or @acronym{PDF} produced by some word processors for -output purposes only. 
- -The ``Title Page'' means, for a printed book, the title page itself, -plus such following pages as are needed to hold, legibly, the material -this License requires to appear in the title page. For works in -formats which do not have any title page as such, ``Title Page'' means -the text near the most prominent appearance of the work's title, -preceding the beginning of the body of the text. - -A section ``Entitled XYZ'' means a named subunit of the Document whose -title either is precisely XYZ or contains XYZ in parentheses following -text that translates XYZ in another language. (Here XYZ stands for a -specific section name mentioned below, such as ``Acknowledgements'', -``Dedications'', ``Endorsements'', or ``History''.) To ``Preserve the Title'' -of such a section when you modify the Document means that it remains a -section ``Entitled XYZ'' according to this definition. - -The Document may include Warranty Disclaimers next to the notice which -states that this License applies to the Document. These Warranty -Disclaimers are considered to be included by reference in this -License, but only as regards disclaiming warranties: any other -implication that these Warranty Disclaimers may have is void and has -no effect on the meaning of this License. - -@item -VERBATIM COPYING - -You may copy and distribute the Document in any medium, either -commercially or noncommercially, provided that this License, the -copyright notices, and the license notice saying this License applies -to the Document are reproduced in all copies, and that you add no other -conditions whatsoever to those of this License. You may not use -technical measures to obstruct or control the reading or further -copying of the copies you make or distribute. However, you may accept -compensation in exchange for copies. If you distribute a large enough -number of copies you must also follow the conditions in section 3. - -You may also lend copies, under the same conditions stated above, and -you may publicly display copies. - -@item -COPYING IN QUANTITY - -If you publish printed copies (or copies in media that commonly have -printed covers) of the Document, numbering more than 100, and the -Document's license notice requires Cover Texts, you must enclose the -copies in covers that carry, clearly and legibly, all these Cover -Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on -the back cover. Both covers must also clearly and legibly identify -you as the publisher of these copies. The front cover must present -the full title with all words of the title equally prominent and -visible. You may add other material on the covers in addition. -Copying with changes limited to the covers, as long as they preserve -the title of the Document and satisfy these conditions, can be treated -as verbatim copying in other respects. - -If the required texts for either cover are too voluminous to fit -legibly, you should put the first ones listed (as many as fit -reasonably) on the actual cover, and continue the rest onto adjacent -pages. - -If you publish or distribute Opaque copies of the Document numbering -more than 100, you must either include a machine-readable Transparent -copy along with each Opaque copy, or state in or with each Opaque copy -a computer-network location from which the general network-using -public has access to download using public-standard network protocols -a complete Transparent copy of the Document, free of added material. 
-If you use the latter option, you must take reasonably prudent steps, -when you begin distribution of Opaque copies in quantity, to ensure -that this Transparent copy will remain thus accessible at the stated -location until at least one year after the last time you distribute an -Opaque copy (directly or through your agents or retailers) of that -edition to the public. - -It is requested, but not required, that you contact the authors of the -Document well before redistributing any large number of copies, to give -them a chance to provide you with an updated version of the Document. - -@item -MODIFICATIONS - -You may copy and distribute a Modified Version of the Document under -the conditions of sections 2 and 3 above, provided that you release -the Modified Version under precisely this License, with the Modified -Version filling the role of the Document, thus licensing distribution -and modification of the Modified Version to whoever possesses a copy -of it. In addition, you must do these things in the Modified Version: - -@enumerate A -@item -Use in the Title Page (and on the covers, if any) a title distinct -from that of the Document, and from those of previous versions -(which should, if there were any, be listed in the History section -of the Document). You may use the same title as a previous version -if the original publisher of that version gives permission. - -@item -List on the Title Page, as authors, one or more persons or entities -responsible for authorship of the modifications in the Modified -Version, together with at least five of the principal authors of the -Document (all of its principal authors, if it has fewer than five), -unless they release you from this requirement. - -@item -State on the Title page the name of the publisher of the -Modified Version, as the publisher. - -@item -Preserve all the copyright notices of the Document. - -@item -Add an appropriate copyright notice for your modifications -adjacent to the other copyright notices. - -@item -Include, immediately after the copyright notices, a license notice -giving the public permission to use the Modified Version under the -terms of this License, in the form shown in the Addendum below. - -@item -Preserve in that license notice the full lists of Invariant Sections -and required Cover Texts given in the Document's license notice. - -@item -Include an unaltered copy of this License. - -@item -Preserve the section Entitled ``History'', Preserve its Title, and add -to it an item stating at least the title, year, new authors, and -publisher of the Modified Version as given on the Title Page. If -there is no section Entitled ``History'' in the Document, create one -stating the title, year, authors, and publisher of the Document as -given on its Title Page, then add an item describing the Modified -Version as stated in the previous sentence. - -@item -Preserve the network location, if any, given in the Document for -public access to a Transparent copy of the Document, and likewise -the network locations given in the Document for previous versions -it was based on. These may be placed in the ``History'' section. -You may omit a network location for a work that was published at -least four years before the Document itself, or if the original -publisher of the version it refers to gives permission. 
- -@item -For any section Entitled ``Acknowledgements'' or ``Dedications'', Preserve -the Title of the section, and preserve in the section all the -substance and tone of each of the contributor acknowledgements and/or -dedications given therein. - -@item -Preserve all the Invariant Sections of the Document, -unaltered in their text and in their titles. Section numbers -or the equivalent are not considered part of the section titles. - -@item -Delete any section Entitled ``Endorsements''. Such a section -may not be included in the Modified Version. - -@item -Do not retitle any existing section to be Entitled ``Endorsements'' or -to conflict in title with any Invariant Section. - -@item -Preserve any Warranty Disclaimers. -@end enumerate - -If the Modified Version includes new front-matter sections or -appendices that qualify as Secondary Sections and contain no material -copied from the Document, you may at your option designate some or all -of these sections as invariant. To do this, add their titles to the -list of Invariant Sections in the Modified Version's license notice. -These titles must be distinct from any other section titles. - -You may add a section Entitled ``Endorsements'', provided it contains -nothing but endorsements of your Modified Version by various -parties---for example, statements of peer review or that the text has -been approved by an organization as the authoritative definition of a -standard. - -You may add a passage of up to five words as a Front-Cover Text, and a -passage of up to 25 words as a Back-Cover Text, to the end of the list -of Cover Texts in the Modified Version. Only one passage of -Front-Cover Text and one of Back-Cover Text may be added by (or -through arrangements made by) any one entity. If the Document already -includes a cover text for the same cover, previously added by you or -by arrangement made by the same entity you are acting on behalf of, -you may not add another; but you may replace the old one, on explicit -permission from the previous publisher that added the old one. - -The author(s) and publisher(s) of the Document do not by this License -give permission to use their names for publicity for or to assert or -imply endorsement of any Modified Version. - -@item -COMBINING DOCUMENTS - -You may combine the Document with other documents released under this -License, under the terms defined in section 4 above for modified -versions, provided that you include in the combination all of the -Invariant Sections of all of the original documents, unmodified, and -list them all as Invariant Sections of your combined work in its -license notice, and that you preserve all their Warranty Disclaimers. - -The combined work need only contain one copy of this License, and -multiple identical Invariant Sections may be replaced with a single -copy. If there are multiple Invariant Sections with the same name but -different contents, make the title of each such section unique by -adding at the end of it, in parentheses, the name of the original -author or publisher of that section if known, or else a unique number. -Make the same adjustment to the section titles in the list of -Invariant Sections in the license notice of the combined work. - -In the combination, you must combine any sections Entitled ``History'' -in the various original documents, forming one section Entitled -``History''; likewise combine any sections Entitled ``Acknowledgements'', -and any sections Entitled ``Dedications''. 
You must delete all -sections Entitled ``Endorsements.'' - -@item -COLLECTIONS OF DOCUMENTS - -You may make a collection consisting of the Document and other documents -released under this License, and replace the individual copies of this -License in the various documents with a single copy that is included in -the collection, provided that you follow the rules of this License for -verbatim copying of each of the documents in all other respects. - -You may extract a single document from such a collection, and distribute -it individually under this License, provided you insert a copy of this -License into the extracted document, and follow this License in all -other respects regarding verbatim copying of that document. - -@item -AGGREGATION WITH INDEPENDENT WORKS - -A compilation of the Document or its derivatives with other separate -and independent documents or works, in or on a volume of a storage or -distribution medium, is called an ``aggregate'' if the copyright -resulting from the compilation is not used to limit the legal rights -of the compilation's users beyond what the individual works permit. -When the Document is included in an aggregate, this License does not -apply to the other works in the aggregate which are not themselves -derivative works of the Document. - -If the Cover Text requirement of section 3 is applicable to these -copies of the Document, then if the Document is less than one half of -the entire aggregate, the Document's Cover Texts may be placed on -covers that bracket the Document within the aggregate, or the -electronic equivalent of covers if the Document is in electronic form. -Otherwise they must appear on printed covers that bracket the whole -aggregate. - -@item -TRANSLATION - -Translation is considered a kind of modification, so you may -distribute translations of the Document under the terms of section 4. -Replacing Invariant Sections with translations requires special -permission from their copyright holders, but you may include -translations of some or all Invariant Sections in addition to the -original versions of these Invariant Sections. You may include a -translation of this License, and all the license notices in the -Document, and any Warranty Disclaimers, provided that you also include -the original English version of this License and the original versions -of those notices and disclaimers. In case of a disagreement between -the translation and the original version of this License or a notice -or disclaimer, the original version will prevail. - -If a section in the Document is Entitled ``Acknowledgements'', -``Dedications'', or ``History'', the requirement (section 4) to Preserve -its Title (section 1) will typically require changing the actual -title. - -@item -TERMINATION - -You may not copy, modify, sublicense, or distribute the Document except -as expressly provided for under this License. Any other attempt to -copy, modify, sublicense or distribute the Document is void, and will -automatically terminate your rights under this License. However, -parties who have received copies, or rights, from you under this -License will not have their licenses terminated so long as such -parties remain in full compliance. - -@item -FUTURE REVISIONS OF THIS LICENSE - -The Free Software Foundation may publish new, revised versions -of the GNU Free Documentation License from time to time. Such new -versions will be similar in spirit to the present version, but may -differ in detail to address new problems or concerns. 
See -@uref{http://www.gnu.org/copyleft/}. - -Each version of the License is given a distinguishing version number. -If the Document specifies that a particular numbered version of this -License ``or any later version'' applies to it, you have the option of -following the terms and conditions either of that specified version or -of any later version that has been published (not as a draft) by the -Free Software Foundation. If the Document does not specify a version -number of this License, you may choose any version ever published (not -as a draft) by the Free Software Foundation. -@end enumerate - -@page -@c @appendixsubsec ADDENDUM: How to use this License for your -@c documents -@subsection ADDENDUM: How to use this License for your documents - -To use this License in a document you have written, include a copy of -the License in the document and put the following copyright and -license notices just after the title page: - -@smallexample -@group - Copyright (C) @var{year} @var{your name}. - Permission is granted to copy, distribute and/or modify this document - under the terms of the GNU Free Documentation License, Version 1.2 - or any later version published by the Free Software Foundation; - with no Invariant Sections, no Front-Cover Texts, and no Back-Cover - Texts. A copy of the license is included in the section entitled ``GNU - Free Documentation License''. -@end group -@end smallexample - -If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, -replace the ``with...Texts.'' line with this: - -@smallexample -@group - with the Invariant Sections being @var{list their titles}, with - the Front-Cover Texts being @var{list}, and with the Back-Cover Texts - being @var{list}. -@end group -@end smallexample - -If you have Invariant Sections without Cover Texts, or some other -combination of the three, merge those two alternatives to suit the -situation. - -If your document contains nontrivial examples of program code, we -recommend releasing these examples in parallel under your choice of -free software license, such as the GNU General Public License, -to permit their use in free software. 
- -@c Local Variables: -@c ispell-local-pdict: "ispell-dict" -@c End: - diff --git a/doc/user-guide/legacy/fuse.odg b/doc/user-guide/legacy/fuse.odg deleted file mode 100644 index 61bd103c7..000000000 Binary files a/doc/user-guide/legacy/fuse.odg and /dev/null differ diff --git a/doc/user-guide/legacy/fuse.pdf b/doc/user-guide/legacy/fuse.pdf deleted file mode 100644 index a7d13faff..000000000 Binary files a/doc/user-guide/legacy/fuse.pdf and /dev/null differ diff --git a/doc/user-guide/legacy/ha.odg b/doc/user-guide/legacy/ha.odg deleted file mode 100644 index e4b8b72d0..000000000 Binary files a/doc/user-guide/legacy/ha.odg and /dev/null differ diff --git a/doc/user-guide/legacy/ha.pdf b/doc/user-guide/legacy/ha.pdf deleted file mode 100644 index e372c0ab0..000000000 Binary files a/doc/user-guide/legacy/ha.pdf and /dev/null differ diff --git a/doc/user-guide/legacy/stripe.odg b/doc/user-guide/legacy/stripe.odg deleted file mode 100644 index 79441bf14..000000000 Binary files a/doc/user-guide/legacy/stripe.odg and /dev/null differ diff --git a/doc/user-guide/legacy/stripe.pdf b/doc/user-guide/legacy/stripe.pdf deleted file mode 100644 index b94446feb..000000000 Binary files a/doc/user-guide/legacy/stripe.pdf and /dev/null differ diff --git a/doc/user-guide/legacy/unify.odg b/doc/user-guide/legacy/unify.odg deleted file mode 100644 index ccaa9bf16..000000000 Binary files a/doc/user-guide/legacy/unify.odg and /dev/null differ diff --git a/doc/user-guide/legacy/unify.pdf b/doc/user-guide/legacy/unify.pdf deleted file mode 100644 index c22027f66..000000000 Binary files a/doc/user-guide/legacy/unify.pdf and /dev/null differ diff --git a/doc/user-guide/legacy/user-guide.info b/doc/user-guide/legacy/user-guide.info deleted file mode 100644 index 2bbadb351..000000000 --- a/doc/user-guide/legacy/user-guide.info +++ /dev/null @@ -1,2697 +0,0 @@ -This is ../../../doc/user-guide/user-guide.info, produced by makeinfo version 4.13 from ../../../doc/user-guide/user-guide.texi. - -START-INFO-DIR-ENTRY -* GlusterFS: (user-guide). GlusterFS distributed filesystem user guide -END-INFO-DIR-ENTRY - - This is the user manual for GlusterFS 2.0. - - Copyright (c) 2007-2011 Gluster, Inc. Permission is granted to -copy, distribute and/or modify this document under the terms of the GNU -Free Documentation License, Version 1.2 or any later version published -by the Free Software Foundation; with no Invariant Sections, no -Front-Cover Texts, and no Back-Cover Texts. A copy of the license is -included in the chapter entitled "GNU Free Documentation License". - - -File: user-guide.info, Node: Top, Next: Acknowledgements, Up: (dir) - -GlusterFS 2.0 User Guide -************************ - -This is the user manual for GlusterFS 2.0. - - Copyright (c) 2007-2011 Gluster, Inc. Permission is granted to -copy, distribute and/or modify this document under the terms of the GNU -Free Documentation License, Version 1.2 or any later version published -by the Free Software Foundation; with no Invariant Sections, no -Front-Cover Texts, and no Back-Cover Texts. A copy of the license is -included in the chapter entitled "GNU Free Documentation License". 
- -* Menu: - -* Acknowledgements:: -* Introduction:: -* Installation and Invocation:: -* Concepts:: -* Translators:: -* Usage Scenarios:: -* Troubleshooting:: -* GNU Free Documentation Licence:: -* Index:: - - --- The Detailed Node Listing --- - -Installation and Invocation - -* Pre requisites:: -* Getting GlusterFS:: -* Building:: -* Running GlusterFS:: -* A Tutorial Introduction:: - -Running GlusterFS - -* Server:: -* Client:: - -Concepts - -* Filesystems in Userspace:: -* Translator:: -* Volume specification file:: - -Translators - -* Storage Translators:: -* Client and Server Translators:: -* Clustering Translators:: -* Performance Translators:: -* Features Translators:: - -Storage Translators - -* POSIX:: - -Client and Server Translators - -* Transport modules:: -* Client protocol:: -* Server protocol:: - -Clustering Translators - -* Unify:: -* Replicate:: -* Stripe:: - -Performance Translators - -* Read Ahead:: -* Write Behind:: -* IO Threads:: -* IO Cache:: - -Features Translators - -* POSIX Locks:: -* Fixed ID:: - -Miscellaneous Translators - -* ROT-13:: -* Trace:: - - -File: user-guide.info, Node: Acknowledgements, Next: Introduction, Prev: Top, Up: Top - -Acknowledgements -**************** - -GlusterFS continues to be a wonderful and enriching experience for all -of us involved. - - GlusterFS development would not have been possible at this pace if -not for our enthusiastic users. People from around the world have -helped us with bug reports, performance numbers, and feature -suggestions. A huge thanks to them all. - - Matthew Paine - for RPMs & general enthu - - Leonardo Rodrigues de Mello - for DEBs - - Julian Perez & Adam D'Auria - for multi-server tutorial - - Paul England - for HA spec - - Brent Nelson - for many bug reports - - Jacques Mattheij - for Europe mirror. - - Patrick Negri - for TCP non-blocking connect. - http://gluster.org/core-team.php () - Gluster - - -File: user-guide.info, Node: Introduction, Next: Installation and Invocation, Prev: Acknowledgements, Up: Top - -1 Introduction -************** - -GlusterFS is a distributed filesystem. It works at the file level, not -block level. - - A network filesystem is one which allows us to access remote files. A -distributed filesystem is one that stores data on multiple machines and -makes them all appear to be a part of the same filesystem. - - Need for distributed filesystems - - * Scalability: A distributed filesystem allows us to store more data - than what can be stored on a single machine. - - * Redundancy: We might want to replicate crucial data on to several - machines. - - * Uniform access: One can mount a remote volume (for example your - home directory) from any machine and access the same data. - -1.1 Contacting us -================= - -You can reach us through the mailing list *gluster-devel* -(). - - You can also find many of the developers on IRC, on the `#gluster' -channel on Freenode (). - - The GlusterFS documentation wiki is also useful: - - - For commercial support, you can contact Gluster at: - - 3194 Winding Vista Common - Fremont, CA 94539 - USA. - - Phone: +1 (510) 354 6801 - Toll free: +1 (888) 813 6309 - Fax: +1 (510) 372 0604 - - You can also email us at . 
-
-
-File: user-guide.info, Node: Installation and Invocation, Next: Concepts, Prev: Introduction, Up: Top
-
-2 Installation and Invocation
-*****************************
-
-* Menu:
-
-* Pre requisites::
-* Getting GlusterFS::
-* Building::
-* Running GlusterFS::
-* A Tutorial Introduction::
-
-
-File: user-guide.info, Node: Pre requisites, Next: Getting GlusterFS, Up: Installation and Invocation
-
-2.1 Pre requisites
-==================
-
-Before installing GlusterFS make sure you have the following components
-installed.
-
-2.1.1 FUSE
-----------
-
-You'll need FUSE version 2.6.0 or higher to use GlusterFS. You can omit
-installing FUSE if you want to build _only_ the server. Note that you
-won't be able to mount a GlusterFS filesystem on a machine that does
-not have FUSE installed.
-
-   FUSE can be downloaded from:
-
-   To get the best performance from GlusterFS, however, it is
-recommended that you use our patched version of FUSE. See Patched FUSE
-for details.
-
-2.1.2 Patched FUSE
-------------------
-
-The GlusterFS project maintains a patched version of FUSE meant to be
-used with GlusterFS. The patches increase GlusterFS performance. It is
-recommended that all users use the patched FUSE.
-
-   The patched FUSE tarball can be downloaded from:
-
-
-   The specific changes made to FUSE are:
-
-   * The communication channel size between FUSE kernel module and
-     GlusterFS has been increased to 1MB, permitting large reads and
-     writes to be sent in bigger chunks.
-
-   * The kernel's read-ahead boundary has been extended up to 1MB.
-
-   * The block size returned by the `stat()'/`fstat()' calls has been
-     tuned to 1MB, to make cp and similar commands perform I/O using
-     that block size.
-
-   * `flock()' locking support has been added (although some rework in
-     GlusterFS is needed for perfect compliance).
-
-2.1.3 libibverbs (optional)
----------------------------
-
-This is only needed if you want GlusterFS to use InfiniBand as the
-interconnect mechanism between server and client. You can get it from:
-
-   .
-
-2.1.4 Bison and Flex
---------------------
-
-These should already be installed on most Linux systems. If not, use
-your distribution's normal software installation procedures to install
-them. Make sure you install the relevant developer packages as well.
-
-
-File: user-guide.info, Node: Getting GlusterFS, Next: Building, Prev: Pre requisites, Up: Installation and Invocation
-
-2.2 Getting GlusterFS
-=====================
-
-There are many ways to get hold of GlusterFS. For a production
-deployment, the recommended method is to download the latest release
-tarball. Release tarballs are available at:
-.
-
-   If you want the bleeding-edge development source, you can get it
-from the GNU Arch(1) repository. First you must install GNU Arch
-itself. Then register the GlusterFS archive by doing:
-
-     $ tla register-archive http://arch.sv.gnu.org/archives/gluster
-
-   Now you can check out the source itself:
-
-     $ tla get -A gluster@sv.gnu.org glusterfs--mainline--3.0
-
-   ---------- Footnotes ----------
-
-   (1)
-
-
-File: user-guide.info, Node: Building, Next: Running GlusterFS, Prev: Getting GlusterFS, Up: Installation and Invocation
-
-2.3 Building
-============
-
-You can skip this section if you're installing from RPMs or DEBs.
-
-   GlusterFS uses the Autotools mechanism to build. As such, the
-procedure is straightforward. First, change into the GlusterFS source
-directory.
-
-     $ cd glusterfs-
-
-   If you checked out the source from the Arch repository, you'll need
-to run `./autogen.sh' first.
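-
-   For example, preparing a freshly checked-out tree might look
-roughly like this (the directory name below is only an assumption;
-change into whatever directory `tla get' actually created):
-
-     $ tla get -A gluster@sv.gnu.org glusterfs--mainline--3.0
-     $ cd glusterfs--mainline--3.0
-     $ ./autogen.sh
-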
Note that you'll need to have Autoconf and -Automake installed for this. - - Run `configure'. - - $ ./configure - - The configure script accepts the following options: - -`--disable-ibverbs' - Disable the InfiniBand transport mechanism. - -`--disable-fuse-client' - Disable the FUSE client. - -`--disable-server' - Disable building of the GlusterFS server. - -`--disable-bdb' - Disable building of Berkeley DB based storage translator. - -`--disable-mod_glusterfs' - Disable building of Apache/lighttpd glusterfs plugins. - -`--disable-epoll' - Use poll instead of epoll. - -`--disable-libglusterfsclient' - Disable building of libglusterfsclient - - - Build and install GlusterFS. - - # make install - - The binaries (`glusterfsd' and `glusterfs') will be by default -installed in `/usr/local/sbin/'. Translator, scheduler, and transport -shared libraries will be installed in -`/usr/local/lib/glusterfs//'. Sample volume specification -files will be in `/usr/local/etc/glusterfs/'. This document itself can -be found in `/usr/local/share/doc/glusterfs/'. If you passed the -`--prefix' argument to the configure script, then replace `/usr/local' -in the preceding paths with the prefix. - - -File: user-guide.info, Node: Running GlusterFS, Next: A Tutorial Introduction, Prev: Building, Up: Installation and Invocation - -2.4 Running GlusterFS -===================== - -* Menu: - -* Server:: -* Client:: - - -File: user-guide.info, Node: Server, Next: Client, Up: Running GlusterFS - -2.4.1 Server ------------- - -The GlusterFS server is necessary to export storage volumes to remote -clients (See *note Server protocol:: for more info). This section -documents the invocation of the GlusterFS server program and all the -command-line options accepted by it. - - Basic Options - -`-f, --volfile=' - Use the volume file as the volume specification. - -`-s, --volfile-server=' - Server to get volume file from. This option overrides -volfile - option. - -`-l, --log-file=' - Specify the path for the log file. - -`-L, --log-level=' - Set the log level for the server. Log level should be one of DEBUG, - WARNING, ERROR, CRITICAL, or NONE. - - Advanced Options - -`--debug' - Run in debug mode. This option sets -no-daemon, -log-level to - DEBUG and -log-file to console. - -`-N, --no-daemon' - Run glusterfsd as a foreground process. - -`-p, --pid-file=' - Path for the PID file. - -`--volfile-id=' - 'key' of the volfile to be fetched from server. - -`--volfile-server-port=' - Listening port number of volfile server. - -`--volfile-server-transport=[tcp|ib-verbs]' - Transport type to get volfile from server. [default: `tcp'] - -`--xlator-options=' - Add/override a translator option for a volume with specified value. - - Miscellaneous Options - -`-?, --help' - Show this help text. - -`--usage' - Display a short usage message. - -`-V, --version' - Show version information. - - -File: user-guide.info, Node: Client, Prev: Server, Up: Running GlusterFS - -2.4.2 Client ------------- - -The GlusterFS client process is necessary to access remote storage -volumes and mount them locally using FUSE. This section documents the -invocation of the client process and all its command-line arguments. - - # glusterfs [options] - - The `mountpoint' is the directory where you want the GlusterFS -filesystem to appear. Example: - - # glusterfs -f /usr/local/etc/glusterfs-client.vol /mnt - - The command-line options are detailed below. - - Basic Options - -`-f, --volfile=' - Use the volume file as the volume specification. 
-
-`-s, --volfile-server='
-     Server to get the volume file from. This option overrides the
-     --volfile option.
-
-`-l, --log-file='
-     Specify the path for the log file.
-
-`-L, --log-level='
-     Set the log level for the client. The log level should be one of
-     DEBUG, WARNING, ERROR, CRITICAL, or NONE.
-
-     Advanced Options
-
-`--debug'
-     Run in debug mode. This option sets --no-daemon, --log-level to
-     DEBUG and --log-file to console.
-
-`-N, --no-daemon'
-     Run `glusterfs' as a foreground process.
-
-`-p, --pid-file='
-     Path for the PID file.
-
-`--volfile-id='
-     'key' of the volfile to be fetched from the server.
-
-`--volfile-server-port='
-     Listening port number of the volfile server.
-
-`--volfile-server-transport=[tcp|ib-verbs]'
-     Transport type to get the volfile from the server. [default:
-     `tcp']
-
-`--xlator-options='
-     Add/override a translator option for a volume with the specified
-     value.
-
-`--volume-name='
-     Volume name in the client spec to use. Defaults to the root
-     volume.
-
-     FUSE Options
-
-`--attribute-timeout='
-     Attribute timeout for inodes in the kernel, in seconds. Defaults
-     to 1 second.
-
-`--disable-direct-io-mode'
-     Disable direct I/O mode in the FUSE kernel module.
-
-`-e, --entry-timeout='
-     Entry timeout for directory entries in the kernel, in seconds.
-     Defaults to 1 second.
-
-     Miscellaneous Options
-
-`-?, --help'
-     Show this help information.
-
-`-V, --version'
-     Show version information.
-
-
-File: user-guide.info, Node: A Tutorial Introduction, Prev: Running GlusterFS, Up: Installation and Invocation
-
-2.5 A Tutorial Introduction
-===========================
-
-This section will show you how to quickly get GlusterFS up and running.
-We'll configure GlusterFS as a simple network filesystem, with one
-server and one client. In this mode of usage, GlusterFS can serve as a
-replacement for NFS.
-
-   We'll make use of two machines; call them _server_ and _client_ (if
-you don't want to set up two machines, just run everything that follows
-on the same machine). In the examples that follow, the shell prompts
-will use these names to clarify the machine on which the command is
-being run. For example, a command that should be run on the server will
-be shown with the prompt:
-
-     [root@server]#
-
-   Our goal is to make a directory on the _server_ (say, `/export')
-accessible to the _client_.
-
-   First of all, get GlusterFS installed on both machines, as
-described in the previous sections. Make sure you have the FUSE kernel
-module loaded. You can ensure this by running:
-
-     [root@server]# modprobe fuse
-
-   Before we can run the GlusterFS client or server programs, we need
-to write two files called _volume specifications_ (equivalently referred
-to as _volfiles_). The volfile describes the _translator tree_ on a
-node. The next chapter will explain the concepts of `translator' and
-`volume specification' in detail. For now, just assume that the volfile
-plays a role similar to the NFS `/etc/exports' file.
-
-   On the server, create a text file somewhere (we'll assume the path
-`/tmp/glusterfsd.vol') with the following contents.
-
-     volume colon-o
-       type storage/posix
-       option directory /export
-     end-volume
-
-     volume server
-       type protocol/server
-       subvolumes colon-o
-       option transport-type tcp
-       option auth.addr.colon-o.allow *
-     end-volume
-
-   Here is a brief explanation of the file's contents. The first
-section defines a storage volume, named "colon-o" (the volume names are
-arbitrary), which exports the `/export' directory. The second section
-defines options for the translator which will make the storage volume
-accessible remotely. It specifies `colon-o' as a subvolume. This
-defines the _translator tree_, about which more will be said in the
-next chapter. The two options specify that the TCP protocol is to be
-used (as opposed to InfiniBand, for example), and that access to the
-storage volume is to be provided to clients with any IP address at all.
-If, for example, you wanted to restrict access to this server to your
-subnet only, you'd specify something like `192.168.1.*' in the second
-option line, as sketched below.
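-
-   For instance (this is only a sketch; substitute a pattern that
-matches your own network), the second option line would then read:
-
-     option auth.addr.colon-o.allow 192.168.1.*
-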
-   On the client machine, create the following text file (again, we'll
-assume the path to be `/tmp/glusterfs-client.vol'). Replace
-_server-ip-address_ with the IP address of your server machine. If you
-are doing all this on a single machine, use `127.0.0.1'.
-
-     volume client
-       type protocol/client
-       option transport-type tcp
-       option remote-host _server-ip-address_
-       option remote-subvolume colon-o
-     end-volume
-
-   Now we need to start both the server and client programs. To start
-the server:
-
-     [root@server]# glusterfsd -f /tmp/glusterfsd.vol
-
-   To start the client:
-
-     [root@client]# glusterfs -f /tmp/glusterfs-client.vol /mnt/glusterfs
-
-   You should now be able to see the files under the server's `/export'
-directory in the `/mnt/glusterfs' directory on the client. That's it;
-GlusterFS is now working as a network filesystem.
-
-
-File: user-guide.info, Node: Concepts, Next: Translators, Prev: Installation and Invocation, Up: Top
-
-3 Concepts
-**********
-
-* Menu:
-
-* Filesystems in Userspace::
-* Translator::
-* Volume specification file::
-
-
-File: user-guide.info, Node: Filesystems in Userspace, Next: Translator, Up: Concepts
-
-3.1 Filesystems in Userspace
-============================
-
-A filesystem is usually implemented in kernel space. Kernel space
-development is much harder than userspace development. FUSE is a kernel
-module/library that allows us to write a filesystem completely in
-userspace.
-
-   FUSE consists of a kernel module which interacts with the userspace
-implementation using a device file `/dev/fuse'. When a process makes a
-syscall on a FUSE filesystem, VFS hands the request to the FUSE module,
-which writes the request to `/dev/fuse'. The userspace implementation
-polls `/dev/fuse', and when a request arrives, processes it and writes
-the result back to `/dev/fuse'. The kernel then reads from the device
-file and returns the result to the user process.
-
-   In the case of GlusterFS, the userspace program is the GlusterFS
-client. The control flow is shown in the diagram below. The GlusterFS
-client services the request by sending it to the server, which in turn
-hands it to the local POSIX filesystem.
-
-
-     Fig 1. Control flow in GlusterFS
-
-
-File: user-guide.info, Node: Translator, Next: Volume specification file, Prev: Filesystems in Userspace, Up: Concepts
-
-3.2 Translator
-==============
-
-The _translator_ is the most important concept in GlusterFS. In fact,
-GlusterFS is nothing but a collection of translators working together,
-forming a translator _tree_.
-
-   The idea of a translator is perhaps best understood using an
-analogy. Consider the VFS in the Linux kernel. The VFS abstracts the
-various filesystem implementations (such as EXT3, ReiserFS, XFS, etc.)
-supported by the kernel. When an application calls the kernel to
-perform an operation on a file, the kernel passes the request on to the
-appropriate filesystem implementation.
- - For example, let's say there are two partitions on a Linux machine: -`/', which is an EXT3 partition, and `/usr', which is a ReiserFS -partition. Now if an application wants to open a file called, say, -`/etc/fstab', then the kernel will internally pass the request to the -EXT3 implementation. If on the other hand, an application wants to -read a file called `/usr/src/linux/CREDITS', then the kernel will call -upon the ReiserFS implementation to do the job. - - The "filesystem implementation" objects are analogous to GlusterFS -translators. A GlusterFS translator implements all the filesystem -operations. Whereas in VFS there is a two-level tree (with the kernel -at the root and all the filesystem implementation as its children), in -GlusterFS there exists a more elaborate tree structure. - - We can now define translators more precisely. A GlusterFS translator -is a shared object (`.so') that implements every filesystem call. -GlusterFS translators can be arranged in an arbitrary tree structure -(subject to constraints imposed by the translators). When GlusterFS -receives a filesystem call, it passes it on to the translator at the -root of the translator tree. The root translator may in turn pass it on -to any or all of its children, and so on, until the leaf nodes are -reached. The result of a filesystem call is communicated in the reverse -fashion, from the leaf nodes up to the root node, and then on to the -application. - - So what might a translator tree look like? - - - Fig 2. A sample translator tree - - The diagram depicts three servers and one GlusterFS client. It is -important to note that conceptually, the translator tree spans machine -boundaries. Thus, the client machine in the diagram, `10.0.0.1', can -access the aggregated storage of the filesystems on the server machines -`10.0.0.2', `10.0.0.3', and `10.0.0.4'. The translator diagram will -make more sense once you've read the next chapter and understood the -functions of the various translators. - - -File: user-guide.info, Node: Volume specification file, Prev: Translator, Up: Concepts - -3.3 Volume specification file -============================= - -The volume specification file describes the translator tree for both the -server and client programs. - - A volume specification file is a sequence of volume definitions. -The syntax of a volume definition is explained below: - - *volume* _volume-name_ - *type* _translator-name_ - *option* _option-name_ _option-value_ - ... - *subvolumes* _subvolume1_ _subvolume2_ ... - *end-volume* - - ... - -_volume-name_ - An identifier for the volume. This is just a human-readable name, - and can contain any alphanumeric character. For instance, - "storage-1", "colon-o", or "forty-two". - -_translator-name_ - Name of one of the available translators. Example: - `protocol/client', `cluster/unify'. - -_option-name_ - Name of a valid option for the translator. - -_option-value_ - Value for the option. Everything following the "option" keyword to - the end of the line is considered the value; it is up to the - translator to parse it. - -_subvolume1_, _subvolume2_, ... - Volume names of sub-volumes. The sub-volumes must already have - been defined earlier in the file. - - There are a few rules you must follow when writing a volume -specification file: - - * Everything following a ``#'' is considered a comment and is - ignored. Blank lines are also ignored. - - * All names and keywords are case-sensitive. - - * The order of options inside a volume definition does not matter. 
- - * An option value may not span multiple lines. - - * If an option is not specified, it will assume its default value. - - * A sub-volume must have already been defined before it can be - referenced. This means you have to write the specification file - "bottom-up", starting from the leaf nodes of the translator tree - and moving up to the root. - - A simple example volume specification file is shown below: - - # This is a comment line - volume client - type protocol/client - option transport-type tcp - option remote-host localhost # Also a comment - option remote-subvolume brick - # The subvolumes line may be absent - end-volume - - volume iot - type performance/io-threads - option thread-count 4 - subvolumes client - end-volume - - volume wb - type performance/write-behind - subvolumes iot - end-volume - - -File: user-guide.info, Node: Translators, Next: Usage Scenarios, Prev: Concepts, Up: Top - -4 Translators -************* - -* Menu: - -* Storage Translators:: -* Client and Server Translators:: -* Clustering Translators:: -* Performance Translators:: -* Features Translators:: -* Miscellaneous Translators:: - - This chapter documents all the available GlusterFS translators in -detail. Each translator section will show its name (for example, -`cluster/unify'), briefly describe its purpose and workings, and list -every option accepted by that translator and their meaning. - - -File: user-guide.info, Node: Storage Translators, Next: Client and Server Translators, Up: Translators - -4.1 Storage Translators -======================= - -The storage translators form the "backend" for GlusterFS. Currently, -the only available storage translator is the POSIX translator, which -stores files on a normal POSIX filesystem. A pleasant consequence of -this is that your data will still be accessible if GlusterFS crashes or -cannot be started. - - Other storage backends are planned for the future. One of the -possibilities is an Amazon S3 translator. Amazon S3 is an unlimited -online storage service accessible through a web services API. The S3 -translator will allow you to access the storage as a normal POSIX -filesystem. (1) - -* Menu: - -* POSIX:: -* BDB:: - - ---------- Footnotes ---------- - - (1) Some more discussion about this can be found at: - -http://developer.amazonwebservices.com/connect/message.jspa?messageID=52873 - - -File: user-guide.info, Node: POSIX, Next: BDB, Up: Storage Translators - -4.1.1 POSIX ------------ - - type storage/posix - - The `posix' translator uses a normal POSIX filesystem as its -"backend" to actually store files and directories. This can be any -filesystem that supports extended attributes (EXT3, ReiserFS, XFS, -...). Extended attributes are used by some translators to store -metadata, for example, by the replicate and stripe translators. See -*note Replicate:: and *note Stripe::, respectively for details. - -`directory ' - The directory on the local filesystem which is to be used for - storage. - - -File: user-guide.info, Node: BDB, Prev: POSIX, Up: Storage Translators - -4.1.2 BDB ---------- - - type storage/bdb - - The `BDB' translator uses a Berkeley DB database as its "backend" to -actually store files as key-value pair in the database and directories -as regular POSIX directories. Note that BDB does not provide extended -attribute support for regular files. Do not use BDB as storage -translator while using any translator that demands extended attributes -on "backend". 
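-
-   As an illustration only (the directory path below is a placeholder,
-and the options shown are described next), a bdb volume declaration
-might look like this:
-
-     volume bdb-store
-       type storage/bdb
-       option directory /data/bdb
-       option mode persistent
-     end-volume
-
-   The options accepted by the bdb translator are: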
- -`directory ' - The directory on the local filesystem which is to be used for - storage. - -`mode [cache|persistent] (cache)' - When BDB is run in `cache' mode, recovery of back-end is not - completely guaranteed. `persistent' guarantees that BDB can - recover back-end from Berkeley DB even if GlusterFS crashes. - -`errfile ' - The path of the file to be used as `errfile' for Berkeley DB to - report detailed error messages, if any. Note that all the contents - of this file will be written by Berkeley DB, not GlusterFS. - -`logdir ' - - -File: user-guide.info, Node: Client and Server Translators, Next: Clustering Translators, Prev: Storage Translators, Up: Translators - -4.2 Client and Server Translators -================================= - -The client and server translator enable GlusterFS to export a -translator tree over the network or access a remote GlusterFS server. -These two translators implement GlusterFS's network protocol. - -* Menu: - -* Transport modules:: -* Client protocol:: -* Server protocol:: - - -File: user-guide.info, Node: Transport modules, Next: Client protocol, Up: Client and Server Translators - -4.2.1 Transport modules ------------------------ - -The client and server translators are capable of using any of the -pluggable transport modules. Currently available transport modules are -`tcp', which uses a TCP connection between client and server to -communicate; `ib-sdp', which uses a TCP connection over InfiniBand, and -`ibverbs', which uses high-speed InfiniBand connections. - - Each transport module comes in two different versions, one to be -used on the server side and the other on the client side. - -4.2.1.1 TCP -........... - -The TCP transport module uses a TCP/IP connection between the server -and the client. - - option transport-type tcp - - The TCP client module accepts the following options: - -`non-blocking-connect [no|off|on|yes] (on)' - Whether to make the connection attempt asynchronous. - -`remote-port (24007)' - Server port to connect to. - -`remote-host *' - Hostname or IP address of the server. If the host name resolves to - multiple IP addresses, all of them will be tried in a round-robin - fashion. This feature can be used to implement fail-over. - - The TCP server module accepts the following options: - -`bind-address
(0.0.0.0)'
-     The local interface on which the server should listen for
-     requests. The default is to listen on all interfaces.
-
-`listen-port (24007)'
-     The local port to listen on.
-
-4.2.1.2 IB-SDP
-..............
-
-     option transport-type ib-sdp
-
-   The kernel implements the socket interface for InfiniBand hardware;
-SDP runs over ib-verbs. This module accepts the same options as `tcp'.
-
-4.2.1.3 ibverbs
-...............
-
-     option transport-type ib-verbs
-
-   InfiniBand is a scalable switched fabric interconnect mechanism
-primarily used in high-performance computing. InfiniBand can deliver
-data throughput of the order of 10 Gbit/s, with latencies of a few
-microseconds.
-
-   The `ib-verbs' transport accesses the InfiniBand hardware through
-the "verbs" API, which is the lowest level of software access possible
-and which gives the highest performance. On InfiniBand hardware, it is
-always best to use `ib-verbs'. Use `ib-sdp' only if you cannot get
-`ib-verbs' working for some reason.
-
-   The `ib-verbs' client module accepts the following options:
-
-`non-blocking-connect [no|off|on|yes] (on)'
-     Whether to make the connection attempt asynchronous.
-
-`remote-port (24007)'
-     Server port to connect to.
-
-`remote-host *'
-     Hostname or IP address of the server. If the host name resolves to
-     multiple IP addresses, all of them will be tried in a round-robin
-     fashion. This feature can be used to implement fail-over.
-
-   The `ib-verbs' server module accepts the following options:
-
-`bind-address
(0.0.0.0)'
-     The local interface on which the server should listen for
-     requests. The default is to listen on all interfaces.
-
-`listen-port (24007)'
-     The local port to listen on.
-
-   The following options are common to both the client and server
-modules:
-
-   If you are familiar with InfiniBand jargon, the mode used by
-GlusterFS is "reliable connection-oriented channel transfer".
-
-`ib-verbs-work-request-send-count (64)'
-     Length of the send queue in datagrams. [Reason to
-     increase/decrease?]
-
-`ib-verbs-work-request-recv-count (64)'
-     Length of the receive queue in datagrams. [Reason to
-     increase/decrease?]
-
-`ib-verbs-work-request-send-size (128KB)'
-     Size of each datagram that is sent. [Reason to increase/decrease?]
-
-`ib-verbs-work-request-recv-size (128KB)'
-     Size of each datagram that is received. [Reason to
-     increase/decrease?]
-
-`ib-verbs-port (1)'
-     Port number for ib-verbs.
-
-`ib-verbs-mtu [256|512|1024|2048|4096] (2048)'
-     The Maximum Transmission Unit. [Reason to increase/decrease?]
-
-`ib-verbs-device-name (first device in the list)'
-     InfiniBand device to be used.
-
-   For maximum performance, you should ensure that the send/receive
-counts on both the client and server are the same.
-
-   ib-verbs is preferred over ib-sdp.
-
-
-File: user-guide.info, Node: Client protocol, Next: Server protocol, Prev: Transport modules, Up: Client and Server Translators
-
-4.2.2 Client
-------------
-
-     type protocol/client
-
-   The client translator enables the GlusterFS client to access a
-remote server's translator tree.
-
-`transport-type [tcp,ib-sdp,ib-verbs] (tcp)'
-     The transport type to use. You should use the client versions of
-     all the transport modules (`tcp', `ib-sdp', `ib-verbs').
-
-`remote-subvolume *'
-     The name of the volume on the remote host to attach to. Note that
-     this is _not_ the name of the `protocol/server' volume on the
-     server. It can be any volume defined on the server.
-
-`transport-timeout (120 seconds)'
-     Inactivity timeout. If a reply is expected and no activity takes
-     place on the connection within this time, the transport connection
-     will be broken, and a new connection will be attempted.
-
-
-File: user-guide.info, Node: Server protocol, Prev: Client protocol, Up: Client and Server Translators
-
-4.2.3 Server
-------------
-
-     type protocol/server
-
-   The server translator exports a translator tree and makes it
-accessible to remote GlusterFS clients.
-
-`client-volume-filename (/glusterfs-client.vol)'
-     The volume specification file to use for the client. This is the
-     file the client will receive when it is invoked with the
-     `--server' option (*note Client::).
-
-`transport-type [tcp,ib-verbs,ib-sdp] (tcp)'
-     The transport to use. You should use the server versions of all
-     the transport modules (`tcp', `ib-sdp', `ib-verbs').
-
-`auth.addr..allow '
-     IP addresses of the clients that are allowed to attach to the
-     specified volume. This can be a wildcard. For example, a wildcard
-     of the form `192.168.*.*' allows any host in the `192.168.x.x'
-     subnet to connect to the server.
-
-
-
-File: user-guide.info, Node: Clustering Translators, Next: Performance Translators, Prev: Client and Server Translators, Up: Translators
-
-4.3 Clustering Translators
-==========================
-
-The clustering translators are the most important GlusterFS
-translators, since it is these that make GlusterFS a cluster
-filesystem.
These translators together enable GlusterFS to access an -arbitrarily large amount of storage, and provide RAID-like redundancy -and distribution over the entire cluster. - - There are three clustering translators: *unify*, *replicate*, and -*stripe*. The unify translator aggregates storage from many server -nodes. The replicate translator provides file replication. The stripe -translator allows a file to be spread across many server nodes. The -following sections look at each of these translators in detail. - -* Menu: - -* Unify:: -* Replicate:: -* Stripe:: - - -File: user-guide.info, Node: Unify, Next: Replicate, Up: Clustering Translators - -4.3.1 Unify ------------ - - type cluster/unify - - The unify translator presents a `unified' view of all its -sub-volumes. That is, it makes the union of all its sub-volumes appear -as a single volume. It is the unify translator that gives GlusterFS the -ability to access an arbitrarily large amount of storage. - - For unify to work correctly, certain invariants need to be -maintained across the entire network. These are: - - * The directory structure of all the sub-volumes must be identical. - - * A particular file can exist on only one of the sub-volumes. - Phrasing it in another way, a pathname such as - `/home/calvin/homework.txt') is unique across the entire cluster. - - - -Looking at the second requirement, you might wonder how one can -accomplish storing redundant copies of a file, if no file can exist -multiple times. To answer, we must remember that these invariants are -from _unify's perspective_. A translator such as replicate at a lower -level in the translator tree than unify may subvert this picture. - - The first invariant might seem quite tedious to ensure. We shall see -later that this is not so, since unify's _self-heal_ mechanism takes -care of maintaining it. - - The second invariant implies that unify needs some way to decide -which file goes where. Unify makes use of _scheduler_ modules for this -purpose. - - When a file needs to be created, unify's scheduler decides upon the -sub-volume to be used to store the file. There are many schedulers -available, each using a different algorithm and suitable for different -purposes. - - The various schedulers are described in detail in the sections that -follow. - -4.3.1.1 ALU -........... - - option scheduler alu - - ALU stands for "Adaptive Least Usage". It is the most advanced -scheduler available in GlusterFS. It balances the load across volumes -taking several factors in account. It adapts itself to changing I/O -patterns according to its configuration. When properly configured, it -can eliminate the need for regular tuning of the filesystem to keep -volume load nicely balanced. - - The ALU scheduler is composed of multiple least-usage -sub-schedulers. Each sub-scheduler keeps track of a certain type of -load, for each of the sub-volumes, getting statistics from the -sub-volumes themselves. The sub-schedulers are these: - - * disk-usage: The used and free disk space on the volume. - - * read-usage: The amount of reading done from this volume. - - * write-usage: The amount of writing done to this volume. - - * open-files-usage: The number of files currently open from this - volume. - - * disk-speed-usage: The speed at which the disks are spinning. This - is a constant value and therefore not very useful. - - The ALU scheduler needs to know which of these sub-schedulers to use, -and in which order to evaluate them. 
This is done through the `option alu.order' configuration
-directive.
-
-   Each sub-scheduler needs to know two things: when to kick in (the
-entry-threshold), and how long to stay in control (the exit-threshold).
-For example, when unifying three disks of 100GB, keeping an exact
-balance of disk-usage is not necessary. Instead, there could be a 1GB
-margin, which can be used to nicely balance other factors, such as
-read-usage. The disk-usage scheduler can be told to kick in only when a
-certain threshold of discrepancy is passed, such as 1GB. Once it
-assumes control under this condition, it will write all subsequent data
-to the least-used volume. It is unwise to stop right after the values
-fall below the entry-threshold again, since that would make it very
-likely that the situation will recur very soon. Such a situation would
-cause the ALU to spend most of its time disk-usage scheduling, which is
-unfair to the other sub-schedulers. The exit-threshold therefore
-defines the amount of data that needs to be written to the least-used
-disk before control is relinquished again.
-
-   In addition to the sub-schedulers, the ALU scheduler also has
-"limits" options. These can stop the creation of new files on a volume
-once values drop below a certain threshold. For example, setting
-`option alu.limits.min-free-disk 5GB' will stop the scheduling of files
-to volumes that have less than 5GB of free disk space, leaving the
-files on that disk some room to grow.
-
-   The actual values you assign to the thresholds for sub-schedulers and
-limits depend on your situation. If you have fast-growing files, you'll
-want to stop file-creation on a disk much earlier than when hardly any
-of your files are growing. If you care less about disk-usage balance
-than about read-usage balance, you'll want a bigger disk-usage
-scheduler entry-threshold and a smaller read-usage scheduler
-entry-threshold.
-
-   For thresholds defining a size, values specifying "KB", "MB" and "GB"
-are allowed. For example: `option alu.limits.min-free-disk 5GB'.
-
-`alu.order * ("disk-usage:write-usage:read-usage:open-files-usage:disk-speed")'
-
-`alu.disk-usage.entry-threshold (1GB)'
-
-`alu.disk-usage.exit-threshold (512MB)'
-
-`alu.write-usage.entry-threshold <%> (25)'
-
-`alu.write-usage.exit-threshold <%> (5)'
-
-`alu.read-usage.entry-threshold <%> (25)'
-
-`alu.read-usage.exit-threshold <%> (5)'
-
-`alu.open-files-usage.entry-threshold (1000)'
-
-`alu.open-files-usage.exit-threshold (100)'
-
-`alu.limits.min-free-disk <%>'
-
-`alu.limits.max-open-files '
-
-4.3.1.2 Round Robin (RR)
-........................
-
-     option scheduler rr
-
-   The Round-Robin (RR) scheduler creates files in a round-robin
-fashion. Each client will have its own round-robin loop. When your
-files are mostly similar in size and I/O access pattern, this scheduler
-is a good choice. The RR scheduler checks for free disk space on the
-server before scheduling, so you can know when to add another server
-node. The default value of min-free-disk is 5% and is checked on file
-creation calls, with at least 10 seconds (by default) elapsing between
-two checks.
-
-   Options:
-`rr.limits.min-free-disk <%> (5)'
-     Minimum free disk space a node must have for RR to schedule a file
-     to it.
-
-`rr.refresh-interval (10 seconds)'
-     Time between two successive free disk space checks.
-
-4.3.1.3 Random
-..............
-
-     option scheduler random
-
-   The random scheduler schedules file creation randomly among its
-child nodes.
Like the round-robin scheduler, it also checks for a minimum amount of
-free disk space before scheduling a file to a node.
-
-`random.limits.min-free-disk <%> (5)'
-     Minimum free disk space a node must have for random to schedule a
-     file to it.
-
-`random.refresh-interval (10 seconds)'
-     Time between two successive free disk space checks.
-
-4.3.1.4 NUFA
-............
-
-     option scheduler nufa
-
-   It is common in many GlusterFS computing environments for all
-deployed machines to act as both servers and clients. For example, a
-research lab may have 40 workstations each with its own storage. All of
-these workstations might act as servers exporting a volume as well as
-clients accessing the entire cluster's storage. In such a situation,
-it makes sense to store locally created files on the local workstation
-itself (assuming files are accessed most by the workstation that
-created them). The Non-Uniform File Allocation (NUFA) scheduler
-accomplishes that.
-
-   NUFA gives the local system first priority for file creation over
-other nodes. If the local volume does not have more free disk space
-than a specified amount (5% by default), then NUFA schedules files
-among the other child volumes in a round-robin fashion.
-
-   NUFA is named after the similar strategy used for memory access,
-NUMA(1).
-
-`nufa.limits.min-free-disk <%> (5)'
-     Minimum disk space that must be free (local or remote) for NUFA to
-     schedule a file to it.
-
-`nufa.refresh-interval (10 seconds)'
-     Time between two successive free disk space checks.
-
-`nufa.local-volume-name '
-     The name of the volume corresponding to the local system. This
-     volume must be one of the children of the unify volume. This
-     option is mandatory.
-
-4.3.1.5 Namespace
-.................
-
-A namespace volume is needed for two reasons: it provides persistent
-inode numbers, and a file's name remains visible even when the node
-holding its data is down.
-
-   Namespace files are simply touched (they carry no data); the
-namespace is checked on every lookup.
-
-`namespace *'
-     Name of the namespace volume (which should be one of the unify
-     volume's children).
-
-`self-heal [on|off] (on)'
-     Enable/disable self-heal. Unless you know what you are doing, do
-     not disable self-heal.
-
-4.3.1.6 Self Heal
-.................
-
-   * When a 'lookup()/stat()' call is made on a directory for the first
-     time, a self-heal call is made, which checks the consistency of
-     its child nodes. If an entry is present on a storage node but not
-     in the namespace, that entry is created in the namespace, and vice
-     versa. A writedir() API was introduced for this purpose. The check
-     also covers permissions and uid/gid consistency.
-
-   * This check is also done when a server goes down and comes back up.
-
-   * If one starts with an empty namespace export but has data on the
-     storage nodes, a 'find . >/dev/null' or 'ls -lR >/dev/null' should
-     help build the namespace in one shot. Even otherwise, the
-     namespace is built on demand when a file is looked up for the
-     first time.
-
-   NOTE: Kernel 'Oops' messages have been seen with fuse-2.6.3 when the
-namespace is deleted on the backend while glusterfs is running. This
-issue is not present with fuse-2.6.5.
-
-   ---------- Footnotes ----------
-
-   (1) Non-Uniform Memory Access:
-
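-
-   Putting the unify pieces together, a volume declaration along the
-following lines is one plausible sketch (the brick and namespace volume
-names are placeholders for volumes that would have to be defined
-earlier in the same specification file):
-
-     volume unify-example
-       type cluster/unify
-       subvolumes brick1 brick2
-       option namespace brick-ns
-       option scheduler rr
-       option rr.limits.min-free-disk 5
-     end-volume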
-
-
-File: user-guide.info, Node: Replicate, Next: Stripe, Prev: Unify, Up: Clustering Translators
-
-4.3.2 Replicate (formerly AFR)
-------------------------------
-
-     type cluster/replicate
-
-   Replicate provides RAID-1 like functionality for GlusterFS.
-Replicate replicates files and directories across the subvolumes.
-Hence, if Replicate has four subvolumes, there will be four copies of
-all files and directories. Replicate provides high availability, i.e.,
-in case one of the subvolumes goes down (e.g. server crash, network
-disconnection), Replicate will still service the requests using the
-redundant copies.
-
-   Replicate also provides self-heal functionality, i.e., when the
-crashed servers come back up, the outdated files and directories will
-be updated with the latest versions. Replicate uses extended attributes
-of the backend file system to track the versioning of files and
-directories and provide the self-heal feature.
-
-     volume replicate-example
-       type cluster/replicate
-       subvolumes brick1 brick2 brick3
-     end-volume
-
-   This sample configuration will replicate all directories and files on
-brick1, brick2 and brick3.
-
-   All read operations happen from the first alive child. If all three
-sub-volumes are up, reads will be done from brick1; if brick1 is down,
-reads will be done from brick2. If a read() was being done on brick1
-and it goes down, replicate transparently falls back to brick2.
-
-   The next release of GlusterFS will add the following features:
-   * Ability to specify the sub-volume from which read operations are
-     to be done (this will help users who have one of the sub-volumes
-     as a local storage volume).
-
-   * Allow scheduling of read operations amongst the sub-volumes in a
-     round-robin fashion.
-
-   The order of the subvolumes list should be the same across all the
-'replicate's, as it is used for locking purposes.
-
-4.3.2.1 Self Heal
-.................
-
-Replicate has a self-heal feature, which updates outdated file and
-directory copies with the most recent versions. For example, consider
-the following config:
-
-     volume replicate-example
-       type cluster/replicate
-       subvolumes brick1 brick2
-     end-volume
-
-4.3.2.2 File self-heal
-......................
-
-Now if we create a file foo.txt on replicate-example, the file will be
-created on brick1 and brick2. The file will have two extended
-attributes associated with it in the backend filesystem. One is
-trusted.afr.createtime and the other is trusted.afr.version. The
-trusted.afr.createtime xattr holds the create time (in terms of seconds
-since the epoch) and trusted.afr.version is a number that is
-incremented each time a file is modified. This increment happens during
-close (in case any write was done before the close).
-
-   If brick1 goes down and we edit foo.txt, the version gets
-incremented. When brick1 comes back up and we open() foo.txt, replicate
-will check whether the versions are the same. If they are not, the
-outdated copy is replaced by the latest copy and its version is
-updated. After the sync, the open() proceeds in the usual manner and
-the application calling open() can continue accessing the file.
-
-   Suppose instead that brick1 goes down, we delete foo.txt, and then
-create a file with the same name again. When brick1 comes back up, the
-version on brick1 may well be higher than the version on brick2; this
-is where the createtime extended attribute helps in deciding which copy
-is the outdated one. Hence we need to consider both createtime and
-version to decide on the latest copy.
-
-   The version attribute is incremented during the close() call. The
-version will not be incremented if no write() was done. If the fd
-passed to close() was obtained from a create() call, we also create the
-createtime extended attribute.
-
-4.3.2.3 Directory self-heal
-...........................
- -Suppose brick1 goes down, we delete foo.txt, brick1 comes back up, now -we should not create foo.txt on brick2 but we should delete foo.txt on -brick1. We handle this situation by having the createtime and version -attribute on the directory similar to the file. when lookup() is done -on the directory, we compare the createtime/version attributes of the -copies and see which files needs to be deleted and delete those files -and update the extended attributes of the outdated directory copy. -Each time a directory is modified (a file or a subdirectory is created -or deleted inside the directory) and one of the subvols is down, we -increment the directory's version. - - lookup() is a call initiated by the kernel on a file or directory -just before any access to that file or directory. In glusterfs, by -default, lookup() will not be called in case it was called in the past -one second on that particular file or directory. - - The extended attributes can be seen in the backend filesystem using -the `getfattr' command. (`getfattr -n trusted.afr.version ') - -`debug [on|off] (off)' - -`self-heal [on|off] (on)' - -`replicate (*:1)' - -`lock-node (first child is used by default)' - - -File: user-guide.info, Node: Stripe, Prev: Replicate, Up: Clustering Translators - -4.3.3 Stripe ------------- - - type cluster/stripe - - The stripe translator distributes the contents of a file over its -sub-volumes. It does this by creating a file equal in size to the -total size of the file on each of its sub-volumes. It then writes only -a part of the file to each sub-volume, leaving the rest of it empty. -These empty regions are called `holes' in Unix terminology. The holes -do not consume any disk space. - - The diagram below makes this clear. - - - -You can configure stripe so that only filenames matching a pattern are -striped. You can also configure the size of the data to be stored on -each sub-volume. - -`block-size : (*:0 no striping)' - Distribute files matching `' over the sub-volumes, - storing at least `' on each sub-volume. For example, - - option block-size *.mpg:1M - - distributes all files ending in `.mpg', storing at least 1 MB on - each sub-volume. - - Any number of `block-size' option lines may be present, specifying - different sizes for different file name patterns. - - -File: user-guide.info, Node: Performance Translators, Next: Features Translators, Prev: Clustering Translators, Up: Translators - -4.4 Performance Translators -=========================== - -* Menu: - -* Read Ahead:: -* Write Behind:: -* IO Threads:: -* IO Cache:: -* Booster:: - - -File: user-guide.info, Node: Read Ahead, Next: Write Behind, Up: Performance Translators - -4.4.1 Read Ahead ----------------- - - type performance/read-ahead - - The read-ahead translator pre-fetches data in advance on every read. -This benefits applications that mostly process files in sequential -order, since the next block of data will already be available by the -time the application is done with the current one. - - Additionally, the read-ahead translator also behaves as a -read-aggregator. Many small read operations are combined and issued as -fewer, larger read requests to the server. - - Read-ahead deals in "pages" as the unit of data fetched. The page -size is configurable, as is the "page count", which is the number of -pages that are pre-fetched. - - Read-ahead is best used with InfiniBand (using the ib-verbs -transport). 
On FastEthernet and Gigabit Ethernet networks, GlusterFS -can achieve the link-maximum throughput even without read-ahead, making -it quite superflous. - - Note that read-ahead only happens if the reads are perfectly -sequential. If your application accesses data in a random fashion, -using read-ahead might actually lead to a performance loss, since -read-ahead will pointlessly fetch pages which won't be used by the -application. - - Options: -`page-size (256KB)' - The unit of data that is pre-fetched. - -`page-count (2)' - The number of pages that are pre-fetched. - -`force-atime-update [on|off|yes|no] (off|no)' - Whether to force an access time (atime) update on the file on - every read. Without this, the atime will be slightly imprecise, as - it will reflect the time when the read-ahead translator read the - data, not when the application actually read it. - - -File: user-guide.info, Node: Write Behind, Next: IO Threads, Prev: Read Ahead, Up: Performance Translators - -4.4.2 Write Behind ------------------- - - type performance/write-behind - - The write-behind translator improves the latency of a write -operation. It does this by relegating the write operation to the -background and returning to the application even as the write is in -progress. Using the write-behind translator, successive write requests -can be pipelined. This mode of write-behind operation is best used on -the client side, to enable decreased write latency for the application. - - The write-behind translator can also aggregate write requests. If the -`aggregate-size' option is specified, then successive writes upto that -size are accumulated and written in a single operation. This mode of -operation is best used on the server side, as this will decrease the -disk's head movement when multiple files are being written to in -parallel. - - The `aggregate-size' option has a default value of 128KB. Although -this works well for most users, you should always experiment with -different values to determine the one that will deliver maximum -performance. This is because the performance of write-behind depends on -your interconnect, size of RAM, and the work load. - -`aggregate-size (128KB)' - Amount of data to accumulate before doing a write - -`flush-behind [on|yes|off|no] (off|no)' - - -File: user-guide.info, Node: IO Threads, Next: IO Cache, Prev: Write Behind, Up: Performance Translators - -4.4.3 IO Threads ----------------- - - type performance/io-threads - - The IO threads translator is intended to increase the responsiveness -of the server to metadata operations by doing file I/O (read, write) in -a background thread. Since the GlusterFS server is single-threaded, -using the IO threads translator can significantly improve performance. -This translator is best used on the server side, loaded just below the -server protocol translator. - - IO threads operates by handing out read and write requests to a -separate thread. The total number of threads in existence at a time is -constant, and configurable. - -`thread-count (1)' - Number of threads to use. - - -File: user-guide.info, Node: IO Cache, Next: Booster, Prev: IO Threads, Up: Performance Translators - -4.4.4 IO Cache --------------- - - type performance/io-cache - - The IO cache translator caches data that has been read. 
This is -useful if many applications read the same data multiple times, and if -reads are much more frequent than writes (for example, IO caching may be -useful in a web hosting environment, where most clients will simply -read some files and only a few will write to them). - - The IO cache translator reads data from its child in `page-size' -chunks. It caches data upto `cache-size' bytes. The cache is -maintained as a prioritized least-recently-used (LRU) list, with -priorities determined by user-specified patterns to match filenames. - - When the IO cache translator detects a write operation, the cache -for that file is flushed. - - The IO cache translator periodically verifies the consistency of -cached data, using the modification times on the files. The -verification timeout is configurable. - -`page-size (128KB)' - Size of a page. - -`cache-size (n) (32MB)' - Total amount of data to be cached. - -`force-revalidate-timeout (1)' - Timeout to force a cache consistency verification, in seconds. - -`priority (*:0)' - Filename patterns listed in order of priority. - - -File: user-guide.info, Node: Booster, Prev: IO Cache, Up: Performance Translators - -4.4.5 Booster -------------- - - type performance/booster - - The booster translator gives applications a faster path to -communicate read and write requests to GlusterFS. Normally, all -requests to GlusterFS from applications go through FUSE, as indicated -in *note Filesystems in Userspace::. Using the booster translator in -conjunction with the GlusterFS booster shared library, an application -can bypass the FUSE path and send read/write requests directly to the -GlusterFS client process. - - The booster mechanism consists of two parts: the booster translator, -and the booster shared library. The booster translator is meant to be -loaded on the client side, usually at the root of the translator tree. -The booster shared library should be `LD_PRELOAD'ed with the -application. - - The booster translator when loaded opens a Unix domain socket and -listens for read/write requests on it. The booster shared library -intercepts read and write system calls and sends the requests to the -GlusterFS process directly using the Unix domain socket, bypassing FUSE. -This leads to superior performance. - - Once you've loaded the booster translator in your volume -specification file, you can start your application as: - - $ LD_PRELOAD=/usr/local/bin/glusterfs-booster.so your_app - - The booster translator accepts no options. - - -File: user-guide.info, Node: Features Translators, Next: Miscellaneous Translators, Prev: Performance Translators, Up: Translators - -4.5 Features Translators -======================== - -* Menu: - -* POSIX Locks:: -* Fixed ID:: - - -File: user-guide.info, Node: POSIX Locks, Next: Fixed ID, Up: Features Translators - -4.5.1 POSIX Locks ------------------ - - type features/posix-locks - - This translator provides storage independent POSIX record locking -support (`fcntl' locking). Typically you'll want to load this on the -server side, just above the POSIX storage translator. Using this -translator you can get both advisory locking and mandatory locking -support. It also handles `flock()' locks properly. - - Caveat: Consider a file that does not have its mandatory locking bits -(+setgid, -group execution) turned on. Assume that this file is now -opened by a process on a client that has the write-behind xlator -loaded. 
The write-behind xlator does not cache anything for files which -have mandatory locking enabled, to avoid incoherence. Let's say that -mandatory locking is now enabled on this file through another client. -The former client will not know about this change, and write-behind may -erroneously report a write as being successful when in fact it would -fail due to the region it is writing to being locked. - - There seems to be no easy way to fix this. To work around this -problem, it is recommended that you never enable the mandatory bits on -a file while it is open. - -`mandatory [on|off] (on)' - Turns mandatory locking on. - - -File: user-guide.info, Node: Fixed ID, Prev: POSIX Locks, Up: Features Translators - -4.5.2 Fixed ID --------------- - - type features/fixed-id - - The fixed ID translator makes all filesystem requests from the client -to appear to be coming from a fixed, specified UID/GID, regardless of -which user actually initiated the request. - -`fixed-uid [if not set, not used]' - The UID to send to the server - -`fixed-gid [if not set, not used]' - The GID to send to the server - - -File: user-guide.info, Node: Miscellaneous Translators, Prev: Features Translators, Up: Translators - -4.6 Miscellaneous Translators -============================= - -* Menu: - -* ROT-13:: -* Trace:: - - -File: user-guide.info, Node: ROT-13, Next: Trace, Up: Miscellaneous Translators - -4.6.1 ROT-13 ------------- - - type encryption/rot-13 - - ROT-13 is a toy translator that can "encrypt" and "decrypt" file -contents using the ROT-13 algorithm. ROT-13 is a trivial algorithm that -rotates each alphabet by thirteen places. Thus, 'A' becomes 'N', 'B' -becomes 'O', and 'Z' becomes 'M'. - - It goes without saying that you shouldn't use this translator if you -need _real_ encryption (a future release of GlusterFS will have real -encryption translators). - -`encrypt-write [on|off] (on)' - Whether to encrypt on write - -`decrypt-read [on|off] (on)' - Whether to decrypt on read - - -File: user-guide.info, Node: Trace, Prev: ROT-13, Up: Miscellaneous Translators - -4.6.2 Trace ------------ - - type debug/trace - - The trace translator is intended for debugging purposes. When -loaded, it logs all the system calls received by the server or client -(wherever trace is loaded), their arguments, and the results. You must -use a GlusterFS log level of DEBUG (See *note Running GlusterFS::) for -trace to work. - - Sample trace output (lines have been wrapped for readability): - 2007-10-30 00:08:58 D [trace.c:1579:trace_opendir] trace: callid: 68 - (*this=0x8059e40, loc=0x8091984 {path=/iozone3_283, inode=0x8091f00}, - fd=0x8091d50) - - 2007-10-30 00:08:58 D [trace.c:630:trace_opendir_cbk] trace: - (*this=0x8059e40, op_ret=4, op_errno=1, fd=0x8091d50) - - 2007-10-30 00:08:58 D [trace.c:1602:trace_readdir] trace: callid: 69 - (*this=0x8059e40, size=4096, offset=0 fd=0x8091d50) - - 2007-10-30 00:08:58 D [trace.c:215:trace_readdir_cbk] trace: - (*this=0x8059e40, op_ret=0, op_errno=0, count=4) - - 2007-10-30 00:08:58 D [trace.c:1624:trace_closedir] trace: callid: 71 - (*this=0x8059e40, *fd=0x8091d50) - - 2007-10-30 00:08:58 D [trace.c:809:trace_closedir_cbk] trace: - (*this=0x8059e40, op_ret=0, op_errno=1) - - -File: user-guide.info, Node: Usage Scenarios, Next: Troubleshooting, Prev: Translators, Up: Top - -5 Usage Scenarios -***************** - -5.1 Advanced Striping -===================== - -This section is based on the Advanced Striping tutorial written by -Anand Avati on the GlusterFS wiki (1). 
-
-5.1.1 Mixed Storage Requirements
---------------------------------
-
-There are two ways of scheduling the I/O: at the file level (using the
-unify translator) and at the block level (using the stripe translator).
-Striped I/O is good for files that are potentially large and require
-high parallel throughput (for example, a single 400GB file being
-accessed by hundreds or thousands of systems simultaneously and
-randomly). For most cases, file level scheduling works best.
-
-   In the real world, it is desirable to mix file level and block level
-scheduling on a single storage volume. Alternatively, users can choose
-to have two separate volumes and hence two mount points, but the
-applications may demand a single storage system to host both.
-
-   This document explains how to mix file level scheduling with stripe.
-
-5.1.2 Configuration Brief
--------------------------
-
-This setup demonstrates how users can configure the unify translator
-with an appropriate I/O scheduler for file level scheduling, and stripe
-only for files matching certain patterns. This way, GlusterFS chooses
-the appropriate I/O profile and knows how to handle both types of data
-efficiently.
-
-   A simple technique to achieve this effect is to create a stripe set
-of unify and stripe blocks, with unify as the first sub-volume. Files
-that do not match the stripe policy are passed on to the first (unify)
-sub-volume and are in turn scheduled across the cluster using its file
-level I/O scheduler.
-
-5.1.3 Preparing the GlusterFS Environment
------------------------------------------
-
-Create the directories /export/for-namespace, /export/for-unify and
-/export/for-stripe on all the storage bricks (these are the directories
-used in the volume specification below).
-
-   Place the following server and client volume spec files under
-/etc/glusterfs (or the appropriate installed path) and replace the IP
-addresses / access control fields to match your environment.
-
-     ## file: /etc/glusterfs/glusterfsd.vol
-     volume posix-unify
-       type storage/posix
-       option directory /export/for-unify
-     end-volume
-
-     volume posix-stripe
-       type storage/posix
-       option directory /export/for-stripe
-     end-volume
-
-     volume posix-namespace
-       type storage/posix
-       option directory /export/for-namespace
-     end-volume
-
-     volume server
-       type protocol/server
-       option transport-type tcp
-       option auth.addr.posix-unify.allow 192.168.1.*
-       option auth.addr.posix-stripe.allow 192.168.1.*
-       option auth.addr.posix-namespace.allow 192.168.1.*
-       subvolumes posix-unify posix-stripe posix-namespace
-     end-volume
-
-     ## file: /etc/glusterfs/glusterfs.vol
-     volume client-namespace
-       type protocol/client
-       option transport-type tcp
-       option remote-host 192.168.1.1
-       option remote-subvolume posix-namespace
-     end-volume
-
-     volume client-unify-1
-       type protocol/client
-       option transport-type tcp
-       option remote-host 192.168.1.1
-       option remote-subvolume posix-unify
-     end-volume
-
-     volume client-unify-2
-       type protocol/client
-       option transport-type tcp
-       option remote-host 192.168.1.2
-       option remote-subvolume posix-unify
-     end-volume
-
-     volume client-unify-3
-       type protocol/client
-       option transport-type tcp
-       option remote-host 192.168.1.3
-       option remote-subvolume posix-unify
-     end-volume
-
-     volume client-unify-4
-       type protocol/client
-       option transport-type tcp
-       option remote-host 192.168.1.4
-       option remote-subvolume posix-unify
-     end-volume
-
-     volume client-stripe-1
-       type protocol/client
-       option transport-type tcp
-       option remote-host 192.168.1.1
-       option remote-subvolume posix-stripe
-     end-volume
-
-     volume client-stripe-2
-       type protocol/client
-       option transport-type tcp
-       option remote-host 192.168.1.2
-       option remote-subvolume posix-stripe
-     end-volume
-
-     volume client-stripe-3
-       type protocol/client
-       option transport-type tcp
-       option remote-host 192.168.1.3
-       option remote-subvolume posix-stripe
-     end-volume
-
-     volume client-stripe-4
-       type protocol/client
-       option transport-type tcp
-       option remote-host 192.168.1.4
-       option remote-subvolume posix-stripe
-     end-volume
-
-     volume unify
-       type cluster/unify
-       option scheduler rr
-       option namespace client-namespace # unify needs a dedicated namespace volume
-       subvolumes client-unify-1 client-unify-2 client-unify-3 client-unify-4
-     end-volume
-
-     volume stripe
-       type cluster/stripe
-       option block-size *.img:2MB # All files ending with .img are striped with 2MB stripe block size.
-       subvolumes unify client-stripe-1 client-stripe-2 client-stripe-3 client-stripe-4
-     end-volume
-
-   Bring up the storage:
-
-   Starting the GlusterFS server: if you have installed from a binary
-package, you can start the service through its init.d startup script.
-If not:
-
-     [root@server]# glusterfsd
-
-   Mounting GlusterFS volumes:
-
-     [root@client]# glusterfs -s [BRICK-IP-ADDRESS] /mnt/cluster
-
-   Improving upon this setup:
-
-   The InfiniBand Verbs (RDMA) transport is much faster than the TCP/IP
-GigE transport.
-
-   Use of performance translators such as read-ahead, write-behind,
-io-cache, io-threads and booster is recommended.
-
-   Replace the round-robin (rr) scheduler with ALU to handle more
-dynamic storage environments.
-
-   ---------- Footnotes ----------
-
-   (1)
-http://gluster.org/docs/index.php/Mixing_Striped_and_Regular_Files
-
-
-File: user-guide.info, Node: Troubleshooting, Next: GNU Free Documentation Licence, Prev: Usage Scenarios, Up: Top
-
-6 Troubleshooting
-*****************
-
-This chapter is a general troubleshooting guide to GlusterFS. 
It lists -common GlusterFS server and client error messages, debugging hints, and -concludes with the suggested procedure to report bugs in GlusterFS. - -6.1 GlusterFS error messages -============================ - -6.1.1 Server errors -------------------- - - glusterfsd: FATAL: could not open specfile: - '/etc/glusterfs/glusterfsd.vol' - - The GlusterFS server expects the volume specification file to be at -`/etc/glusterfs/glusterfsd.vol'. The example specification file will be -installed as `/etc/glusterfs/glusterfsd.vol.sample'. You need to edit -it and rename it, or provide a different specification file using the -`--spec-file' command line option (See *note Server::). - - gf_log_init: failed to open logfile "/usr/var/log/glusterfs/glusterfsd.log" - (Permission denied) - - You don't have permission to create files in the -`/usr/var/log/glusterfs' directory. Make sure you are running GlusterFS -as root. Alternatively, specify a different path for the log file using -the `--log-file' option (See *note Server::). - -6.1.2 Client errors -------------------- - - fusermount: failed to access mountpoint /mnt: - Transport endpoint is not connected - - A previous failed (or hung) mount of GlusterFS is preventing it from -being mounted again in the same location. The fix is to do: - - # umount /mnt - - and try mounting again. - - *"Transport endpoint is not connected".* - - If you get this error when you try a command such as `ls' or `cat', -it means the GlusterFS mount did not succeed. Try running GlusterFS in -`DEBUG' logging level and study the log messages to discover the cause. - - *"Connect to server failed", "SERVER-ADDRESS: Connection refused".* - - GluserFS Server is not running or dead. Check your network -connections and firewall settings. To check if the server is reachable, -try: - - telnet IP-ADDRESS 24007 - - If the server is accessible, your `telnet' command should connect and -block. If not you will see an error message such as `telnet: Unable to -connect to remote host: Connection refused'. 24007 is the default -GlusterFS port. If you have changed it, then use the corresponding port -instead. - - gf_log_init: failed to open logfile "/usr/var/log/glusterfs/glusterfs.log" - (Permission denied) - - You don't have permission to create files in the -`/usr/var/log/glusterfs' directory. Make sure you are running GlusterFS -as root. Alternatively, specify a different path for the log file using -the `--log-file' option (See *note Client::). - -6.2 FUSE error messages -======================= - -`modprobe fuse' fails with: "Unknown symbol in module, or unknown -parameter". - - If you are using fuse-2.6.x on Redhat Enterprise Linux Work Station 4 -and Advanced Server 4 with 2.6.9-42.ELlargesmp, 2.6.9-42.ELsmp, -2.6.9-42.EL kernels and get this error while loading FUSE kernel -module, you need to apply the following patch. - - For fuse-2.6.2: - - - - For fuse-2.6.3: - - - -6.3 AppArmour and GlusterFS -=========================== - -Under OpenSuSE GNU/Linux, the AppArmour security feature does not allow -GlusterFS to create temporary files or network socket connections even -while running as root. You will see error messages like `Unable to open -log file: Operation not permitted' or `Connection refused'. Disabling -AppArmour using YaST or properly configuring AppArmour to recognize -`glusterfsd' or `glusterfs'/`fusermount' should solve the problem. 
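-
-   For example, assuming the AppArmour utilities (`aa-status' and
-`aa-complain') are installed and that your distribution ships profiles
-for the GlusterFS binaries, you can put those profiles into complain
-mode instead of disabling AppArmour entirely. The binary paths below
-are only illustrative; adjust them to your installation prefix:
-
-     # aa-status
-     # aa-complain /usr/local/sbin/glusterfsd
-     # aa-complain /usr/local/sbin/glusterfs
-
-   `aa-status' lists the profiles that are currently loaded. In
-complain mode, violations are logged but no longer blocked, which also
-makes it easier to work out the rules a proper profile would need.
-Consult your distribution's AppArmour documentation for the exact
-profile names and for making the change permanent.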
- -6.4 Reporting a bug -=================== - -If you encounter a bug in GlusterFS, please follow the below guidelines -when you report it to the mailing list. Be sure to report it! User -feedback is crucial to the health of the project and we value it highly. - -6.4.1 General instructions --------------------------- - -When running GlusterFS in a non-production environment, be sure to -build it with the following command: - - $ make CFLAGS='-g -O0 -DDEBUG' - - This includes debugging information which will be helpful in getting -backtraces (see below) and also disable optimization. Enabling -optimization can result in incorrect line numbers being reported to gdb. - -6.4.2 Volume specification files --------------------------------- - -Attach all relevant server and client spec files you were using when -you encountered the bug. Also tell us details of your setup, i.e., how -many clients and how many servers. - -6.4.3 Log files ---------------- - -Set the loglevel of your client and server programs to DEBUG (by -passing the -L DEBUG option) and attach the log files with your bug -report. Obviously, if only the client is failing (for example), you -only need to send us the client log file. - -6.4.4 Backtrace ---------------- - -If GlusterFS has encountered a segmentation fault or has crashed for -some other reason, include the backtrace with the bug report. You can -get the backtrace using the following procedure. - - Run the GlusterFS client or server inside gdb. - - $ gdb ./glusterfs - (gdb) set args -f client.spec -N -l/path/to/log/file -LDEBUG /mnt/point - (gdb) run - - Now when the process segfaults, you can get the backtrace by typing: - - (gdb) bt - - If the GlusterFS process has crashed and dumped a core file (you can -find this in / if running as a daemon and in the current directory -otherwise), you can do: - - $ gdb /path/to/glusterfs /path/to/core. - - and then get the backtrace. - - If the GlusterFS server or client seems to be hung, then you can get -the backtrace by attaching gdb to the process. First get the `PID' of -the process (using ps), and then do: - - $ gdb ./glusterfs - - Press Ctrl-C to interrupt the process and then generate the -backtrace. - -6.4.5 Reproducing the bug -------------------------- - -If the bug is reproducible, please include the steps necessary to do -so. If the bug is not reproducible, send us the bug report anyway. - -6.4.6 Other information ------------------------ - -If you think it is relevant, send us also the version of FUSE you're -using, the kernel version, platform. - - -File: user-guide.info, Node: GNU Free Documentation Licence, Next: Index, Prev: Troubleshooting, Up: Top - -Appendix A GNU Free Documentation Licence -***************************************** - - Version 1.2, November 2002 - - Copyright (C) 2000,2001,2002 Free Software Foundation, Inc. - 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA - - Everyone is permitted to copy and distribute verbatim copies - of this license document, but changing it is not allowed. - - 0. PREAMBLE - - The purpose of this License is to make a manual, textbook, or other - functional and useful document "free" in the sense of freedom: to - assure everyone the effective freedom to copy and redistribute it, - with or without modifying it, either commercially or - noncommercially. Secondarily, this License preserves for the - author and publisher a way to get credit for their work, while not - being considered responsible for modifications made by others. 
- - This License is a kind of "copyleft", which means that derivative - works of the document must themselves be free in the same sense. - It complements the GNU General Public License, which is a copyleft - license designed for free software. - - We have designed this License in order to use it for manuals for - free software, because free software needs free documentation: a - free program should come with manuals providing the same freedoms - that the software does. But this License is not limited to - software manuals; it can be used for any textual work, regardless - of subject matter or whether it is published as a printed book. - We recommend this License principally for works whose purpose is - instruction or reference. - - 1. APPLICABILITY AND DEFINITIONS - - This License applies to any manual or other work, in any medium, - that contains a notice placed by the copyright holder saying it - can be distributed under the terms of this License. Such a notice - grants a world-wide, royalty-free license, unlimited in duration, - to use that work under the conditions stated herein. The - "Document", below, refers to any such manual or work. Any member - of the public is a licensee, and is addressed as "you". You - accept the license if you copy, modify or distribute the work in a - way requiring permission under copyright law. - - A "Modified Version" of the Document means any work containing the - Document or a portion of it, either copied verbatim, or with - modifications and/or translated into another language. - - A "Secondary Section" is a named appendix or a front-matter section - of the Document that deals exclusively with the relationship of the - publishers or authors of the Document to the Document's overall - subject (or to related matters) and contains nothing that could - fall directly within that overall subject. (Thus, if the Document - is in part a textbook of mathematics, a Secondary Section may not - explain any mathematics.) The relationship could be a matter of - historical connection with the subject or with related matters, or - of legal, commercial, philosophical, ethical or political position - regarding them. - - The "Invariant Sections" are certain Secondary Sections whose - titles are designated, as being those of Invariant Sections, in - the notice that says that the Document is released under this - License. If a section does not fit the above definition of - Secondary then it is not allowed to be designated as Invariant. - The Document may contain zero Invariant Sections. If the Document - does not identify any Invariant Sections then there are none. - - The "Cover Texts" are certain short passages of text that are - listed, as Front-Cover Texts or Back-Cover Texts, in the notice - that says that the Document is released under this License. A - Front-Cover Text may be at most 5 words, and a Back-Cover Text may - be at most 25 words. - - A "Transparent" copy of the Document means a machine-readable copy, - represented in a format whose specification is available to the - general public, that is suitable for revising the document - straightforwardly with generic text editors or (for images - composed of pixels) generic paint programs or (for drawings) some - widely available drawing editor, and that is suitable for input to - text formatters or for automatic translation to a variety of - formats suitable for input to text formatters. 
A copy made in an - otherwise Transparent file format whose markup, or absence of - markup, has been arranged to thwart or discourage subsequent - modification by readers is not Transparent. An image format is - not Transparent if used for any substantial amount of text. A - copy that is not "Transparent" is called "Opaque". - - Examples of suitable formats for Transparent copies include plain - ASCII without markup, Texinfo input format, LaTeX input format, - SGML or XML using a publicly available DTD, and - standard-conforming simple HTML, PostScript or PDF designed for - human modification. Examples of transparent image formats include - PNG, XCF and JPG. Opaque formats include proprietary formats that - can be read and edited only by proprietary word processors, SGML or - XML for which the DTD and/or processing tools are not generally - available, and the machine-generated HTML, PostScript or PDF - produced by some word processors for output purposes only. - - The "Title Page" means, for a printed book, the title page itself, - plus such following pages as are needed to hold, legibly, the - material this License requires to appear in the title page. For - works in formats which do not have any title page as such, "Title - Page" means the text near the most prominent appearance of the - work's title, preceding the beginning of the body of the text. - - A section "Entitled XYZ" means a named subunit of the Document - whose title either is precisely XYZ or contains XYZ in parentheses - following text that translates XYZ in another language. (Here XYZ - stands for a specific section name mentioned below, such as - "Acknowledgements", "Dedications", "Endorsements", or "History".) - To "Preserve the Title" of such a section when you modify the - Document means that it remains a section "Entitled XYZ" according - to this definition. - - The Document may include Warranty Disclaimers next to the notice - which states that this License applies to the Document. These - Warranty Disclaimers are considered to be included by reference in - this License, but only as regards disclaiming warranties: any other - implication that these Warranty Disclaimers may have is void and - has no effect on the meaning of this License. - - 2. VERBATIM COPYING - - You may copy and distribute the Document in any medium, either - commercially or noncommercially, provided that this License, the - copyright notices, and the license notice saying this License - applies to the Document are reproduced in all copies, and that you - add no other conditions whatsoever to those of this License. You - may not use technical measures to obstruct or control the reading - or further copying of the copies you make or distribute. However, - you may accept compensation in exchange for copies. If you - distribute a large enough number of copies you must also follow - the conditions in section 3. - - You may also lend copies, under the same conditions stated above, - and you may publicly display copies. - - 3. COPYING IN QUANTITY - - If you publish printed copies (or copies in media that commonly - have printed covers) of the Document, numbering more than 100, and - the Document's license notice requires Cover Texts, you must - enclose the copies in covers that carry, clearly and legibly, all - these Cover Texts: Front-Cover Texts on the front cover, and - Back-Cover Texts on the back cover. Both covers must also clearly - and legibly identify you as the publisher of these copies. 
The - front cover must present the full title with all words of the - title equally prominent and visible. You may add other material - on the covers in addition. Copying with changes limited to the - covers, as long as they preserve the title of the Document and - satisfy these conditions, can be treated as verbatim copying in - other respects. - - If the required texts for either cover are too voluminous to fit - legibly, you should put the first ones listed (as many as fit - reasonably) on the actual cover, and continue the rest onto - adjacent pages. - - If you publish or distribute Opaque copies of the Document - numbering more than 100, you must either include a - machine-readable Transparent copy along with each Opaque copy, or - state in or with each Opaque copy a computer-network location from - which the general network-using public has access to download - using public-standard network protocols a complete Transparent - copy of the Document, free of added material. If you use the - latter option, you must take reasonably prudent steps, when you - begin distribution of Opaque copies in quantity, to ensure that - this Transparent copy will remain thus accessible at the stated - location until at least one year after the last time you - distribute an Opaque copy (directly or through your agents or - retailers) of that edition to the public. - - It is requested, but not required, that you contact the authors of - the Document well before redistributing any large number of - copies, to give them a chance to provide you with an updated - version of the Document. - - 4. MODIFICATIONS - - You may copy and distribute a Modified Version of the Document - under the conditions of sections 2 and 3 above, provided that you - release the Modified Version under precisely this License, with - the Modified Version filling the role of the Document, thus - licensing distribution and modification of the Modified Version to - whoever possesses a copy of it. In addition, you must do these - things in the Modified Version: - - A. Use in the Title Page (and on the covers, if any) a title - distinct from that of the Document, and from those of - previous versions (which should, if there were any, be listed - in the History section of the Document). You may use the - same title as a previous version if the original publisher of - that version gives permission. - - B. List on the Title Page, as authors, one or more persons or - entities responsible for authorship of the modifications in - the Modified Version, together with at least five of the - principal authors of the Document (all of its principal - authors, if it has fewer than five), unless they release you - from this requirement. - - C. State on the Title page the name of the publisher of the - Modified Version, as the publisher. - - D. Preserve all the copyright notices of the Document. - - E. Add an appropriate copyright notice for your modifications - adjacent to the other copyright notices. - - F. Include, immediately after the copyright notices, a license - notice giving the public permission to use the Modified - Version under the terms of this License, in the form shown in - the Addendum below. - - G. Preserve in that license notice the full lists of Invariant - Sections and required Cover Texts given in the Document's - license notice. - - H. Include an unaltered copy of this License. - - I. 
Preserve the section Entitled "History", Preserve its Title, - and add to it an item stating at least the title, year, new - authors, and publisher of the Modified Version as given on - the Title Page. If there is no section Entitled "History" in - the Document, create one stating the title, year, authors, - and publisher of the Document as given on its Title Page, - then add an item describing the Modified Version as stated in - the previous sentence. - - J. Preserve the network location, if any, given in the Document - for public access to a Transparent copy of the Document, and - likewise the network locations given in the Document for - previous versions it was based on. These may be placed in - the "History" section. You may omit a network location for a - work that was published at least four years before the - Document itself, or if the original publisher of the version - it refers to gives permission. - - K. For any section Entitled "Acknowledgements" or "Dedications", - Preserve the Title of the section, and preserve in the - section all the substance and tone of each of the contributor - acknowledgements and/or dedications given therein. - - L. Preserve all the Invariant Sections of the Document, - unaltered in their text and in their titles. Section numbers - or the equivalent are not considered part of the section - titles. - - M. Delete any section Entitled "Endorsements". Such a section - may not be included in the Modified Version. - - N. Do not retitle any existing section to be Entitled - "Endorsements" or to conflict in title with any Invariant - Section. - - O. Preserve any Warranty Disclaimers. - - If the Modified Version includes new front-matter sections or - appendices that qualify as Secondary Sections and contain no - material copied from the Document, you may at your option - designate some or all of these sections as invariant. To do this, - add their titles to the list of Invariant Sections in the Modified - Version's license notice. These titles must be distinct from any - other section titles. - - You may add a section Entitled "Endorsements", provided it contains - nothing but endorsements of your Modified Version by various - parties--for example, statements of peer review or that the text - has been approved by an organization as the authoritative - definition of a standard. - - You may add a passage of up to five words as a Front-Cover Text, - and a passage of up to 25 words as a Back-Cover Text, to the end - of the list of Cover Texts in the Modified Version. Only one - passage of Front-Cover Text and one of Back-Cover Text may be - added by (or through arrangements made by) any one entity. If the - Document already includes a cover text for the same cover, - previously added by you or by arrangement made by the same entity - you are acting on behalf of, you may not add another; but you may - replace the old one, on explicit permission from the previous - publisher that added the old one. - - The author(s) and publisher(s) of the Document do not by this - License give permission to use their names for publicity for or to - assert or imply endorsement of any Modified Version. - - 5. 
COMBINING DOCUMENTS - - You may combine the Document with other documents released under - this License, under the terms defined in section 4 above for - modified versions, provided that you include in the combination - all of the Invariant Sections of all of the original documents, - unmodified, and list them all as Invariant Sections of your - combined work in its license notice, and that you preserve all - their Warranty Disclaimers. - - The combined work need only contain one copy of this License, and - multiple identical Invariant Sections may be replaced with a single - copy. If there are multiple Invariant Sections with the same name - but different contents, make the title of each such section unique - by adding at the end of it, in parentheses, the name of the - original author or publisher of that section if known, or else a - unique number. Make the same adjustment to the section titles in - the list of Invariant Sections in the license notice of the - combined work. - - In the combination, you must combine any sections Entitled - "History" in the various original documents, forming one section - Entitled "History"; likewise combine any sections Entitled - "Acknowledgements", and any sections Entitled "Dedications". You - must delete all sections Entitled "Endorsements." - - 6. COLLECTIONS OF DOCUMENTS - - You may make a collection consisting of the Document and other - documents released under this License, and replace the individual - copies of this License in the various documents with a single copy - that is included in the collection, provided that you follow the - rules of this License for verbatim copying of each of the - documents in all other respects. - - You may extract a single document from such a collection, and - distribute it individually under this License, provided you insert - a copy of this License into the extracted document, and follow - this License in all other respects regarding verbatim copying of - that document. - - 7. AGGREGATION WITH INDEPENDENT WORKS - - A compilation of the Document or its derivatives with other - separate and independent documents or works, in or on a volume of - a storage or distribution medium, is called an "aggregate" if the - copyright resulting from the compilation is not used to limit the - legal rights of the compilation's users beyond what the individual - works permit. When the Document is included in an aggregate, this - License does not apply to the other works in the aggregate which - are not themselves derivative works of the Document. - - If the Cover Text requirement of section 3 is applicable to these - copies of the Document, then if the Document is less than one half - of the entire aggregate, the Document's Cover Texts may be placed - on covers that bracket the Document within the aggregate, or the - electronic equivalent of covers if the Document is in electronic - form. Otherwise they must appear on printed covers that bracket - the whole aggregate. - - 8. TRANSLATION - - Translation is considered a kind of modification, so you may - distribute translations of the Document under the terms of section - 4. Replacing Invariant Sections with translations requires special - permission from their copyright holders, but you may include - translations of some or all Invariant Sections in addition to the - original versions of these Invariant Sections. 
You may include a - translation of this License, and all the license notices in the - Document, and any Warranty Disclaimers, provided that you also - include the original English version of this License and the - original versions of those notices and disclaimers. In case of a - disagreement between the translation and the original version of - this License or a notice or disclaimer, the original version will - prevail. - - If a section in the Document is Entitled "Acknowledgements", - "Dedications", or "History", the requirement (section 4) to - Preserve its Title (section 1) will typically require changing the - actual title. - - 9. TERMINATION - - You may not copy, modify, sublicense, or distribute the Document - except as expressly provided for under this License. Any other - attempt to copy, modify, sublicense or distribute the Document is - void, and will automatically terminate your rights under this - License. However, parties who have received copies, or rights, - from you under this License will not have their licenses - terminated so long as such parties remain in full compliance. - - 10. FUTURE REVISIONS OF THIS LICENSE - - The Free Software Foundation may publish new, revised versions of - the GNU Free Documentation License from time to time. Such new - versions will be similar in spirit to the present version, but may - differ in detail to address new problems or concerns. See - `http://www.gnu.org/copyleft/'. - - Each version of the License is given a distinguishing version - number. If the Document specifies that a particular numbered - version of this License "or any later version" applies to it, you - have the option of following the terms and conditions either of - that specified version or of any later version that has been - published (not as a draft) by the Free Software Foundation. If - the Document does not specify a version number of this License, - you may choose any version ever published (not as a draft) by the - Free Software Foundation. - -A.0.1 ADDENDUM: How to use this License for your documents ----------------------------------------------------------- - -To use this License in a document you have written, include a copy of -the License in the document and put the following copyright and license -notices just after the title page: - - Copyright (C) YEAR YOUR NAME. - Permission is granted to copy, distribute and/or modify this document - under the terms of the GNU Free Documentation License, Version 1.2 - or any later version published by the Free Software Foundation; - with no Invariant Sections, no Front-Cover Texts, and no Back-Cover - Texts. A copy of the license is included in the section entitled ``GNU - Free Documentation License''. - - If you have Invariant Sections, Front-Cover Texts and Back-Cover -Texts, replace the "with...Texts." line with this: - - with the Invariant Sections being LIST THEIR TITLES, with - the Front-Cover Texts being LIST, and with the Back-Cover Texts - being LIST. - - If you have Invariant Sections without Cover Texts, or some other -combination of the three, merge those two alternatives to suit the -situation. - - If your document contains nontrivial examples of program code, we -recommend releasing these examples in parallel under your choice of -free software license, such as the GNU General Public License, to -permit their use in free software. - - -File: user-guide.info, Node: Index, Prev: GNU Free Documentation Licence, Up: Top - -Index -***** - -[index] -* Menu: - -* alu (scheduler): Unify. 
(line 49) -* AppArmour: Troubleshooting. (line 96) -* arch: Getting GlusterFS. (line 6) -* booster: Booster. (line 6) -* commercial support: Introduction. (line 36) -* DNS round robin: Transport modules. (line 29) -* fcntl: POSIX Locks. (line 6) -* FDL, GNU Free Documentation License: GNU Free Documentation Licence. - (line 6) -* fixed-id (translator): Fixed ID. (line 6) -* GlusterFS client: Client. (line 6) -* GlusterFS mailing list: Introduction. (line 28) -* GlusterFS server: Server. (line 6) -* infiniband transport: Transport modules. (line 58) -* InfiniBand, installation: Pre requisites. (line 51) -* io-cache (translator): IO Cache. (line 6) -* io-threads (translator): IO Threads. (line 6) -* IRC channel, #gluster: Introduction. (line 31) -* libibverbs: Pre requisites. (line 51) -* namespace: Unify. (line 207) -* nufa (scheduler): Unify. (line 175) -* OpenSuSE: Troubleshooting. (line 96) -* posix-locks (translator): POSIX Locks. (line 6) -* random (scheduler): Unify. (line 159) -* read-ahead (translator): Read Ahead. (line 6) -* record locking: POSIX Locks. (line 6) -* Redhat Enterprise Linux: Troubleshooting. (line 78) -* Replicate: Replicate. (line 6) -* rot-13 (translator): ROT-13. (line 6) -* rr (scheduler): Unify. (line 138) -* scheduler (unify): Unify. (line 6) -* self heal (replicate): Replicate. (line 46) -* self heal (unify): Unify. (line 223) -* stripe (translator): Stripe. (line 6) -* trace (translator): Trace. (line 6) -* unify (translator): Unify. (line 6) -* unify invariants: Unify. (line 16) -* write-behind (translator): Write Behind. (line 6) -* Gluster, Inc.: Introduction. (line 36) - - - -Tag Table: -Node: Top704 -Node: Acknowledgements2304 -Node: Introduction3214 -Node: Installation and Invocation4649 -Node: Pre requisites4933 -Node: Getting GlusterFS7023 -Ref: Getting GlusterFS-Footnote-17809 -Node: Building7857 -Node: Running GlusterFS9559 -Node: Server9770 -Node: Client11358 -Node: A Tutorial Introduction13564 -Node: Concepts17101 -Node: Filesystems in Userspace17316 -Node: Translator18457 -Node: Volume specification file21160 -Node: Translators23632 -Node: Storage Translators24201 -Ref: Storage Translators-Footnote-125008 -Node: POSIX25142 -Node: BDB25765 -Node: Client and Server Translators26822 -Node: Transport modules27298 -Node: Client protocol31445 -Node: Server protocol32384 -Node: Clustering Translators33373 -Node: Unify34260 -Ref: Unify-Footnote-143859 -Node: Replicate43951 -Node: Stripe49006 -Node: Performance Translators50164 -Node: Read Ahead50438 -Node: Write Behind52170 -Node: IO Threads53579 -Node: IO Cache54367 -Node: Booster55691 -Node: Features Translators57105 -Node: POSIX Locks57333 -Node: Fixed ID58650 -Node: Miscellaneous Translators59136 -Node: ROT-1359334 -Node: Trace60013 -Node: Usage Scenarios61282 -Ref: Usage Scenarios-Footnote-167215 -Node: Troubleshooting67290 -Node: GNU Free Documentation Licence73638 -Node: Index96087 - -End Tag Table diff --git a/doc/user-guide/legacy/user-guide.pdf b/doc/user-guide/legacy/user-guide.pdf deleted file mode 100644 index ed7bd2a99..000000000 Binary files a/doc/user-guide/legacy/user-guide.pdf and /dev/null differ diff --git a/doc/user-guide/legacy/user-guide.texi b/doc/user-guide/legacy/user-guide.texi deleted file mode 100644 index 8e429853f..000000000 --- a/doc/user-guide/legacy/user-guide.texi +++ /dev/null @@ -1,2246 +0,0 @@ -\input texinfo -@setfilename user-guide.info -@settitle GlusterFS 2.0 User Guide -@afourpaper - -@direntry -* GlusterFS: (user-guide). 
GlusterFS distributed filesystem user guide -@end direntry - -@copying -This is the user manual for GlusterFS 2.0. - -Copyright @copyright{} 2007-2011 @email{@b{Gluster}} , Inc. Permission is granted to -copy, distribute and/or modify this document under the terms of the -@acronym{GNU} Free Documentation License, Version 1.2 or any later -version published by the Free Software Foundation; with no Invariant -Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the -license is included in the chapter entitled ``@acronym{GNU} Free -Documentation License''. -@end copying - -@titlepage -@title GlusterFS 2.0 User Guide [DRAFT] -@subtitle January 15, 2008 -@author http://gluster.org/core-team.php -@author @email{@b{Gluster}} -@page -@vskip 0pt plus 1filll -@insertcopying -@end titlepage - -@c Info stuff -@ifnottex -@node Top -@top GlusterFS 2.0 User Guide - -@insertcopying -@menu -* Acknowledgements:: -* Introduction:: -* Installation and Invocation:: -* Concepts:: -* Translators:: -* Usage Scenarios:: -* Troubleshooting:: -* GNU Free Documentation Licence:: -* Index:: - -@detailmenu - --- The Detailed Node Listing --- - -Installation and Invocation - -* Pre requisites:: -* Getting GlusterFS:: -* Building:: -* Running GlusterFS:: -* A Tutorial Introduction:: - -Running GlusterFS - -* Server:: -* Client:: - -Concepts - -* Filesystems in Userspace:: -* Translator:: -* Volume specification file:: - -Translators - -* Storage Translators:: -* Client and Server Translators:: -* Clustering Translators:: -* Performance Translators:: -* Features Translators:: - -Storage Translators - -* POSIX:: - -Client and Server Translators - -* Transport modules:: -* Client protocol:: -* Server protocol:: - -Clustering Translators - -* Unify:: -* Replicate:: -* Stripe:: - -Performance Translators - -* Read Ahead:: -* Write Behind:: -* IO Threads:: -* IO Cache:: - -Features Translators - -* POSIX Locks:: -* Fixed ID:: - -Miscellaneous Translators - -* ROT-13:: -* Trace:: - -@end detailmenu -@end menu - -@end ifnottex -@c Info stuff end - -@contents - -@node Acknowledgements -@unnumbered Acknowledgements -GlusterFS continues to be a wonderful and enriching experience for all -of us involved. - -GlusterFS development would not have been possible at this pace if -not for our enthusiastic users. People from around the world have -helped us with bug reports, performance numbers, and feature suggestions. -A huge thanks to them all. - -Matthew Paine - for RPMs & general enthu - -Leonardo Rodrigues de Mello - for DEBs - -Julian Perez & Adam D'Auria - for multi-server tutorial - -Paul England - for HA spec - -Brent Nelson - for many bug reports - -Jacques Mattheij - for Europe mirror. - -Patrick Negri - for TCP non-blocking connect. -@flushright -http://gluster.org/core-team.php (@email{list-hacking@@gluster.com}) -@email{@b{Gluster}} -@end flushright - -@node Introduction -@chapter Introduction - -GlusterFS is a distributed filesystem. It works at the file level, -not block level. - -A network filesystem is one which allows us to access remote files. A -distributed filesystem is one that stores data on multiple machines -and makes them all appear to be a part of the same filesystem. - -Need for distributed filesystems - -@itemize @bullet -@item Scalability: A distributed filesystem allows us to store more data than what can be stored on a single machine. - -@item Redundancy: We might want to replicate crucial data on to several machines. 
- -@item Uniform access: One can mount a remote volume (for example your home directory) from any machine and access the same data. -@end itemize - -@section Contacting us -You can reach us through the mailing list @strong{gluster-devel} -(@email{gluster-devel@@nongnu.org}). -@cindex GlusterFS mailing list - -You can also find many of the developers on @acronym{IRC}, on the @code{#gluster} -channel on Freenode (@indicateurl{irc.freenode.net}). -@cindex IRC channel, #gluster - -The GlusterFS documentation wiki is also useful: @* -@indicateurl{http://gluster.org/docs/index.php/GlusterFS} - -For commercial support, you can contact @email{@b{Gluster}} at: -@cindex commercial support -@cindex Gluster, Inc. - -@display -3194 Winding Vista Common -Fremont, CA 94539 -USA. - -Phone: +1 (510) 354 6801 -Toll free: +1 (888) 813 6309 -Fax: +1 (510) 372 0604 -@end display - -You can also email us at @email{support@@gluster.com}. - -@node Installation and Invocation -@chapter Installation and Invocation - -@menu -* Pre requisites:: -* Getting GlusterFS:: -* Building:: -* Running GlusterFS:: -* A Tutorial Introduction:: -@end menu - -@node Pre requisites -@section Pre requisites - -Before installing GlusterFS make sure you have the -following components installed. - -@subsection @acronym{FUSE} -GlusterFS has now built-in support for the @acronym{FUSE} protocol. -You need a kernel with @acronym{FUSE} support to mount GlusterFS. -You do not need the @acronym{FUSE} package (library and utilities), -but be aware of the following issues: - -@itemize -@item If you want unprivileged users to be able to mount GlusterFS filesystems, -you need a recent version of the @command{fusermount} utility. You already have -it if you have @acronym{FUSE} version 2.7.0 or higher installed; if that's not -the case, one will be compiled along with GlusterFS if you pass -@command{--enable-fusermount} to the @command{configure} script. @item You -need to ensure @acronym{FUSE} support is configured properly on your system. In -details: -@itemize -@item If your kernel has @acronym{FUSE} as a loadable module, make sure it's -loaded. -@item Create @command{/dev/fuse} (major 10, minor 229) either by means of udev -rules or by hand. -@item Optionally, if you want runtime control over your @acronym{FUSE} mounts, -mount the fusectl auxiliary filesystem: - -@example -# mount -t fusectl none /sys/fs/fuse/connections -@end example -@end itemize - -The @acronym{FUSE} packages shipped by the various distributions usually take care -about these things, so the easiest way to get the above tasks handled is still -installing the @acronym{FUSE} package(s). -@end itemize - -To get the best performance from GlusterFS,it is recommended that you use -our patched version of the @acronym{FUSE} kernel module. See Patched FUSE for details. - -@subsection Patched FUSE - -The GlusterFS project maintains a patched version of @acronym{FUSE} meant to be used -with GlusterFS. The patches increase GlusterFS performance. It is recommended that -all users use the patched @acronym{FUSE}. - -The patched @acronym{FUSE} tarball can be downloaded from: - -@indicateurl{ftp://ftp.gluster.com/pub/gluster/glusterfs/fuse/} - -The specific changes made to @acronym{FUSE} are: - -@itemize -@item The communication channel size between @acronym{FUSE} kernel module and GlusterFS has been increased to 1MB, permitting large reads and writes to be sent in bigger chunks. - -@item The kernel's read-ahead boundry has been extended upto 1MB. 
- -@item Block size returned in the @command{stat()}/@command{fstat()} calls tuned to 1MB, to make cp and similar commands perform I/O using that block size. - -@item @command{flock()} locking support has been added (although some rework in GlusterFS is needed for perfect compliance). -@end itemize - -@subsection libibverbs (optional) -@cindex InfiniBand, installation -@cindex libibverbs -This is only needed if you want GlusterFS to use InfiniBand as the -interconnect mechanism between server and client. You can get it from: - -@indicateurl{http://www.openfabrics.org/downloads.htm}. - -@subsection Bison and Flex -These should be already installed on most Linux systems. If not, use your distribution's -normal software installation procedures to install them. Make sure you install the -relevant developer packages also. - -@node Getting GlusterFS -@section Getting GlusterFS -@cindex arch -There are many ways to get hold of GlusterFS. For a production deployment, -the recommended method is to download the latest release tarball. -Release tarballs are available at: @indicateurl{http://gluster.org/download.php}. - -If you want the bleeding edge development source, you can get them -from the Git -@footnote{@indicateurl{http://git-scm.com}} -repository. First you must install Git itself. Then -you can check out the source - -@example -$ git clone git://git.sv.gnu.org/gluster.git glusterfs -@end example - -@node Building -@section Building -You can skip this section if you're installing from @acronym{RPM}s -or @acronym{DEB}s. - -GlusterFS uses the Autotools mechanism to build. As such, the procedure -is straight-forward. First, change into the GlusterFS source directory. - -@example -$ cd glusterfs- -@end example - -If you checked out the source from the Arch repository, you'll need -to run @command{./autogen.sh} first. Note that you'll need to have -Autoconf and Automake installed for this. - -Run @command{configure}. - -@example -$ ./configure -@end example - -The configure script accepts the following options: - -@cartouche -@table @code - -@item --disable-ibverbs -Disable the InfiniBand transport mechanism. - -@item --disable-fuse-client -Disable the @acronym{FUSE} client. - -@item --disable-server -Disable building of the GlusterFS server. - -@item --disable-bdb -Disable building of Berkeley DB based storage translator. - -@item --disable-mod_glusterfs -Disable building of Apache/lighttpd glusterfs plugins. - -@item --disable-epoll -Use poll instead of epoll. - -@item --disable-libglusterfsclient -Disable building of libglusterfsclient - -@item --enable-fusermount -Build fusermount - -@end table -@end cartouche - -Build and install GlusterFS. - -@example -# make install -@end example - -The binaries (@command{glusterfsd} and @command{glusterfs}) will be by -default installed in @command{/usr/local/sbin/}. Translator, -scheduler, and transport shared libraries will be installed in -@command{/usr/local/lib/glusterfs//}. Sample volume -specification files will be in @command{/usr/local/etc/glusterfs/}. -This document itself can be found in -@command{/usr/local/share/doc/glusterfs/}. If you passed the @command{--prefix} -argument to the configure script, then replace @command{/usr/local} in the preceding -paths with the prefix. 
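-
-As an illustration, a complete build and install with a custom prefix
-might look like the following (the prefix and the
-@command{--disable-ibverbs} flag are only examples; pick the options
-that match your environment):
-
-@example
-$ ./configure --prefix=/opt/glusterfs --disable-ibverbs
-$ make
-# make install
-@end example
-
-The server and client binaries would then end up under
-@command{/opt/glusterfs/sbin/}.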
- -@node Running GlusterFS -@section Running GlusterFS - -@menu -* Server:: -* Client:: -@end menu - -@node Server -@subsection Server -@cindex GlusterFS server - -The GlusterFS server is necessary to export storage volumes to remote clients -(See @ref{Server protocol} for more info). This section documents the invocation -of the GlusterFS server program and all the command-line options accepted by it. - -@cartouche -@table @code -Basic Options -@item -f, --volfile= - Use the volume file as the volume specification. - -@item -s, --volfile-server= - Server to get volume file from. This option overrides --volfile option. - -@item -l, --log-file= - Specify the path for the log file. - -@item -L, --log-level= - Set the log level for the server. Log level should be one of @acronym{DEBUG}, -@acronym{WARNING}, @acronym{ERROR}, @acronym{CRITICAL}, or @acronym{NONE}. - -Advanced Options -@item --debug - Run in debug mode. This option sets --no-daemon, --log-level to DEBUG and - --log-file to console. - -@item -N, --no-daemon - Run glusterfsd as a foreground process. - -@item -p, --pid-file= - Path for the @acronym{PID} file. - -@item --volfile-id= - 'key' of the volfile to be fetched from server. - -@item --volfile-server-port= - Listening port number of volfile server. - -@item --volfile-server-transport=[tcp|ib-verbs] - Transport type to get volfile from server. [default: @command{tcp}] - -@item --xlator-options= - Add/override a translator option for a volume with specified value. - -Miscellaneous Options -@item -?, --help - Show this help text. - -@item --usage - Display a short usage message. - -@item -V, --version - Show version information. -@end table -@end cartouche - -@node Client -@subsection Client -@cindex GlusterFS client - -The GlusterFS client process is necessary to access remote storage volumes and -mount them locally using @acronym{FUSE}. This section documents the invocation of the -client process and all its command-line arguments. - -@example - # glusterfs [options] -@end example - -The @command{mountpoint} is the directory where you want the GlusterFS -filesystem to appear. Example: - -@example - # glusterfs -f /usr/local/etc/glusterfs-client.vol /mnt -@end example - -The command-line options are detailed below. - -@tex -\vfill -@end tex -@page - -@cartouche -@table @code - -Basic Options -@item -f, --volfile= - Use the volume file as the volume specification. - -@item -s, --volfile-server= - Server to get volume file from. This option overrides --volfile option. - -@item -l, --log-file= - Specify the path for the log file. - -@item -L, --log-level= - Set the log level for the server. Log level should be one of @acronym{DEBUG}, -@acronym{WARNING}, @acronym{ERROR}, @acronym{CRITICAL}, or @acronym{NONE}. - -Advanced Options -@item --debug - Run in debug mode. This option sets --no-daemon, --log-level to DEBUG and - --log-file to console. - -@item -N, --no-daemon - Run @command{glusterfs} as a foreground process. - -@item -p, --pid-file= - Path for the @acronym{PID} file. - -@item --volfile-id= - 'key' of the volfile to be fetched from server. - -@item --volfile-server-port= - Listening port number of volfile server. - -@item --volfile-server-transport=[tcp|ib-verbs] - Transport type to get volfile from server. [default: @command{tcp}] - -@item --xlator-options= - Add/override a translator option for a volume with specified value. - -@item --volume-name= - Volume name in client spec to use. Defaults to the root volume. 
-
-@acronym{FUSE} Options
-@item --attribute-timeout=
-  Attribute timeout for inodes in the kernel, in seconds. Defaults to 1 second.
-
-@item --disable-direct-io-mode
-  Disable direct @acronym{I/O} mode in the @acronym{FUSE} kernel module. This is set
-  automatically if the kernel supports big writes (>= 2.6.26).
-
-@item -e, --entry-timeout=
-  Entry timeout for directory entries in the kernel, in seconds.
-  Defaults to 1 second.
-
-Miscellaneous Options
-@item -?, --help
-  Show this help information.
-
-@item -V, --version
-  Show version information.
-@end table
-@end cartouche
-
-@node A Tutorial Introduction
-@section A Tutorial Introduction
-
-This section will show you how to quickly get GlusterFS up and running. We'll
-configure GlusterFS as a simple network filesystem, with one server and one client.
-In this mode of usage, GlusterFS can serve as a replacement for NFS.
-
-We'll make use of two machines; call them @emph{server} and
-@emph{client} (if you don't want to set up two machines, just run
-everything that follows on the same machine). In the examples that
-follow, the shell prompts will use these names to clarify the machine
-on which the command is being run. For example, a command that should
-be run on the server will be shown with the prompt:
-
-@example
-[root@@server]#
-@end example
-
-Our goal is to make a directory on the @emph{server} (say, @command{/export})
-accessible to the @emph{client}.
-
-First of all, get GlusterFS installed on both machines, as described in the
-previous sections. Make sure you have the @acronym{FUSE} kernel module loaded. You
-can ensure this by running:
-
-@example
-[root@@server]# modprobe fuse
-@end example
-
-Before we can run the GlusterFS client or server programs, we need to write
-two files called @emph{volume specifications} (equivalently referred to as @emph{volfiles}).
-The volfile describes the @emph{translator tree} on a node. The next chapter will
-explain the concepts of `translator' and `volume specification' in detail. For now,
-just assume that the volfile is like an NFS @command{/etc/exports} file.
-
-On the server, create a text file somewhere (we'll assume the path
-@command{/tmp/glusterfs-server.vol}) with the following contents.
-
-@cartouche
-@example
-volume colon-o
-  type storage/posix
-  option directory /export
-end-volume
-
-volume server
-  type protocol/server
-  subvolumes colon-o
-  option transport-type tcp
-  option auth.addr.colon-o.allow *
-end-volume
-@end example
-@end cartouche
-
-Here is a brief explanation of the file's contents. The first section defines a storage
-volume, named ``colon-o'' (the volume names are arbitrary), which exports the
-@command{/export} directory. The second section defines options for the translator
-which will make the storage volume accessible remotely. It specifies @command{colon-o} as
-a subvolume. This defines the @emph{translator tree}, about which more will be said
-in the next chapter. The two options specify that the @acronym{TCP} protocol is to be
-used (as opposed to InfiniBand, for example), and that access to the storage volume
-is to be provided to clients with any @acronym{IP} address at all. If you wanted to
-restrict access to this server to only your subnet, for example, you'd specify
-something like @command{192.168.1.*} in the second option line.
-
-On the client machine, create the following text file (again, we'll assume
-the path to be @command{/tmp/glusterfs-client.vol}). Replace
-@emph{server-ip-address} with the @acronym{IP} address of your server machine.
If you -are doing all this on a single machine, use @command{127.0.0.1}. - -@cartouche -@example -volume client - type protocol/client - option transport-type tcp - option remote-host @emph{server-ip-address} - option remote-subvolume colon-o -end-volume -@end example -@end cartouche - -Now we need to start both the server and client programs. To start the server: - -@example -[root@@server]# glusterfsd -f /tmp/glusterfs-server.vol -@end example - -To start the client: - -@example -[root@@client]# glusterfs -f /tmp/glusterfs-client.vol /mnt/glusterfs -@end example - -You should now be able to see the files under the server's @command{/export} directory -in the @command{/mnt/glusterfs} directory on the client. That's it; GlusterFS is now -working as a network file system. - -@node Concepts -@chapter Concepts - -@menu -* Filesystems in Userspace:: -* Translator:: -* Volume specification file:: -@end menu - -@node Filesystems in Userspace -@section Filesystems in Userspace - -A filesystem is usually implemented in kernel space. Kernel space -development is much harder than userspace development. @acronym{FUSE} -is a kernel module/library that allows us to write a filesystem -completely in userspace. - -@acronym{FUSE} consists of a kernel module which interacts with the userspace -implementation using a device file @code{/dev/fuse}. When a process -makes a syscall on a @acronym{FUSE} filesystem, @acronym{VFS} hands the request to the -@acronym{FUSE} module, which writes the request to @code{/dev/fuse}. The -userspace implementation polls @code{/dev/fuse}, and when a request arrives, -processes it and writes the result back to @code{/dev/fuse}. The kernel then -reads from the device file and returns the result to the user process. - -In case of GlusterFS, the userspace program is the GlusterFS client. -The control flow is shown in the diagram below. The GlusterFS client -services the request by sending it to the server, which in turn -hands it to the local @acronym{POSIX} filesystem. - -@center @image{fuse,44pc,,,.pdf} -@center Fig 1. Control flow in GlusterFS - -@node Translator -@section Translator - -The @emph{translator} is the most important concept in GlusterFS. In -fact, GlusterFS is nothing but a collection of translators working -together, forming a translator @emph{tree}. - -The idea of a translator is perhaps best understood using an -analogy. Consider the @acronym{VFS} in the Linux kernel. The -@acronym{VFS} abstracts the various filesystem implementations (such -as @acronym{EXT3}, ReiserFS, @acronym{XFS}, etc.) supported by the -kernel. When an application calls the kernel to perform an operation -on a file, the kernel passes the request on to the appropriate -filesystem implementation. - -For example, let's say there are two partitions on a Linux machine: -@command{/}, which is an @acronym{EXT3} partition, and @command{/usr}, -which is a ReiserFS partition. Now if an application wants to open a -file called, say, @command{/etc/fstab}, then the kernel will -internally pass the request to the @acronym{EXT3} implementation. If -on the other hand, an application wants to read a file called -@command{/usr/src/linux/CREDITS}, then the kernel will call upon the -ReiserFS implementation to do the job. - -The ``filesystem implementation'' objects are analogous to GlusterFS -translators. A GlusterFS translator implements all the filesystem -operations. 
Whereas in @acronym{VFS} there is a two-level tree (with -the kernel at the root and all the filesystem implementation as its -children), in GlusterFS there exists a more elaborate tree structure. - -We can now define translators more precisely. A GlusterFS translator -is a shared object (@command{.so}) that implements every filesystem -call. GlusterFS translators can be arranged in an arbitrary tree -structure (subject to constraints imposed by the translators). When -GlusterFS receives a filesystem call, it passes it on to the -translator at the root of the translator tree. The root translator may -in turn pass it on to any or all of its children, and so on, until the -leaf nodes are reached. The result of a filesystem call is -communicated in the reverse fashion, from the leaf nodes up to the -root node, and then on to the application. - -So what might a translator tree look like? - -@tex -\vfill -@end tex -@page - -@center @image{xlator,44pc,,,.pdf} -@center Fig 2. A sample translator tree - -The diagram depicts three servers and one GlusterFS client. It is important -to note that conceptually, the translator tree spans machine boundaries. -Thus, the client machine in the diagram, @command{10.0.0.1}, can access -the aggregated storage of the filesystems on the server machines @command{10.0.0.2}, -@command{10.0.0.3}, and @command{10.0.0.4}. The translator diagram will make more -sense once you've read the next chapter and understood the functions of the -various translators. - -@node Volume specification file -@section Volume specification file -The volume specification file describes the translator tree for both the -server and client programs. - -A volume specification file is a sequence of volume definitions. -The syntax of a volume definition is explained below: - -@cartouche -@example -@strong{volume} @emph{volume-name} - @strong{type} @emph{translator-name} - @strong{option} @emph{option-name} @emph{option-value} - @dots{} - @strong{subvolumes} @emph{subvolume1} @emph{subvolume2} @dots{} -@strong{end-volume} -@end example - -@dots{} -@end cartouche - -@table @asis -@item @emph{volume-name} - An identifier for the volume. This is just a human-readable name, -and can contain any alphanumeric character. For instance, ``storage-1'', ``colon-o'', -or ``forty-two''. - -@item @emph{translator-name} - Name of one of the available translators. Example: @command{protocol/client}, -@command{cluster/unify}. - -@item @emph{option-name} - Name of a valid option for the translator. - -@item @emph{option-value} - Value for the option. Everything following the ``option'' keyword to the end of the -line is considered the value; it is up to the translator to parse it. - -@item @emph{subvolume1}, @emph{subvolume2}, @dots{} - Volume names of sub-volumes. The sub-volumes must already have been defined earlier -in the file. -@end table - -There are a few rules you must follow when writing a volume specification file: - -@itemize -@item Everything following a `@command{#}' is considered a comment and is ignored. Blank lines are also ignored. -@item All names and keywords are case-sensitive. -@item The order of options inside a volume definition does not matter. -@item An option value may not span multiple lines. -@item If an option is not specified, it will assume its default value. -@item A sub-volume must have already been defined before it can be referenced. This means you have to write the specification file ``bottom-up'', starting from the leaf nodes of the translator tree and moving up to the root. 
-@end itemize - -A simple example volume specification file is shown below: - -@cartouche -@example -# This is a comment line -volume client - type protocol/client - option transport-type tcp - option remote-host localhost # Also a comment - option remote-subvolume brick -# The subvolumes line may be absent -end-volume - -volume iot - type performance/io-threads - option thread-count 4 - subvolumes client -end-volume - -volume wb - type performance/write-behind - subvolumes iot -end-volume -@end example -@end cartouche - -@node Translators -@chapter Translators - -@menu -* Storage Translators:: -* Client and Server Translators:: -* Clustering Translators:: -* Performance Translators:: -* Features Translators:: -* Miscellaneous Translators:: -@end menu - -This chapter documents all the available GlusterFS translators in detail. -Each translator section will show its name (for example, @command{cluster/unify}), -briefly describe its purpose and workings, and list every option accepted by -that translator and their meaning. - -@node Storage Translators -@section Storage Translators - -The storage translators form the ``backend'' for GlusterFS. Currently, -the only available storage translator is the @acronym{POSIX} -translator, which stores files on a normal @acronym{POSIX} -filesystem. A pleasant consequence of this is that your data will -still be accessible if GlusterFS crashes or cannot be started. - -Other storage backends are planned for the future. One of the possibilities is an -Amazon S3 translator. Amazon S3 is an unlimited online storage service accessible -through a web services @acronym{API}. The S3 translator will allow you to access -the storage as a normal @acronym{POSIX} filesystem. -@footnote{Some more discussion about this can be found at: - -http://developer.amazonwebservices.com/connect/message.jspa?messageID=52873} - -@menu -* POSIX:: -* BDB:: -@end menu - -@node POSIX -@subsection POSIX -@example -type storage/posix -@end example - -The @command{posix} translator uses a normal @acronym{POSIX} -filesystem as its ``backend'' to actually store files and -directories. This can be any filesystem that supports extended -attributes (@acronym{EXT3}, ReiserFS, @acronym{XFS}, ...). Extended -attributes are used by some translators to store metadata, for -example, by the replicate and stripe translators. See -@ref{Replicate} and @ref{Stripe}, respectively for details. - -@cartouche -@table @code -@item directory -The directory on the local filesystem which is to be used for storage. -@end table -@end cartouche - -@node BDB -@subsection BDB -@example -type storage/bdb -@end example - -The @command{BDB} translator uses a @acronym{Berkeley DB} database as its -``backend'' to actually store files as key-value pair in the database and -directories as regular @acronym{POSIX} directories. Note that @acronym{BDB} -does not provide extended attribute support for regular files. Do not use -@acronym{BDB} as storage translator while using any translator that demands -extended attributes on ``backend''. - -@cartouche -@table @code -@item directory -The directory on the local filesystem which is to be used for storage. -@item mode [cache|persistent] (cache) -When @acronym{BDB} is run in @command{cache} mode, recovery of back-end is not completely -guaranteed. @command{persistent} guarantees that @acronym{BDB} can recover back-end from -@acronym{Berkeley DB} even if GlusterFS crashes. 
-@item errfile -The path of the file to be used as @command{errfile} for @acronym{Berkeley DB} to report -detailed error messages, if any. Note that all the contents of this file will be written -by @acronym{Berkeley DB}, not GlusterFS. -@item logdir - - -@end table -@end cartouche - -@node Client and Server Translators, Clustering Translators, Storage Translators, Translators -@section Client and Server Translators - -The client and server translator enable GlusterFS to export a -translator tree over the network or access a remote GlusterFS -server. These two translators implement GlusterFS's network protocol. - -@menu -* Transport modules:: -* Client protocol:: -* Server protocol:: -@end menu - -@node Transport modules -@subsection Transport modules -The client and server translators are capable of using any of the -pluggable transport modules. Currently available transport modules are -@command{tcp}, which uses a @acronym{TCP} connection between client -and server to communicate; @command{ib-sdp}, which uses a -@acronym{TCP} connection over InfiniBand, and @command{ibverbs}, which -uses high-speed InfiniBand connections. - -Each transport module comes in two different versions, one to be used on -the server side and the other on the client side. - -@subsubsection TCP - -The @acronym{TCP} transport module uses a @acronym{TCP/IP} connection between -the server and the client. - -@example - option transport-type tcp -@end example - -The @acronym{TCP} client module accepts the following options: - -@cartouche -@table @code -@item non-blocking-connect [no|off|on|yes] (on) -Whether to make the connection attempt asynchronous. -@item remote-port (24007) -Server port to connect to. -@cindex DNS round robin -@item remote-host * -Hostname or @acronym{IP} address of the server. If the host name resolves to -multiple IP addresses, all of them will be tried in a round-robin fashion. This -feature can be used to implement fail-over. -@end table -@end cartouche - -The @acronym{TCP} server module accepts the following options: - -@cartouche -@table @code -@item bind-address
(0.0.0.0)
-The local interface on which the server should listen for requests. The default is to
-listen on all interfaces.
-@item listen-port (24007)
-The local port to listen on.
-@end table
-@end cartouche
-
-@subsubsection IB-SDP
-@example
-  option transport-type ib-sdp
-@end example
-
-The kernel implements a socket interface for InfiniBand hardware; @acronym{SDP} runs
-over ib-verbs. This module accepts the same options as @command{tcp}.
-
-@subsubsection ib-verbs
-
-@example
-  option transport-type ib-verbs
-@end example
-
-@cindex infiniband transport
-
-InfiniBand is a scalable switched fabric interconnect mechanism
-primarily used in high-performance computing. InfiniBand can deliver
-data throughput of the order of 10 Gbit/s, with latencies of 4-5
-microseconds.
-
-The @command{ib-verbs} transport accesses the InfiniBand hardware through
-the ``verbs'' @acronym{API}, which is the lowest level of software access possible
-and which gives the highest performance. On InfiniBand hardware, it is always
-best to use @command{ib-verbs}. Use @command{ib-sdp} only if you cannot get
-@command{ib-verbs} working for some reason.
-
-The @command{ib-verbs} client module accepts the following options:
-
-@cartouche
-@table @code
-@item non-blocking-connect [no|off|on|yes] (on)
-Whether to make the connection attempt asynchronous.
-@item remote-port (24007)
-Server port to connect to.
-@cindex DNS round robin
-@item remote-host *
-Hostname or @acronym{IP} address of the server. If the host name resolves to
-multiple IP addresses, all of them will be tried in a round-robin fashion. This
-feature can be used to implement fail-over.
-@end table
-@end cartouche
-
-The @command{ib-verbs} server module accepts the following options:
-
-@cartouche
-@table @code
-@item bind-address (0.0.0.0)
-The local interface on which the server should listen for requests. The default is to
-listen on all interfaces.
-@item listen-port (24007)
-The local port to listen on.
-@end table
-@end cartouche
-
-The following options are common to both the client and server modules.
-
-If you are familiar with InfiniBand jargon, the mode used by GlusterFS is
-``reliable connection-oriented channel transfer''.
-
-@cartouche
-@table @code
-@item ib-verbs-work-request-send-count (64)
-Length of the send queue in datagrams. [Reason to increase/decrease?]
-
-@item ib-verbs-work-request-recv-count (64)
-Length of the receive queue in datagrams. [Reason to increase/decrease?]
-
-@item ib-verbs-work-request-send-size (128KB)
-Size of each datagram that is sent. [Reason to increase/decrease?]
-
-@item ib-verbs-work-request-recv-size (128KB)
-Size of each datagram that is received. [Reason to increase/decrease?]
-
-@item ib-verbs-port (1)
-Port number for ib-verbs.
-
-@item ib-verbs-mtu [256|512|1024|2048|4096] (2048)
-The Maximum Transmission Unit. [Reason to increase/decrease?]
-
-@item ib-verbs-device-name (first device in the list)
-InfiniBand device to be used.
-@end table
-@end cartouche
-
-For maximum performance, you should ensure that the send/receive counts on both
-the client and server are the same.
-
-ib-verbs is preferred over ib-sdp.
-
-@node Client protocol
-@subsection Client
-@example
-type protocol/client
-@end example
-
-The client translator enables the GlusterFS client to access a remote server's
-translator tree.
-
-@cartouche
-@table @code
-
-@item transport-type [tcp,ib-sdp,ib-verbs] (tcp)
-The transport type to use. You should use the client versions of all the
-transport modules (@command{tcp}, @command{ib-sdp},
-@command{ib-verbs}).
-@item remote-subvolume *
-The name of the volume on the remote host to attach to. Note that
-this is @emph{not} the name of the @command{protocol/server} volume on the
-server. It can be any volume in the server's translator tree.
-@item transport-timeout (120 seconds)
-Inactivity timeout. If a reply is expected and no activity takes place
-on the connection within this time, the transport connection will be
-broken, and a new connection will be attempted.
-@end table
-@end cartouche
-
-@node Server protocol
-@subsection Server
-@example
-type protocol/server
-@end example
-
-The server translator exports a translator tree and makes it accessible to
-remote GlusterFS clients.
-
-@cartouche
-@table @code
-@item client-volume-filename (/glusterfs-client.vol)
-The volume specification file to use for the client. This is the file the
-client will receive when it is invoked with the @command{--server} option
-(@ref{Client}).
-
-@item transport-type [tcp,ib-verbs,ib-sdp] (tcp)
-The transport to use. You should use the server versions of all the transport
-modules (@command{tcp}, @command{ib-sdp}, @command{ib-verbs}).
-
-@item auth.addr..allow
-IP addresses of the clients that are allowed to attach to the specified volume.
-This can be a wildcard. For example, a wildcard of the form @command{192.168.*.*}
-allows any host in the @command{192.168.x.x} subnet to connect to the server.
-
-@end table
-@end cartouche
-
-@node Clustering Translators
-@section Clustering Translators
-
-The clustering translators are the most important GlusterFS
-translators, since it is these that make GlusterFS a cluster filesystem.
These translators together enable GlusterFS to access an -arbitrarily large amount of storage, and provide @acronym{RAID}-like -redundancy and distribution over the entire cluster. - -There are three clustering translators: @strong{unify}, @strong{replicate}, -and @strong{stripe}. The unify translator aggregates storage from -many server nodes. The replicate translator provides file replication. The stripe -translator allows a file to be spread across many server nodes. The following sections -look at each of these translators in detail. - -@menu -* Unify:: -* Replicate:: -* Stripe:: -@end menu - -@node Unify -@subsection Unify -@cindex unify (translator) -@cindex scheduler (unify) -@example -type cluster/unify -@end example - -The unify translator presents a `unified' view of all its sub-volumes. That is, -it makes the union of all its sub-volumes appear as a single volume. It is the -unify translator that gives GlusterFS the ability to access an arbitrarily -large amount of storage. - -For unify to work correctly, certain invariants need to be maintained across -the entire network. These are: - -@cindex unify invariants -@itemize -@item The directory structure of all the sub-volumes must be identical. -@item A particular file can exist on only one of the sub-volumes. Phrasing it in another way, a pathname such as @command{/home/calvin/homework.txt}) is unique across the entire cluster. -@end itemize - -@tex -\vfill -@end tex -@page - -@center @image{unify,44pc,,,.pdf} - -Looking at the second requirement, you might wonder how one can -accomplish storing redundant copies of a file, if no file can exist -multiple times. To answer, we must remember that these invariants are -from @emph{unify's perspective}. A translator such as replicate at a lower -level in the translator tree than unify may subvert this picture. - -The first invariant might seem quite tedious to ensure. We shall see -later that this is not so, since unify's @emph{self-heal} mechanism -takes care of maintaining it. - -The second invariant implies that unify needs some way to decide which file goes where. -Unify makes use of @emph{scheduler} modules for this purpose. - -When a file needs to be created, unify's scheduler decides upon the -sub-volume to be used to store the file. There are many schedulers -available, each using a different algorithm and suitable for different -purposes. - -The various schedulers are described in detail in the sections that follow. - -@subsubsection ALU -@cindex alu (scheduler) - -@example - option scheduler alu -@end example - -ALU stands for "Adaptive Least Usage". It is the most advanced -scheduler available in GlusterFS. It balances the load across volumes -taking several factors in account. It adapts itself to changing I/O -patterns according to its configuration. When properly configured, it -can eliminate the need for regular tuning of the filesystem to keep -volume load nicely balanced. - -The ALU scheduler is composed of multiple least-usage -sub-schedulers. Each sub-scheduler keeps track of a certain type of -load, for each of the sub-volumes, getting statistics from -the sub-volumes themselves. The sub-schedulers are these: - -@itemize -@item disk-usage: The used and free disk space on the volume. - -@item read-usage: The amount of reading done from this volume. - -@item write-usage: The amount of writing done to this volume. - -@item open-files-usage: The number of files currently open from this volume. - -@item disk-speed-usage: The speed at which the disks are spinning. 
This is a constant value and therefore not very useful. -@end itemize - -The ALU scheduler needs to know which of these sub-schedulers to use, -and in which order to evaluate them. This is done through the -@command{option alu.order} configuration directive. - -Each sub-scheduler needs to know two things: when to kick in (the -entry-threshold), and how long to stay in control (the -exit-threshold). For example: when unifying three disks of 100GB, -keeping an exact balance of disk-usage is not necesary. Instead, there -could be a 1GB margin, which can be used to nicely balance other -factors, such as read-usage. The disk-usage scheduler can be told to -kick in only when a certain threshold of discrepancy is passed, such -as 1GB. When it assumes control under this condition, it will write -all subsequent data to the least-used volume. If it is doing so, it is -unwise to stop right after the values are below the entry-threshold -again, since that would make it very likely that the situation will -occur again very soon. Such a situation would cause the ALU to spend -most of its time disk-usage scheduling, which is unfair to the other -sub-schedulers. The exit-threshold therefore defines the amount of -data that needs to be written to the least-used disk, before control -is relinquished again. - -In addition to the sub-schedulers, the ALU scheduler also has "limits" -options. These can stop the creation of new files on a volume once -values drop below a certain threshold. For example, setting -@command{option alu.limits.min-free-disk 5GB} will stop the scheduling -of files to volumes that have less than 5GB of free disk space, -leaving the files on that disk some room to grow. - -The actual values you assign to the thresholds for sub-schedulers and -limits depend on your situation. If you have fast-growing files, -you'll want to stop file-creation on a disk much earlier than when -hardly any of your files are growing. If you care less about -disk-usage balance than about read-usage balance, you'll want a bigger -disk-usage scheduler entry-threshold and a smaller read-usage -scheduler entry-threshold. - -For thresholds defining a size, values specifying "KB", "MB" and "GB" -are allowed. For example: @command{option alu.limits.min-free-disk 5GB}. - -@cartouche -@table @code -@item alu.order * ("disk-usage:write-usage:read-usage:open-files-usage:disk-speed") -@item alu.disk-usage.entry-threshold (1GB) -@item alu.disk-usage.exit-threshold (512MB) -@item alu.write-usage.entry-threshold <%> (25) -@item alu.write-usage.exit-threshold <%> (5) -@item alu.read-usage.entry-threshold <%> (25) -@item alu.read-usage.exit-threshold <%> (5) -@item alu.open-files-usage.entry-threshold (1000) -@item alu.open-files-usage.exit-threshold (100) -@item alu.limits.min-free-disk <%> -@item alu.limits.max-open-files -@end table -@end cartouche - -@subsubsection Round Robin (RR) -@cindex rr (scheduler) - -@example - option scheduler rr -@end example - -Round-Robin (RR) scheduler creates files in a round-robin -fashion. Each client will have its own round-robin loop. When your -files are mostly similar in size and I/O access pattern, this -scheduler is a good choice. RR scheduler checks for free disk space -on the server before scheduling, so you can know when to add -another server node. The default value of min-free-disk is 5% and is -checked on file creation calls, with atleast 10 seconds (by default) -elapsing between two checks. 
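-
-As an illustration (the brick names below are placeholders, and the
-namespace volume described under ``Namespace'' later in this section is
-omitted for brevity), the scheduler and its options are declared like
-any other volume options:
-
-@example
-volume unify-rr-example
-  type cluster/unify
-  option scheduler rr
-  option rr.limits.min-free-disk 5    # percentage; assumed syntax
-  option rr.refresh-interval 10       # seconds; assumed syntax
-  subvolumes brick1 brick2 brick3
-end-volume
-@end example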
-
-Options:
-@cartouche
-@table @code
-@item rr.limits.min-free-disk <%> (5)
-Minimum free disk space a node must have for RR to schedule a file to it.
-@item rr.refresh-interval (10 seconds)
-Time between two successive free disk space checks.
-@end table
-@end cartouche
-
-@subsubsection Random
-@cindex random (scheduler)
-
-@example
-  option scheduler random
-@end example
-
-The random scheduler schedules file creation randomly among its child nodes.
-Like the round-robin scheduler, it also checks for a minimum amount of free disk
-space before scheduling a file to a node.
-
-@cartouche
-@table @code
-@item random.limits.min-free-disk <%> (5)
-Minimum free disk space a node must have for random to schedule a file to it.
-@item random.refresh-interval (10 seconds)
-Time between two successive free disk space checks.
-@end table
-@end cartouche
-
-@subsubsection NUFA
-@cindex nufa (scheduler)
-
-@example
-  option scheduler nufa
-@end example
-
-It is common in many GlusterFS computing environments for all deployed
-machines to act as both servers and clients. For example, a
-research lab may have 40 workstations each with its own storage. All
-of these workstations might act as servers exporting a volume as well
-as clients accessing the entire cluster's storage. In such a
-situation, it makes sense to store locally created files on the local
-workstation itself (assuming files are accessed most by the
-workstation that created them). The Non-Uniform File Allocation (@acronym{NUFA})
-scheduler accomplishes that.
-
-@acronym{NUFA} gives the local system first priority for file creation
-over other nodes. If the local volume does not have more free disk space
-than a specified amount (5% by default) then @acronym{NUFA} schedules files
-among the other child volumes in a round-robin fashion.
-
-@acronym{NUFA} is named after the similar strategy used for memory access,
-@acronym{NUMA}@footnote{Non-Uniform Memory Access:
-@indicateurl{http://en.wikipedia.org/wiki/Non-Uniform_Memory_Access}}.
-
-@cartouche
-@table @code
-@item nufa.limits.min-free-disk <%> (5)
-Minimum disk space that must be free (local or remote) for @acronym{NUFA} to schedule a
-file to it.
-@item nufa.refresh-interval (10 seconds)
-Time between two successive free disk space checks.
-@item nufa.local-volume-name
-The name of the volume corresponding to the local system. This volume must be
-one of the children of the unify volume. This option is mandatory.
-@end table
-@end cartouche
-
-@cindex namespace
-@subsubsection Namespace
-The unify translator needs a @emph{namespace} volume. It is needed because it
-provides persistent inode numbers, and because it lets a file remain visible
-in the cluster even when the storage node holding its data is down. The files
-on the namespace volume are simply touched (created empty); the namespace is
-consulted on every lookup.
-
-@cartouche
-@table @code
-@item namespace *
-Name of the namespace volume (which should be one of the unify volume's children).
-@item self-heal [on|off] (on)
-Enable/disable self-heal. Unless you know what you are doing, do not disable self-heal.
-@end table
-@end cartouche
-
-@cindex self heal (unify)
-@subsubsection Self Heal
- * When a @command{lookup()}/@command{stat()} call is made on a directory for the first
-time, a self-heal call is made, which checks the consistency of
-its child nodes. If an entry is present on a storage node but not in the
-namespace, that entry is created in the namespace, and vice versa. A
-writedir() API was introduced for this purpose. Self-heal also
-checks for permission and uid/gid consistency.
-
- * This check is also done when a server goes down and comes back up.
- - * If one starts with an empty namespace export, but has data in -storage nodes, a 'find .>/dev/null' or 'ls -lR >/dev/null' should help -to build namespace in one shot. Even otherwise, namespace is built on -demand when a file is looked up for the first time. - -NOTE: There are some issues (Kernel 'Oops' msgs) seen with fuse-2.6.3, -when someone deletes namespace in backend, when glusterfs is -running. But with fuse-2.6.5, this issue is not there. - -@node Replicate -@subsection Replicate (formerly AFR) -@cindex Replicate -@example -type cluster/replicate -@end example - -Replicate provides @acronym{RAID}-1 like functionality for -GlusterFS. Replicate replicates files and directories across the -subvolumes. Hence if Replicate has four subvolumes, there will be -four copies of all files and directories. Replicate provides -high-availability, i.e., in case one of the subvolumes go down -(e. g. server crash, network disconnection) Replicate will still -service the requests using the redundant copies. - -Replicate also provides self-heal functionality, i.e., in case the -crashed servers come up, the outdated files and directories will be -updated with the latest versions. Replicate uses extended -attributes of the backend file system to track the versioning of files -and directories and provide the self-heal feature. - -@example -volume replicate-example - type cluster/replicate - subvolumes brick1 brick2 brick3 -end-volume -@end example - -This sample configuration will replicate all directories and files on -brick1, brick2 and brick3. - -All the read operations happen from the first alive child. If all the -three sub-volumes are up, reads will be done from brick1; if brick1 is -down read will be done from brick2. In case read() was being done on -brick1 and it goes down, replicate transparently falls back to -brick2. - -The next release of GlusterFS will add the following features: -@itemize -@item Ability to specify the sub-volume from which read operations are to be done (this will help users who have one of the sub-volumes as a local storage volume). -@item Allow scheduling of read operations amongst the sub-volumes in a round-robin fashion. -@end itemize - -The order of the subvolumes list should be same across all the 'replicate's as -they will be used for locking purposes. - -@cindex self heal (replicate) -@subsubsection Self Heal -Replicate has self-heal feature, which updates the outdated file and -directory copies by the most recent versions. For example consider the -following config: - -@example -volume replicate-example - type cluster/replicate - subvolumes brick1 brick2 -end-volume -@end example - -@subsubsection File self-heal - -Now if we create a file foo.txt on replicate-example, the file will be created -on brick1 and brick2. The file will have two extended attributes associated -with it in the backend filesystem. One is trusted.afr.createtime and the -other is trusted.afr.version. The trusted.afr.createtime xattr has the -create time (in terms of seconds since epoch) and trusted.afr.version -is a number that is incremented each time a file is modified. This increment -happens during close (incase any write was done before close). - -If brick1 goes down, we edit foo.txt the version gets incremented. Now -the brick1 comes back up, when we open() on foo.txt replicate will check if -their versions are same. If they are not same, the outdated copy is -replaced by the latest copy and its version is updated. 
After the sync -the open() proceeds in the usual manner and the application calling open() -can continue on its access to the file. - -If brick1 goes down, we delete foo.txt and create a file with the same -name again i.e foo.txt. Now brick1 comes back up, clearly there is a -chance that the version on brick1 being more than the version on brick2, -this is where createtime extended attribute helps in deciding which -the outdated copy is. Hence we need to consider both createtime and -version to decide on the latest copy. - -The version attribute is incremented during the close() call. Version -will not be incremented in case there was no write() done. In case the -fd that the close() gets was got by create() call, we also create -the createtime extended attribute. - -@subsubsection Directory self-heal - -Suppose brick1 goes down, we delete foo.txt, brick1 comes back up, now -we should not create foo.txt on brick2 but we should delete foo.txt -on brick1. We handle this situation by having the createtime and version -attribute on the directory similar to the file. when lookup() is done -on the directory, we compare the createtime/version attributes of the -copies and see which files needs to be deleted and delete those files -and update the extended attributes of the outdated directory copy. -Each time a directory is modified (a file or a subdirectory is created -or deleted inside the directory) and one of the subvols is down, we -increment the directory's version. - -lookup() is a call initiated by the kernel on a file or directory -just before any access to that file or directory. In glusterfs, by -default, lookup() will not be called in case it was called in the -past one second on that particular file or directory. - -The extended attributes can be seen in the backend filesystem using -the @command{getfattr} command. (@command{getfattr -n trusted.afr.version }) - -@cartouche -@table @code -@item debug [on|off] (off) -@item self-heal [on|off] (on) -@item replicate (*:1) -@item lock-node (first child is used by default) -@end table -@end cartouche - -@node Stripe -@subsection Stripe -@cindex stripe (translator) -@example -type cluster/stripe -@end example - -The stripe translator distributes the contents of a file over its -sub-volumes. It does this by creating a file equal in size to the -total size of the file on each of its sub-volumes. It then writes only -a part of the file to each sub-volume, leaving the rest of it empty. -These empty regions are called `holes' in Unix terminology. The holes -do not consume any disk space. - -The diagram below makes this clear. - -@center @image{stripe,44pc,,,.pdf} - -You can configure stripe so that only filenames matching a pattern -are striped. You can also configure the size of the data to be stored -on each sub-volume. - -@cartouche -@table @code -@item block-size : (*:0 no striping) -Distribute files matching @command{} over the sub-volumes, -storing at least @command{} on each sub-volume. For example, - -@example - option block-size *.mpg:1M -@end example - -distributes all files ending in @command{.mpg}, storing at least 1 MB on -each sub-volume. - -Any number of @command{block-size} option lines may be present, specifying -different sizes for different file name patterns. 
-@end table -@end cartouche - -@node Performance Translators -@section Performance Translators - -@menu -* Read Ahead:: -* Write Behind:: -* IO Threads:: -* IO Cache:: -* Booster:: -@end menu - -@node Read Ahead -@subsection Read Ahead -@cindex read-ahead (translator) -@example -type performance/read-ahead -@end example - -The read-ahead translator pre-fetches data in advance on every read. -This benefits applications that mostly process files in sequential order, -since the next block of data will already be available by the time the -application is done with the current one. - -Additionally, the read-ahead translator also behaves as a read-aggregator. -Many small read operations are combined and issued as fewer, larger read -requests to the server. - -Read-ahead deals in ``pages'' as the unit of data fetched. The page size -is configurable, as is the ``page count'', which is the number of pages -that are pre-fetched. - -Read-ahead is best used with InfiniBand (using the ib-verbs transport). -On FastEthernet and Gigabit Ethernet networks, -GlusterFS can achieve the link-maximum throughput even without -read-ahead, making it quite superflous. - -Note that read-ahead only happens if the reads are perfectly -sequential. If your application accesses data in a random fashion, -using read-ahead might actually lead to a performance loss, since -read-ahead will pointlessly fetch pages which won't be used by the -application. - -@cartouche -Options: -@table @code -@item page-size (256KB) -The unit of data that is pre-fetched. -@item page-count (2) -The number of pages that are pre-fetched. -@item force-atime-update [on|off|yes|no] (off|no) -Whether to force an access time (atime) update on the file on every read. Without -this, the atime will be slightly imprecise, as it will reflect the time when -the read-ahead translator read the data, not when the application actually read it. -@end table -@end cartouche - -@node Write Behind -@subsection Write Behind -@cindex write-behind (translator) -@example -type performance/write-behind -@end example - -The write-behind translator improves the latency of a write operation. -It does this by relegating the write operation to the background and -returning to the application even as the write is in progress. Using the -write-behind translator, successive write requests can be pipelined. -This mode of write-behind operation is best used on the client side, to -enable decreased write latency for the application. - -The write-behind translator can also aggregate write requests. If the -@command{aggregate-size} option is specified, then successive writes upto that -size are accumulated and written in a single operation. This mode of operation -is best used on the server side, as this will decrease the disk's head movement -when multiple files are being written to in parallel. - -The @command{aggregate-size} option has a default value of 128KB. Although -this works well for most users, you should always experiment with different values -to determine the one that will deliver maximum performance. This is because the -performance of write-behind depends on your interconnect, size of RAM, and the -work load. 
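-
-As a rough client-side sketch (the subvolume name @command{client} and the
-256KB figure are only illustrative values to experiment with, and the size
-suffix syntax is assumed):
-
-@example
-volume writebehind
-  type performance/write-behind
-  option aggregate-size 256KB   # experiment with this value
-  subvolumes client
-end-volume
-@end example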
- -@cartouche -@table @code -@item aggregate-size (128KB) -Amount of data to accumulate before doing a write -@item flush-behind [on|yes|off|no] (off|no) - -@end table -@end cartouche - -@node IO Threads -@subsection IO Threads -@cindex io-threads (translator) -@example -type performance/io-threads -@end example - -The IO threads translator is intended to increase the responsiveness -of the server to metadata operations by doing file I/O (read, write) -in a background thread. Since the GlusterFS server is -single-threaded, using the IO threads translator can significantly -improve performance. This translator is best used on the server side, -loaded just below the server protocol translator. - -IO threads operates by handing out read and write requests to a separate thread. -The total number of threads in existence at a time is constant, and configurable. - -@cartouche -@table @code -@item thread-count (1) -Number of threads to use. -@end table -@end cartouche - -@node IO Cache -@subsection IO Cache -@cindex io-cache (translator) -@example -type performance/io-cache -@end example - -The IO cache translator caches data that has been read. This is useful -if many applications read the same data multiple times, and if reads -are much more frequent than writes (for example, IO caching may be -useful in a web hosting environment, where most clients will simply -read some files and only a few will write to them). - -The IO cache translator reads data from its child in @command{page-size} chunks. -It caches data upto @command{cache-size} bytes. The cache is maintained as -a prioritized least-recently-used (@acronym{LRU}) list, with priorities determined -by user-specified patterns to match filenames. - -When the IO cache translator detects a write operation, the -cache for that file is flushed. - -The IO cache translator periodically verifies the consistency of -cached data, using the modification times on the files. The verification timeout -is configurable. - -@cartouche -@table @code -@item page-size (128KB) -Size of a page. -@item cache-size (n) (32MB) -Total amount of data to be cached. -@item force-revalidate-timeout (1) -Timeout to force a cache consistency verification, in seconds. -@item priority (*:0) -Filename patterns listed in order of priority. -@end table -@end cartouche - -@node Booster -@subsection Booster -@cindex booster -@example - type performance/booster -@end example - -The booster translator gives applications a faster path to communicate -read and write requests to GlusterFS. Normally, all requests to GlusterFS from -applications go through FUSE, as indicated in @ref{Filesystems in Userspace}. -Using the booster translator in conjunction with the GlusterFS booster shared -library, an application can bypass the FUSE path and send read/write requests -directly to the GlusterFS client process. - -The booster mechanism consists of two parts: the booster translator, -and the booster shared library. The booster translator is meant to be -loaded on the client side, usually at the root of the translator tree. -The booster shared library should be @command{LD_PRELOAD}ed with the -application. - -The booster translator when loaded opens a Unix domain socket and -listens for read/write requests on it. The booster shared library -intercepts read and write system calls and sends the requests to the -GlusterFS process directly using the Unix domain socket, bypassing FUSE. -This leads to superior performance. 
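-
-As a sketch (the subvolume name @command{wb} is only a placeholder for
-whatever translator currently sits at the root of your client-side tree),
-loading booster at the top of a client specification might look like this:
-
-@example
-volume booster
-  type performance/booster
-  subvolumes wb
-end-volume
-@end example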
- -Once you've loaded the booster translator in your volume specification file, you -can start your application as: - -@example - $ LD_PRELOAD=/usr/local/bin/glusterfs-booster.so your_app -@end example - -The booster translator accepts no options. - -@node Features Translators -@section Features Translators - -@menu -* POSIX Locks:: -* Fixed ID:: -@end menu - -@node POSIX Locks -@subsection POSIX Locks -@cindex record locking -@cindex fcntl -@cindex posix-locks (translator) -@example -type features/posix-locks -@end example - -This translator provides storage independent POSIX record locking -support (@command{fcntl} locking). Typically you'll want to load this on the -server side, just above the @acronym{POSIX} storage translator. Using this -translator you can get both advisory locking and mandatory locking -support. It also handles @command{flock()} locks properly. - -Caveat: Consider a file that does not have its mandatory locking bits -(+setgid, -group execution) turned on. Assume that this file is now -opened by a process on a client that has the write-behind xlator -loaded. The write-behind xlator does not cache anything for files -which have mandatory locking enabled, to avoid incoherence. Let's say -that mandatory locking is now enabled on this file through another -client. The former client will not know about this change, and -write-behind may erroneously report a write as being successful when -in fact it would fail due to the region it is writing to being locked. - -There seems to be no easy way to fix this. To work around this -problem, it is recommended that you never enable the mandatory bits on -a file while it is open. - -@cartouche -@table @code -@item mandatory [on|off] (on) -Turns mandatory locking on. -@end table -@end cartouche - -@node Fixed ID -@subsection Fixed ID -@cindex fixed-id (translator) -@example -type features/fixed-id -@end example - -The fixed ID translator makes all filesystem requests from the client -to appear to be coming from a fixed, specified -@acronym{UID}/@acronym{GID}, regardless of which user actually -initiated the request. - -@cartouche -@table @code -@item fixed-uid [if not set, not used] -The @acronym{UID} to send to the server -@item fixed-gid [if not set, not used] -The @acronym{GID} to send to the server -@end table -@end cartouche - -@node Miscellaneous Translators -@section Miscellaneous Translators - -@menu -* ROT-13:: -* Trace:: -@end menu - -@node ROT-13 -@subsection ROT-13 -@cindex rot-13 (translator) -@example -type encryption/rot-13 -@end example - -@acronym{ROT-13} is a toy translator that can ``encrypt'' and ``decrypt'' file -contents using the @acronym{ROT-13} algorithm. @acronym{ROT-13} is a trivial -algorithm that rotates each alphabet by thirteen places. Thus, 'A' becomes 'N', -'B' becomes 'O', and 'Z' becomes 'M'. - -It goes without saying that you shouldn't use this translator if you need -@emph{real} encryption (a future release of GlusterFS will have real encryption -translators). - -@cartouche -@table @code -@item encrypt-write [on|off] (on) -Whether to encrypt on write -@item decrypt-read [on|off] (on) -Whether to decrypt on read -@end table -@end cartouche - -@node Trace -@subsection Trace -@cindex trace (translator) -@example -type debug/trace -@end example - -The trace translator is intended for debugging purposes. When loaded, it -logs all the system calls received by the server or client (wherever -trace is loaded), their arguments, and the results. 
You must use a GlusterFS log
-level of DEBUG (see @ref{Running GlusterFS}) for trace to work.
-
-Sample trace output (lines have been wrapped for readability):
-@cartouche
-@example
-2007-10-30 00:08:58 D [trace.c:1579:trace_opendir] trace: callid: 68
-(*this=0x8059e40, loc=0x8091984 @{path=/iozone3_283, inode=0x8091f00@},
- fd=0x8091d50)
-
-2007-10-30 00:08:58 D [trace.c:630:trace_opendir_cbk] trace:
-(*this=0x8059e40, op_ret=4, op_errno=1, fd=0x8091d50)
-
-2007-10-30 00:08:58 D [trace.c:1602:trace_readdir] trace: callid: 69
-(*this=0x8059e40, size=4096, offset=0 fd=0x8091d50)
-
-2007-10-30 00:08:58 D [trace.c:215:trace_readdir_cbk] trace:
-(*this=0x8059e40, op_ret=0, op_errno=0, count=4)
-
-2007-10-30 00:08:58 D [trace.c:1624:trace_closedir] trace: callid: 71
-(*this=0x8059e40, *fd=0x8091d50)
-
-2007-10-30 00:08:58 D [trace.c:809:trace_closedir_cbk] trace:
-(*this=0x8059e40, op_ret=0, op_errno=1)
-@end example
-@end cartouche
-
-@node Usage Scenarios
-@chapter Usage Scenarios
-
-@section Advanced Striping
-
-This section is based on the Advanced Striping tutorial written by
-Anand Avati on the GlusterFS wiki
-@footnote{http://gluster.org/docs/index.php/Mixing_Striped_and_Regular_Files}.
-
-@subsection Mixed Storage Requirements
-
-There are two ways of scheduling I/O: at the file level (using the
-unify translator) and at the block level (using the stripe
-translator). Striped I/O is good for files that are potentially large
-and require high parallel throughput (for example, a single file of
-400GB being accessed by hundreds or thousands of systems simultaneously and
-randomly). For most cases, file-level scheduling works best.
-
-In the real world, it is desirable to mix file-level and block-level
-scheduling on a single storage volume. Alternatively, users can choose
-to have two separate volumes and hence two mount points, but the
-applications may demand a single storage system to host both.
-
-This section explains how to mix file-level scheduling with striping.
-
-@subsection Configuration Brief
-
-This setup demonstrates how users can configure the unify translator with an
-appropriate I/O scheduler for file-level scheduling, and stripe only files
-matching certain patterns. This way, GlusterFS chooses the appropriate I/O profile
-and knows how to efficiently handle both types of data.
-
-A simple technique to achieve this effect is to create a stripe set consisting
-of the unify volume and the stripe bricks, where unify is the first sub-volume.
-Files that do not match the stripe policy are passed on to the unify
-sub-volume and are in turn scheduled across the cluster using its file-level
-I/O scheduler.
-
-@image{advanced-stripe,44pc,,,.pdf}
-
-@subsection Preparing the GlusterFS Environment
-
-Create the directories /export/for-namespace, /export/for-unify and
-/export/for-stripe on all the storage bricks.
-
-Place the following server and client volume spec files under
-/etc/glusterfs (or the appropriate installed path) and replace the IP
-addresses / access control fields to match your environment.
-
-@cartouche
-@example
-## file: /etc/glusterfs/glusterfsd.vol
-volume posix-unify
-  type storage/posix
-  option directory /export/for-unify
-end-volume
-
-volume posix-stripe
-  type storage/posix
-  option directory /export/for-stripe
-end-volume
-
-volume posix-namespace
-  type storage/posix
-  option directory /export/for-namespace
-end-volume
-
-volume server
-  type protocol/server
-  option transport-type tcp
-  option auth.addr.posix-unify.allow 192.168.1.*
-  option auth.addr.posix-stripe.allow 192.168.1.*
-  option auth.addr.posix-namespace.allow 192.168.1.*
-  subvolumes posix-unify posix-stripe posix-namespace
-end-volume
-@end example
-@end cartouche
-
-@cartouche
-@example
-## file: /etc/glusterfs/glusterfs.vol
-volume client-namespace
-  type protocol/client
-  option transport-type tcp
-  option remote-host 192.168.1.1
-  option remote-subvolume posix-namespace
-end-volume
-
-volume client-unify-1
-  type protocol/client
-  option transport-type tcp
-  option remote-host 192.168.1.1
-  option remote-subvolume posix-unify
-end-volume
-
-volume client-unify-2
-  type protocol/client
-  option transport-type tcp
-  option remote-host 192.168.1.2
-  option remote-subvolume posix-unify
-end-volume
-
-volume client-unify-3
-  type protocol/client
-  option transport-type tcp
-  option remote-host 192.168.1.3
-  option remote-subvolume posix-unify
-end-volume
-
-volume client-unify-4
-  type protocol/client
-  option transport-type tcp
-  option remote-host 192.168.1.4
-  option remote-subvolume posix-unify
-end-volume
-
-volume client-stripe-1
-  type protocol/client
-  option transport-type tcp
-  option remote-host 192.168.1.1
-  option remote-subvolume posix-stripe
-end-volume
-
-volume client-stripe-2
-  type protocol/client
-  option transport-type tcp
-  option remote-host 192.168.1.2
-  option remote-subvolume posix-stripe
-end-volume
-
-volume client-stripe-3
-  type protocol/client
-  option transport-type tcp
-  option remote-host 192.168.1.3
-  option remote-subvolume posix-stripe
-end-volume
-
-volume client-stripe-4
-  type protocol/client
-  option transport-type tcp
-  option remote-host 192.168.1.4
-  option remote-subvolume posix-stripe
-end-volume
-
-volume unify
-  type cluster/unify
-  option scheduler rr
-  option namespace client-namespace
-  subvolumes client-unify-1 client-unify-2 client-unify-3 client-unify-4
-end-volume
-
-volume stripe
-  type cluster/stripe
-  option block-size *.img:2MB # All files ending with .img are striped with 2MB stripe block size.
-  subvolumes unify client-stripe-1 client-stripe-2 client-stripe-3 client-stripe-4
-end-volume
-@end example
-@end cartouche
-
-@subsection Bringing up the Storage
-
-Starting the GlusterFS server: if you have installed GlusterFS through a binary
-package, you can start the service through the init.d startup script. If
-not:
-
-@example
-[root@@server]# glusterfsd
-@end example
-
-Mounting GlusterFS Volumes:
-
-@example
-[root@@client]# glusterfs -s [BRICK-IP-ADDRESS] /mnt/cluster
-@end example
-
-@subsection Improving upon this Setup
-
-The InfiniBand verbs RDMA transport is much faster than the TCP/IP GigE
-transport.
-
-Use of performance translators such as read-ahead, write-behind,
-io-cache, io-threads, and booster is recommended.
-
-Replace the round-robin (rr) scheduler with ALU to handle more dynamic
-storage environments.
-
-@node Troubleshooting
-@chapter Troubleshooting
-
-This chapter is a general troubleshooting guide to GlusterFS. It lists
-common GlusterFS server and client error messages and debugging hints, and
-concludes with the suggested procedure for reporting bugs in GlusterFS.
- -@section GlusterFS error messages - -@subsection Server errors - -@example -glusterfsd: FATAL: could not open specfile: -'/etc/glusterfs/glusterfsd.vol' -@end example - -The GlusterFS server expects the volume specification file to be -at @command{/etc/glusterfs/glusterfsd.vol}. The example -specification file will be installed as -@command{/etc/glusterfs/glusterfsd.vol.sample}. You need to edit -it and rename it, or provide a different specification file using -the @command{--spec-file} command line option (See @ref{Server}). - -@vskip 4ex - -@example -gf_log_init: failed to open logfile "/usr/var/log/glusterfs/glusterfsd.log" - (Permission denied) -@end example - -You don't have permission to create files in the -@command{/usr/var/log/glusterfs} directory. Make sure you are running -GlusterFS as root. Alternatively, specify a different path for the log -file using the @command{--log-file} option (See @ref{Server}). - -@subsection Client errors - -@example -fusermount: failed to access mountpoint /mnt: - Transport endpoint is not connected -@end example - -A previous failed (or hung) mount of GlusterFS is preventing it from being -mounted again in the same location. The fix is to do: - -@example -# umount /mnt -@end example - -and try mounting again. - -@vskip 4ex - -@strong{``Transport endpoint is not connected''.} - -If you get this error when you try a command such as @command{ls} or @command{cat}, -it means the GlusterFS mount did not succeed. Try running GlusterFS in @command{DEBUG} -logging level and study the log messages to discover the cause. - -@vskip 4ex - -@strong{``Connect to server failed'', ``SERVER-ADDRESS: Connection refused''.} - -GluserFS Server is not running or dead. Check your network -connections and firewall settings. To check if the server is reachable, -try: - -@example -telnet IP-ADDRESS 24007 -@end example - -If the server is accessible, your `telnet' command should connect and -block. If not you will see an error message such as @command{telnet: Unable to -connect to remote host: Connection refused}. 24007 is the default -GlusterFS port. If you have changed it, then use the corresponding -port instead. - -@vskip 4ex - -@example -gf_log_init: failed to open logfile "/usr/var/log/glusterfs/glusterfs.log" - (Permission denied) -@end example - -You don't have permission to create files in the -@command{/usr/var/log/glusterfs} directory. Make sure you are running -GlusterFS as root. Alternatively, specify a different path for the log -file using the @command{--log-file} option (See @ref{Client}). - -@section FUSE error messages -@command{modprobe fuse} fails with: ``Unknown symbol in module, or unknown parameter''. -@cindex Redhat Enterprise Linux - -If you are using fuse-2.6.x on Redhat Enterprise Linux Work Station 4 -and Advanced Server 4 with 2.6.9-42.ELlargesmp, 2.6.9-42.ELsmp, -2.6.9-42.EL kernels and get this error while loading @acronym{FUSE} kernel -module, you need to apply the following patch. - -For fuse-2.6.2: - -@indicateurl{http://ftp.gluster.com/pub/gluster/glusterfs/fuse/fuse-2.6.2-rhel-build.patch} - -For fuse-2.6.3: - -@indicateurl{http://ftp.gluster.com/pub/gluster/glusterfs/fuse/fuse-2.6.3-rhel-build.patch} - -@section AppArmour and GlusterFS -@cindex AppArmour -@cindex OpenSuSE -Under OpenSuSE GNU/Linux, the AppArmour security feature does not -allow GlusterFS to create temporary files or network socket -connections even while running as root. 
You will see error messages -like `Unable to open log file: Operation not permitted' or `Connection -refused'. Disabling AppArmour using YaST or properly configuring -AppArmour to recognize @command{glusterfsd} or @command{glusterfs}/@command{fusermount} -should solve the problem. - -@section Reporting a bug - -If you encounter a bug in GlusterFS, please follow the below -guidelines when you report it to the mailing list. Be sure to report -it! User feedback is crucial to the health of the project and we value -it highly. - -@subsection General instructions - -When running GlusterFS in a non-production environment, be sure to -build it with the following command: - -@example - $ make CFLAGS='-g -O0 -DDEBUG' -@end example - -This includes debugging information which will be helpful in getting -backtraces (see below) and also disable optimization. Enabling -optimization can result in incorrect line numbers being reported to -gdb. - -@subsection Volume specification files - -Attach all relevant server and client spec files you were using when -you encountered the bug. Also tell us details of your setup, i.e., how -many clients and how many servers. - -@subsection Log files - -Set the loglevel of your client and server programs to @acronym{DEBUG} (by -passing the -L @acronym{DEBUG} option) and attach the log files with your bug -report. Obviously, if only the client is failing (for example), you -only need to send us the client log file. - -@subsection Backtrace - -If GlusterFS has encountered a segmentation fault or has crashed for -some other reason, include the backtrace with the bug report. You can -get the backtrace using the following procedure. - -Run the GlusterFS client or server inside gdb. - -@example - $ gdb ./glusterfs - (gdb) set args -f client.spec -N -l/path/to/log/file -LDEBUG /mnt/point - (gdb) run -@end example - -Now when the process segfaults, you can get the backtrace by typing: - -@example - (gdb) bt -@end example - -If the GlusterFS process has crashed and dumped a core file (you can -find this in / if running as a daemon and in the current directory -otherwise), you can do: - -@example - $ gdb /path/to/glusterfs /path/to/core. -@end example - -and then get the backtrace. - -If the GlusterFS server or client seems to be hung, then you can get -the backtrace by attaching gdb to the process. First get the @command{PID} of -the process (using ps), and then do: - -@example - $ gdb ./glusterfs -@end example - -Press Ctrl-C to interrupt the process and then generate the backtrace. - -@subsection Reproducing the bug - -If the bug is reproducible, please include the steps necessary to do -so. If the bug is not reproducible, send us the bug report anyway. - -@subsection Other information - -If you think it is relevant, send us also the version of @acronym{FUSE} you're -using, the kernel version, platform. - -@node GNU Free Documentation Licence -@appendix GNU Free Documentation Licence -@include fdl.texi - -@node Index -@unnumbered Index -@printindex cp - -@bye diff --git a/doc/user-guide/legacy/xlator.odg b/doc/user-guide/legacy/xlator.odg deleted file mode 100644 index 179a65f6e..000000000 Binary files a/doc/user-guide/legacy/xlator.odg and /dev/null differ diff --git a/doc/user-guide/legacy/xlator.pdf b/doc/user-guide/legacy/xlator.pdf deleted file mode 100644 index a07e14d67..000000000 Binary files a/doc/user-guide/legacy/xlator.pdf and /dev/null differ -- cgit