Diffstat (limited to 'doc/legacy/docbook/admin_troubleshooting.xml')
-rw-r--r--  doc/legacy/docbook/admin_troubleshooting.xml  518
1 files changed, 0 insertions, 518 deletions
diff --git a/doc/legacy/docbook/admin_troubleshooting.xml b/doc/legacy/docbook/admin_troubleshooting.xml
deleted file mode 100644
index af1259ada43..00000000000
--- a/doc/legacy/docbook/admin_troubleshooting.xml
+++ /dev/null
@@ -1,518 +0,0 @@
-<?xml version='1.0' encoding='UTF-8'?>
-<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "docbookV4.5/docbookx.dtd" []>
-<!--
- Copyright (c) 2006-2012 Red Hat, Inc. <http://www.redhat.com>
- This file is part of GlusterFS.
-
- This file is licensed to you under your choice of the GNU Lesser
- General Public License, version 3 or any later version (LGPLv3 or
- later), or the GNU General Public License, version 2 (GPLv2), in all
- cases as published by the Free Software Foundation.
--->
-<chapter id="chap-Administration_Guide-Troubleshooting">
- <title>Troubleshooting GlusterFS </title>
 - <para>This section describes how to manage GlusterFS logs and the most common troubleshooting scenarios
-related to GlusterFS.
-</para>
- <section>
- <title>Managing GlusterFS Logs </title>
- <para>This section describes how to manage GlusterFS logs by performing the following operation:
-
-</para>
- <itemizedlist>
- <listitem>
- <para>Rotating Logs
-</para>
- </listitem>
- </itemizedlist>
- <section>
- <title>Rotating Logs </title>
- <para>Administrators can rotate the log file in a volume, as needed.
-</para>
- <para><emphasis role="bold">To rotate a log file </emphasis></para>
- <itemizedlist>
- <listitem>
- <para>Rotate the log file using the following command:
-</para>
- <para><command># gluster volume log rotate <replaceable>VOLNAME</replaceable></command></para>
- <para>For example, to rotate the log file on test-volume:
-</para>
- <programlisting># gluster volume log rotate test-volume
-log rotate successful
-</programlisting>
- <note>
 - <para>When a log file is rotated, the contents of the current log file are moved to
-<filename>log-file-name.epoch-time-stamp</filename>.
-</para>
- </note>
- </listitem>
- </itemizedlist>
- </section>
- </section>
- <section>
- <title>Troubleshooting Geo-replication </title>
- <para>This section describes the most common troubleshooting scenarios related to GlusterFS Geo-replication.
-</para>
- <section>
- <title>Locating Log Files </title>
 - <para>Every Geo-replication session has the following three log files associated with it (four, if the slave is a
-gluster volume):
-</para>
- <itemizedlist>
- <listitem>
- <para>Master-log-file - log file for the process which monitors the Master volume
-</para>
- </listitem>
- <listitem>
 - <para>Slave-log-file - log file for the process which initiates the changes on the slave
-</para>
- </listitem>
- <listitem>
 - <para>Master-gluster-log-file - log file for the maintenance mount point that the Geo-replication module
-uses to monitor the master volume
-</para>
- </listitem>
- <listitem>
 - <para>Slave-gluster-log-file - the slave&apos;s counterpart of the Master-gluster-log-file
-</para>
- </listitem>
- </itemizedlist>
- <para><emphasis role="bold">Master Log File</emphasis>
-</para>
- <para>To get the Master-log-file for geo-replication, use the following command:
-</para>
 - <para><command># gluster volume geo-replication <replaceable>MASTER</replaceable> <replaceable>SLAVE</replaceable> config log-file</command>
-</para>
- <para>For example:
-</para>
- <para><command># gluster volume geo-replication Volume1 example.com:/data/remote_dir config log-file </command></para>
- <para><emphasis role="bold">Slave Log File </emphasis></para>
 - <para>To get the log file for Geo-replication on the slave (glusterd must be running on the slave machine), use the
-following commands:
-</para>
- <orderedlist>
- <listitem>
- <para>On master, run the following command:
-</para>
 - <programlisting># gluster volume geo-replication Volume1 example.com:/data/remote_dir config session-owner
-5f6e5200-756f-11e0-a1f0-0800200c9a66</programlisting>
 - <para>This displays the session owner details.
-</para>
- </listitem>
- <listitem>
- <para>On slave, run the following command:
-</para>
 - <programlisting># gluster volume geo-replication /data/remote_dir config log-file
-/var/log/gluster/${session-owner}:remote-mirror.log</programlisting>
- </listitem>
- <listitem>
 - <para>Substitute the session owner details (output of Step 1) into the output of Step 2 to get the
-location of the log file.
-</para>
- <para><command>/var/log/gluster/5f6e5200-756f-11e0-a1f0-0800200c9a66:remote-mirror.log</command>
-</para>
- </listitem>
- </orderedlist>
- </section>
- <section>
- <title>Rotating Geo-replication Logs</title>
- <para>Administrators can rotate the log file of a particular master-slave session, as needed.
-When you run geo-replication&apos;s <command> log-rotate</command> command, the log file
-is backed up with the current timestamp suffixed to the file
-name and a signal is sent to gsyncd to start logging to a new
-log file.</para>
- <para><emphasis role="bold">To rotate a geo-replication log file </emphasis></para>
- <itemizedlist>
- <listitem>
- <para>Rotate log file for a particular master-slave session using the following command:
-</para>
- <para><command># gluster volume geo-replication <replaceable>master slave</replaceable> log-rotate</command>
-</para>
 - <para>For example, to rotate the log file of master <filename>Volume1</filename> and slave <filename>example.com:/data/remote_dir</filename>:
-</para>
 - <programlisting># gluster volume geo-replication Volume1 example.com:/data/remote_dir log-rotate
-log rotate successful</programlisting>
- </listitem>
- <listitem>
- <para>Rotate log file for all sessions for a master volume using the following command:
-</para>
- <para><command># gluster volume geo-replication <replaceable>master</replaceable> log-rotate</command>
-</para>
- <para>For example, to rotate the log file of master <filename>Volume1</filename>:
-</para>
 - <programlisting># gluster volume geo-replication Volume1 log-rotate
-log rotate successful</programlisting>
- </listitem>
- <listitem>
- <para>Rotate log file for all sessions using the following command:
-</para>
- <para><command># gluster volume geo-replication log-rotate</command>
-</para>
- <para>For example, to rotate the log file for all sessions:</para>
 - <programlisting># gluster volume geo-replication log-rotate
-log rotate successful</programlisting>
- </listitem>
- </itemizedlist>
- </section>
- <section>
- <title>Synchronization is not complete </title>
- <para><emphasis role="bold">Description</emphasis>: GlusterFS Geo-replication did not synchronize the data completely but still the geo-
-replication status displayed is OK.
-</para>
- <para><emphasis role="bold">Solution</emphasis>: You can enforce a full sync of the data by erasing the index and restarting GlusterFS Geo-
-replication. After restarting, GlusterFS Geo-replication begins synchronizing all the data. All files are compared using checksum, which can be a lengthy and high resource utilization operation on large
-data sets. If the error situation persists, contact Red Hat Support.
-</para>
- <para>For more information about erasing index, see <xref linkend="sect-Administration_Guide-Managing_Volumes-Tuning"/>.
-</para>
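 - <para>A rough sketch of that procedure (assuming the <command>geo-replication.indexing</command> volume option described in the Tuning section is used to erase the index, and using the example master/slave names from this chapter):
-</para>
 - <programlisting># gluster volume geo-replication Volume1 example.com:/data/remote_dir stop
-# gluster volume set Volume1 geo-replication.indexing off
-# gluster volume geo-replication Volume1 example.com:/data/remote_dir start</programlisting>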
- </section>
- <section>
- <title>Issues in Data Synchronization </title>
- <para><emphasis role="bold">Description</emphasis>: Geo-replication display status as OK, but the files do not get synced, only
-directories and symlink gets synced with the following error message in the log:
-</para>
- <para><errortext>[2011-05-02 13:42:13.467644] E [master:288:regjob] GMaster: failed to sync ./some_file` </errortext></para>
- <para><emphasis role="bold">Solution</emphasis>: Geo-replication invokes rsync v3.0.0 or higher on the host and the remote machine. You must verify if
-you have installed the required version.
-</para>
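 - <para>A quick way to confirm the installed version on each machine (a minimal check, assuming rsync is on the default PATH; the version shown below is only illustrative):
-</para>
 - <programlisting># rsync --version | head -1
-rsync  version 3.0.7  protocol version 30</programlisting>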
- </section>
- <section>
- <title>Geo-replication status displays Faulty very often </title>
- <para><emphasis role="bold">Description</emphasis>: Geo-replication displays status as faulty very often with a backtrace similar to
-the following:
-</para>
 - <para><screen>[2011-04-28 14:06:18.378859] E [syncdutils:131:log_raise_exception] &lt;top&gt;: FAIL: Traceback (most recent call last):
-  File &quot;/usr/local/libexec/glusterfs/python/syncdaemon/syncdutils.py&quot;, line 152, in twrap
-    tf(*aa)
-  File &quot;/usr/local/libexec/glusterfs/python/syncdaemon/repce.py&quot;, line 118, in listen
-    rid, exc, res = recv(self.inf)
-  File &quot;/usr/local/libexec/glusterfs/python/syncdaemon/repce.py&quot;, line 42, in recv
-    return pickle.load(inf)
-EOFError</screen></para>
- <para><emphasis role="bold">Solution</emphasis>: This error indicates that the RPC communication between the master gsyncd module and slave
-gsyncd module is broken and this can happen for various reasons. Check if it satisfies all the following
-pre-requisites:
-</para>
- <itemizedlist>
- <listitem>
- <para>Password-less SSH is set up properly between the host and the remote machine.
-</para>
- </listitem>
- <listitem>
 - <para>FUSE is installed on the machine; the geo-replication module mounts the GlusterFS volume
-using FUSE to sync data.
-</para>
- </listitem>
- <listitem>
- <para>If the <emphasis role="bold">Slave</emphasis> is a volume, check if that volume is started.
-</para>
- </listitem>
- <listitem>
 - <para>If the Slave is a plain directory, verify that the directory has already been created with the
-required permissions.
-</para>
- </listitem>
- <listitem>
 - <para>If GlusterFS 3.2 or higher is not installed in the default location on the Master and has been
-installed with a custom prefix, configure the <command>gluster-command</command> option to point to the exact
-location.
-</para>
- </listitem>
- <listitem>
 - <para>If GlusterFS 3.2 or higher is not installed in the default location on the slave and has been
-installed with a custom prefix, configure the <command>remote-gsyncd-command</command> option to point to the
-exact place where gsyncd is located (see the example after this list).
-</para>
- </listitem>
- </itemizedlist>
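 - <para>An illustrative sketch of these checks, using the example master/slave names from this chapter (the host and installation paths are examples only; substitute your own):
-</para>
 - <programlisting># ssh root@example.com echo ok
-ok
-# gluster volume geo-replication Volume1 example.com:/data/remote_dir config gluster-command /usr/local/sbin/gluster
-# gluster volume geo-replication Volume1 example.com:/data/remote_dir config remote-gsyncd-command /usr/local/libexec/glusterfs/gsyncd</programlisting>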
- </section>
- <section>
- <title>Intermediate Master goes to Faulty State </title>
- <para><emphasis role="bold">Description</emphasis>: In a cascading set-up, the intermediate master goes to faulty state with the following
-log:
-</para>
- <para><errortext>raise RuntimeError (&quot;aborting on uuid change from %s to %s&quot; % \ RuntimeError: aborting on uuid change from af07e07c-427f-4586-ab9f- 4bf7d299be81 to de6b5040-8f4e-4575-8831-c4f55bd41154 </errortext></para>
- <para><emphasis role="bold">Solution</emphasis>: In a cascading set-up the Intermediate master is loyal to the original primary master. The
-above log means that the geo-replication module has detected change in primary master.
-If this is the desired behavior, delete the config option volume-id in the session initiated from the
-intermediate master.
-</para>
- </section>
- </section>
- <section>
- <title>Troubleshooting POSIX ACLs </title>
- <para>This section describes the most common troubleshooting issues related to POSIX ACLs.
-</para>
- <section>
- <title>setfacl command fails with “setfacl: &lt;file or directory name&gt;: Operation not supported” error </title>
 - <para>You may encounter this error when the backend file system on one of the servers is not mounted with
-the &quot;-o acl&quot; option. This can be confirmed by the following error message in that server&apos;s log file:
-&quot;Posix access control list is not supported&quot;.
-</para>
- <para><emphasis role="bold">Solution</emphasis>: Remount the backend file system with &quot;-o acl&quot; option. For more information, see <xref linkend="sect-Administration_Guide-ACLs-Activating_ACLs-Server"/>.
-</para>
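 - <para>For example, assuming the brick&apos;s backend file system is already mounted at <filename>/export/brick1</filename> (an illustrative path), it can be remounted with ACL support as follows:
-</para>
 - <para><command># mount -o remount,acl /export/brick1 </command></para>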
- </section>
- </section>
- <section>
- <title>Troubleshooting Hadoop Compatible Storage </title>
- <para>This section describes the most common troubleshooting issues related to Hadoop Compatible
-Storage.
-
- </para>
- <section id="sect-Administration_Guide-Troubleshooting-Test_Section_1">
- <title>Time Sync</title>
 - <para>Running a MapReduce job may throw exceptions if the time is out of sync on the hosts in the cluster.
-
- </para>
- <para><emphasis role="bold">Solution</emphasis>: Sync the time on all hosts using ntpd program.
-</para>
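 - <para>For example, on distributions that manage ntpd as an init service (adjust the commands and NTP server for your environment), a one-time sync followed by starting the service on each host could look like this:
-</para>
 - <programlisting># ntpdate pool.ntp.org
-# /etc/init.d/ntpd start</programlisting>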
- </section>
- </section>
- <section>
- <title>Troubleshooting NFS </title>
- <para>This section describes the most common troubleshooting issues related to NFS .
-</para>
- <section>
- <title>mount command on NFS client fails with “RPC Error: Program not registered” </title>
- <para>Start portmap or rpcbind service on the NFS server.
-</para>
- <para>This error is encountered when the server has not started correctly.
-</para>
- <para>On most Linux distributions this is fixed by starting portmap:
-</para>
- <para><command>$ /etc/init.d/portmap start</command>
-</para>
- <para>On some distributions where portmap has been replaced by rpcbind, the following command is
-required:
-</para>
- <para><command>$ /etc/init.d/rpcbind start </command></para>
- <para>After starting portmap or rpcbind, gluster NFS server needs to be restarted.
-</para>
- </section>
- <section>
 - <title>NFS server start-up fails with “Port is already in use” error in the log file </title>
- <para>Another Gluster NFS server is running on the same machine.
-</para>
 - <para>This error can arise when there is already a Gluster NFS server running on the same machine.
-This situation can be confirmed from the log file if the following error lines exist:
-</para>
- <para><screen>[2010-05-26 23:40:49] E [rpc-socket.c:126:rpcsvc_socket_listen] rpc-socket: binding socket failed:Address already in use
-[2010-05-26 23:40:49] E [rpc-socket.c:129:rpcsvc_socket_listen] rpc-socket: Port is already in use
-[2010-05-26 23:40:49] E [rpcsvc.c:2636:rpcsvc_stage_program_register] rpc-service: could not create listening connection
-[2010-05-26 23:40:49] E [rpcsvc.c:2675:rpcsvc_program_register] rpc-service: stage registration of program failed
-[2010-05-26 23:40:49] E [rpcsvc.c:2695:rpcsvc_program_register] rpc-service: Program registration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465
-[2010-05-26 23:40:49] E [nfs.c:125:nfs_init_versions] nfs: Program init failed
-[2010-05-26 23:40:49] C [nfs.c:531:notify] nfs: Failed to initialize protocols</screen></para>
 - <para>To resolve this error, one of the Gluster NFS servers will have to be shut down. At this time,
-Gluster NFS does not support running multiple NFS servers on the same machine.
-</para>
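 - <para>One generic way to confirm which process is holding a Gluster NFS port (an illustrative check, not specific to Gluster) is:
-</para>
 - <programlisting># netstat -tlnp | grep 38465</programlisting>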
- </section>
- <section>
- <title>mount command fails with “rpc.statd” related error message </title>
- <para>If the mount command fails with the following error message:
-</para>
- <para><errortext>mount.nfs: rpc.statd is not running but is required for remote locking. mount.nfs: Either use &apos;-o nolock&apos; to keep locks local, or start statd. </errortext></para>
 - <para>Start rpc.statd.</para>
 - <para>For NFS clients to mount the NFS server, the rpc.statd service must be running on the clients. Start
-the rpc.statd service by running the following command:
-</para>
- <para><command>$ rpc.statd </command></para>
- </section>
- <section>
- <title>mount command takes too long to finish. </title>
- <para>Start rpcbind service on the NFS client.
-</para>
- <para>The problem is that the rpcbind or portmap service is not running on the NFS client. The
-resolution for this is to start either of these services by running the following command:
-</para>
- <para><command>$ /etc/init.d/portmap start</command>
-</para>
- <para>On some distributions where portmap has been replaced by rpcbind, the following command is
-required:
-</para>
- <para><command>$ /etc/init.d/rpcbind start</command></para>
- </section>
- <section>
 - <title>NFS server glusterfsd starts but initialization fails with “nfsrpc-service: portmap registration of program failed” error message in the log. </title>
- <para>NFS start-up can succeed but the initialization of the NFS service can still fail preventing clients
-from accessing the mount points. Such a situation can be confirmed from the following error
-messages in the log file:
-</para>
- <para><screen>[2010-05-26 23:33:47] E [rpcsvc.c:2598:rpcsvc_program_register_portmap] rpc-service: Could notregister with portmap
-[2010-05-26 23:33:47] E [rpcsvc.c:2682:rpcsvc_program_register] rpc-service: portmap registration of program failed
-[2010-05-26 23:33:47] E [rpcsvc.c:2695:rpcsvc_program_register] rpc-service: Program registration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465
-[2010-05-26 23:33:47] E [nfs.c:125:nfs_init_versions] nfs: Program init failed
-[2010-05-26 23:33:47] C [nfs.c:531:notify] nfs: Failed to initialize protocols
-[2010-05-26 23:33:49] E [rpcsvc.c:2614:rpcsvc_program_unregister_portmap] rpc-service: Could not unregister with portmap
-[2010-05-26 23:33:49] E [rpcsvc.c:2731:rpcsvc_program_unregister] rpc-service: portmap unregistration of program failed
-[2010-05-26 23:33:49] E [rpcsvc.c:2744:rpcsvc_program_unregister] rpc-service: Program unregistration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465</screen></para>
- <orderedlist>
- <listitem>
- <para>Start portmap or rpcbind service on the NFS server.
-</para>
- <para>On most Linux distributions, portmap can be started using the following command:
-</para>
- <para><command>$ /etc/init.d/portmap start </command></para>
- <para>On some distributions where portmap has been replaced by rpcbind, run the following command:
-</para>
- <para><command>$ /etc/init.d/rpcbind start </command></para>
- <para>After starting portmap or rpcbind, gluster NFS server needs to be restarted.
-</para>
- </listitem>
- <listitem>
- <para>Stop another NFS server running on the same machine.
-</para>
- <para>Such an error is also seen when there is another NFS server running on the same machine but it is
-not the Gluster NFS server. On Linux systems, this could be the kernel NFS server. Resolution
-involves stopping the other NFS server or not running the Gluster NFS server on the machine.
-Before stopping the kernel NFS server, ensure that no critical service depends on access to that
-NFS server&apos;s exports.
-</para>
- <para>On Linux, kernel NFS servers can be stopped by using either of the following commands
-depending on the distribution in use:
-</para>
- <para><command>$ /etc/init.d/nfs-kernel-server stop</command>
-</para>
- <para><command>$ /etc/init.d/nfs stop</command></para>
- </listitem>
- <listitem>
- <para>Restart Gluster NFS server.
-</para>
- </listitem>
- </orderedlist>
- </section>
- <section>
- <title>mount command fails with NFS server failed error. </title>
 - <para>mount command fails with the following error:
-</para>
- <para><emphasis role="italic">mount: mount to NFS server &apos;10.1.10.11&apos; failed: timed out (retrying).</emphasis></para>
- <para>Perform one of the following to resolve this issue:
-</para>
- <orderedlist>
- <listitem>
- <para>Disable name lookup requests from NFS server to a DNS server.
-</para>
 - <para>The NFS server attempts to authenticate NFS clients by performing a reverse DNS lookup to
-match hostnames in the volume file with the client IP addresses. There can be a situation where
-the NFS server either is not able to connect to the DNS server or the DNS server is taking too long
-to respond to DNS requests. These delays can result in delayed replies from the NFS server to the
-NFS client, resulting in the timeout error seen above.
-</para>
 - <para>The NFS server provides a work-around that disables DNS requests, instead relying only on the client
-IP addresses for authentication. The following volume file option can be added for successful mounting in
-such situations (see also the example after this list):
-</para>
- <para><command>option rpc-auth.addr.namelookup off </command></para>
- <para><note>
 - <para>Remember that disabling name lookups forces the NFS server to authenticate clients using only IP
-addresses, and if the authentication rules in the volume file use hostnames, those authentication
-rules will fail and disallow mounting for those clients.
-</para>
- </note></para>
- <para>or</para>
- </listitem>
- <listitem>
- <para>NFS version used by the NFS client is other than version 3.
-</para>
 - <para>The Gluster NFS server supports only version 3 of the NFS protocol. In recent Linux kernels, the default NFS
-version has been changed from 3 to 4. It is possible that the client machine is unable to connect
-to the Gluster NFS server because it is using version 4 messages, which are not understood by
-the Gluster NFS server. The timeout can be resolved by forcing the NFS client to use version 3. The
-<emphasis role="bold">vers</emphasis> option to the mount command is used for this purpose:
-</para>
- <para><command>$ mount <replaceable>nfsserver</replaceable><replaceable>:export</replaceable> -o vers=3 <replaceable>mount-point</replaceable></command>
-</para>
- </listitem>
- </orderedlist>
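 - <para>If your release exposes the name-lookup setting through the volume set interface, the work-around from the first option can also be applied from the CLI; the option name below is an assumption for releases that provide it:
-</para>
 - <para><command># gluster volume set test-volume nfs.addr-namelookup off </command></para>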
- </section>
- <section>
- <title>showmount fails with clnt_create: RPC: Unable to receive </title>
 - <para>Check your firewall settings to open port 111 for portmap requests/replies as well as the Gluster NFS
-server requests/replies. The Gluster NFS server operates over the following port numbers: 38465,
-38466, and 38467.
-</para>
- <para>For more information, see <xref linkend="sect-Administration_Guide-Test_Chapter-GlusterFS_Client-Native-RPM"/>.
-</para>
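 - <para>For example, with iptables (an illustrative rule set; adapt it to your firewall tooling and persist the rules as appropriate for your distribution):
-</para>
 - <programlisting># iptables -A INPUT -p tcp --dport 111 -j ACCEPT
-# iptables -A INPUT -p udp --dport 111 -j ACCEPT
-# iptables -A INPUT -p tcp --dport 38465:38467 -j ACCEPT</programlisting>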
- </section>
- <section>
- <title>Application fails with &quot;Invalid argument&quot; or &quot;Value too large for defined data type&quot; error. </title>
 - <para>These two errors generally happen for 32-bit NFS clients, or for applications that do not support 64-bit
-inode numbers or large files.
-Use the following option from the CLI to make Gluster NFS return 32-bit inode numbers instead:
-</para>
 - <para><command>nfs.enable-ino32 &lt;on|off&gt;</command>
-</para>
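 - <para>For example, to enable it on test-volume (assuming the option is applied with the <command>volume set</command> command, as with other nfs.* options):
-</para>
 - <para><command># gluster volume set test-volume nfs.enable-ino32 on </command></para>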
- <para>Applications that will benefit are those that were either:
-</para>
- <itemizedlist>
- <listitem>
- <para>built 32-bit and run on 32-bit machines such that they do not support large files by default</para>
- </listitem>
- <listitem>
- <para>built 32-bit on 64-bit systems
-</para>
- </listitem>
- </itemizedlist>
 - <para>This option is disabled by default, so NFS returns 64-bit inode numbers.
-</para>
 - <para>Applications that can be rebuilt from source should be rebuilt using the following
-flag with gcc:</para>
- <para><command> -D_FILE_OFFSET_BITS=64</command>
-</para>
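 - <para>For example (an illustrative invocation; <filename>myapp.c</filename> is a hypothetical source file):
-</para>
 - <para><command>$ gcc -D_FILE_OFFSET_BITS=64 -o myapp myapp.c </command></para>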
- </section>
- </section>
- <section>
- <title>Troubleshooting File Locks</title>
 - <para>In GlusterFS 3.3 you can use the <command>statedump</command> command to list the locks held on files. The statedump output also provides information on each lock, including its range, basename, the PID of the application holding the lock, and so on. You can analyze the output to identify locks whose owning application is no longer running or no longer interested in the lock. After ensuring that no application is using the file, you can clear such locks using the following <command>clear-locks</command> command:</para>
 - <para><command># gluster volume clear-locks <replaceable>VOLNAME</replaceable> <replaceable>path</replaceable> kind {blocked | granted | all} {inode [range] | entry [basename] | posix [range]}</command></para>
 - <para>For more information on performing <command>statedump</command>, see <xref linkend="sect-Administration_Guide-Monitor_Workload-Performing_Statedump"/>.</para>
- <para><emphasis role="bold">To identify locked file and clear locks</emphasis></para>
- <orderedlist>
- <listitem>
- <para>Perform statedump on the volume to view the files that are locked using the following command:</para>
- <para> <command># gluster volume statedump <replaceable>VOLNAME</replaceable> inode</command></para>
- <para>For example, to display statedump of test-volume:</para>
- <para><programlisting># gluster volume statedump test-volume
-Volume statedump successful</programlisting></para>
 - <para>The statedump files are created on the brick servers in the <filename>/tmp</filename> directory or in the directory set using the <command>server.statedump-path</command> volume option. The naming convention of the dump file is <filename>&lt;brick-path&gt;.&lt;brick-pid&gt;.dump</filename>.</para>
 - <para>The following are sample contents of a statedump file. They indicate that GlusterFS has entered a state where there is an entry lock (entrylk) and an inode lock (inodelk). Ensure that these are stale locks and that no application still owns them. </para>
- <para><screen>[xlator.features.locks.vol-locks.inode]
-path=/
-mandatory=0
-entrylk-count=1
-lock-dump.domain.domain=vol-replicate-0
-xlator.feature.locks.lock-dump.domain.entrylk.entrylk[0](ACTIVE)=type=ENTRYLK_WRLCK on basename=file1, pid = 714782904, owner=ffffff2a3c7f0000, transport=0x20e0670, , granted at Mon Feb 27 16:01:01 2012
-
-conn.2.bound_xl./gfs/brick1.hashsize=14057
-conn.2.bound_xl./gfs/brick1.name=/gfs/brick1/inode
-conn.2.bound_xl./gfs/brick1.lru_limit=16384
-conn.2.bound_xl./gfs/brick1.active_size=2
-conn.2.bound_xl./gfs/brick1.lru_size=0
-conn.2.bound_xl./gfs/brick1.purge_size=0
-
-[conn.2.bound_xl./gfs/brick1.active.1]
-gfid=538a3d4a-01b0-4d03-9dc9-843cd8704d07
-nlookup=1
-ref=2
-ia_type=1
-[xlator.features.locks.vol-locks.inode]
-path=/file1
-mandatory=0
-inodelk-count=1
-lock-dump.domain.domain=vol-replicate-0
-inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=0, pid = 714787072, owner=00ffff2a3c7f0000, transport=0x20e0670, , granted at Mon Feb 27 16:01:01 2012</screen></para>
- </listitem>
- <listitem>
- <para>Clear the lock using the following command:</para>
 - <para><command># gluster volume clear-locks <replaceable>VOLNAME</replaceable> <replaceable>path</replaceable> kind granted entry <replaceable>basename</replaceable></command></para>
- <para>For example, to clear the entry lock on <filename>file1</filename> of test-volume:
-</para>
- <para><screen># gluster volume clear-locks test-volume / kind granted entry file1
-Volume clear-locks successful
-vol-locks: entry blocked locks=0 granted locks=1</screen></para>
- </listitem>
- <listitem>
- <para>Clear the inode lock using the following command:</para>
 - <para><command># gluster volume clear-locks <replaceable>VOLNAME</replaceable> <replaceable>path</replaceable> kind granted inode <replaceable>range</replaceable></command></para>
- <para>For example, to clear the inode lock on <filename>file1</filename> of test-volume:
-</para>
- <para><screen># gluster volume clear-locks test-volume /file1 kind granted inode 0,0-0
-Volume clear-locks successful
-vol-locks: inode blocked locks=0 granted locks=1</screen></para>
- <para>You can perform statedump on test-volume again to verify that the above inode and entry locks are cleared.</para>
- </listitem>
- </orderedlist>
- </section>
-</chapter>