path: root/tests/bugs
* features/index: Don't delete current xattrop index.
  Ravishankar N, 2014-06-24 (3 files changed, +31/-2)

    Problem: `gluster v heal <volname> statistics heal-count` was not able to
    read the number of entries to be healed from the source brick, because
    the base xattrop entries in indices/base_indices_holder and
    indices/xattrop were getting deleted after a successful heal, and the
    code flow prevented them from being created again.

    Fix: Don't delete the xattrop index unless it is stale (i.e. the brick is
    restarted).

    Change-Id: Ief4eee0ddf42c4d8b711d00751be92bbbc7bbbb0
    BUG: 1101647
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-on: http://review.gluster.org/7897
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* rpc: implement server.manage-gids for group resolving on the bricks
  Niels de Vos, 2014-05-23 (1 file changed, +47/-13)

    The new volume option 'server.manage-gids' can be enabled in environments
    where a user belongs to more than the current absolute maximum of 93
    groups. This option triggers the following behavior:

    1. The AUTH_GLUSTERFS structure sent by GlusterFS clients (fuse, nfs or
       libgfapi) will contain only one (1) auxiliary group, instead of a full
       list. This reduces network usage and prevents problems in encoding the
       AUTH_GLUSTERFS structure, which should fit in 400 bytes.

    2. The single group in the RPC Calls received by the server is replaced
       by resolving the groups server-side. Permission checks and the like in
       lower xlators are applied against the full list of groups to which the
       user belongs, and not the single auxiliary group that the client sent.

    Cherry picked from commit 2fd499d148fc8865c77de8b2c73fe0b7e1737882:
    > BUG: 1053579
    > Signed-off-by: Niels de Vos <ndevos@redhat.com>
    > Reviewed-on: http://review.gluster.org/7501
    > Tested-by: Gluster Build System <jenkins@build.gluster.com>
    > Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
    > Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
    > Reviewed-by: Anand Avati <avati@redhat.com>

    Change-Id: I9e540de13e3022f8b63ff893ecba511129a47b91
    BUG: 1096425
    Signed-off-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-on: http://review.gluster.org/7830
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
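  A hedged sketch (not from the commit) of how the option above would be
  enabled and checked; the volume name is a placeholder:

      # enable server-side resolving of the full group list
      gluster volume set <VOLNAME> server.manage-gids on
      # 'volume info' lists reconfigured options, so the setting shows up here
      gluster volume info <VOLNAME>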
* rpc: warn and truncate grouplist if RPC/AUTH can not hold everything
  Niels de Vos, 2014-05-22 (1 file changed, +46/-0)

    The GlusterFS protocol currently uses AUTH_GLUSTERFS_V2 in the RPC/AUTH
    header. This header contains the uid, gid and auxiliary groups of the
    user/process that accesses the Gluster Volume. The AUTH_GLUSTERFS_V2
    structure allows up to 65535 auxiliary groups to be passed on.
    Unfortunately, the RPC/AUTH header is limited to 400 bytes by the RPC
    specification: http://tools.ietf.org/html/rfc5531#section-8.2

    In order not to cause complete failures on the client-side when trying to
    encode an AUTH_GLUSTERFS_V2 that would result in more than 400 bytes, we
    can calculate the expected size of the other elements:

        1  | pid
        1  | uid
        1  | gid
        1  | groups_len
        XX | groups_val (GF_MAX_AUX_GROUPS=65535)
        1  | lk_owner_len
        YY | lk_owner_val (GF_MAX_LOCK_OWNER_LEN=1024)
        ---+-------------------------------------------
        5  | total xdr-units

    One XDR-unit is defined as BYTES_PER_XDR_UNIT = 4 bytes.
    MAX_AUTH_BYTES = 400 is the maximum; this is 100 xdr-units.
    XX + YY can be 95 to fill the 100 xdr-units.

    Note that the on-wire protocol has tighter requirements than the internal
    structures. It is possible for xlators to use more groups and a bigger
    lk_owner than can be sent by a GlusterFS-client.

    This change prevents overflows when allocating the RPC/AUTH header. Two
    new macros are introduced to calculate the number of groups that fit in
    the RPC/AUTH header, taking the size of the lk_owner into account. In
    case the list of groups exceeds the maximum possible, only the first
    groups are passed over the RPC/GlusterFS protocol to the bricks. A
    warning is added to the logs, so that most system administrators will get
    informed.

    Reducing the number of groups is not a new invention. The RPC/AUTH header
    (AUTH_SYS or AUTH_UNIX) that NFS uses has a limit of 16 groups. Most, if
    not all, NFS-clients will reduce any bigger number of groups to 16.
    (nfs.server-aux-gids can be used to work around the limit of 16 groups,
    but the Gluster NFS-server will be limited to a maximum of 93 groups, or
    fewer in case the lk_owner structure contains more items.)

    Cherry picked from commit 8235de189845986a535d676b1fd2c894b9c02e52:
    > BUG: 1053579
    > Signed-off-by: Niels de Vos <ndevos@redhat.com>
    > Reviewed-on: http://review.gluster.org/7202
    > Tested-by: Gluster Build System <jenkins@build.gluster.com>
    > Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
    > Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
    > Reviewed-by: Vijay Bellur <vbellur@redhat.com>

    Change-Id: I8410e59d0fd246d601b54b961d3ae9cb5a858c10
    BUG: 1096425
    Signed-off-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-on: http://review.gluster.org/7829
    Reviewed-by: Lalatendu Mohanty <lmohanty@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
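  The xdr-unit arithmetic above can be replayed with a small shell sketch
  (the constants are the ones named in the commit message; the lk_owner size
  is an assumed example value):

      MAX_AUTH_BYTES=400       # RPC/AUTH header limit (RFC 5531)
      BYTES_PER_XDR_UNIT=4
      FIXED_UNITS=5            # pid, uid, gid, groups_len, lk_owner_len
      lk_owner_len=8           # example lk_owner size in bytes
      lk_owner_units=$(( (lk_owner_len + BYTES_PER_XDR_UNIT - 1) / BYTES_PER_XDR_UNIT ))
      echo $(( MAX_AUTH_BYTES / BYTES_PER_XDR_UNIT - FIXED_UNITS - lk_owner_units ))
      # prints 93: the group count that still fits, matching the limit
      # mentioned for server.manage-gids above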
* posix: if brick-uid or brick-gid is not specified, do not set it
  Niels de Vos, 2014-05-09 (1 file changed, +47/-0)

    The current code would set the owner uid/gid explicitly to 0/0 on start
    even if none was specified. Fix it.

    Cherry picked from commit 8bdc329:
    > Change-Id: I72dec9e79c51bd1eb3af5334c42b7c23b01d0258
    > BUG: 1040275
    > Signed-off-by: Anand Avati <avati@redhat.com>
    > Reviewed-on: http://review.gluster.org/6476
    > Tested-by: Gluster Build System <jenkins@build.gluster.com>
    > Tested-by: Lukáš Bezdička <lukas.bezdicka@gooddata.com>
    > Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    > Reviewed-by: Vijay Bellur <vbellur@redhat.com>

    Change-Id: Ie0396c1a4e6e0979ea9c855d33db963544a75c42
    BUG: 1095971
    Signed-off-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-on: http://review.gluster.org/7720
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* write-behind: track filesize when doing extending writes
  Niels de Vos, 2014-05-09 (2 files changed, +140/-0)

    A program that calls mmap() on a newly created sparse file may receive a
    SIGBUS signal. If SIGBUS is not handled, a segmentation fault will occur
    and the program will exit.

    A bug in the write-behind translator could cause the last write() to a
    sparse file created with open(), seek(), write() to be cached and not
    sent to the server until write-behind deemed this necessary.

    * open(.., O_TRUNC, ...)/creat() the file; it is 0 bytes big
    * seek() into the file, use offset 31
    * write() 1 byte to the file
    * the range from byte 0-30 is unwritten, so called 'sparse'

    The following illustration tries to capture this:

        Legend:
        [ = start of file
        _ = unallocated/unwritten bytes
        # = allocated bytes in the file
        ] = end of file

        [_______________#]
         |              |
         '- byte 0      '- byte 31

    Without this change, reading from byte 0-30 will return an error, and
    reading the same area through an mmap()'d pointer will trigger a SIGBUS.
    Reading from this range did not trigger the outstanding write() to be
    flushed. The brick that receives the read() (translated over the network
    from mmap()) does not know that the file has been extended, and returns
    -EINVAL. This error gets transported back from the brick to the
    glusterfs-fuse client, and is translated by the Linux kernel/VFS into the
    SIGBUS triggered by mmap().

    In order to solve this, a new attribute is introduced to the wb_inode
    structure: the current size of the file. All FOPs that can modify the
    size are expected to update wb_inode->size. This makes it possible for
    extending writes with an offset bigger than EOF to mark the unwritten
    area as modified/pending.

    Cherry picked from commit b0515e2a4a08b657ef7e9715fb8c6222c700e78c:
    > Change-Id: If5ba6646732e6be26568541ea9b12852a5d0b988
    > BUG: 1058663
    > Signed-off-by: Niels de Vos <ndevos@redhat.com>
    > Reviewed-on: http://review.gluster.org/6835
    > Tested-by: Gluster Build System <jenkins@build.gluster.com>
    > Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    > Reviewed-by: Anand Avati <avati@redhat.com>

    Change-Id: Ic73288202c6cb37d8c066628e9c489c6fade49bd
    BUG: 1071191
    Signed-off-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-on: http://review.gluster.org/7166
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
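  A hedged shell reproduction sketch of the sequence described above, using
  dd on a hypothetical FUSE mount point instead of a C program:

      cd /mnt/glustervol                    # hypothetical mount point
      # write 1 byte at offset 31; bytes 0-30 stay unwritten ('sparse')
      dd if=/dev/zero of=sparse-file bs=1 count=1 seek=31
      # before the fix, reading the unwritten range could fail with EINVAL,
      # and reading it through an mmap()'d pointer raised SIGBUS
      dd if=sparse-file of=/dev/null bs=1 count=1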
* cluster/afr: Unable to self heal symbolic links
  Venkatesh Somyajulu, 2014-05-08 (1 file changed, +49/-0)

    Problem: Under entry self heal, readlink is done at the source and the
    sink. When readlink is done at the sink, because the link is not present
    there, afr expects ENOENT. The AFR translator takes decisions for new
    link creation based on ENOENT, but the server translator was modified to
    return ESTALE, because of which the afr xlator is not able to heal.

    Fix: The check for inode absence at the server now includes ESTALE as
    well.

    Change-Id: I9218da214ed44f7219570ad9dae298d6b5cbded9
    BUG: 1046624
    Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
    Reviewed-on: http://review.gluster.org/6600
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* storage/posix: do not dereference gfid symlinks before posix_handle_mkdir_hashes()
  Xavier Hernandez, 2014-05-05 (1 file changed, +47/-0)

    Whenever a new directory is created, its corresponding gfid file must
    also be created. This was done by first calling MAKE_HANDLE_PATH() to get
    the path of the gfid file, then calling posix_handle_mkdir_hashes() to
    create the parent directories of the gfid, and finally creating the
    soft-link.

    In normal circumstances, the gfid we want to create won't exist and
    MAKE_HANDLE_PATH() will return a simple path to the new gfid. However, if
    the volume is damaged and a self-heal is running, it is possible that we
    try to create an already existing gfid. In this case, MAKE_HANDLE_PATH()
    will return a path to the directory instead of the path to the gfid.

    To solve this problem, every time a path to a gfid is needed, a call to
    MAKE_HANDLE_ABSPATH() is made instead of the call to MAKE_HANDLE_PATH().

    BUG: 859581
    Change-Id: I84405bf04562e647fc02445f45358e9451f9b479
    Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
    Reviewed-on: http://review.gluster.org/6736
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
* Tests: Changes to tests for glfs-heal implementation.
  Pranith Kumar K, 2014-04-28 (1 file changed, +0/-23)

    'volume heal <volname> info' does not depend on self-heald anymore, so
    one complete test, tests/bugs/bug-880898.t, is removed, as are the
    specific tests related to it in tests/basic/self-heald.t.

    Change-Id: I2e5cd3641b86cad63af41c9564d57ab6ebb60742
    BUG: 1039544
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/6515
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* storage/posix: perform chmod after chown.
  Ravishankar N, 2014-02-17 (1 file changed, +45/-0)

    Problem: When a replica brick is added to a volume, the set-user-ID and
    set-group-ID permission bits of files are not set correctly in the new
    brick. The issue is in the posix_setattr() call, where we do a chmod
    followed by a chown. But according to the man pages for chown:

        When the owner or group of an executable file are changed by an
        unprivileged user the S_ISUID and S_ISGID mode bits are cleared.
        POSIX does not specify whether this also should happen when root
        does the chown().

    Fix: Swap the chmod and chown calls in posix_setattr().

    BUG: 1058797
    Change-Id: Id2fbd8394cf6faf669f414775409f20f46009f2b
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-on: http://review.gluster.org/6988
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
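  The chown behaviour quoted above is easy to observe outside gluster; a
  hedged sketch (file name hypothetical, output may vary by platform):

      touch demo
      chmod 4755 demo        # set-user-ID bit set: mode is 4755
      chown root:root demo   # chown may clear S_ISUID/S_ISGID on executables
      stat -c '%a' demo      # on Linux this typically prints 755, not 4755
      # hence posix_setattr() must chown first and chmod afterwards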
* features/marker: Filter quota xattrs on file as well
  Pranith Kumar K, 2014-02-02 (1 file changed, +0/-1)

    Problem: Quota contributions of a file/directory are tracked by the quota
    xlator using xattrs on the file. Quota allows these xattrs to be healed
    as part of metadata self-heal. This leads to wrong quota calculations on
    the brick after self-heal, because the quota xattrs no longer represent
    the actual contributions on that brick.

    Fix: Don't let these xattrs be healed as part of metadata self-heal, by
    filtering quota xattrs on files in listxattr.

    Change-Id: Iea68a116595ba271e58c6fdcc3dd21c7bb55ebb3
    BUG: 1035576
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/6374
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    Reviewed-on: http://review.gluster.org/6812
* mgmt/glusterd: make sure quota enforcer has established connection with quotad before marking quota as enabled.
  Raghavendra G, 2014-01-28 (3 files changed, +51/-2)

    Without this patch there is a window of time when quota is marked as
    enabled in the quota-enforcer, but the connection to quotad has not yet
    been established. Any checklimit done during this period can result in a
    failed fop because of the unavailability of quotad.

    Change-Id: I0d509fabc434dd55ce9ec59157123524197fcc80
    Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
    BUG: 969461
    Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-on: http://review.gluster.org/6572
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    Reviewed-on: http://review.gluster.org/6820
* features/quota: remove in-memory accounting of files in enforcer
  Raghavendra G, 2014-01-28 (1 file changed, +35/-0)

    Accounting was done in the enforcer (though marker is the ultimate source
    of truth) to offset the cached directory size becoming stale. However,
    with the enforcer having moved to the brick, we can no longer maintain a
    correct cluster-wide size for a directory. Hence the accounting code is
    removed from the enforcer.

    Change-Id: I5ea94234da4da85ed5f5ced1354d8de3454b3fcb
    BUG: 969461
    Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-on: http://review.gluster.org/6434
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    Reviewed-on: http://review.gluster.org/6818
* mount/fuse: cleanup old graphs.
  Raghavendra G, 2014-01-28 (1 file changed, +45/-0)

    After a graph switch, a PARENT_DOWN event from fuse indicates to
    protocol/client that it should shut down all its sockets. However, this
    event is sent only when the first fop is received from the fuse mount
    after the graph switch, and only on the previously active graph. So, if
    there are multiple graph switches while there is no activity on the
    mountpoint, we are left with a list of graphs whose sockets are still
    open.

    This patch fixes the issue by sending PARENT_DOWN to the previously
    non-active graph when a new graph is available, irrespective of whether
    there is activity on the mount-point.

    Change-Id: I9e676658d797c0b2dd3ed5ca5a56271bd94b4bb6
    BUG: 924726
    Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-on: http://review.gluster.org/4713
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
    Reviewed-on: http://review.gluster.org/6813
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cluster/afr: Treat ESTALE on nameless lookup as ENOENT
  Pranith Kumar K, 2014-01-27 (1 file changed, +34/-0)

    Change-Id: I635fc0fa955b33590f1c5b4dfec22d591ea8575c
    BUG: 1032894
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/6593
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* locks: various fixes
  Anand Avati, 2014-01-14 (1 file changed, +2/-0)

    - implement ref/unref of entry locks (and fix bad pointer deref crashes)
    - code cleanup and deleted various data types
    - fix improper read/write lock conflict detection in entrylk
    - fix indefinite hang of blocked locks on disconnect
    - register locks in client_t synchronously, fix crashes in disconnect path

    Change-Id: Id273690c9111b8052139d1847060d1fb5a711924
    BUG: 849630
    Signed-off-by: Anand Avati <avati@redhat.com>
    Reviewed-on: http://review.gluster.org/6695
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* test: Modified bug-1037501.t script
  Venkatesh Somyajulu, 2013-12-27 (1 file changed, +109/-76)

    Change-Id: I6d49d8a66a6dc68619005e731969010b013cb834
    BUG: 1037501
    Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
    Reviewed-on: http://review.gluster.org/6598
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* cluster/afr: For entry self heal, mark all source bricks
  Venkatesh Somyajulu, 2013-12-19 (1 file changed, +220/-0)

    Problem: Whenever a new brick is added to a replicate volume, not all
    source bricks are marked as source; only one of them is. Here "marked as
    source" refers to adding an extended attribute, at the backend of a file,
    corresponding to the newly added brick. The source bricks should also
    point to the newly added brick so that heal can be triggered.

    Fix: All source bricks now point to the newly added bricks, and heal can
    be triggered based on the extended attributes.

    Change-Id: Ia7cf118270fecc429bdecddbcb9201f23fedc7a1
    BUG: 1037501
    Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
    Reviewed-on: http://review.gluster.org/6419
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Reviewed-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* posix: do not allow to set/get "trusted.glusterfs.volume-id" xattr
  Vijaykumar M, 2013-11-26 (1 file changed, +60/-0)

    Change-Id: I2e9a2264b1fd5ebc1ed0aff30225e89acbd0bcb4
    BUG: 1034716
    Signed-off-by: Vijaykumar M <vmallika@redhat.com>
    Reviewed-on: http://review.gluster.org/6361
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
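  A hedged illustration of what the patch enforces, run on a hypothetical
  glusterfs mount (paths and the exact error text are assumptions):

      # the reserved xattr can no longer be set or read through the mount
      setfattr -n trusted.glusterfs.volume-id -v 0xdead /mnt/glustervol/testfile
      # expected: setfattr fails with 'Operation not permitted' (or similar)
      getfattr -n trusted.glusterfs.volume-id /mnt/glustervol/testfile
      # expected: the get is likewise refused for this reserved xattr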
* cli/glusterd: Changes to quota command
  Raghavendra G, 2013-11-26 (1 file changed, +4/-3)

    Quota feature re-work. Following are the cli commands that are
    new/re-worked:

        volume quota <VOLNAME> {enable|disable|list [<path> ...]|
            remove <path>|default-soft-limit <percent>} |
        volume quota <VOLNAME> {limit-usage <path> <size> [<percent>]} |
        volume quota <VOLNAME> {alert-time|soft-timeout|hard-timeout} {<time>}

        volume status [all | <VOLNAME> [nfs|shd|<BRICK>|quotad]]
            [detail|clients|mem|inode|fd|callpool]
        volume statedump <VOLNAME> [nfs|quotad]
            [all|mem|iobuf|callpool|priv|fd|inode|history]

    glusterd changes:

    * Quota limits are now set as extended attributes by glusterd from the
      aux mount created by the cli.
    * The gfids of the directories on which quota limits are set for a given
      volume are stored in /var/lib/glusterd/vols/<volname>/quota.conf in
      binary format, and their cksum and version are stored in
      /var/lib/glusterd/vols/<volname>/quota.cksum.

    Original-author: Krutika Dhananjay <kdhananj@redhat.com>
    Original-author: Krishnan Parthasarathi <kparthas@redhat.com>
    BUG: 969461
    Change-Id: If32bba36c67f9c2a30417af9c6389045b2b7c13b
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-on: http://review.gluster.org/6003
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
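  For orientation, a few hedged example invocations of the re-worked syntax
  listed above (volume name and path are placeholders):

      gluster volume quota <VOLNAME> enable
      gluster volume quota <VOLNAME> limit-usage /projects 10GB
      gluster volume quota <VOLNAME> list /projects
      gluster volume quota <VOLNAME> default-soft-limit 80%
      gluster volume status <VOLNAME> quotad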
* posix: placeholders for GFID to path conversion
  Raghavendra G, 2013-11-26 (1 file changed, +156/-0)

    What?
    =====
    The following is an attempt to generate the paths of a file when only its
    gfid is known.

    To find the path of a directory, the symlink handle to the directory,
    maintained in the ".glusterfs" backend directory, is read. The symlink
    handle is generated using the gfid of the directory. It (the handle)
    contains the directory's name and parent gfid, which are used to
    recursively construct the absolute path as seen by the user from the
    mount point.

    A similar approach cannot be used for a regular file or a symbolic link,
    since its hardlink handle, generated using its gfid, doesn't contain its
    parent gfid and basename. So xattrs are set to store the parent gfids and
    the number of hardlinks to a file or a symlink having the same parent
    gfid.

    When a user/application requests the paths of a regular file or a symlink
    with multiple hardlinks, the paths of the parent directories are
    generated, as mentioned earlier, using the parent gfids stored in the
    xattrs. The base names of the hardlinks (with the same parent gfid) are
    determined by matching the actual backend inode numbers of each entry in
    the parent directory with that of the hardlink handle.

    The xattr is set on a regular file, link, and symbolic link as follows:

        Xattr name  : trusted.pgfid.<pargfidstr>
        Xattr value : <number of hardlinks to a regular file/symlink with
                       the same parent gfid>

    If a regular file, hard link, or symbolic link is created, then an xattr
    in the above format is set in the backend.

    How to use?
    ===========
    This functionality can be used through the getxattr interface. Two keys -
    glusterfs.ancestry.dentry and glusterfs.ancestry.path - enable usage of
    this functionality. A successful getxattr will have the result stored
    under the same keys. The values will be:

    glusterfs.ancestry.dentry:
    --------------------------
    A linked list of gf-dirent structures for all possible paths from root to
    this gfid. If there are multiple paths, the linked list will be a series
    of paths one after another. Each path will be a series of dentries
    representing all components of the path. This key is primarily for
    internal usage within glusterfs.

    glusterfs.ancestry.path:
    ------------------------
    A string containing all possible paths from root to this gfid. Multiple
    hardlinks of a file or a symlink are displayed as a colon-separated list
    (this could interfere with path components containing ':').

    e.g. If there is a file "file1" in the root directory with two hardlinks,
    "/dir2/link2tofile1" and "/dir1/link1tofile1", then

        [root@alpha gfsmntpt]# getfattr -n glusterfs.ancestry.path -e text file1
        glusterfs.ancestry.path="/file1:/dir2/link2tofile1:/dir1/link1tofile1"

    Thanks Amar, Avati and Venky for the inputs.

    Original Author: Ramana Raja <rraja@redhat.com>
    BUG: 990028
    Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
    Change-Id: I0eaa9101e333e0c1f66ccefd9e95944dd4a27497
    Reviewed-on: http://review.gluster.org/5951
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* cluster/afr: Provide HA for pathinfo getxattr
  Pranith Kumar K, 2013-11-26 (1 file changed, +32/-0)

    Problem: afr_[f]getxattr_pathinfo_cbks fail the fop even when it
    succeeded on one of the bricks. This can happen if the last response to
    the pathinfo [f]getxattr is a failure.

    Fix: Remember whether any of the [f]getxattr_pathinfos succeeded and send
    that as the op_ret/op_errno value to the xlators above.

    Note: Winding a fop to a client xlator that is not connected to the
    server produces an error log. That is prevented by not even winding the
    fop when the client xlator is DOWN.

    Change-Id: I846e8c47423ffcfa2eabffe8924534781a36841a
    BUG: 1032927
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/6332
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* fuse: Check the return status from state->resolve_now (tag: v3.5.0qa1)
  Vijaykumar M, 2013-11-14 (1 file changed, +35/-0)

    Change-Id: I85fc6dd393449d365bb908b38c2827b58cb08171
    BUG: 1030208
    Signed-off-by: Vijaykumar M <vmallika@redhat.com>
    Reviewed-on: http://review.gluster.org/6262
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: Release big-lock after log-rotate handler returns
  Krutika Dhananjay, 2013-10-30 (1 file changed, +26/-0)

    ... so that subsequent volume commands don't block, waiting forever for
    the lock to be released.

    Change-Id: I24b5ec47f6982900ab74ff1b492d523f31ecfb7f
    BUG: 1022055
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/6122
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: Improved quota volume reset command
  Anuradha, 2013-10-28 (1 file changed, +39/-0)

    The quota volume reset command without the "force" option is fixed and no
    longer fails. It resets the unprotected fields and not the protected
    ones.

    Also, an appropriate message is provided to the user for the following
    cases:
    1. Only unprotected fields are reset; the "force" option should be used
       to reset protected fields.
    2. Both protected and unprotected fields are reset.
    3. No field was reset; the "force" option is required.

    A test case for the same is also added.

    Change-Id: I24e8f1be87b79ccd81bf6f933e00608b861c7a16
    BUG: 1022905
    Signed-off-by: Anuradha <atalur@redhat.com>
    Reviewed-on: http://review.gluster.org/6135
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
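  A hedged sketch of the reset behaviour described above (volume name is a
  placeholder; the protected/unprotected split is as stated in the commit):

      gluster volume reset <VOLNAME>          # resets only unprotected fields
      gluster volume reset <VOLNAME> force    # resets protected fields as well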
* cli,glusterd: Changes to cli-glusterd communication
  Kaushal M, 2013-10-17 (2 files changed, +9/-11)

    Glusterd changes:
    With this patch, glusterd creates a socket file in
    DATADIR/run/glusterd.socket and listens on it for cli requests. It
    listens for 2 rpc programs on the socket file:
    - the glusterd cli rpc program, for all cli commands, and
    - a reduced glusterd handshake program, just for the 'system:: getspec'
      command.

    The location of the socket file can be changed with the glusterd option
    'glusterd-sockfile'.

    To retain compatibility with the '--remote-host' cli option, glusterd
    also listens for cli requests on port 24007. For the sake of security,
    however, it listens there using a reduced cli rpc program, which only
    contains read-only procs used for the 'volume (info|list|status)',
    'peer status' and 'system:: getwd' cli commands.

    CLI changes:
    The gluster cli now uses the glusterd socket file for communicating with
    glusterd by default. A new option '--gluster-sock' has been added to
    allow specifying the sockfile used to connect. Using the '--remote-host'
    option will make the cli connect to the given host and port.

    Tests changes:
    cluster.rc has been modified to make use of socket files and use
    different log files for each glusterd. Some of the tests using cluster.rc
    have been fixed.

    Change-Id: Iaf24bc22f42f8014a5fa300ce37c7fc9b1b92b53
    BUG: 980754
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/5280
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
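  A hedged sketch of the two connection paths described above (hostname and
  socket path are placeholders; '--gluster-sock' is the new option named in
  the commit):

      # default: cli talks to the local glusterd over its socket file
      gluster volume info
      # point the cli at a non-default socket file
      gluster --gluster-sock=/var/run/glusterd.socket volume info
      # TCP fallback on port 24007, limited to the read-only procs
      gluster --remote-host=server1 volume status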
* cluster/afr: [Feature] Command implementation to get heal-count
  Venkatesh Somyajulu, 2013-10-14 (2 files changed, +176/-0)

    Currently, to know the number of files to be healed, the user either has
    to go to the backend and check the number of entries present in the
    indices/xattrop directory, or can run 'gluster volume heal vol-name
    info'. But if a volume consists of a large number of bricks, going to
    each backend and counting the entries is time-consuming, and the info
    command also takes time when the number of entries in the indices/xattrop
    directory is very large.

    So, as a feature, new commands are implemented.

    Command 1: gluster volume heal vn statistics heal-count
    This command will get the number of entries present in every brick of a
    volume. The output displays only the entry counts.

    Command 2: gluster volume heal vn statistics heal-count replica
               192.168.122.1:/home/user/brickname
    Use this if only one replica is of interest. Providing any one brick of a
    replica will get the number of entries to be healed for that replica
    only.

    Example: Replicate volume with replica count 2.

    Backend status:
    ---------------
        [root@dhcp-0-17 xattrop]# ls -lia | wc -l
        1918

    NOTE: Out of 1918, 2 entries are <xattrop-gfid> dummy entries, so the
    actual number of entries to be healed is 1916.

        [root@dhcp-0-17 xattrop]# pwd
        /home/user/2ty/.glusterfs/indices/xattrop

    Command output:
    ---------------
        Gathering count of entries to be healed on volume volume3 has been
        successful

        Brick 192.168.122.1:/home/user/22iu
        Status: Brick is Not connected
        Entries count is not available

        Brick 192.168.122.1:/home/user/2ty
        Number of entries: 1916

    Change-Id: I72452f3de50502dc898076ec74d434d9e77fd290
    BUG: 1015990
    Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
    Reviewed-on: http://review.gluster.org/6044
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* cli,glusterd: Implement 'volume status tasks'
  Krutika Dhananjay, 2013-10-08 (2 files changed, +4/-4)

    oVirt's Gluster Integration needs an inexpensive command that can be
    executed every 10 seconds to monitor async tasks and their parameters,
    for all volumes.

    The solution involves adding a 'tasks' sub-command to 'volume status' to
    fetch only the async task IDs, type and other relevant parameters. Only
    the originator glusterd participates in this command, as all the
    information needed is available on all the nodes. This is to make the
    command suitable for being executed every 10 seconds.

    Change-Id: I1edc607baf29b001a5585079dec681d7c641b3d1
    BUG: 1012346
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/6006
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>
* Tests: Enable foreground self-heal
  Pranith Kumar K, 2013-10-03 (1 file changed, +1/-0)

    Change-Id: Ibfca8ddb7c663d44ed447be13b2eabb7bd393bb3
    BUG: 993981
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/6028
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* tests: Create a regression-tests package for distribution
  Harshavardhana, 2013-09-19 (1 file changed, +1/-1)

    As of today, the regression tests are an in-house breed; making them a
    new package and distributing it ensures that a larger set of people use
    them and contribute to them. This can also be used by any consumer/user
    to build their own environment for glusterfs regression testing, which
    today is limited only to 'upstream' 'glusterfs' releases and
    build.gluster.org.

    Change-Id: I4f7e9fd1c49982dcf0d788ef6a83ffe895a956ac
    BUG: 764966
    Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
    Reviewed-on: http://review.gluster.org/5674
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* cli/glusterd: improve rebalance fix-layout status reporting
  Ravishankar N, 2013-09-19 (2 files changed, +50/-2)

    Problem: Currently the CLI rebalance status command output does not
    indicate the 'type' of rebalance, i.e. whether a full rebalance or only a
    fix-layout was carried out.

    Fix: After the rebalance status of all peers is received by the
    originator glusterd, alter it to reflect the type of rebalance before
    passing it on to the CLI process.

    Change-Id: I1940ffda0d36e25e5b33c84a0ea210394cc9e1d3
    BUG: 1004744
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-on: http://review.gluster.org/5826
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* mgmt/glusterd: Update sub_count on remove brick
  Vijay Bellur, 2013-09-11 (1 file changed, +25/-0)

    Change-Id: I7c17de39da03c6b2764790581e097936da406695
    BUG: 1002556
    Signed-off-by: Vijay Bellur <vbellur@redhat.com>
    Reviewed-on: http://review.gluster.org/5893
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* cli: Fix 'status all' xml output when volumes are not started
  Kaushal M, 2013-09-11 (1 file changed, +26/-0)

    The CLI now outputs a single XML document for 'status all', containing
    only those volumes which are started.

    BUG: 1004218
    Change-Id: Id4130fe59b3b74475d8bd1cc8134ac59a28f1b7e
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/5773
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
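  For reference, a hedged example of the request this fixes:

      # emits one XML document, containing only started volumes
      gluster volume status all --xml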
* glusterfsd: use-readdirp w/o arguments should not turn off readdirp
  Harshavardhana, 2013-09-09 (1 file changed, +7/-0)

    `use-readdirp` has an optional argument in argp - specifying just
    `--use-readdirp` on the command line should not 'turn off' readdirp,
    since that undermines the meaning of such an argument.

    Change-Id: I965d87e29bd0d61997d9be96fa698e270a2ee173
    BUG: 983477
    Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
    Reviewed-on: http://review.gluster.org/5851
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
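  A hedged sketch of the two invocations the fix distinguishes (server name,
  volume name and mount point are placeholders; the "=no" value is an
  assumption about the boolean argument format):

      # bare flag: readdirp stays enabled (the optional argument defaults on)
      glusterfs -s server1 --volfile-id=<volname> --use-readdirp /mnt/vol
      # only an explicit value turns it off
      glusterfs -s server1 --volfile-id=<volname> --use-readdirp=no /mnt/vol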
* cluster/dht: assign layout onto missing directories too
  Anand Avati, 2013-09-09 (1 file changed, +46/-0)

    The current self-healing algorithm ignores missing directories when
    assigning a new layout. When lookup() is racing against mkdir(), or when
    self-healing a half-done mkdir(), the layout assignment split must happen
    based on the final number of directories, and not the currently existing
    number of directories (because we finish mkdir() of missing directories
    before hash layout assignment).

    Without this fix, concurrent mkdir() and lookup() will step on each
    other's feet, create a messed-up layout on disk, and end up with
    different in-memory layouts. Once two clients have different in-memory
    layouts, creation of a subdirectory will not arbitrate on the same hashed
    subvolume and will result in a GFID mismatch of the sub-directory.

    Change-Id: Ia47acad67c265060405984c822b4d37512b9dbb3
    BUG: 907072
    Signed-off-by: Anand Avati <avati@redhat.com>
    Reviewed-on: http://review.gluster.org/5849
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Amar Tumballi <amarts@redhat.com>
    Reviewed-by: Peter Portante <pportant@redhat.com>
    Tested-by: Peter Portante <pportant@redhat.com>
* glusterfsd: Round robin DNS should not be relied upon with config service availability for clients.
  Harshavardhana, 2013-09-06 (1 file changed, +19/-0)

    Backupvolfile server as it stands is slow and prone to errors with the
    mount script, and so is its combination with RRDNS. Instead, in theory,
    it should use all the available nodes in the 'trusted pool' by default.
    (Right now we don't have a mechanism in place for this.)

    Nevertheless, this patch provides a scenario where a list of
    volfile-servers can be provided on the command line, as shown below:

        $ glusterfs -s server1 .. -s serverN --volfile-id=<volname> \
              <mount_point>

    OR

        $ mount -t glusterfs -obackup-volfile-servers=<server2>: \
              <server3>:...:<serverN> <server1>:/<volname> <mount_point>

    Here ':' is used as a separator for mount script parsing.

    Now these servers will be remembered and attempted recursively for
    fetching the vol-file until the list is exhausted. This ensures that the
    clients get 'volume' configs in a consistent manner, avoiding the need to
    poll through RRDNS.

    Change-Id: If808bb8a52e6034c61574cdae3ac4e7e83513a40
    BUG: 986429
    Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
    Reviewed-on: http://review.gluster.org/5400
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* stripe: remove unused param, handle mem alloc failure
  Kaleb S. KEITHLEY, 2013-08-28 (1 file changed, +54/-0)

    Change-Id: I9c27b1edab111031ca8eea9cc49480ea01e39089
    BUG: 1002207
    Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: http://review.gluster.org/5716
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* nfs: persistent caching of connected NFS-clients
  Niels de Vos, 2013-08-28 (1 file changed, +90/-0)

    Introduce /var/lib/glusterfs/nfs/rmtab to contain a list of NFS-clients
    which have a volume mounted. The volume option 'nfs.mount-rmtab' can be
    set to an alternative filename. When the file is located on shared
    storage, multiple gNFS servers can use the same file to present a single
    NFS-server.

    This cache is read when a system administrator calls 'showmount -a' and
    updated when an NFS-client calls MNT or UMNT from the MOUNT protocol.

    Usage:
    - create a volume for storing the shared rmtab file
    - mount the volume on all storage servers, at the same location
    - make sure that the volume is mounted at boot (add to /etc/fstab)
    - place the rmtab file on the volume:
        # gluster volume set <VOLUME> nfs.mount-rmtab <MOUNTPOINT>/<FILENAME>
    - any subsequent mount requests will add an entry to this file
    - 'showmount -a' requests will return the NFS-clients using the cluster

    Note: The NFS-server does currently not support reconfigure(). When a
    configuration option is set/changed, the NFS-server glusterfs process
    gets restarted. This causes the active NFS-clients to be forgotten (the
    entries are saved in the old rmtab, but we do not have a reference to
    that file any more, so we can't re-add them). Therefore a re-mount done
    by the NFS-clients is needed before they get listed in the rmtab again.

    Change-Id: I58f47135d60ad112849d647bea4e1129683dd2b3
    BUG: 904065
    Signed-off-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-on: http://review.gluster.org/4430
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
    Tested-by: Harshavardhana <harsha@harshavardhana.net>
    Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
* mount/fuse: perform lookup() on inodes linked through readdirplus
  Anand Avati, 2013-08-23 (1 file changed, +2/-2)

    Some xlators still require a lookup() fop to be sent for proper
    functioning. This patch remembers inodes which have been linked through
    readdirplus and makes the resolver send lookups on them.

    Change-Id: Ibe8a04a659539d90dfc794521b51bf2bda017a0b
    BUG: 979910
    Signed-off-by: Anand Avati <avati@redhat.com>
    Reviewed-on: http://review.gluster.org/5267
    Reviewed-by: Amar Tumballi <amarts@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
* bug-979365.t: fix wrong expectation of encountering fsync
  Anand Avati, 2013-08-22 (1 file changed, +6/-1)

    After the append-write detection patch, FSYNCs may or may not be issued,
    depending on the order in which writes reach the server (in the presence
    of write-behind). Fix the test case to understand this non-deterministic
    behavior.

    Change-Id: I1dc3453a6dd4a12a66551948eb8311d789ac2ecf
    BUG: 927146
    Signed-off-by: Anand Avati <avati@redhat.com>
    Reviewed-on: http://review.gluster.org/5694
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
* glusterd: add check in remove-brick start variant
  Ravishankar N, 2013-08-21 (1 file changed, +16/-11)

    The 'start' variant of the remove-brick command only applies at the dht
    level, where we can remove all the bricks of a sub-volume (and remove
    multiple such sub-volumes) but not select bricks of it.

    This patch disallows removing individual replica bricks of multiple
    sub-volumes (i.e. reducing the replica count of the volume) using
    remove-brick 'start'. The preferred method for such an operation is to
    use commit force.

    This patch also reverts the check to prevent removal of bricks from a
    replicate volume (commit 0d415f7).

    BUG: 961669
    Change-Id: I447ad27f73a0963b5e09fb317bf7267a7a5a6147
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-on: http://review.gluster.org/5566
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Amar Tumballi <amarts@redhat.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* tests: Avoid another 'grep' in ps output
  Harshavardhana, 2013-08-20 (1 file changed, +1/-1)

    The previous fix at commit 764c42813df3de9659a1ab7b7356779d2e5d64e5 was
    incomplete.

    Change-Id: I6662168a7af078935a5cbcfae76ec40fc0c112b8
    BUG: 887098
    Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
    Reviewed-on: http://review.gluster.org/5672
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
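  The pattern being fixed here is the classic self-match of grep in ps
  output; a hedged sketch of the usual workarounds:

      # naive form: the grep process matches its own command line
      ps auxww | grep glusterfsd
      # bracket trick: the pattern no longer matches the grep line itself
      ps auxww | grep '[g]lusterfsd'
      # or avoid grep entirely
      pgrep -f glusterfsd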
* cli: Add server uuid into volume brick info xml
  Timothy Asir, 2013-08-18 (1 file changed, +27/-0)

    Add the server uuid as an attribute to the existing brick details in the
    volume info cli xml output. Currently, when a node has more than one IP,
    oVirt-engine fails to map the corresponding server using the IP alone. If
    we get the host uuid along with the brick details in the volume info
    command, it will be easy for ovirt-engine to find the server, and thereby
    we can avoid confusion in finding the server.

    Change-Id: I3c9c9acea80e10e0b2977477759d9af045e48959
    BUG: 955588
    Signed-off-by: Timothy Asir <tjeyasin@redhat.com>
    Reviewed-on: http://review.gluster.org/4875
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* tests: make sure to avoid 'grep' in ps output
  Harshavardhana, 2013-08-18 (1 file changed, +1/-1)

    Change-Id: I48909facd2e3a2dc52a18e44d58c0e0fa2d96ec3
    BUG: 887098
    Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
    Reviewed-on: http://review.gluster.org/5631
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* protocol/server: Relax lktable finodelk del_locker check
  Pranith Kumar K, 2013-08-17 (1 file changed, +3/-0)

    Problem: The client xlator issues finodelk using an anon-fd when the fd
    is not opened on the file. This can also happen between attempts to
    re-open the file after a client disconnect. It can so happen that the
    lock is taken using the anon-fd, the file is then re-opened, and the
    unlock comes with the re-opened fd. This leads to a leaked lk-table
    entry, which also holds a reference to the fd, which in turn leads to an
    fd-leak on the brick.

    Fix: Don't check for fds to be equal when tracking finodelks. Since an
    inodelk is identified by (gfid, connection, lk-owner), fd equality is not
    needed.

    Change-Id: I62152d84caef0b863c973845e618076d388e6848
    BUG: 993247
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/5499
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: remove-brick: Allow simultaneous removal of multiple subvolumes.
  Ravishankar N, 2013-08-13 (1 file changed, +52/-0)

    Currently, remove-brick supports removal of only one distributed
    stripe/replica pair at a time. Fix it to support removal of multiple
    pairs. This is consistent with add-brick behaviour, which supports adding
    multiple stripe/replica pairs simultaneously.

    Removal is successful irrespective of the order of the bricks given at
    the CLI, as long as the bricks are from the same subvolume(s).

    Change-Id: I7c11c1235ce07b124155978b9d48d0ea65396103
    BUG: 974007
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-on: http://review.gluster.org/5210
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
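  A hedged sketch of removing two distribute subvolumes of a replica 2
  volume in one command (volume name, server names and brick paths are
  placeholders):

      gluster volume remove-brick <VOLNAME> \
          server1:/bricks/b1 server2:/bricks/b1 \
          server1:/bricks/b2 server2:/bricks/b2 start
      # poll the data migration with the same brick list
      gluster volume remove-brick <VOLNAME> \
          server1:/bricks/b1 server2:/bricks/b1 \
          server1:/bricks/b2 server2:/bricks/b2 status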
* cli,glusterd: Fix when tasks are shown in 'volume status'
  Kaushal M, 2013-08-03 (1 file changed, +24/-0)

    Asynchronous tasks are shown in 'volume status' only for a normal volume
    status request, for either all volumes or a single volume.

    Change-Id: I9d47101511776a179d213598782ca0bbdf32b8c2
    BUG: 888752
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/5308
    Reviewed-by: Amar Tumballi <amarts@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* performance/open-behind: Fix fd-leaks in unlink, rename
  Pranith Kumar K, 2013-08-03 (1 file changed, +35/-0)

    Change-Id: Ia8d4bed7ccd316a83c397b53b9c1b1806024f83e
    BUG: 991622
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/5493
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
* cluster/afr: Disable eager-lock if open-fd-count > 1
  Pranith Kumar K, 2013-08-02 (1 file changed, +31/-0)

    Say mount1 holds an eager-lock (full lock) and, after the eager-lock is
    taken, mount2 opens the same file; mount2 won't be able to perform any
    data operations until mount1 releases the eager-lock. To avoid such a
    scenario, do not enable eager-lock for a transaction if open-fd-count is
    > 1. Delaying of changelog piggybacking is avoided in this situation.

    Change-Id: I51b45d6a7c216a78860aff0265a0b8dabc6423a5
    BUG: 910217
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/5432
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: venkatesh somyajulu <vsomyaju@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* Revert "storage/posix: Remove the interim fix that handles the gfid race"
  shishir gowda, 2013-07-30 (1 file changed, +2/-1)

    This reverts commit 97807e75956a2d240282bc64fab1b71762de0546.

    In a distribute or distribute-replica volume, this fix is required to
    prevent gfid mismatches due to race issues. The test script
    bug-767585-gfid.t needs a sleep of 2, because after setting the backend
    gfid directly we try to heal, and with this fix we do not allow setxattr
    of the gfid within 1 second of creation if it was not created by the
    process itself.

    Change-Id: Ie3f4b385416889fd5de444638a64a7eaaf24cd60
    BUG: 951195
    Signed-off-by: shishir gowda <sgowda@redhat.com>
    Reviewed-on: http://review.gluster.org/5240
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Amar Tumballi <amarts@redhat.com>
* libglusterfs/client_t: client_t implementation, phase 1
  Kaleb S. KEITHLEY, 2013-07-29 (1 file changed, +1/-1)

    Implementation of client_t. The feature page for client_t is at
    http://www.gluster.org/community/documentation/index.php/Planning34/client_t

    In addition to adding libglusterfs/client_t.[ch], this also
    extracts/moves the locktable functionality from xlators/protocol/server
    to libglusterfs, where it is used; thus it may now be shared by other
    xlators too.

    This patch is large as it is. Hooking up the state dump is left to do in
    phase 2 of this patch set.

    (N.B. this change/patch-set supersedes previous change 3689, which was
    corrupted during a rebase. That change will be abandoned.)

    BUG: 849630
    Change-Id: I1433743190630a6d8119a72b81439c0c4c990340
    Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: http://review.gluster.org/3957
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Amar Tumballi <amarts@redhat.com>