path: root/xlators/mgmt/glusterd
Commit message | Author | Age | Files | Lines
* glusterd: Fetching the txn_id before performing glusterd_op_bricks_select in glusterd_brick_op() | Avra Sengupta | 2014-06-02 | 4 files | -42/+51
In glusterd_brick_op(), the txn_id must be fetched before failing the transaction for any other reason. Moving the fetching of txn_id to the beginning of the function. Also initializing txn_id to priv->global_txn_id where it wasn't initialized.

Change-Id: I44d7daa444f00a626f24670c92324725f6c5fb35
BUG: 1102656
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/7926
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
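A minimal sketch of the reordering described above; dict_get_bin() is the real libglusterfs accessor, but the dict key and surrounding names are illustrative, not the actual glusterd code:

    /* Fetch the transaction id up front, so every failure path
     * below already has it; fall back to the global txn id. */
    uuid_t *txn_id = &priv->global_txn_id;
    ret = dict_get_bin (dict, "transaction_id", (void **)&txn_id);
    if (ret)
        gf_log (this->name, GF_LOG_DEBUG, "Using global txn_id");
    /* ...only now perform glusterd_op_bricks_select() and other
     * steps that may fail the transaction... */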
* mgmt/glusterd: return failure when server quorum is not met | Raghavendra Bhat | 2014-06-01 | 1 file | -1/+2
Change-Id: I0a7df141d01cf70da5aac9658b2a5a19f660dd3b
BUG: 1101561
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/7894
Reviewed-by: Sachin Pandit <spandit@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* cli: 'Snapshot Volume: yes/no' for volume info needs to be removed | Vijaikumar M | 2014-05-29 | 1 file | -15/+0
With the initial design, where the snap volume used to be displayed in gluster volume info, we used "Snap Volume: yes/no" to distinguish whether a volume is a snap volume or the original volume. But with the new design the snap volumes are not listed in the volume info, hence this entry (Snap Volume:) no longer makes sense to show.

Change-Id: Ic5b9948bf4ef74e89a611742c74a8989cb406866
BUG: 1098910
Signed-off-by: Vijaikumar M <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/7794
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Sachin Pandit <spandit@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd/snapshot: Use external umount for un-mounting snapshot | Vijaikumar M | 2014-05-29 | 3 files | -17/+43
umount2 doesn't clean up the /etc/mtab entry after unmounting, so use the external umount command for the unmount operation.

Change-Id: I1a91a700433e2bf15dd21e6fcdd9da54441048d1
BUG: 1098084
Signed-off-by: Vijaikumar M <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/7775
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
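A minimal sketch of shelling out to umount(8), assuming the libglusterfs runner API from run.h; the mount path is illustrative and error handling is trimmed:

    runner_t runner = {0,};
    int ret = -1;

    runinit (&runner);
    /* the external umount(8) binary updates /etc/mtab; umount2(2) does not */
    runner_add_args (&runner, "umount", "/run/gluster/snaps/snap1/brick1", NULL);
    ret = runner_run (&runner);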
* rpm: Rename old hookscripts in an RPM-standard way. | Poornima G | 2014-05-29 | 1 file | -1/+2
Change-Id: I5e47ac8af8033821787281574276f38933de73cf
BUG: 1100251
Signed-off-by: Poornima G <pgurusid@redhat.com>
Reviewed-on: http://review.gluster.org/7362
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cluster/afr: Don't support heal info healed/heal-failed commands | Pranith Kumar K | 2014-05-27 | 1 file | -0/+10
Change-Id: Iecfd3150e4f4e795e3403bcb1ac56340759a37d0
BUG: 1098027
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/7766
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd/snapshot: Invoke restore cleanup for missed restores | Avra Sengupta | 2014-05-27 | 2 files | -0/+15
While performing missed restores, invoke restore cleanup to clean up the snap file (from which the vol was restored) as well as the old volinfo, and, if the old volume is a restored volume, its lvm too.

Change-Id: Ifa5700c69f49fa0e22e0060a039c2e5c0b02b585
BUG: 1100324
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/7848
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* Glusterd/Rebalance: Update rebalance status properly in node_state.info | Susant Palai | 2014-05-26 | 2 files | -0/+11
credit: kaushal@redhat.com, spalai@redhat.com

Change-Id: I08d0771e2168a4a6ebd473e8a937b8b2eda1341a
BUG: 1075087
Signed-off-by: Susant Palai <spalai@redhat.com>
Reviewed-on: http://review.gluster.org/7214
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
* mgmt/glusterd: Allow mount/umount requests over AF_INET | Vijay Bellur | 2014-05-25 | 2 files | -7/+11
Along with a simple naming convention change to avoid confusion, as per below.

s/gd_svc_cli_prog_ro/gd_svc_cli_trusted_progs/
s/gd_svc_cli_actors_ro/gd_svc_cli_trusted_actors/

Change-Id: Ibc73d88846636656f060a811f641f37a1a864615
BUG: 1077452
Original-Author: Vijay Bellur <vbellur@redhat.com>
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Reviewed-on: http://review.gluster.org/7821
Reviewed-by: Kotresh HR <khiremat@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
* glusterd/vol_locks: Dereference ctx->dict, only if ctx != NULL | Avra Sengupta | 2014-05-23 | 1 file | -6/+6
Change-Id: I7f753aff197fe08fad255fc75d7f88d2a4632de8
BUG: 1100325
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/7849
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Sachin Pandit <spandit@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd/snapshot: brick_start shouldn't be done from child thread | Vijaikumar M | 2014-05-22 | 1 file | -8/+19
When creating a volume snapshot, the back-end operations of taking an lvm_snapshot and starting the brick are executed in parallel for each brick using the synctask framework. brick_start released the big_lock with brick_connect and then took the lock again. This can cause a deadlock in a race condition where the main thread waits for one of the synctask threads to finish while the synctask thread waits for the big_lock.

Solution: do not start bricks from the synctask.

Change-Id: Iaaf0be3070fb71e63c2de8fc2938d2b77d40057d
BUG: 1100218
Signed-off-by: Vijaikumar M <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/7842
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd: Disable ping-timer between glusterd and brick process | Vijaikumar M | 2014-05-19 | 2 files | -4/+8
When there is too much IO happening, the brick process epoll thread is busy and fails to respond to the glusterd ping packet within 30 sec. The epoll thread can also be blocked by a big-lock.

Solution: disable the ping-timer by default and only enable it wherever required. Later, when the epoll thread model is changed and made lighter, we need to revert this change. http://review.gluster.com/3842 is one such approach.

Change-Id: I7f80ad3eb00f7d9c4d4527305932f7cf4920e73f
BUG: 1097224
Signed-off-by: Vijaikumar M <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/7753
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* rpcsvc: Validate RPC procedure number before fetch | Santosh Kumar Pradhan | 2014-05-17 | 4 files | -13/+14
While accessing the procedures of a given RPC program in rpcsvc_get_program_vector_sizer(), boundary conditions were not checked, which could cause a buffer overflow and subsequently a SEGV. Make sure rpcsvc_actor_t arrays have numactors number of actors.

FIX: Validate the RPC procedure number before fetching the actor.

Special Thanks to: Murray Ketchion, Grant Byers

Change-Id: I8b5abd406d47fab8fca65b3beb73cdfe8cd85b72
BUG: 1096020
Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
Reviewed-on: http://review.gluster.org/7726
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
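A minimal, self-contained sketch of the bounds check; the structs are stand-ins, not the real rpcsvc_actor_t layout:

    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        int (*fn)(void *);                 /* stand-in actor */
    } actor_t;

    typedef struct {
        actor_t  *actors;
        uint32_t  numactors;
    } program_t;

    /* Reject out-of-range procedure numbers before indexing the
     * actor table; an unchecked index reads past the array end. */
    static actor_t *get_actor_checked(program_t *prog, uint32_t procnum)
    {
        if (procnum >= prog->numactors)
            return NULL;
        return &prog->actors[procnum];
    }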
* NetBSD build fixes | Emmanuel Dreyfus | 2014-05-17 | 1 file | -2/+1
- Shell scripts: == is specific to bash and ksh. Use = instead.
- Shell scripts: use sh instead of bash if bash functionality is not used
- Shell scripts: ${var/search/replace} is specific to bash
- sed: The -i option is specific to GNU sed.
- Makefiles: $< outside of generic rules only works in GNU make.
- xdrproc_t() is not universally defined as variadic. Do not specify the third argument if it is not used
- NetBSD FUSE specific: only include <perfuse.h> in FUSE client code; it harms in other locations
- configure: Search for gettext() in libintl, as NetBSD stores it there
- Like MacOS X, NetBSD has unmount(2) and not umount(2) (un vs u)

Some other build issues previously included in this change were removed:
- __THROW macro, addressed in http://review.gluster.com/#/c/7757/
- getmntent() compat shared with MacOS X, in http://review.gluster.com/#/c/7722/

This patchset also adds warning fixes for mount_glusterfs.

BUG: 764655
Change-Id: I2f1faf8ff96362d3e2baf237b943df619011f1f4
Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-on: http://review.gluster.org/7783
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
* contrib: Cross platform fixes after recent commits | Harshavardhana | 2014-05-17 | 2 files | -10/+10
- provide a getmntent_r() version which behaves as re-entrant, with some caveats, specific to NetBSD/OSX.
- fix some apparent warnings; always use PRI* format specifiers and avoid %ld, which is not portable.

Change-Id: Ib3d1a73b426e38b436b356355b97db0104a1a4a5
BUG: 1089172
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-on: http://review.gluster.org/7722
Reviewed-by: Emmanuel Dreyfus <manu@netbsd.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
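A small illustration of the PRI* point; the inttypes.h macros expand to the correct conversion on each platform, unlike a hard-coded %ld:

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        int64_t bytes = 1048576;
        printf("size: %" PRId64 "\n", bytes);   /* portable */
        return 0;
    }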
* glusterd/snapshot: Putting back the missed_snaps_list code | Avra Sengupta | 2014-05-15 | 1 file | -1/+8
Setting of missed_snap_count was removed as part of an earlier patch. Putting back the code.

Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Change-Id: Ib6412d6100145e94d10f6f4a8a1fe4e645c1a69e
BUG: 1097725
Reviewed-on: http://review.gluster.org/7764
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: Differentiate snap-volume directories properly. | Avra Sengupta | 2014-05-15 | 1 file | -4/+3
If /var/lib/glusterd is hosted on an xfs filesystem, entry->d_type does not report the correct d_type, which makes the restore path exit prematurely. Hence checking entry->d_name to differentiate the <snap-name>/info file, the missed_snaps_list file, and the <snap-name>/geo-replication directory from the actual volume directories, without impacting the gluster volumes.

Change-Id: I9a774a845282fe7cc697e37bbcf7c4545aee7678
BUG: 1094557
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/7680
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Sachin Pandit <spandit@redhat.com>
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
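A self-contained illustration of why name-based checks are needed: on xfs, readdir() commonly returns DT_UNKNOWN in d_type, so type-based filtering is unreliable (the directory path and entry names below mirror the ones the commit mentions, but the code itself is a sketch):

    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        DIR *dir = opendir("/var/lib/glusterd/snaps/snap1");  /* illustrative */
        struct dirent *entry;

        if (!dir)
            return 1;
        while ((entry = readdir(dir)) != NULL) {
            /* d_type may be DT_UNKNOWN on xfs: decide by name instead */
            if (!strcmp(entry->d_name, "info") ||
                !strcmp(entry->d_name, "missed_snaps_list") ||
                !strcmp(entry->d_name, "geo-replication"))
                continue;                 /* not a volume directory */
            printf("volume dir candidate: %s\n", entry->d_name);
        }
        closedir(dir);
        return 0;
    }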
* glusterd/geo-rep: Allow gverify.sh and S56glusterd-geo-rep-create-post.sh to operate for non-root privileged slave volume | Avra Sengupta | 2014-05-14 | 1 file | -16/+68
Mounting the slave-volume on the local node, to perform disk checks, in order to allow gverify.sh to operate for a non-root privileged slave volume.

Allowing the hook script S56glusterd-geo-rep-create-post.sh to operate for a non-root privileged slave volume.

Modified peer_add_secret_pub.in to accept a username as argument and add the pem keys to the user's home_dir/.ssh/authorized_keys.

Wrote set_geo_rep_pem_keys.sh, which accepts a username as argument, copies the pem keys from the user's home directory to $GLUSTERD_WORKING_DIR/geo-replication/, then copies the keys to the other nodes in the cluster and adds them to the respective authorized keys. The script takes the user name as argument and assumes that the user is present on all the nodes in the cluster. It is not needed for root.

To summarize:

For a privileged slave user, execute the following on the master node as super user:
gluster system:: execute gsec_create
gluster volume geo-replication <master_vol> [root@]<slave_ip>::<slave_vol> create push_pem

For a non-privileged slave user, execute the following on the master node as super user:
gluster system:: execute gsec_create
gluster volume geo-replication <master_vol> <slave_user>@<slave_ip>::<slave_vol> create push_pem

Then on the slave node execute the following as super user:
/usr/local/libexec/glusterfs/set_geo_rep_pem_keys.sh <slave_user>

BUG: 1077452
Change-Id: I88020968aa5b13a2c2ab86b1d6661b60071f6f5e
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/7744
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Tested-by: Venky Shankar <vshankar@redhat.com>
* gsyncd / geo-rep: Partial support for Non-root geo-replication. | Venky Shankar | 2014-05-14 | 5 files | -120/+158
This patch enables geo-replication to be run as an unprivileged user. As of now this is partial support, but it is very close to full functionality.

Current limitation:
* Geo-replication executes Gluster CLI commands on the slave via SSH. On a non-root setup, the Gluster CLI would run as an unprivileged user and fail to execute the command. As a workaround (for testing), setuid(2) the Gluster CLI executable or use the glusterd option to accept commands from an unprivileged CLI process.

The cli commands in question are "system::" commands (for key management) and remote volume info fetching. Remote volume info fetching has been modified to use the --remote-host gluster cli option rather than ssh and remote cli execution.

Change-Id: Ica89e2ba9b7f48fd6e1c876c477d7822dc693617
BUG: 1077452
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Reviewed-on: http://review.gluster.org/7658
Tested-by: Gluster Build System <jenkins@build.gluster.com>
* glusterd: Allow setting volume options by default based on op-version | Kaushal M | 2014-05-14 | 5 files | -5/+83
A new function, glusterd_enable_default_options, is introduced, which sets some volume options on a volume based on op-version. This function is called near the end of volume create and allows some options to be enabled on newly created volumes based on op-version. It is also called during volume reset, to reset the options to their default values if they had changed.

Change-Id: I91057d9e42409b17a884728b43ae3721328d4831
BUG: 1096616
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/7734
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
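A stand-alone sketch of the idea; the option name, version constant, and struct are hypothetical, not the actual glusterd tables:

    #include <string.h>

    #define GD_OP_VERSION_4 4              /* hypothetical constant */

    struct volinfo {
        int  op_version;
        char opts[256];
    };

    /* enable defaults that only make sense at or above an op-version */
    static void enable_default_options(struct volinfo *vol)
    {
        if (vol->op_version >= GD_OP_VERSION_4)
            strncat(vol->opts, "some.new-option=on;",
                    sizeof(vol->opts) - strlen(vol->opts) - 1);
    }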
* geo-rep/glusterd: Pause and Resume feature for geo-replication | Kotresh H R | 2014-05-13 | 1 file | -14/+263
This patch introduces pause and resume cli commands for geo-replication.

Change-Id: I4f5e58e9175fe85077d56088473252391fb57de7
BUG: 1093602
Signed-off-by: Kotresh H R <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/7643
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Tested-by: Venky Shankar <vshankar@redhat.com>
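The added commands look roughly like the following; syntax hedged from the patch summary and the usual geo-replication addressing, not quoted from the patch itself:

    gluster volume geo-replication <master_vol> <slave_ip>::<slave_vol> pause [force]
    gluster volume geo-replication <master_vol> <slave_ip>::<slave_vol> resume [force]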
* glusterd/snapshot: Quorum check should not be made if we perform snapshot status command | Sachin Pandit | 2014-05-13 | 2 files | -129/+163
Problem: the snapshot status command used to fail because it hit the quorum check path.

Solution: the condition that fetches snapname based on the presence of snap_volume is moved inside the create switch case. Also moved the chunk of code that does the actual quorum check into a new function, to make the code more readable.

Change-Id: Idda2d7c576cdfab3a7d087bfa74bfa616372c20e
BUG: 1096700
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/7737
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: On gaining quorum spawn_daemons in new thread | Kaushal M | 2014-05-12 | 4 files | -28/+44
During startup, if a glusterd has peers, it waits till quorum is obtained to spawn bricks and other services; if peers are not present, the daemons are started during glusterd's startup itself. The spawning of daemons as a quorum action was done without using a separate thread, unlike the spawn on startup. Since quotad was launched using the blocking runner_run api, this led to the calling thread being blocked. The calling thread is almost always the epoll thread, and this leads to a deadlock: the runner_run call blocks the epoll thread waiting for quotad to start, so glusterd cannot serve any requests, while the startup of quotad is itself blocked because it cannot fetch the volfile from glusterd.

The fix is to launch the spawn-daemons task in a separate thread. This frees up the epoll thread and prevents the above deadlock.

Change-Id: Ife47b3591223cdfdfb2b4ea8dcd73e63f18e8749
BUG: 1095585
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/7703
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
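A minimal self-contained sketch of moving blocking work off the calling (epoll) thread; plain pthreads are shown here for brevity, not the exact mechanism the patch uses:

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *spawn_daemons(void *arg)
    {
        (void)arg;
        /* blocking work, e.g. waiting for a daemon to come up,
         * now runs here instead of on the epoll thread */
        sleep(1);
        printf("daemons spawned\n");
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, spawn_daemons, NULL);
        /* the calling thread stays free to serve requests */
        pthread_join(t, NULL);
        return 0;
    }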
* libgfapi: Added support to fetch volume info from glusterd and store in glfs object | Soumya Koduri | 2014-05-11 | 1 file | -0/+137
Defined new APIs in the libgfapi module, given a glfs object,
* to send a handshake RPC call to the glusterd process to fetch the UUID of the volume
* to store it in the glusterfs_context linked to the glfs object
* to parse the UUID from its canonical string format into a 16-byte array before sending it to libgfapi users

Defined an RPC call in glusterd which can be used to query volume related info by other processes using 'clnt_handshake_procs'.

Note - Currently this RPC call to the glusterd process is used only to fetch the UUID, but it can be extended to get other volume related structures as well.

In addition to the above, defined a new variable to keep track of such handshake RPCs still in progress, to make sure all the corresponding RPC callbacks have been processed before libgfapi returns the initialized glfs object.

Also bumping up the GFAPI current version number, since there is a new API "glfs_get_volume_id" defined and exposed by libgfapi as part of these changes.

Change-Id: I303f76d7177d32d25bdb301b1dbcf5cd73f42807
BUG: 1090363
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
Reviewed-on: http://review.gluster.org/7218
Reviewed-by: Anand Avati <avati@redhat.com>
Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
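A hedged usage sketch of the new API; the volume name and server are placeholders, and the return-value convention is assumed, so check glfs.h for the authoritative signature:

    #include <stdio.h>
    #include <glusterfs/api/glfs.h>

    int main(void)
    {
        char volid[16] = {0};                /* UUID is a 16-byte array */
        glfs_t *fs = glfs_new("testvol");    /* placeholder volume name */

        if (!fs)
            return 1;
        glfs_set_volfile_server(fs, "tcp", "localhost", 24007);
        if (glfs_init(fs) == 0 &&
            glfs_get_volume_id(fs, volid, sizeof(volid)) >= 0)
            printf("fetched 16-byte volume UUID\n");
        glfs_fini(fs);
        return 0;
    }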
* rpc: implement server.manage-gids for group resolving on the bricks | Niels de Vos | 2014-05-09 | 2 files | -0/+22
The new volume option 'server.manage-gids' can be enabled in environments where a user belongs to more than the current absolute maximum of 93 groups. This option triggers the following behavior:

1. The AUTH_GLUSTERFS structure sent by GlusterFS clients (fuse, nfs or libgfapi) will contain only one (1) auxiliary group, instead of a full list. This reduces network usage and prevents problems in encoding the AUTH_GLUSTERFS structure, which should fit in 400 bytes.

2. The single group in the RPC Calls received by the server is replaced by resolving the groups server-side. Permission checks and the like in lower xlators are applied against the full list of groups the user belongs to, not the single auxiliary group the client sent.

Change-Id: I9e540de13e3022f8b63ff893ecba511129a47b91
BUG: 1053579
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/7501
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
Reviewed-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-by: Anand Avati <avati@redhat.com>
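A small self-contained sketch of server-side group resolution, the core of behavior (2); the uid is illustrative, error handling is trimmed, and getgrouplist() is the standard Linux/glibc call, not necessarily what the patch uses internally:

    #include <grp.h>
    #include <pwd.h>
    #include <stdio.h>

    int main(void)
    {
        struct passwd *pw = getpwuid(1000);  /* uid taken from the RPC call */
        gid_t groups[128];
        int ngroups = 128;

        if (!pw)
            return 1;
        /* resolve the full group list on the server, instead of
         * trusting the single group the client sent */
        getgrouplist(pw->pw_name, pw->pw_gid, groups, &ngroups);
        for (int i = 0; i < ngroups; i++)
            printf("gid: %d\n", (int)groups[i]);
        return 0;
    }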
* mgmt/glusterd: delete oldest snapshot upon exceeding soft-limit | Raghavendra Bhat | 2014-05-08 | 2 files | -0/+106
Change-Id: I2d6ebae3ced1910f2dee43eeb9fc430e9f31073f
BUG: 1061685
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/7587
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd/snapshot: volume gets deleted if restore fails | Rajesh Joseph | 2014-05-08 | 1 file | -0/+8
If the restore command fails in the pre-validate phase then the main volume gets deleted.

Fix: Perform cleanup only when pre-validate passes.

Change-Id: I7128c8582c3dd166a5683babb7e136ad0b56f0ac
BUG: 1061685
Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-on: http://review.gluster.org/7665
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd/snapshot: Don't release big_lock before completing snapshot creation | Vijaikumar M | 2014-05-08 | 3 files | -5/+8
Releasing the big-lock can cause problems like deadlock or memory corruption. The same happened in bug 1091926, where glusterd on node-2 entered the commit phase and released the big-lock. The originator node received a timeout for the commit phase and triggered a post-validate cleanup on node-2. Node-2 then continued to work with objects that were already cleaned up, resulting in a crash.

Solution: do not release the big-lock in the commit phase of snapshot creation.

Change-Id: I571194fdb0b0ecc91bd13f2a9fc92fe4338d14dc
BUG: 1091926
Signed-off-by: Vijaikumar M <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/7579
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd/snapshot: Execute lvm snapshots in parallel | Vijaikumar M | 2014-05-08 | 5 files | -73/+248
The back-end LVM snapshot is executed in parallel as a synctask. This helps in gaining performance when there are more bricks in a node. This patch also removes unwanted logs printed during snapshot cleanup.

Change-Id: I3174cb4547ebb670eca37a98eb9d75ecb0672a90
BUG: 1061685
Signed-off-by: Vijaikumar M <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/7461
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd/snapshot: Add brick-count suffix for the LVM snapshot | Vijaikumar M | 2014-05-08 | 5 files | -278/+245
When more than one brick is created from the same LVM volume group, there will be a conflict with the LVM snapshot name we use.

Solution: add a brick-count suffix to the LVM snapshot name.

Change-Id: I7258e69fe0b50e86b81c66ab1db523ab3c7cbae0
BUG: 1091934
Signed-off-by: Vijaikumar M <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/7581
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
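A tiny sketch of the naming fix; the prefix and format are illustrative, not the exact scheme the patch uses:

    #include <stdio.h>

    int main(void)
    {
        char snapname[256];
        const char *snap_uuid = "a1b2c3d4";   /* illustrative */
        int brick_count = 2;

        /* suffix with the brick count so two bricks carved from the
         * same VG get distinct LVM snapshot names */
        snprintf(snapname, sizeof(snapname), "%s_%d", snap_uuid, brick_count);
        printf("%s\n", snapname);
        return 0;
    }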
* mgmt/glusterd: Prevent spurious brick restarts | Pranith Kumar K | 2014-05-08 | 3 files | -23/+44
Change-Id: I7ee5d18b926d6c31e3e4ea2f5fbe9050c8e1dee8
BUG: 959986
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/4954
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
* mgmt/glusterd: Perform Pending quorum actions after Op | Pranith Kumar K | 2014-05-08 | 1 file | -0/+7
Change-Id: I2bb67b5fb4a6f6dac892ef3206e7a79706018a6e
BUG: 959986
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/4955
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: Use a calloc-ed copy of txn_id for glusterd_do_replace_brick | Avra Sengupta | 2014-05-08 | 1 file | -3/+8
As glusterd_do_replace_brick() is spawned through gf_timer_call_after(), by the time it is called the event is freed and the txn_id is lost. Hence using a calloc-ed copy, which will be freed as part of the rb_ctx dict.

Change-Id: I3e309fe1a7ba96ad1d1ce01f4d2aa18178f59244
BUG: 1095097
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/7686
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
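A minimal sketch of the lifetime fix; libuuid's uuid_t stands in for glusterd's txn id type, and the ownership comment is the point:

    #include <stdlib.h>
    #include <string.h>
    #include <uuid/uuid.h>

    /* copy the txn id into heap memory that outlives the timer event;
     * the callback (or the dict it is stored in) frees it later */
    static uuid_t *copy_txn_id(const uuid_t src)
    {
        uuid_t *copy = calloc(1, sizeof(uuid_t));

        if (copy)
            memcpy(*copy, src, sizeof(uuid_t));
        return copy;
    }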
* mgmt/glusterd: quorum check before taking the snapshot | Raghavendra Bhat | 2014-05-07 | 5 files | -23/+667
Without the force option: the quorum check fails if the glusterds are not in quorum. If the glusterds are in quorum, then volume quorum (i.e. quorum of the bricks) is checked. Volume quorum fails even if one of the bricks is down.

With the force option: even if the glusterds are not in quorum and some bricks are down, the quorum check of the volume (i.e. bricks) is done, and if the volume quorum is met, the snapshot is taken.

Change-Id: I06971e45d5cf09880032ef038bfe011e6c244268
BUG: 1061685
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/7463
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: NFS server wrongly started with `DEBUG` log-level | Harshavardhana | 2014-05-07 | 1 file | -2/+1
Disable DEBUG.

Change-Id: I011231ba3df4a42f892f1305867bfc74bb101269
BUG: 1089172
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-on: http://review.gluster.org/7654
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd: Port glusterd sync log messages to gf_msg API | Atin Mukherjee | 2014-05-06 | 4 files | -33/+67
Change-Id: Ic3ed2c96d8fc3a15fedaa80517a2c79c0c858963
BUG: 1075611
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/7652
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: port network failure log messages to gf_msg API | Krishnan Parthasarathi | 2014-05-06 | 3 files | -11/+33
Change-Id: I23df6d179e9d66a71721e9844a34c5b96586f90f
BUG: 1075611
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/7462
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
* glusterd: Port server quorum messages to the gf_msg API | Kaushal M | 2014-05-06 | 5 files | -6/+84
Change-Id: I84716cc07f3cbd8c1b2825a5676d6693fed6fade
BUG: 1075611
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/7578
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
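For reference, the shape of the port these three patches perform: gf_log takes only a level, while gf_msg adds an errno and a catalogued message id (the message id and text below are illustrative, not quoted from the patches):

    /* before */
    gf_log (this->name, GF_LOG_ERROR, "server quorum not met");

    /* after: errnum plus a unique message id for the message catalogue */
    gf_msg (this->name, GF_LOG_ERROR, 0, GD_MSG_SERVER_QUORUM_NOT_MET,
            "server quorum not met");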
* glusterd: Allow bumping up the cluster op-version | Kaushal M | 2014-05-06 | 2 files | -0/+77
This patch allows a user to bump up the cluster op-version by doing

# gluster volume set all cluster.op-version <OP-VERSION>

The op-version will be bumped only if
- all the peers in the cluster support it, and
- the new op-version is greater than the current cluster op-version

This set operation will not do any change other than changing and saving the cluster op-version in the glusterd.info file. It will NOT,
- change any existing volume
- add the option to the global options list
- fix the cluster op-version to the given version; it can be bumped up by other volume set commands.

Change-Id: I084b4fcc45e79dc2ca7b7680d7bb371bb175af39
BUG: 1092592
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/7603
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
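A self-contained sketch of the acceptance rule described above; the peer data layout is illustrative:

    #include <stddef.h>

    /* bump only if it is a raise and every peer supports it */
    static int can_bump_op_version(int current, int requested,
                                   const int *peer_max_versions, size_t n)
    {
        if (requested <= current)
            return 0;
        for (size_t i = 0; i < n; i++)
            if (peer_max_versions[i] < requested)
                return 0;
        return 1;
    }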
* glusterd/snapshot: Perform missed snap creates | Avra Sengupta | 2014-05-06 | 4 files | -145/+589
When a brick is started and the glusterfsd process requests the volfile, the brick_name is sent in the req dict. In glusterd, after fetching the spec, the brick_name is looked up in the missed_snap_list, and any missing snap creates on the same brick are performed. After this, glusterd responds back with the specfile.

Also collate brick data from the nodes hosting the bricks during restore. In case the data is absent, the local node's data is used. This is needed to ensure that, during a restore, we collect the information created when a missed snap create is performed.

Change-Id: I47cefdeba96f2702be810965734cf0fac61d3d2d
BUG: 1061685
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/7551
Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* glusterd: Fetch brick mount_dirs during brick create. | Avra Sengupta | 2014-05-06 | 11 files | -117/+461
Fetch the mount directory path for a brick during volume create, add-brick, and replace-brick. When a snap-create is missed, use this mount directory information to create the brick path for the missed snap brick.

Change-Id: Iad3eec96a32cf340f26bdf3f28e2f529e4b77e31
BUG: 1061685
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/7550
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
* mgmt/glusterd: Use fsync instead of O_SYNC | Pranith Kumar K | 2014-05-05 | 3 files | -39/+21
Glusterd uses O_SYNC to write to a temp file, then renames it to the actual file and performs fsync on the parent directory. Until this rename happens, syncing writes to the file can be deferred. In this patch the O_SYNC open of the temp file is removed and an fsync of the fd is done before the rename.

Change-Id: Ie7da161b0daec845c7dcfab4154cc45c2f49d825
BUG: 908277
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/7370
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
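A self-contained sketch of the resulting write pattern (paths and buffer sizes are illustrative; most error handling is trimmed for brevity):

    #include <fcntl.h>
    #include <libgen.h>
    #include <stdio.h>
    #include <unistd.h>

    /* write via a temp file, fsync once before the atomic rename,
     * then fsync the parent directory to make the rename durable */
    static int write_durably(const char *path, const char *buf, size_t len)
    {
        char tmp[4096], dir[4096];
        int fd, dfd;

        snprintf(tmp, sizeof(tmp), "%s.tmp", path);
        snprintf(dir, sizeof(dir), "%s", path);

        fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0600);  /* no O_SYNC */
        if (fd < 0 || write(fd, buf, len) < 0 || fsync(fd) < 0)
            return -1;
        close(fd);

        if (rename(tmp, path) < 0)           /* atomic replace */
            return -1;

        dfd = open(dirname(dir), O_RDONLY);  /* persist the rename */
        if (dfd < 0 || fsync(dfd) < 0)
            return -1;
        close(dfd);
        return 0;
    }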
* glusterd/snapshot: umount2 on OSX/NetBSD is unmount | Harshavardhana | 2014-05-03 | 1 file | -9/+21
Change-Id: I8de4d47bb2a54b915243ea029cce2585fba34876
BUG: 1089172
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-on: http://review.gluster.org/7651
Reviewed-by: Justin Clift <justin@gluster.org>
Tested-by: Justin Clift <justin@gluster.org>
Reviewed-by: Anand Avati <avati@redhat.com>
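A portable wrapper sketch matching the change; the exact macro guards are assumed from the commit title:

    #include <sys/param.h>
    #include <sys/mount.h>

    static int portable_umount2(const char *path, int flags)
    {
    #if defined(__NetBSD__) || defined(__APPLE__)
        return unmount(path, flags);   /* no umount2(2) on these systems */
    #else
        return umount2(path, flags);   /* Linux */
    #endif
    }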
* glusterd: Allow symlink parent for snap_mount_folder | Harshavardhana | 2014-05-03 | 1 file | -1/+1
If '/var' is a symlink, which it is on OSX, 'glusterd' initialization fails. This failure is unnecessary, so fix it.

Change-Id: I83adc16cfc0e0deaa18acf74ba99299ba4a21d60
BUG: 1061685
Signed-off-by: Harshavardhana <harsha@harshavardhana.net>
Reviewed-on: http://review.gluster.org/7558
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
* glusterd: Volname, brickpath & volfpath length validation | Atin Mukherjee | 2014-05-03 | 6 files | -21/+61
While creating a volume and adding a brick, validation against _POSIX_PATH_MAX is done on the absolute pathname instead of the relative pathname, due to which a brickpath shorter than _POSIX_PATH_MAX may also fail validation if the directory length is greater than (_POSIX_PATH_MAX - strlen(brickpath/volume name)).

This fix also addresses one cli response message correction, which said the volume file is too long instead of the brick path being too long (when brickpath length validation doesn't fail but volfile length validation does).

It is also important to note that with the current design of volfile naming, it cannot be guaranteed that volname and brickpath can have a maximum of _POSIX_PATH_MAX characters.

Change-Id: I1283d1f9dea96ae797620002c8723719f26a866d
BUG: 1085330
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/7420
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
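An illustrative check of the kind described; the constants come from limits.h, but the path and error message are a sketch, not the actual cli strings:

    #include <limits.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *brickpath = "/bricks/b1";   /* user-supplied path */

        /* validate the brick path itself against _POSIX_PATH_MAX,
         * not the longer derived absolute/volfile path */
        if (strlen(brickpath) >= _POSIX_PATH_MAX) {
            fprintf(stderr, "brick path %s is too long\n", brickpath);
            return 1;
        }
        return 0;
    }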
* mgmt/glusterd: handle postvalidate carefully when prevalidate fails | Raghavendra Bhat | 2014-05-03 | 3 files | -24/+59
* Also changed the order of peer retrieval and snapshot retrieval upon glusterd start, so that the snapshot bricks can be properly resolved while cleaning up the snapshots.

Change-Id: I120704e4412a9cadb8d90a9b7969f2b4a1196bc5
BUG: 1061685
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-on: http://review.gluster.org/7494
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* cli/hooks: Add volume set options to enable/disable nfs-ganesha support. | Meghana M | 2014-05-03 | 3 files | -1/+46
1. gluster volume set nfs-ganesha.enable ON/OFF
If the option is set to ON, the volume field in the nfs-ganesha configuration file is edited. Gluster-nfs is disabled on that volume and the volume is exported using nfs-ganesha.

2. gluster volume set nfs-ganesha.host IP
This is used to provide the IP of the nfs-ganesha host.

Note: nfs-ganesha.host MUST be set before using nfs-ganesha.enable ON

The switch from gluster-nfs to nfs-ganesha is mostly done by the hook-scripts in the post phase of the 'set' option. As a result, gluster volume reset does not function as it is expected to. By default, nfs-ganesha will be set to off but the process will not be killed. Hence, a few changes have been made post the 'reset' option as well; those changes have also been added.

Change-Id: I7fdc14ee49d1724af96eda33c6a3ec08b1020788
BUG: 1092283
Signed-off-by: Meghana <mmadhusu@redhat.com>
Reviewed-on: http://review.gluster.org/7321
Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd/snapshot: Restore cleanup | Rajesh Joseph | 2014-05-02 | 6 files | -29/+518
If the restore fails for some reason then we should revert the restore operation. To do so, we take a backup of the vols folder before doing a restore, and if the restore fails we revert the changes done.

Change-Id: I97f72aec3a34fc122bf137beb336e94db3a04dff
BUG: 1061685
Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-on: http://review.gluster.org/7548
Reviewed-by: Santosh Pradhan <spradhan@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd: Differentiate rebalance status and remove-brick status messages | ggarg | 2014-05-02 | 1 file | -0/+20
Previously, when the user triggered 'gluster volume remove-brick VOLNAME BRICK start', the command 'gluster volume rebalance <volname> status' showed output even though the user had not triggered "rebalance start". Likewise, when the user triggered 'gluster volume rebalance <volname> start', the command 'gluster volume remove-brick VOLNAME BRICK status' showed output even though the user had not run rebalance start or remove-brick start.

A regression test failed with the previous patch. The files test/dht.rc and test/bug/bug-973073 were edited to avoid the regression test failure.

With this fix, rebalance and remove-brick status messages are differentiated.

Signed-off-by: ggarg <ggarg@redhat.com>
Change-Id: I7f92ad247863b9f5fbc0887cc2ead07754bcfb4f
BUG: 1089668
Reviewed-on: http://review.gluster.org/7517
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Humble Devassy Chirammal <humble.devassy@gmail.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
* glusterd/snapshot: Activation and De-activation of snapshot | Joseph Fernandes | 2014-05-02 | 3 files | -53/+342
Previously, snapshots were by default activated on creation and there was no option to activate or deactivate them on demand. This change allows the user to activate and deactivate on demand. The CLI goes as follows:

1) Activate the snap using: gluster snapshot activate <snapname> [force]
2) Deactivate the snap using: gluster snapshot deactivate <snapname>

Note: Even now the snapshot will be activated during creation.

Change-Id: I0946d800780f26c63fa1fcaf29aabc900140448f
BUG: 1061685
Signed-off-by: Joseph Fernandes <josferna@redhat.com>
Reviewed-on: http://review.gluster.org/7476
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>