Commit messages
Summary:
- This diff adds support for detecting and tracking idle client connections.
- It allows *service translators* (server, nfs) to opt in to detecting and closing idle client connections.
- Right now it explicitly restricts the feature to NFS as a safety measure.
Here are the debug logs when a client connection gets closed:
[2016-03-29 17:27:06.154232] W [socket.c:2426:socket_timeout_handler] 0-socket: Shutting down idle client connection (idle=20s,fd=20,conn=[2401:db00:11:d0af:face:0:3:0:957]->[2401:db00:11:d0af:face:0:3:0:2049])!
[2016-03-29 17:27:06.154292] D [event-epoll.c:655:__event_epoll_timeout_slot] 0-epoll: Connection on slot->fd=9 was idle for 20 seconds!
[2016-03-29 17:27:06.163282] D [socket.c:629:__socket_rwv] 0-socket.nfs-server: EOF on socket
[2016-03-29 17:27:06.163298] D [socket.c:2474:socket_event_handler] 0-transport: disconnecting now
[2016-03-29 17:27:06.163316] D [event-epoll.c:614:event_dispatch_epoll_handler] 0-epoll: generation bumped on idx=9 from gen=4 to slot->gen=5, fd=20, slot->fd=20
Test Plan: - Used stuck NFS mounts to create idle clients and unstuck them.
Reviewers: kvigor, rwareing
Reviewed By: rwareing
Subscribers: dld, moox, dph
Differential Revision: https://phabricator.fb.com/D3112099
Change-Id: Ic06c89e03f87daabab7f07f892390edd1a1fcc20
Signed-off-by: Jeff Darcy <jdarcy@fb.com>
Reviewed-on: https://review.gluster.org/18265
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
Tested-by: Jeff Darcy <jeff@pl.atyp.us>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
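For illustration only: the idle-connection sweep described above boils down to stamping each connection with a last-activity time and shutting down connections whose idle time crosses a threshold. A minimal sketch of that idea in C follows; the conn_t type, its fields and the 20-second threshold are illustrative stand-ins, not GlusterFS's actual event/socket structures.

#include <stddef.h>
#include <stdio.h>
#include <sys/socket.h>
#include <time.h>

/* Illustrative connection record; GlusterFS keeps this state in its
 * event/socket private structures, not in a struct like this. */
typedef struct {
    int    fd;           /* client socket */
    time_t last_active;  /* updated on every read/write */
    int    idle_enabled; /* only services that opted in (e.g. NFS) set this */
} conn_t;

#define IDLE_TIMEOUT_SEC 20

/* Called periodically (e.g. from the epoll timeout path) to reap idle clients. */
static void sweep_idle(conn_t *conns, size_t nconns)
{
    time_t now = time(NULL);

    for (size_t i = 0; i < nconns; i++) {
        if (!conns[i].idle_enabled || conns[i].fd < 0)
            continue;
        if (now - conns[i].last_active >= IDLE_TIMEOUT_SEC) {
            fprintf(stderr, "shutting down idle client connection "
                    "(idle=%lds, fd=%d)\n",
                    (long)(now - conns[i].last_active), conns[i].fd);
            /* shutdown() wakes the event loop with EOF/HUP so the normal
             * disconnect path tears the connection down cleanly. */
            shutdown(conns[i].fd, SHUT_RDWR);
        }
    }
}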
Change-Id: Ie35cd1c8c7808949ddf79b3189f1f8bf0ff70ed8
Commit 086436a introduced a generation number (cleanup_gen) to ensure that
the rpc layer doesn't end up cleaning up the connection object if the
application layer has already destroyed it. Bumping up cleanup_gen was
done only in rpc_clnt_connection_cleanup(). However, the same is needed
in rpc_clnt_reconnect_cleanup() too; without it, if the object gets destroyed
through the reconnect event in the application layer, the rpc layer will
still try to delete the object, resulting in a double free and a crash.
Peer probing an invalid host/IP was the basic test to catch this issue.
Cherry picked from commit 39e09ad1e0e93f08153688c31433c38529f93716:
> Change-Id: Id5332f3239cb324cead34eb51cf73d426733bd46
> BUG: 1433578
> Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
> Reviewed-on: https://review.gluster.org/16914
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
> Reviewed-by: Milind Changire <mchangir@redhat.com>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
> Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
Change-Id: Id5332f3239cb324cead34eb51cf73d426733bd46
BUG: 1462447
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: https://review.gluster.org/17743
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Milind Changire <mchangir@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
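A stripped-down illustration of the cleanup_gen idea described above: every cleanup path bumps the generation under the lock, and a deferred cleanup only proceeds if the generation it captured is still current. The types and locking here are simplified stand-ins for rpc_clnt_connection_t, not the actual code.

#include <pthread.h>
#include <stdint.h>

/* Simplified stand-in for rpc_clnt_connection_t. */
typedef struct {
    pthread_mutex_t lock;
    uint64_t        cleanup_gen;   /* bumped by every cleanup path */
} conn_t;

/* Both rpc_clnt_connection_cleanup() and rpc_clnt_reconnect_cleanup()
 * must bump the generation; otherwise a stale deferred cleanup can still
 * fire and free the object a second time. */
static void connection_cleanup(conn_t *conn)
{
    pthread_mutex_lock(&conn->lock);
    conn->cleanup_gen++;
    /* ... tear down timers, saved frames, etc. ... */
    pthread_mutex_unlock(&conn->lock);
}

/* A deferred/asynchronous cleanup captured `gen` when it was queued. */
static void deferred_cleanup(conn_t *conn, uint64_t gen)
{
    pthread_mutex_lock(&conn->lock);
    if (gen != conn->cleanup_gen) {
        /* Someone (e.g. the application layer) already cleaned up or
         * destroyed the connection; doing it again would double free. */
        pthread_mutex_unlock(&conn->lock);
        return;
    }
    conn->cleanup_gen++;
    /* ... perform the actual cleanup ... */
    pthread_mutex_unlock(&conn->lock);
}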
Locking during notify was introduced as part of commit
aa22f24f5db7659387704998ae01520708869873 [1]. The fix was introduced
to fix out-of-order CONNECT/DISCONNECT events from rpc-clnt to parent
xlators [2]. However, as part of handling DISCONNECT, protocol/client
unwinds saved frames (with failure) that are waiting for responses. This
saved_frames_unwind can be a costly operation and hence ideally
shouldn't be included in the critical section of notifylock, as it
unnecessarily delays the reconnection to the same brick. Also, it is not
good practice to pass control to other xlators while holding a lock, as it
can lead to deadlocks. So, this patch removes locking in rpc-clnt
while notifying parent xlators.
To fix [2], two changes are present in this patch:
* notify DISCONNECT before cleaning up the rpc connection (same as commit
  a6b63e11b7758cf1bfcb6798, patch [3]).
* protocol/client uses rpc_clnt_cleanup_and_start, which cleans up the rpc
  connection and does a start while handling a DISCONNECT event from
  rpc. Note that patch [3] was reverted because rpc_clnt_start, called in the
  quick_reconnect path of protocol/client, didn't invoke connect on the
  transport as the connection was not cleaned up _yet_ (cleanup having been
  moved post notification in rpc-clnt). This resulted in clients never
  attempting to connect to bricks.
Note that one of the neater ways to fix [2] (without using locks) is
to introduce generation numbers to map CONNECTs and DISCONNECTs across
epochs and ignore DISCONNECT events if they don't belong to the current
epoch. However, this approach is a bit complex to implement and
requires time. So, the current patch is a hacky stop-gap fix until we come
up with a cleaner solution.
[1] http://review.gluster.org/15916
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1386626
[3] http://review.gluster.org/15681
Cherry picked from commit 773f32caf190af4ee48818279b6e6d3c9f2ecc79:
> Change-Id: I62daeee8bb1430004e28558f6eb133efd4ccf418
> Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
> BUG: 1427012
> Reviewed-on: https://review.gluster.org/16784
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> Reviewed-by: Milind Changire <mchangir@redhat.com>
> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Change-Id: I62daeee8bb1430004e28558f6eb133efd4ccf418
Reported-by: Markus Stockhausen <mst@collogia.de>
Signed-off-by: Niels de Vos <ndevos@redhat.com>
BUG: 1462447
Reviewed-on: https://review.gluster.org/17733
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Milind Changire <mchangir@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Backport of https://review.gluster.org/16613
Issue:
When fio is run on multiple clients (each client writing to its own files)
and meanwhile one of the clients does a readdirp, the client which did
the readdirp will now receive the upcalls. In this scenario the client
disconnects with an "rpc decode failed" error.
RCA:
Upcall calls rpcsvc_request_submit to submit the request to socket:
rpcsvc_request_submit currently:
rpcsvc_request_submit () {
    iobuf = iobuf_new
    iov = iobuf->ptr
    fill iobuf to contain xdrised upcall content - proghdr
    rpcsvc_callback_submit (..iov..)
    ...
    if (iobuf)
        iobuf_unref (iobuf)
}
rpcsvc_callback_submit (... iov...) {
    ...
    iobuf = iobuf_new
    iov1 = iobuf->ptr
    fill iobuf to contain xdrised rpc header - rpchdr
    msg.rpchdr = iov1
    msg.proghdr = iov
    ...
    rpc_transport_submit_request (msg)
    ...
    if (iobuf)
        iobuf_unref (iobuf)
}
rpcsvc_callback_submit assumes that once rpc_transport_submit_request()
returns, the msg has been written to the socket and thus the buffers (rpchdr,
proghdr) can be freed, which is not the case. Especially under high workload,
rpc_transport_submit_request() may not be able to write to the socket
immediately; it then adds the msg to its own queue and returns success. Thus we
have a use-after-free of rpchdr and proghdr: the client gets garbage rpchdr
and proghdr, fails to decode the rpc, and disconnects.
To prevent this, we need to add the rpchdr and proghdr to an iobref and send
it in msg:
iobref_add (iobref, iobufs)
msg.iobref = iobref;
The socket layer takes a ref on msg.iobref if it cannot write to the socket
immediately and has to queue the msg. Thus we do not have a use-after-free.
Thank You for discussing, debugging and fixing along:
Prashanth Pai <ppai@redhat.com>
Raghavendra G <rgowdapp@redhat.com>
Rajesh Joseph <rjoseph@redhat.com>
Kotresh HR <khiremat@redhat.com>
Mohammed Rafi KC <rkavunga@redhat.com>
Soumya Koduri <skoduri@redhat.com>
> Reviewed-on: https://review.gluster.org/16613
> Reviewed-by: Prashanth Pai <ppai@redhat.com>
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> Reviewed-by: soumya k <skoduri@redhat.com>
> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
> Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Change-Id: Ifa6bf6f4879141f42b46830a37c1574b21b37275
BUG: 1422788
Signed-off-by: Poornima G <pgurusid@redhat.com>
Reviewed-on: https://review.gluster.org/16638
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Prashanth Pai <ppai@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
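The essence of the fix is to tie the lifetime of rpchdr/proghdr to a refcounted buffer group that travels with the message, so the transport can hold a reference while the message sits in its queue. A hedged, self-contained sketch of that pattern follows; the buf_t/bufref_t types are simplified stand-ins for GlusterFS's pool-backed iobuf/iobref, and transport_submit is a hypothetical placeholder, not rpc_transport_submit_request itself.

#include <stdlib.h>
#include <sys/uio.h>

/* Simplified refcounted buffer and buffer group; GlusterFS's iobuf and
 * iobref follow the same principle. */
typedef struct { int ref; char *data; size_t size; } buf_t;
typedef struct { int ref; buf_t *bufs[8]; int nbufs; } bufref_t;

typedef struct {
    struct iovec rpchdr;    /* xdr-encoded rpc header */
    struct iovec proghdr;   /* xdr-encoded program payload */
    bufref_t    *iobref;    /* owns the buffers backing the iovecs */
} msg_t;

static void bufref_ref(bufref_t *br) { br->ref++; }

static void bufref_add(bufref_t *br, buf_t *b)
{
    b->ref++;
    br->bufs[br->nbufs++] = b;
}

static void bufref_unref(bufref_t *br)
{
    if (--br->ref > 0)
        return;
    for (int i = 0; i < br->nbufs; i++) {
        if (--br->bufs[i]->ref == 0) {
            free(br->bufs[i]->data);
            free(br->bufs[i]);
        }
    }
    free(br);
}

/* Transport side: if the message cannot be written right now, take a ref
 * on msg->iobref before queueing it and drop it after the actual write.
 * The caller may then safely unref its own buffers as soon as this
 * function returns -- the guarantee the buggy code wrongly assumed. */
static void transport_submit(msg_t *msg, int writable)
{
    if (!writable) {
        bufref_ref(msg->iobref);   /* keeps rpchdr/proghdr alive in the queue */
        /* enqueue msg ... later, after writev() completes: bufref_unref() */
        return;
    }
    /* writev(fd, iovs, 2); */
    (void)bufref_add;              /* see iobref_add usage in the commit text */
}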
Summary:
Every once in a while rpcbind crashes and the NFS endpoints go bye-bye.
This diff makes it such that we should almost never encounter the case
where we have NFS up and rpcbind down causing bad endpoints and hanging
mounts for our customers.
Test Plan: Added prove tests + tested on dev server
Reviewers: dph, moox, rwareing
Reviewed By: rwareing
Differential Revision: https://phabricator.fb.com/D2571724
Tasks: 8803558
Change-Id: I35acb2d731185a7b20020cb57bdd4d879e978df4
Signature: t1:2571724:1445555327:3276a4dcc4da71346b09d4aeb46c69dddcc7c5ba
Reviewed-on: https://review.gluster.org/17961
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shreyas Siravara <sshreyas@fb.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
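The commit text does not spell out the mechanism, so the following is only a sketch of one common approach: surviving an rpcbind crash/restart by periodically re-asserting the registration via the classic Sun RPC pmap_set() call, which is idempotent. The program/version/port numbers are the standard NFSv3 ones and the sweep cadence is left to the caller; this is not a claim about what the Facebook patch actually does.

#include <netinet/in.h>
#include <rpc/rpc.h>
#include <rpc/pmap_clnt.h>
#include <stdio.h>

#define NFS_PROGRAM 100003
#define NFS_V3      3
#define NFS_PORT    2049

/* Re-assert our registration with rpcbind/portmap.  pmap_set() is
 * idempotent, so calling this periodically (e.g. from a housekeeping
 * timer) quietly restores the registration if rpcbind crashed and came
 * back, and is a cheap no-op when rpcbind is healthy. */
static void reregister_with_rpcbind(void)
{
    if (!pmap_set(NFS_PROGRAM, NFS_V3, IPPROTO_TCP, NFS_PORT))
        fprintf(stderr, "re-registration with rpcbind failed; "
                        "will retry on the next sweep\n");
}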
Differential Revision: https://phabricator.intern.facebook.com/D5376801
Change-Id: I5bf733a395ef2b85065200fa5810ced27ee0d682
Reviewed-on: https://review.gluster.org/17719
Smoke: Gluster Build System <jenkins@build.gluster.org>
Tested-by: Jeff Darcy <jeff@pl.atyp.us>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
Summary:
- Large clusters explode with such a low timeout since the peer info
exchange is serialized.
Test Plan: - Build and pushed to gfsbudev.ash3c06 where problem first observed
Reviewers: dph, moox, sshreyas
Reviewed By: sshreyas
FB-commit-id: 82f7af1
Change-Id: Id7c2f408eeb8847118e0ad53465c9fca4c6d9fb5
Signed-off-by: Kevin Vigor <kvigor@fb.com>
Reviewed-on: https://review.gluster.org/16857
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Shreyas Siravara <sshreyas@fb.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
re-registering
Summary: Per title
Test Plan: Run prove tests to make sure we didn't break anything
Reviewers: dph, rwareing
Reviewed By: rwareing
FB-commit-id: 78a9a0c
Change-Id: I05ed6b7c715a71e5819fbe8116e7c3146010f836
Signed-off-by: Kevin Vigor <kvigor@fb.com>
Reviewed-on: https://review.gluster.org/16849
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shreyas Siravara <sshreyas@fb.com>
Summary:
It was observed while testing the SHD threading code that under high loads,
SHD/AFR-related SyncOps & SyncTasks can actually hang/deadlock because the
transport-disconnected event (for frame timeouts) never gets bubbled up
correctly. Various tests indicated the ping timeouts worked fine, while "frame
timeouts" did not. The only difference? Ping timeouts actually disconnect
the transport while frame timeouts did not. So from a high level we
know this prevents the deadlock, as subsequent tests showed the deadlocks
no longer occurred (after this change). That said, there may be a
more elegant solution. For now though, forcing a reconnect is
preferable to hanging clients or deadlocking the SHD.
Test Plan:
It's fairly difficult to write a good prove test for this since it requires human eyes to observe if the SHD is deadlocked (I'm open to ideas). Here's the repro though:
1. Create a 3x replicated cluster on a host.
2. Set the frame-timeout low (say 2 sec)
3. Down a brick, and write a pile of files (maybe 2000)
4. Bring up the downed brick and let the SHD begin healing files
5. During the heal process, kill -STOP <pid of brick> (hang) one of the bricks
Without this patch the SHD will be deadlocked, even though the frame timed out after 2 seconds. With the patch, the plug is pulled on the transport, a disconnect is bubbled up
to the syncop and the SHD resumes.
Reviewers: dph, meyering, cjh
Reviewed By: cjh
Subscribers: ethanr
Conflicts:
rpc/rpc-lib/src/rpc-clnt.c
FB-commit-id: c99357c
Change-Id: I344079161492b195267c2d64b6eab0b441f12ded
Signed-off-by: Kevin Vigor <kvigor@fb.com>
Reviewed-on: https://review.gluster.org/16846
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shreyas Siravara <sshreyas@fb.com>
Summary: - Per title, this is operationally better for us
Test Plan: - Prove tests
Reviewers: dph, cjh, jackl
Reviewed By: jackl
Subscribers: ssl-diffs@
FB-commit-id: c9dec448d8f4f39f553b7dd5825be47d14495415
Change-Id: Id90420eb9c3eaf7252915b04a81f6dcefaf86be5
Signed-off-by: Kevin Vigor <kvigor@fb.com>
Reviewed-on: http://review.gluster.org/16378
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Shreyas Siravara <sshreyas@fb.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Change-Id: I844adf2aef161a44d446f8cd9b7ebcb224ee618a
Signed-off-by: Kevin Vigor <kvigor@fb.com>
It is possible that the notification thread which notifies the
protocol/client layer about the disconnection is put to sleep
and meanwhile a fuse thread or a timer thread initiates and
completes reconnection to the brick. The notification thread
is then woken up and the protocol/client layer updates its flags
to indicate that the network is disconnected. No reconnection is
initiated, because reconnection is the rpc-lib layer's responsibility
and its flags indicate that the connection is up.
Fix: Serialize the connect and disconnect notifications.
> Credit: Raghavendra Talur <rtalur@redhat.com>
> Reviewed-on: http://review.gluster.org/15916
> Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
(cherry picked from commit aa22f24f5db7659387704998ae01520708869873)
Change-Id: I8ff5d1a3283b47f5c26848a42016a40bc34ffc1d
BUG: 1401534
Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-on: http://review.gluster.org/16025
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
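A minimal sketch of the serialization described above: both the connect and disconnect paths funnel their parent notification through one mutex, so a sleeping notifier cannot be overtaken and deliver a stale DISCONNECT after the connection has been re-established. Names are simplified stand-ins; the real code keeps a notifylock inside rpc_clnt_connection_t.

#include <pthread.h>

typedef enum { EVT_CONNECT, EVT_DISCONNECT } conn_event_t;

typedef struct {
    pthread_mutex_t notifylock;  /* serializes CONNECT/DISCONNECT delivery */
    int             connected;   /* state as last reported to the parent */
} conn_t;

/* Stand-in for the parent xlator's notify function. */
static void parent_notify(conn_event_t ev) { (void)ev; }

/* Both the reconnect path and the disconnect path funnel through here, so
 * the parent always sees events in the order they logically happened. */
static void notify_parent(conn_t *conn, conn_event_t ev)
{
    pthread_mutex_lock(&conn->notifylock);
    conn->connected = (ev == EVT_CONNECT);
    parent_notify(ev);
    pthread_mutex_unlock(&conn->notifylock);
}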
Backport of: http://review.gluster.org/15747
When there are already existing non-granular indices created that are
yet to be healed, if the granular-entry-heal option is toggled from 'off' to
'on', AFR self-heal, whenever it kicks in, will try to look for granular
indices in 'entry-changes'. Because of the absence of name indices, the
granular entry healing logic will fail to heal these directories and,
worse yet, unset pending extended attributes with the assumption that
there are no entries that need heal.
To get around this, a new CLI is introduced which will invoke the glfsheal
program to figure out whether, at the time an attempt is made to enable
granular entry heal, there are pending heals on the volume OR there
are one or more bricks that are down. If either of these is true, the
command will fail with the appropriate error.
New CLI: gluster volume heal <VOL> granular-entry-heal {enable,disable}
Change-Id: I342e0390f847fcb015a50ef58aedfcbcb58f4ed3
BUG: 1398501
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/15942
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Problem: Continuous warning messages (ENODATA) appear in socket_rwv
while SSL is enabled.
Solution: To avoid the warning message, update one condition in the
socket_poller loop code before breaking from the loop in case
an error is returned by the poll functions.
> BUG: 1386450
> Change-Id: I19b3a92d4c3ba380738379f5679c1c354f0ab9b1
> Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
> Reviewed-on: http://review.gluster.org/15677
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
> Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
> (cherry picked from commit ec64ce2e1684003f4e7a20d4372e414bfbddb6fb)
Change-Id: I70eaf8d454a1538e14b50c6fb1074f84dd10cdf5
BUG: 1387976
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Reviewed-on: http://review.gluster.org/15706
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Summary:
- Changes the halo decision to be based on the lowest halo value observed.
- Adds a halo-min-sample option to wait until N latency samples have been
  gathered prior to activating halos.
- Fixed 3 edge cases where halos weren't being correctly
  configured, or not configured as quickly as possible. Namely:
  1. Don't mark a child down if there's no better alternative (and you'd
     no longer satisfy min/max replicas); this fixes unnecessary flapping.
  2. If a child goes down and this causes us to fall below max_replicas,
     swap in a warm child immediately if it is within our halo latency
     (don't wait around for the next "ping"); swapping in a new child
     immediately helps with resiliency.
  3. If the child latency is within the halo, and it's currently marked
     up, mark it down if it's the highest-latency child and the number of
     children is > max_replicas; this will allow us to support the
     SHD use-case where we can "beam" a single copy to a geo and have it
     replicate within the geo after that.
- More commenting
Test Plan:
- Run halo prove tests
- Pointed compiled code at gfsglobal.prn2, tested out an NFS daemon and
FUSE mounts to ensure they worked as expected on a large scale
cluster.
Reviewers: dph, jackl, cjh, mmckeen
Reviewed By: mmckeen
FB-commit-id: 7e2e8ae6b8ec62a5e0b31c9fd6100c81795b3424
Change-Id: Iba2b2f1bc848b4546cb96117ff1895f83953a4f8
Signed-off-by: Kevin Vigor <kvigor@fb.com>
Reviewed-on: http://review.gluster.org/16304
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shreyas Siravara <sshreyas@fb.com>
Summary:
- Real-time latencies in practice have far too much jitter under real
loading conditions; instead, use a running average, which will get
very "heavy" over time so that temporary spikes in brick latency will not
affect halo decisions.
Test Plan: - Run prove tests
Reviewed By: mmckeen
Change-Id: I5ebf9bc93c67d9a226287796dd7ca5eeb7b1cfa5
Signed-off-by: Kevin Vigor <kvigor@fb.com>
Reviewed-on: http://review.gluster.org/16301
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shreyas Siravara <sshreyas@fb.com>
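The "heavy" running average is just a cumulative mean whose weight on each new sample shrinks as the sample count grows. A tiny sketch of that update rule (field names are illustrative):

#include <stdint.h>

typedef struct {
    double   avg_latency_ms;  /* running mean used for halo decisions */
    uint64_t samples;
} child_latency_t;

/* Incremental mean: after many samples a single spiky ping barely moves
 * the average, which is exactly the "heavy" behaviour described above. */
static void latency_update(child_latency_t *cl, double sample_ms)
{
    cl->samples++;
    cl->avg_latency_ms += (sample_ms - cl->avg_latency_ms) / (double)cl->samples;
}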
Summary:
Several prove tests use the 'launch_cluster' function to set up a
clustered volume. This relies on using multiple local IP
addresses, one for each server. Since IPv6 provides only ::1 as
a local address, as opposed to IPv4's complete 127.x.x.x subnet,
this cannot work in a pure IPv6 environment.
However, FB systems do at least have enough IPv4 stack to talk
locally, so fix launch_cluster to work properly when default
transport is IPv6.
To do this:
1) explicitly set transport.address-family volume option to inet in
launch_cluster().
2) teach glusterd to honor transport.address-family when connecting
to peer glusterds in glusterd_friend_rpc_create(). Previously
transport.address-family was used only for binding local socket,
not for communicating with peers.
Test Plan:
prove -f --timer ./tests/basic/glusterd/arbiter-volume-probe.t
Reviewers:
Subscribers:
Tasks:
Blame Revision:
Change-Id: I077d8549dcdbe4919ac7df34856a4b2d1428cdb6
Signed-off-by: Kevin Vigor <kvigor@fb.com>
Reviewed-on: http://review.gluster.org/16225
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Shreyas Siravara <sshreyas@fb.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Summary: Fix deadlock in ping timer callback.
Test Plan: run, mount volume.
Reviewers: rwareing
Reviewed By: rwareing
Differential Revision: https://phabricator.intern.facebook.com/D3744945
Signature: t1:3744945:1474061471:3e3d1a5cefc541d26973535887c1f08c017fc049
Change-Id: Iaf94eb4c3acaa8b3ceeeb6a273db4109eea29a7c
Signed-off-by: Kevin Vigor <kvigor@fb.com>
Reviewed-on: http://review.gluster.org/16168
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Shreyas Siravara <sshreyas@fb.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Summary:
Replace complex and slow port selection code with bind(0).
Test Plan:
runtests.sh
Reviewers:
sshreyas
Subscribers:
Tasks:
Blame Revision:
Change-Id: I408a8528e58e1aafcd32eba6a8f1a759e0bf274e
Signed-off-by: Kevin Vigor <kvigor@fb.com>
Reviewed-on: http://review.gluster.org/16150
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shreyas Siravara <sshreyas@fb.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
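For reference, the bind(0) technique: binding port 0 lets the kernel pick a free port, which is then read back with getsockname(). A self-contained sketch:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Ask the kernel for any free port instead of probing ranges ourselves. */
static int bind_any_port(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port        = htons(0);            /* 0 = kernel picks a port */

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }

    socklen_t len = sizeof(addr);
    if (getsockname(fd, (struct sockaddr *)&addr, &len) < 0) {
        close(fd);
        return -1;
    }
    printf("kernel assigned port %u\n", ntohs(addr.sin_port));
    return fd;   /* caller keeps the fd (and hence the port) */
}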
Summary:
- Check for AF_INET *and* AF_INET6.
- This is a cherry-pick of D3057373 to 3.8
Signed-off-by: Shreyas Siravara <sshreyas@fb.com>
Change-Id: I53eb79284eddfee6e13821c6570809f575b96769
Reviewed-on: http://review.gluster.org/16155
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Kevin Vigor <kvigor@fb.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Summary:
Halo Geo-replication is a feature which allows Gluster or NFS clients to write locally to their region (as defined by a latency "halo" or threshold if you like), and have their writes asynchronously propagate from their origin to the rest of the cluster. Clients can also write synchronously to the cluster simply by specifying a halo-latency which is very large (e.g. 10 seconds), which will include all bricks.
In other words, it allows clients to decide at mount time if they desire synchronous or asynchronous IO into a cluster and the cluster can support both of these modes to any number of clients simultaneously.
There are a few new volume options due to this feature:
halo-shd-latency: The threshold below which self-heal daemons will
consider children (bricks) connected.
halo-nfsd-latency: The threshold below which NFS daemons will consider
children (bricks) connected.
halo-latency: The threshold below which all other clients will
consider children (bricks) connected.
halo-min-replicas: The minimum number of replicas which are to
be enforced regardless of latency specified in the above 3 options.
If the number of children falls below this threshold the next
best (chosen by latency) shall be swapped in.
New FUSE mount options:
halo-latency & halo-min-replicas: As described above.
This feature combined with multi-threaded SHD support (D1271745) results in some pretty cool geo-replication possibilities.
Operational Notes:
- Global consistency is guaranteed for synchronous clients; this is provided by the existing entry-locking mechanism.
- Asynchronous clients, on the other hand, are merely consistent within their region. Writes & deletes will be protected via entry-locks as usual, preventing concurrent writes into files which are undergoing replication. Read operations, on the other hand, should never block.
- Writes are allowed from _any_ region and propagated from the origin to all other regions. The takeaway is that care should be taken to ensure multiple writers do not write the same files, resulting in a gfid split-brain which will require resolution via split-brain policies (majority, mtime & size). The recommended method for preventing this is using the nfs-auth feature to define which region for each share has RW permissions; tiers not in the origin region should have RO perms.
TODO:
- Synchronous clients (including the SHD) should choose clients from their own region as preferred sources for reads. Most of the plumbing is in place for this via the child_latency array.
- Better GFID split-brain handling & better dentry-type split-brain handling (i.e. create a trash can and move the offending files into it).
- Tagging in addition to latency as a means of defining which children you wish to synchronously write to
Test Plan:
- The usual suspects, clang, gcc w/ address sanitizer & valgrind
- Prove tests
Reviewers: jackl, dph, cjh, meyering
Reviewed By: meyering
Subscribers: ethanr
Differential Revision: https://phabricator.fb.com/D1272053
Tasks: 4117827
Change-Id: I694a9ab429722da538da171ec528406e77b5e6d1
Signed-off-by: Kevin Vigor <kvigor@fb.com>
Reviewed-on: http://review.gluster.org/16099
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shreyas Siravara <sshreyas@fb.com>
Summary:
- This diff changes all locations in the code to prefer the inet6 family
instead of inet. This will allow GlusterFS to operate
via IPv6 instead of IPv4 for all internal operations while still
being able to serve (FUSE or NFS) clients via IPv4.
- The changes apply to NFS as well.
- This diff ports D1892990, D1897341 & D1896522 to the 3.8 branch.
Test Plan: Prove tests!
Reviewers: dph, rwareing
Signed-off-by: Shreyas Siravara <sshreyas@fb.com>
Change-Id: I34fdaaeb33c194782255625e00616faf75d60c33
Reviewed-on: http://review.gluster.org/16059
Reviewed-by: Shreyas Siravara <sshreyas@fb.com>
Tested-by: Shreyas Siravara <sshreyas@fb.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
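At the socket layer, "prefer inet6" typically comes down to resolving names with AF_INET6 hints first and falling back to IPv4, then creating the socket from whatever family the resolver returned. A hedged sketch of that general pattern; it is not the literal patch:

#include <netdb.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Resolve host:port preferring IPv6, falling back to IPv4. */
static int connect_pref_inet6(const char *host, const char *port)
{
    struct addrinfo hints, *res = NULL, *ai;
    memset(&hints, 0, sizeof(hints));
    hints.ai_socktype = SOCK_STREAM;

    /* First pass: IPv6 only; second pass: any family (IPv4 fallback). */
    int families[2] = { AF_INET6, AF_UNSPEC };
    for (int pass = 0; pass < 2; pass++) {
        hints.ai_family = families[pass];
        if (getaddrinfo(host, port, &hints, &res) != 0)
            continue;
        for (ai = res; ai; ai = ai->ai_next) {
            int fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
            if (fd < 0)
                continue;
            if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0) {
                freeaddrinfo(res);
                return fd;
            }
            close(fd);
        }
        freeaddrinfo(res);
        res = NULL;
    }
    return -1;
}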
Summary:
This allows shipping all glusterfs dependencies to hadoop
machines in a tarball.
Test Plan:
- build tarball: https://phabricator.fb.com/P2848521
- scp to a machine with no gluster installed
echo "Hellow world" | LD_LIBRARY_PATH=glusterfs_libs GLUSTER_LIBDIR=glusterfs_libs ./glfscat $(shuf -n 1 <(smcc ls storage.gluster.gfsops.frc1) | cut -d: -f 1) groot /gfsetlprocstore/adslearner/users/azzolini/hello_world.txt
(code for glfscat follows in a separate diff)
Reviewers: rwareing
Reviewed By: rwareing
Differential Revision: https://phabricator.fb.com/D1009665
Change-Id: I8812929fc127ca291aa66e2430b5633892235915
Signed-off-by: Kevin Vigor <kvigor@fb.com>
Reviewed-on: http://review.gluster.org/16032
Reviewed-by: Shreyas Siravara <shreyas.siravara@gmail.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Problem
=======
When quota is enabled on 3.6, it will have quota conf version in quota.conf
as v1.1. This node gets upgraded to 3.7 but it will still have quota conf
version as v1.1 until a quota enable/disable/set limit is initiated. When
this is not initiated and when this node tries to peer probe a node which
is a fresh install of 3.7 (which will have quota conf version as v1.2), then this
will result in "Peer rejected" state. This patch fixes the issue.
Solution
========
When an upgrade happens from 3.6 to 3.7, quota.conf file needs
to be modified as well. With 3.6, in quota.conf the version will be
v1.1 and it needs to be changed to v1.2 from 3.7. This is because in
3.7, the inode quota feature is introduced. So when an op-version bump-up
happens, quota.conf needs to be upgraded to quota conf version v1.2
and all the 16-byte uuids need to be changed to 17-byte uuids as well.
Previously, when the cluster version was upgraded to 3.7, quota.conf
got upgraded as well, but the upgrade was done only when a quota
enable/disable/set limit operation was performed. With this patch, the upgrade
is also done during a cluster op-version bump-up.
> Reviewed-on: http://review.gluster.org/15352
> Tested-by: Atin Mukherjee <amukherj@redhat.com>
> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
(cherry picked from commit 4b2cff614462508eef529c5d128e0974720e3f50)
Change-Id: Idb5ba29d3e1ea0e45c85d87c952c75da9e0f99f0
BUG: 1392716
Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Reviewed-on: http://review.gluster.org/15791
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Manikandan Selvaganesh <manikandancs333@gmail.com>
Tested-by: Manikandan Selvaganesh <manikandancs333@gmail.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Encrypted connections create a pipe, which isn't closed when the
connection disconnects. This leaks fds, and gluster eventually ends up
in a situation with fd starvation which leads to operation failures.
> Change-Id: I144e1f767cec8c6fc1aa46b00cd234129d2a4adc
> BUG: 1336371
> Signed-off-by: Kaushal M <kaushal@redhat.com>
> Reviewed-on: http://review.gluster.org/14356
> Tested-by: MOHIT AGRAWAL <moagrawa@redhat.com>
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
> Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Change-Id: I144e1f767cec8c6fc1aa46b00cd234129d2a4adc
BUG: 1336376
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/15703
Tested-by: Atin Mukherjee <amukherj@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Problem: With SSL, after stopping the volume, if a client (mount point) is
still trying to write data on the socket, it will get an EIO
error on that socket, and since this log message is captured on every
attempt, it floods the log file.
Solution: To reduce the frequency of the logged message, use GF_LOG_OCCASIONALLY
instead of gf_log.
> BUG: 1381115
> Change-Id: I66151d153c2cbfb017b3ebc4c52162278c0f537c
> Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
> Reviewed-on: http://review.gluster.org/15605
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
> (cherry picked from commit 070145750006c87099f945b4990a4460d814c21f)
Change-Id: I08a8a8ae66b80bba2fdb5afbcab19a0950f85104
BUG: 1384356
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Reviewed-on: http://review.gluster.org/15631
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
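GF_LOG_OCCASIONALLY suppresses log floods by emitting only every Nth occurrence of a repeated message. A generic version of the same idea, with an illustrative macro, counter and interval (the gluster macro keeps its counter in the caller's structure):

#include <stdio.h>

/* Log only every Nth occurrence of a repetitive message.  `counter` must
 * persist across calls (e.g. a field in the transport's private struct). */
#define LOG_OCCASIONALLY(counter, interval, ...)            \
    do {                                                    \
        if ((counter)++ % (interval) == 0)                  \
            fprintf(stderr, __VA_ARGS__);                   \
    } while (0)

static int eio_log_count;   /* one counter per message site/connection */

static void on_write_error(int fd)
{
    /* Instead of flooding the log on every failed write after the volume
     * is stopped, emit the message once per 42 occurrences. */
    LOG_OCCASIONALLY(eio_log_count, 42,
                     "writev on fd=%d failed (EIO)\n", fd);
}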
Problem: The client identifier is not logged in the message in ssl_setup_connection.
Solution: In ssl_setup_connection, xl_private is not available in rpc_transport,
so use this->peerinfo.identifier instead.
> BUG: 1380275
> Change-Id: I05006a3d63e46de8c388298c22faa9a3329eb6f3
> Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
> Reviewed-on: http://review.gluster.org/15596
> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
> Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
> Reviewed-by: Vijay Bellur <vbellur@redhat.com>
> (cherry picked from commit 2e23c62cc50037c8e61bcd9c04348409e7627181)
Change-Id: Iad08817ee2c2828a08bc22e78c273390562ae9fb
BUG: 1383882
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Reviewed-on: http://review.gluster.org/15624
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
If connect fails with any error other than EINPROGRESS, we cannot get
the error status using getsockopt (... SO_ERROR ...). Hence we need
to remember the state of the connect and take appropriate action in the
event_handler for the same.
As an added note, an event can come where poll_err is HUP and we have
poll_in as well (i.e. some status was written to the socket), so for
such cases we need to finish the connect, process the data and then
the poll_err, as is the case in the current code.
Special thanks to Kaushal M & Raghavendra G for figuring out the issue.
>Signed-off-by: Shyam <srangana@redhat.com>
>Reviewed-on: http://review.gluster.org/15440
>Smoke: Gluster Build System <jenkins@build.gluster.org>
>NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
>CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
>Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Change-Id: Ic45ad59ff8ab1d0a9d2cab2c924ad940b9d38528
BUG: 1373723
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/15532
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
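For reference, the standard non-blocking connect() contract the fix restores: only an EINPROGRESS result can later be resolved via getsockopt(SO_ERROR); any other immediate failure must be remembered and handled directly. A compact sketch:

#include <errno.h>
#include <fcntl.h>
#include <sys/socket.h>

typedef enum { CONN_OK, CONN_IN_PROGRESS, CONN_FAILED } conn_state_t;

/* Start a non-blocking connect and record how it ended. */
static conn_state_t start_connect(int fd, const struct sockaddr *sa, socklen_t len)
{
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

    if (connect(fd, sa, len) == 0)
        return CONN_OK;                 /* connected immediately */
    if (errno == EINPROGRESS)
        return CONN_IN_PROGRESS;        /* poll for POLLOUT, then SO_ERROR */
    return CONN_FAILED;                 /* SO_ERROR won't report this later */
}

/* Called when the socket becomes writable (POLLOUT) after EINPROGRESS. */
static int finish_connect(int fd)
{
    int err = 0;
    socklen_t elen = sizeof(err);
    if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &elen) < 0)
        return -1;
    return err ? -1 : 0;   /* 0 = connected, otherwise the connect failed */
}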
The RPC/XID for callbacks has been hardcoded to GF_UNIVERSAL_ANSWER. In
Wireshark these RPC-calls are marked as "RPC retransmissions" because of
the repeating RPC/XID. This is most confusing when verifying the
callbacks that the upcall framework sends. There is no way to see the
difference between real retransmissions and new callbacks.
This change was verified by creating and removing files through
different Gluster clients. The RPC/XID is increased on a per-connection
(or per-client) basis. The expectations of the RPC protocol are met this way.
> Change-Id: I2116bec0e294df4046d168d8bcbba011284cd0b2
> BUG: 1377097
> Signed-off-by: Niels de Vos <ndevos@redhat.com>
> Reviewed-on: http://review.gluster.org/15524
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
> Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
(cherry picked from commit e9b39527d5dcfba95c4c52a522c8ce1f4512ac21)
Change-Id: I2116bec0e294df4046d168d8bcbba011284cd0b2
BUG: 1377290
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/15528
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
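A per-connection XID is simply a monotonically increasing counter attached to the connection and sampled for each callback request. A minimal sketch (names are illustrative; a multi-threaded sender would need an atomic increment):

#include <stdint.h>

typedef struct {
    uint32_t next_xid;   /* per-connection callback XID counter */
} cbk_conn_t;

/* Each callback gets a fresh XID, so packet captures no longer show every
 * upcall as a "retransmission" of the same hardcoded XID. */
static uint32_t next_callback_xid(cbk_conn_t *conn)
{
    /* Skip 0 so the value is never mistaken for "unset". */
    if (++conn->next_xid == 0)
        ++conn->next_xid;
    return conn->next_xid;
}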
SSL_shutdown shuts down an active SSL connection, but we
are calling it after the underlying socket is already disconnected.
> Change-Id: Ia943179d23395f42b942450dbcf26336d4dfc813
> BUG: 1362602
> Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
> Reviewed-on: http://review.gluster.org/15072
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
> Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
(cherry picked from commit 79e006b31a1e6d71f1af02176f8e8acaed7f8cd2)
Change-Id: I6ce58bd5606278880e44c96d386acaeb0fef6275
BUG: 1371650
Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-on: http://review.gluster.org/15359
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Backport of http://review.gluster.org/#/c/14018/
snap status --xml errors out if a brick is down and
doesn't have a pid. This is handled in the cli of snap
status, where "N/A" is displayed in such a scenario; handle
the same in the xml output.
snap status <snapname> --xml fails as the writer is
not initialised for the same. Use GF_SNAP_STATUS_TYPE_ITER
instead of GF_SNAP_STATUS_TYPE_SNAP for all snaps'
status to differentiate between the two scenarios.
Added testcase volume-snapshot-xml.t to check
all snapshot commands' xml outputs.
> Reviewed-on: http://review.gluster.org/14018
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
> Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Change-Id: I99563e8f3e84f1aaeabd865326bb825c44f5c745
BUG: 1369372
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/15291
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Problem: Polling failure errors keep appearing until the volume comes up
while SSL is enabled.
Solution: To avoid the message, update one condition in the socket_poller code
so that it does not exit from the thread when ENODATA is received from the
ssl_do function.
Backport of commit 84e9fc2fb5fabf9d1e553a420854a306cdb8a168
> Change-Id: Ia514e99b279b07b372ee950f4368ac0d9c702d82
> BUG: 1349709
> Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
> Reviewed-on: http://review.gluster.org/14786
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
> Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
> (cherry picked from commit 84e9fc2fb5fabf9d1e553a420854a306cdb8a168)
BUG: 1359654
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Change-Id: If1820c0b3d0cd976875137bc1175d4b2008779af
Reviewed-on: http://review.gluster.org/14999
Tested-by: MOHIT AGRAWAL <moagrawa@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Backport of: http://review.gluster.org/13658
PROBLEM:
1. Freeing up the rpc_clnt object might lead to crashes. Well,
it was not a necessity to free the rpc-clnt object till now
because all the existing use cases need to reconnect
back on disconnects. Hence the timer code was not taking
a ref on the rpc-clnt object.
Glusterd had some use-cases that led to crashes due to the
ping-timer, and they fixed only those code paths that
involve the ping-timer.
Now, since changelog has a use-case where rpc-clnt
needs to be freed up, we need to fix the timer code to take
refs.
2. In changelog, because of issue 1, only mydata was being
freed, which is incorrect. And there are races where the
rpc-clnt object would access the freed mydata, which
would lead to crashes.
Since the changelog xlator resides on the brick side and is a
long-living process, if multiple libgfchangelog consumers
register to changelog and disconnect/reconnect multiple
times, it would result in a leak of the 'rpc-clnt' object
for every connect/disconnect.
SOLUTION:
1. Handle ref/unref of 'rpc_clnt' structure in timer
functions properly.
2. In changelog, unref 'rpc_clnt' in RPC_CLNT_DISCONNECT
after disabling timers and free mydata on RPC_CLNT_DESTROY.
RPC SETUP IN CHANGELOG:
1. changelog xlator initiates rpc server say 'changelog_rpc_server'
2. libgfchangelog initiates one rpc server say 'libgfchangelog_rpc_server'
3. libgfchangelog initiates rpc client and connects to 'changelog_rpc_server'
4. In return changelog_rpc_server initiates a rpc client and connects back
to 'libgfchangelog_rpc_server'
REF/UNREF HANDLING IN TIMER FUNCTIONS:
Let's say rpc clnt refcount = 1
1. Take the ref before registering a callback to the timer queue
>>>> rpc_clnt_ref (say ref count becomes = 2)
2. Register a callback to timer say 'callback1'
3. If register fails:
>>>> rpc_clnt_unref (ref count = 1)
4. On timer expiration, 'callback1' gets called. So unref rpc clnt at the end
in 'callback1'. This is corresponding to ref taken in step 1
>>>> rpc_clnt_unref (ref count = 1)
5. The cycle from step-1 to step-4 continues....until timer cancel event happens
6. timer cancel of say 'callback1'
If timer cancel fails:
Do nothing, Step-4 would have unrefd
If timer cancel succeeds:
>>>> rpc_clnt_unref (ref count = 1)
Change-Id: I91389bc511b8b1a17824941970ee8d2c29a74a09
BUG: 1359364
Signed-off-by: Kotresh HR <khiremat@redhat.com>
(cherry picked from commit 637ce9e2e27e9f598a4a6c5a04cd339efaa62076)
Reviewed-on: http://review.gluster.org/14994
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
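The ref/unref protocol from the steps above, in code form. The timer API and types here are simplified, self-contained stand-ins, not the actual gf_timer interface:

#include <stdlib.h>

typedef struct rpc_clnt { int refcount; } rpc_clnt_t;

static rpc_clnt_t *rpc_clnt_ref(rpc_clnt_t *c)   { c->refcount++; return c; }
static void        rpc_clnt_unref(rpc_clnt_t *c) { if (--c->refcount == 0) free(c); }

/* Trivial stand-in timer API so the sketch compiles on its own;
 * a real timer wheel would schedule the callback asynchronously. */
typedef struct { void (*cbk)(void *); void *data; } timer_handle_t;

static timer_handle_t *timer_register(void (*cbk)(void *), void *data)
{
    timer_handle_t *t = malloc(sizeof(*t));
    if (t) { t->cbk = cbk; t->data = data; }
    return t;
}
static int timer_cancel(timer_handle_t *t) { free(t); return 0; }  /* 0 = success */

static void ping_timer_cbk(void *data)
{
    rpc_clnt_t *clnt = data;
    /* ... do the timer work ... */
    rpc_clnt_unref(clnt);               /* step 4: drop the ref from step 1 */
}

static timer_handle_t *arm_ping_timer(rpc_clnt_t *clnt)
{
    rpc_clnt_ref(clnt);                 /* step 1: ref before registering */
    timer_handle_t *t = timer_register(ping_timer_cbk, clnt);
    if (!t)
        rpc_clnt_unref(clnt);           /* step 3: registration failed */
    return t;
}

static void disarm_ping_timer(rpc_clnt_t *clnt, timer_handle_t *t)
{
    if (timer_cancel(t) == 0)
        rpc_clnt_unref(clnt);           /* step 6: cancel succeeded */
    /* else: the callback already ran (or will run) and drops the ref. */
}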
A socket_connect failure creates a new pthread which
is not a detached thread. As no pthread_join is called,
the thread resources are not cleaned up causing a memory leak.
Now, socket_connect creates a detached thread to handle failure.
> Change-Id: Idbf25d312f91464ae20c97d501b628bfdec7cf0c
> BUG: 1343374
> Signed-off-by: N Balachandran <nbalacha@redhat.com>
> Reviewed-on: http://review.gluster.org/14875
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
> Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
(cherry picked from commit 9886d568a7a8839bf3acc81cb1111fa372ac5270)
Change-Id: I69ef46013c8dbc70cbda2695f12be1f6d3720055
BUG: 1354250
Signed-off-by: N Balachandran <nbalacha@redhat.com>
Reviewed-on: http://review.gluster.org/14979
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
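Creating the handler thread detached means its resources are reclaimed automatically when it exits, so no pthread_join is required. A small sketch of the wrapper pattern (the function name is illustrative):

#include <pthread.h>

/* Create a thread whose resources are released automatically when it
 * returns -- no pthread_join needed, so nothing leaks if the creator
 * never waits for it (as in the socket_connect failure path). */
static int create_detached_thread(pthread_t *tid, void *(*fn)(void *), void *arg)
{
    pthread_attr_t attr;
    int ret;

    ret = pthread_attr_init(&attr);
    if (ret != 0)
        return ret;

    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
    ret = pthread_create(tid, &attr, fn, arg);
    pthread_attr_destroy(&attr);
    return ret;
}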
socket_spawn.
Problem: The current approach to cleaning up socket_poller threads is not appropriate.
Solution: Enable the detach flag at the time of thread creation in socket_spawn.
Fix: Write a new wrapper (gf_create_detach_thread) to create a detached thread
instead of storing thread ids in a queue.
Test: The fix is verified on the gluster process. To test the patch, follow the procedure below:
Enable the client.ssl and server.ssl options on the volume.
Start the volume and count the anon segments in the pmap output for the glusterd process:
pmap -x <glusterd-pid> | grep "\[ anon \]" | wc -l
Stop the volume and check the anon segment count again; it should not increase.
Backport of commit 2ee48474be32f6ead2f3834677fee89d88348382
> Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
> Change-Id: Ib8f7ec7504ec8f6f74b45ce6719b6fb47f9fdc37
> BUG: 1336508
> Reviewed-on: http://review.gluster.org/14694
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
> Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
BUG: 1354395
Change-Id: Ibdbbae508d9dda2fd36220a9b1e47f7776336929
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Reviewed-on: http://review.gluster.org/14891
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: N Balachandran <nbalacha@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Backport of commit d308fb5e152d8c908bf4f5da81f553fbe3d0400a
> Change-Id: I4b463ecafb66de16cbe7ed23fae800bb1204f829
> BUG: 1333912
> Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
> Reviewed-on: http://review.gluster.org/14242
> Tested-by: Vijay Bellur <vbellur@redhat.com>
> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
> Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> (cherry picked from commit d308fb5e152d8c908bf4f5da81f553fbe3d0400a)
Change-Id: Id007d3e28292f504913b7df8b8eb693c0427b22b
BUG: 1351878
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Reviewed-on: http://review.gluster.org/14845
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
If the option "transport.tcp-user-timeout" hasn't been set, glusterd's
priv->timeout will be -1, which will cause an invalid-argument error when
setting TCP_USER_TIMEOUT.
Cherry picked from commit b2c73cbf423de6201f956f522b7429615c88869d:
> Change-Id: Ibc16264ceac0e69ab4a217ffa27c549b9fa21df9
> BUG: 1349657
> Signed-off-by: Zhou Zhengping <johnzzpcrystal@gmail.com>
> Reviewed-on: http://review.gluster.org/14785
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
> Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Change-Id: Ibc16264ceac0e69ab4a217ffa27c549b9fa21df9
BUG: 1354405
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/14888
Reviewed-by: Zhou Zhengping <johnzzpcrystal@gmail.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
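The fix amounts to skipping the setsockopt() when the option was never configured (timeout == -1), since a negative value is not valid for TCP_USER_TIMEOUT. A sketch (unit handling for the option value is elided):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <sys/socket.h>

/* `timeout` comes from "transport.tcp-user-timeout" and is -1 when the
 * option was never set; applying -1 would fail with EINVAL. */
static void apply_tcp_user_timeout(int fd, int timeout)
{
#ifdef TCP_USER_TIMEOUT
    if (timeout < 0)
        return;   /* option not configured: leave the kernel default alone */

    if (setsockopt(fd, IPPROTO_TCP, TCP_USER_TIMEOUT,
                   &timeout, sizeof(timeout)) != 0)
        perror("setsockopt(TCP_USER_TIMEOUT)");
#else
    (void)fd; (void)timeout;
#endif
}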
The groupnode->gr_next pointer is not traversed upon free. This is
currently not a problem, because the pointer is never used. However the
correct way to free a groupnode should check the ->gr_next pointer and
free any of the groups that it encounters.
This problem was identified while correcting a problem with the MOUNT
protocol. The change "nfs: build exportlist with multiple groups" starts
to use ->gr_next.
This is backport of below mainline fix -
http://review.gluster.org/#/c/14666/
Change-Id: I9d04eaf4c65bdb8db136321d60e70789da1739d7
BUG: 1343287
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Signed-off-by: Bipin Kunal <bkunal@redhat.com>
Reviewed-on: http://review.gluster.org/14699
Smoke: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: bipin kunal <kunalbipin@gmail.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Backport of: http://review.gluster.org/#/c/14647/
Issue:
The upcall (cache invalidation/recall) event is sent from the bricks
to clients. In an AFR/EC setup, it can so happen that all the bricks
will send the upcall for the same event, and if AFR/EC doesn't filter
out these duplicate notifications, the logic above the cluster xlators
can fail.
Solution:
Use a transaction id to filter out duplicate notifications.
This patch adds a framework for handling duplicate notifications;
AFR/EC can build on it to dedupe the notifications.
Change-Id: I66b08e63b8799bc5932f2b2545376138a5701168
BUG: 1337638
Signed-off-by: Poornima G <pgurusid@redhat.com>
Reviewed-on: http://review.gluster.org/14648
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
While investigating gfapi memory consumption with valgrind, valgrind
reported several memory access issues.
Also see the timer 'registry' being recreated (shortly) after being
freed during teardown due to the way it's currently written.
Passing ctx as data to gf_timer_proc() is prone to memory access
issues if ctx is freed before gf_timer_proc() terminates. (And in
fact this does happen, at least in valgrind.) gf_timer_proc() doesn't
need ctx for anything, it only needs ctx->timer, so just pass that.
Nothing ever calls gf_timer_registry_init(). Nothing outside of
timer.c that is. Making it and gf_timer_proc() static.
backport mainline:
> http://review.gluster.org/14247
> BUG: 1333925
Change-Id: Ia28454dda0cf0de2fec94d76441d98c3927a906a
BUG: 1342620
Signed-off-by: Kaleb S KEITHLEY <kkeithle@redhat.com>
Reviewed-on: http://review.gluster.org/14644
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
lk_flags from the posix_lock_t structure is the primary key used to
differentiate locks as either advisory or mandatory type. During
lock migration this field is not read in getactivelk() call path.
So in order to copy the exact lock state from source to destination
it is necessary to include lk_flags within lock_migration_info_t
structure to maintain accurate state. This change also includes
minor modifications to setactivelk() call to consider lk_flags
during lock migration.
> Reviewed-on: http://review.gluster.org/14189
> Smoke: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: Susant Palai <spalai@redhat.com>
> Reviewed-by: Poornima G <pgurusid@redhat.com>
> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
(cherry picked from commit deaf8439fc42435988aae6a7b9ab681cc0d36b09)
Change-Id: I20a7b6b6a0f3bdac5734cce8a2cd2349eceff195
BUG: 1337805
Signed-off-by: Anoop C S <anoopcs@redhat.com>
Reviewed-on: http://review.gluster.org/14457
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Current port allocation to various processes (clumsy):
1023 - 1      -> client port range if bind-secure is turned on
49151 - 1024  -> fall back to this if the ports in the above case are exhausted
65535 - 1024  -> client port range if bind-insecure is on
49152 - 65535 -> brick port range
Now we have segregated the port range 0 - 65535 into the 3 ranges below:
1023 - 1      -> client port range if bind-secure is turned on
49151 - 1024  -> client port range if bind-insecure is on
(fall back to this if the ports in the above case are exhausted)
49152 - 65535 -> brick port range
So now we have a clean segregation of the port mapping.
Backport of:
> Change-Id: Ie3b4e7703e0bbeabbe0adbdd6c60d9ef78ef7c65
> BUG: 1335776
> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
> Reviewed-on: http://review.gluster.org/14326
> Tested-by: Prasanna Kumar Kalever <pkalever@redhat.com>
> Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
> Tested-by: Gluster Build System <jenkins@build.gluster.com>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
> Smoke: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
> Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Change-Id: Ie3b4e7703e0bbeabbe0adbdd6c60d9ef78ef7c65
BUG: 1337127
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Reviewed-on: http://review.gluster.org/14413
Tested-by: Prasanna Kumar Kalever <pkalever@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
| |
The use of function-local variables in the protocol state machine
caused incorrect behaviour when a partial read from the socket forced
the function to return and restart later, once more data was
available. At that point the local variables contained incorrect data.
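A minimal sketch of the fix (hypothetical names, not the actual socket.c
state machine): any progress that must survive a partial read lives in a
per-connection structure instead of in locals, so re-entering the handler
resumes exactly where it left off.

    #include <errno.h>
    #include <stddef.h>
    #include <unistd.h>

    struct conn_state {
        char   hdr[16];
        size_t hdr_read;                     /* persists across partial reads */
    };

    static int
    read_header(int fd, struct conn_state *cs)
    {
        while (cs->hdr_read < sizeof(cs->hdr)) {
            ssize_t n = read(fd, cs->hdr + cs->hdr_read,
                             sizeof(cs->hdr) - cs->hdr_read);
            if (n > 0)
                cs->hdr_read += (size_t)n;   /* remember the progress */
            else if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
                return 1;                    /* retry when readable again */
            else
                return -1;                   /* EOF or hard error */
        }
        return 0;                            /* full header available */
    }

    int
    main(void)
    {
        int p[2];
        struct conn_state cs = { .hdr_read = 0 };

        if (pipe(p) || write(p[1], "0123456789abcdef", 16) != 16)
            return 1;
        return read_header(p[0], &cs) == 0 ? 0 : 1;
    }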
> Change-Id: I4db1f4ef5c46a3d2d7f7c5328e906188c3af49e6
> BUG: 1334285
> Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
> Reviewed-on: http://review.gluster.org/14270
> Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
> Smoke: Gluster Build System <jenkins@build.gluster.com>
> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
> Tested-by: Raghavendra G <rgowdapp@redhat.com>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Change-Id: I92b7c91b4c0dfc15224aea39308c93b27028dd4f
BUG: 1334287
Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
Reviewed-on: http://review.gluster.org/14293
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
| |
Intro:
Currently glusterd maintains the portmap registry, which tracks the ports
between 49152 and 65535 that are free to use. The registry is initialized
once and updated as and when glusterd sees ports being used.
Glusterd first looks up a port marked FREE in the portmap registry, checks
that it is currently free using a connect() call, and then passes it to the
brick process, which has to bind to it.
Problem:
There is a time gap between glusterd checking the port with connect() and
the brick process actually binding to it. In that gap any other process may
occupy the port, in which case the brick will fail to bind and exit.
Case 1:
To prevent a gluster client process from occupying the port supplied by
glusterd, the client port map range was separated from the brick port map
range in http://review.gluster.org/#/c/13998/
Case 2 (handled by this patch):
To cope with a foreign process occupying the port supplied by glusterd,
this patch implements a mechanism to return the EADDRINUSE error code to
glusterd, upon which a new port is allocated and the brick process is
restarted with the newly allocated port.
Note: In case of glusterd restarts, i.e. runner_run_nowait(), there is no
way to handle Case 2, because runner_run_nowait() does not wait for the
return/exit code of the executed command (the brick process). Hence, as of
now, in that case we cannot know which error caused the brick to fail.
This patch also fixes runner_end() to perform some cleanup w.r.t. return
values.
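A minimal sketch of the retry idea (illustrative only, not glusterd's
actual code paths): the bind failure is reported as EADDRINUSE so the
caller can move on to the next candidate port instead of giving up.

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* try to bind a TCP socket on loopback:port; return 0 or -errno */
    static int
    try_bind(int port)
    {
        struct sockaddr_in addr;
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if (fd < 0)
            return -errno;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        addr.sin_port = htons((unsigned short)port);
        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            int err = errno;
            close(fd);
            return -err;                 /* -EADDRINUSE means "pick another" */
        }
        close(fd);
        return 0;
    }

    int
    main(void)
    {
        for (int port = 49152; port <= 49160; port++) {
            int rc = try_bind(port);
            if (rc == 0) {
                printf("bound port %d\n", port);
                return 0;
            }
            if (rc != -EADDRINUSE)       /* any other error is fatal */
                return 1;
            printf("port %d busy, retrying with the next one\n", port);
        }
        return 1;
    }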
Backport of:
> Change-Id: Iec52e7f5d87ce938d173f8ef16aa77fd573f2c5e
> BUG: 1322805
> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
> Reviewed-on: http://review.gluster.org/14043
> Tested-by: Prasanna Kumar Kalever <pkalever@redhat.com>
> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
> Smoke: Gluster Build System <jenkins@build.gluster.com>
> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Change-Id: Id7d8351a0082b44310177e714edc0571ad0f7195
BUG: 1333711
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Reviewed-on: http://review.gluster.org/14235
Tested-by: Prasanna Kumar Kalever <pkalever@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
| |
Problem:
When bind-insecure is 'off', all clients bind to secure ports. If all the
secure ports are exhausted, the client can no longer bind to a secure port
and instead gets a random port, which is obviously insecure. We have seen
clients obtaining ports in the range 49152-65535, which is reserved in
glusterd's pmap_registry for bricks, and this leads to port clashes between
client and brick processes.
Solution:
By defining a separate port range for clients to fall back to when the
secure ports are exhausted, we avoid most port clashes among gluster
processes. We are still prone to clashes with other non-gluster processes,
but the chances are very low; that is a separate issue which will be
handled in upcoming patches.
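A minimal sketch of the allocation order (hypothetical helper, not the real
bind-to-port logic): exhaustion of the secure range falls back to the
reserved client range and never spills into the 49152-65535 brick range.

    #include <stdio.h>

    /* walk the secure range downwards, then the reserved client fallback range */
    static int
    pick_client_port(int (*port_is_free)(int))
    {
        for (int port = 1023; port >= 1; port--)       /* secure client ports */
            if (port_is_free(port))
                return port;
        for (int port = 49151; port >= 1024; port--)   /* fallback client ports */
            if (port_is_free(port))
                return port;
        return -1;                                     /* never 49152 .. 65535 */
    }

    /* pretend the entire secure range (1 .. 1023) is already exhausted */
    static int
    stub_is_free(int port)
    {
        return port >= 1024;
    }

    int
    main(void)
    {
        /* prints 49151: the first free port in the fallback client range */
        printf("picked client port %d\n", pick_client_port(stub_is_free));
        return 0;
    }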
Backport of:
> Change-Id: Ib5ce05991aa1290ccb17f6f04ffd65caf411feaf
> BUG: 1322805
> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
> Reviewed-on: http://review.gluster.org/13998
> Smoke: Gluster Build System <jenkins@build.gluster.com>
> NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
> Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Change-Id: I2ab9608ddbefcdf5987d817c23dd066010148e19
BUG: 1333711
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Reviewed-on: http://review.gluster.org/14234
Tested-by: Prasanna Kumar Kalever <pkalever@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
| |
Change-Id: I60fe2d59c454095febce4c0fbef87a2dad9636e4
BUG: 1326085
Signed-off-by: Susant Palai <spalai@redhat.com>
Reviewed-on: http://review.gluster.org/14013
Smoke: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
| |
Change-Id: Ic2ba77a1fdd27801a6e579e04e6c0dd93cd7127b
BUG: 1326085
Signed-off-by: Susant Palai <spalai@redhat.com>
Reviewed-on: http://review.gluster.org/14011
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
| |
Change-Id: Ie38198db990f133fe163ba160cdf647e34f83f4f
BUG: 1326085
Signed-off-by: Susant Palai <spalai@redhat.com>
Reviewed-on: http://review.gluster.org/13994
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
| |
Change-Id: Ifd0ff278dcf43da064021f5c25e5dcd34347fcde
BUG: 1326085
Signed-off-by: Susant Palai <spalai@redhat.com>
Reviewed-on: http://review.gluster.org/13970
Smoke: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
|