Commit messages
---
While performing the replace-brick operation, we should set the
fsid value on the new brick.
fixes: bz#1637196
Change-Id: I9e9a4962fc0c2f5dff43e4ac11767814a0c0beaf
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
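As a hedged illustration of the idea (the helper name and where
glusterd actually stores the value are assumptions, not the real
patch), the fsid of the new brick's backing filesystem can be read
with statvfs(3):

```c
#include <sys/statvfs.h>

/* Hypothetical helper (not the actual glusterd code): fetch the
 * filesystem id of the new brick's mount path so it can be recorded
 * for the replacement brick, as the fix does for replace-brick. */
static int
brick_update_fsid(const char *brick_path, unsigned long *fsid)
{
    struct statvfs buf;

    if (statvfs(brick_path, &buf) != 0)
        return -1; /* errno is set by statvfs */

    *fsid = buf.f_fsid; /* the value that must follow the new brick */
    return 0;
}
```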
---
This patch fixes an unchecked return value reported by Coverity.
CID: 1391412
Change-Id: If85f4afdf8c6d37602c62fbf4d7c730e18be81e7
updates: bz#789278
Signed-off-by: Varsha Rao <varao@redhat.com>
---
posix_update_utime_in_mdata() unconditionally logs an error if the
consistent-time-attributes feature is not enabled. This log does not
add any value, prints an incorrect errno and floods the log file.
Hence this patch removes the log message.
fixes: bz#1644129
Change-Id: I9a1f9e7ada3366d2830f18d81f16a1461040092e
Signed-off-by: Kotresh HR <khiremat@redhat.com>
---
As the key size in XDR can be anything, it can be bigger than the
'NAME_MAX' allowed in the structure, which opens the door to
denial-of-service attacks.
Fixes: CVE-2018-14653
Fixes: bz#1644756
Change-Id: I2dc5e99af27ddf44c12c94b07e51adb8674cce80
Signed-off-by: Amar Tumballi <amarts@redhat.com>
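A minimal sketch of the kind of bounds check this implies (names and
context are illustrative, not the actual fix):

```c
#include <errno.h>
#include <limits.h>
#include <string.h>

/* Illustrative check: 'key' arrives from the wire via XDR, so its
 * length is attacker-controlled. Validate it before copying into a
 * fixed NAME_MAX-sized field instead of trusting the sender. */
static int
copy_key_checked(char dst[NAME_MAX + 1], const char *key, size_t keylen)
{
    if (keylen > NAME_MAX)
        return -ENAMETOOLONG; /* refuse oversized keys, don't overflow */

    memcpy(dst, key, keylen);
    dst[keylen] = '\0';
    return 0;
}
```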
---
total_allocs of certain types of variables can reach 4 billion in a
single day depending on load, so 32 bits is not enough for it.
Also, size_t is a good variable size for one allocation, but the
sum of allocations should be 64 bits to make sure we don't
overflow the variable.
Updates: bz#1639599
Change-Id: If3b19687f94425e913a0201ae5d73661eda51f06
Signed-off-by: Amar Tumballi <amarts@redhat.com>
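The overflow risk is easy to demonstrate with a toy program
(illustrative types only): a 32-bit counter wraps after about 4.29
billion increments, while a 64-bit one effectively never does:

```c
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
    uint32_t allocs32 = UINT32_MAX; /* ~4.29 billion: one busy day */
    uint64_t allocs64 = UINT32_MAX;

    allocs32 += 1; /* wraps to 0 -- the statistic silently resets */
    allocs64 += 1; /* keeps counting */

    printf("32-bit total: %u, 64-bit total: %llu\n",
           (unsigned)allocs32, (unsigned long long)allocs64);
    return 0;
}
```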
---
There was a problem in commit 7f81067 that caused an infinite loop
when a full heal was triggered.
The previous commit was made to prevent self-heal from going idle
after a replace-brick operation. One of the changes consisted of
setting a flag to force an immediate scan of the dirty directory if
a heal on a directory succeeded (assuming it could have generated
newer entries).
However, that change was causing an issue with a full self-heal:
every time an already healed directory was checked and returned
successfully, it was also setting the flag, forcing self-heal to
start over again.
This patch fixes the issue by only setting the flag if the heal is
not full. It's assumed that a full self-heal will already traverse
all entries automatically, so there's no need to force a new scan
later.
Change-Id: Id12dbfc04e622b18183e796cc6cc87ccc30a6d55
fixes: bz#1636631
Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
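In sketch form (the flag and function names are illustrative, not the
actual AFR/EC symbols), the fix amounts to guarding the rescan flag
with the heal type:

```c
/* Illustrative sketch: only a non-full heal that succeeded on a
 * directory should force an immediate rescan of the dirty directory;
 * a full self-heal already traverses every entry, so setting the
 * flag there made it start over forever. */
static void
after_dir_heal(int heal_succeeded, int is_full_heal, int *rescan_dirty)
{
    if (heal_succeeded && !is_full_heal)
        *rescan_dirty = 1; /* newer entries may have been generated */
}
```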
---
Dereferencing NULL pointers: this, local and stbuf.
1. Replaced this->name with "dht".
2. Removed GF_VALIDATE_OR_GOTO.
3. Removed the check for "stbuf" and "this".
Updates: bz#1622665
Change-Id: Id2fb2270d5ec37b76fa2aae1f1c8dca72dcc728a
Signed-off-by: Harpreet Lalwani <hlalwani@redhat.com>
---
As lookup is not a locked fop, we cannot trust the data received
in it to be the same.
Changing the log level to DEBUG in case lookup finds any
difference.
Change-Id: I39499c44688a2455c7c6c69a798762d045d21b39
updates: bz#1640066
BUG: 1640066
Signed-off-by: Ashish Pandey <aspandey@redhat.com>
---
Fixes: bz#1637934
Change-Id: I5f95beab62bd2bdde3bbee94c308b0ad03e94379
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
---
Based on the proposal to remove a few features as they are not
actively maintained [1], remove the stripe translator from the
build. Also make sure there are no regression tests involving
the stripe translator.
[1] https://lists.gluster.org/pipermail/gluster-users/2018-July/034400.html
Note that this patch aims at removing the translator from build, and
a followup patch is needed to remove the code from repository.
Updates: bz#1364707
Change-Id: I235b305338f138e29e9f30cba65bc0dadbebbbd5
Signed-off-by: Amar Tumballi <amarts@redhat.com>
---
Geo-rep's automatic error handling does gfid conflict
resolution. But if there are ENOENT errors because the
parent is not synced to the slave, it doesn't handle them.
This patch adds the intelligence to create missing
parent directories on the slave. It can create the missing
directories up to a depth of 10.
fixes: bz#1643402
Change-Id: Ic97ed1fa5899c087e404d559e04f7963ed7bb54c
Signed-off-by: Kotresh HR <khiremat@redhat.com>
---
When 'gluster-mountbroker status' was issued, it crashed
in a corner case with "'str' object has no attribute 'get'".
Fixed the same.
fixes: bz#1643929
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Change-Id: Iaf1a937ed0136b3b2058230c75fa89a215d8a5eb
---
1. scheduler - Popen
2. syncdutils - corner case on failure
fixes: bz#1643932
Change-Id: I65af97a244a8790e976acedc2728db6ebbf2ae10
Signed-off-by: Kotresh HR <khiremat@redhat.com>
---
fixes: bz#1644164
Change-Id: I0ac5aff565b3a30d5ff25ec5a3f20e0bda424a5d
Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
---
Make Popen py2 and py3 compatible.
fixes: bz#1643935
Change-Id: Ife34cb38024dcdc0420436e7d76fd208223f9d86
Signed-off-by: Kotresh HR <khiremat@redhat.com>
---
Problem: ctx and res can be NULL.
Solution: introduced a VALIDATE_OR_GOTO statement, hence removed
the null check for ctx; added a check for res.
Updates: bz#1622665
Change-Id: Ifee4c73e260530ab44c0a34c5ff5568f38f92c94
Signed-off-by: Shwetha Acharya <sacharya@redhat.com>
---
Added a description for auth.ssl-allow
Change-Id: I50cd7c738007c3d7a1b333dae62dbb5e46a7ee67
fixes: bz#1643349
Signed-off-by: Harpreet Kaur Lalwani <hlalwani@redhat.com>
---
lvm2(-devel) 2.03.00 no longer has liblvm2app.so. (I expect a
similar change in fedora-30 before too much longer, but for
now fedora-30 still has lvm2 and lvm2-devel 2.02.181.)
rpcgen has been removed from glibc-common, and an unbundled rpcgen
is now required.
And I guess nobody has ever built rpms with '--without bd' or we
would have discovered the attempted inclusion of .../storage/bd.so
in the rpm when it hadn't actually been built.
Change-Id: I71e26c3d06af5d329ae89cc249a4ad88664ddf53
updates: bz#1193929
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
---
This allows the exact 'fop' type to be logged as a string, not a
number, during backtraces.
Since this was not done on brick processes, we had no easy way to
glance at a dump and understand which fops were pending.
What gets changed:
After a crash, most of the core-dumps logged were of the form:
```
pending frames:
frame : type(0) op(18)
frame : type(0) op(18)
frame : type(0) op(28)
```
would change to
```
pending frames:
frame : type(1) op(SETXATTR)
frame : type(1) op(SETXATTR)
frame : type(1) op(READDIR)
```
updates: bz#1639599
Change-Id: I0e3d2a8dee9cfde7ed0112a948f5213f546efb80
Signed-off-by: Amar Tumballi <amarts@redhat.com>
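Conceptually (the table below is abridged and illustrative; glusterfs
keeps its own full fop-name list), the change maps the numeric op
through a name table when dumping pending frames:

```c
#include <stdio.h>

/* Abridged, illustrative name table; glusterfs maintains a complete
 * list internally (one entry per fop value). */
static const char *fop_names[] = {
    "NULL", "STAT", "READLINK", "MKNOD", /* ... */
};

/* Print one pending frame, preferring the symbolic fop name. */
static void
dump_pending_frame(FILE *fp, int type, int op)
{
    size_t n = sizeof(fop_names) / sizeof(fop_names[0]);

    if (op >= 0 && (size_t)op < n)
        fprintf(fp, "frame : type(%d) op(%s)\n", type, fop_names[op]);
    else
        fprintf(fp, "frame : type(%d) op(%d)\n", type, op); /* fallback */
}
```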
---
Problem:
During gfid-conflict-resolution, geo-rep crashes
with 'ValueError: list.remove(x): x not in list'
Cause and Analysis:
During gfid-conflict-resolution, the entry blob is
passed back to the master along with additional
information to verify its integrity. If everything
looks fine, the entry creation is ignored and the entry
is deleted from the original list. But it crashes while
removing the entry from the list, saying the entry is
not in the list. The reason is that the stat information
in the entry blob was modified and sent back to the
master if present.
Fix:
Send back the correct stat information for
gfid-conflict-resolution.
fixes: bz#1642865
Change-Id: I47a6aa60b2a495465aa9314eebcb4085f0b1c4fd
Signed-off-by: Kotresh HR <khiremat@redhat.com>
---
Problem:
Intentional and unintentional socket closures cannot be identified
Solution:
Log intentional socket closures with at least INFO log level
Change-Id: Ic02c882b16ab2193e57f8c3e6c3a82c4fe0f6875
fixes: bz#1642800
Signed-off-by: Milind Changire <mchangir@redhat.com>
---
Added a check for "top"
Updates: bz#1622665
Change-Id: I354fdc7150b2f1eb452702ddb653e2d6ed609c10
Signed-off-by: Harpreet Lalwani <hlalwani@redhat.com>
---
ctx->active can be null, and is checked elsewhere in the
same function. In another case, where 'ctx->active' gets
dereferenced, it needs to be validated before the loop
is hit.
Updates: bz#1622665
Change-Id: I4ec917e96c0756586fc7a74c76848bb9589a0293
Signed-off-by: Amar Tumballi <amarts@redhat.com>
---
For lease operation, we allocate and store child nodes
data in lease structure. Use the same in afr_lease_cbk()
while checking for the quorum.
Change-Id: If1fdd5a0798888afd39ad3df57d96487baf9d1e6
updates: #350
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
---
Problem:
Data center setups with a large number of bricks and replication
cause a flood of connections from bricks and self-heal daemons
to glusterd, causing connections to be dropped due to an
insufficient listener socket backlog queue length.
Solution:
Raise the default value of transport.listen-backlog to 1024.
Change-Id: I879e4161a88f1e30875046dff232499a8e2e6c51
fixes: bz#1642850
Signed-off-by: Milind Changire <mchangir@redhat.com>
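The backlog in question is the standard listen(2) parameter; a
minimal sketch of where the new default applies (socket setup
elided):

```c
#include <sys/socket.h>

/* Illustrative: the backlog argument of listen(2) caps how many
 * established connections may queue before accept(2) drains them.
 * With many bricks and self-heal daemons reconnecting at once, a
 * small backlog makes the kernel drop connections; 1024 gives
 * glusterd ample headroom. */
int
start_listening(int sockfd)
{
    return listen(sockfd, 1024);
}
```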
---
2 domain locking + xattrop for write-txn failures:
--------------------------------------------------
- A post-op wound on TA takes AFR_TA_DOM_NOTIFY range lock and
AFR_TA_DOM_MODIFY full lock, does xattrop on TA and releases
AFR_TA_DOM_MODIFY lock and stores in-memory which brick is bad.
- All further write txn failures are handled based on this in-memory
value without querying the TA.
- When shd heals the files, it does so by requesting full lock on
AFR_TA_DOM_NOTIFY domain. Client uses this as a cue (via upcall),
releases AFR_TA_DOM_NOTIFY range lock and invalidates its in-memory
notion of which brick is bad. The next write txn failure is wound on TA
to again update the in-memory state.
- Any incomplete write txns that arrived before the AFR_TA_DOM_NOTIFY
upcall release request are completed before the lock is released.
- Any write txns received after the release request are kept in a
ta_waitq.
- After the release is complete, the ta_waitq elements are spliced to a
separate queue which is then processed one by one.
- For fops that come in parallel when the in-memory bad brick is still
unknown, only one is wound to TA on wire. The other ones are maintained
in a ta_onwireq which is then processed after we get the response from
TA.
Change-Id: I32c7b61a61776663601ab0040e2f0767eca1fd64
updates: bz#1579788
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Signed-off-by: Ashish Pandey <aspandey@redhat.com>
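A condensed, illustrative walk-through of the failure path described
above (the names are not the actual AFR symbols):

```c
#include <stdio.h>

/* Illustrative flow: the first write-txn failure is wound to the
 * thin-arbiter (TA) under the MODIFY lock and the bad brick is
 * cached in memory; later failures use the cache until an upcall on
 * the NOTIFY lock invalidates it. */
static int cached_bad = -1; /* -1: unknown, else index of bad child */

static void
on_write_txn_failure(int failed_child)
{
    if (cached_bad != -1) {
        /* Fast path: decide from in-memory state, no TA round trip. */
        printf("child %d already known bad\n", cached_bad);
        return;
    }
    /* Slow path: exactly one txn goes to the TA on the wire; parallel
     * failures wait (in ta_onwireq) until this response arrives. */
    printf("xattrop on TA, marking child %d bad\n", failed_child);
    cached_bad = failed_child;
}

static void
on_notify_upcall(void)
{
    /* SHD healed the files and requested the NOTIFY lock: release it
     * and forget the in-memory notion of which brick is bad. */
    cached_bad = -1;
}

int
main(void)
{
    on_write_txn_failure(1); /* slow path: query TA */
    on_write_txn_failure(1); /* fast path: cached */
    on_notify_upcall();      /* heal done, cache invalidated */
    on_write_txn_failure(0); /* slow path again */
    return 0;
}
```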
---
With commit febf5ed4848, during the volume create op,
we set volinfo->caps to 0 only if any of the bricks
belongs to the same node and brickinfo->vg[0] is null.
Previously, we used to set volinfo->caps to 0 when
either a brick doesn't belong to the same node or brickinfo->vg[0]
is null.
With this patch, we set volinfo->caps to 0 when either a brick
doesn't belong to the same node or brickinfo->vg[0] is null
(as we did before commit febf5ed4848).
fixes: bz#1635820
Change-Id: I00a97415786b775fb088ac45566ad52b402f1a49
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
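The restored condition reads, in sketch form (structure names
abridged and illustrative):

```c
#include <stddef.h>

/* Illustrative: clear the caps when ANY brick either lives on
 * another node or has no VG set -- the behaviour before commit
 * febf5ed4848. The regressed code required both conditions on the
 * same brick at once. */
static int
compute_caps(int n_bricks, const int is_local[], const char *const vg[],
             int default_caps)
{
    for (int i = 0; i < n_bricks; i++) {
        if (!is_local[i] || vg[i] == NULL || vg[i][0] == '\0')
            return 0; /* volume cannot claim BD capabilities */
    }
    return default_caps;
}
```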
---
This patch fixes CID: 1356526 and 1382369 : Argument cannot be negative
Change-Id: I1aab5be2d217479db9f67a26b62854a0b38c1747
updates: bz#789278
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
---
Patch https://review.gluster.org/#/c/glusterfs/+/19135/ has
optimised glusterd test cases by clubbing the similar test
cases into a single test case.
https://review.gluster.org/#/c/glusterfs/+/19135/15/tests/bugs/glusterd/bug-1293414-import-brickinfo-uuid.t
test case has been deleted and added as a part of
tests/bugs/glusterd/optimized-basic-testcases-in-cluster.t
In the original test case, we create a volume with two bricks,
each on a separate node (N1 & N2). From another node in the cluster
(N3), we try to detach a node which is hosting bricks. It fails.
In the new test, we created a volume with a single brick on N1,
and from another node in the cluster we tried to detach N1. We
expected peer detach to fail, but peer detach succeeded as
the node is hosting all the bricks of the volume.
Now, the new test case is changed to cover the original test case
scenario.
Please refer https://bugzilla.redhat.com/show_bug.cgi?id=1642597#c1 to
understand why the new test case is not failing in centos-regression.
fixes: bz#1642597
Change-Id: Ifda12b5677143095f263fbb97a6808573f513234
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
---
Removed VALIDATE_OR_GOTO check on "this"
Updates: bz#1622665
Change-Id: Ie0d74525901ebf9daa1a5e788a035db6dc5d8c06
Signed-off-by: Sheetal Pamecha <sheetal.pamecha08@gmail.com>
---
This patch fixes CID: 1382374: USE_AFTER_FREE.
Change-Id: If408f52ee291312fb83095126ebd6bb79ae95e26
updates: bz#789278
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
---
This patch fixes CID:
1394664 : CHECKED_RETURN
1356534 : Macro compares unsigned to 0 (NO_EFFECT)
1356532 : Macro compares unsigned to 0 (NO_EFFECT)
updates: bz#789278
Change-Id: I04d64fd8c007627611710dc56109b76eeb59333a
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
---
Problem:
EC volumes can be created without any redundant brick.
Solution:
Updated the conditional check to prevent volume creation without a
redundant brick.
fixes: bz#1642448
Change-Id: I0cb334b1b9378d67fcb8abf793dbe312c3179c0b
Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
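In sketch form (not the actual glusterd check), the validation being
tightened:

```c
#include <stdio.h>

/* Illustrative validation: for a disperse volume,
 * redundancy = disperse-count - data-count. With redundancy 0 the
 * volume cannot survive any brick failure, so refuse the create. */
static int
validate_disperse_counts(int disperse_count, int data_count)
{
    int redundancy = disperse_count - data_count;

    if (redundancy < 1) {
        fprintf(stderr, "need at least one redundant brick (got %d)\n",
                redundancy);
        return -1;
    }
    return 0;
}
```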
---
Change the check condition from
"[[:space:]+]${mount_point}[[:space:]+]fuse" to
"[[:space:]+]${mount_point}[[:space:]+]fuse.glusterfs" to fix false
positive check results for mount points of other FUSE filesystems,
such as "fuse.sshfs".
Change-Id: I13898b50a651a8f5ecc3a94d01b3b5de37ec4cbc
fixes: bz#1640026
Signed-off-by: Han Han <hhan@redhat.com>
---
Glusterfs leases expect a lease_id to be set and sent
for each fop to determine conflict resolution with the
existing lease.
In case it is not set (most likely when there is an older
client in a mixed cluster), it makes sense to consider the
fop as conflicting and recall the lease.
Also fixed the return status check for __remove_lease(),
wherein a non-negative value is considered success.
Change-Id: I5bcfba4f7c71a5af7cdedeb03436d0b818e85783
updates: #350
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
---
The patch fixes CID: 1325520
Change-Id: Ic7d3fac6adabe96d1d44f13b57d6dc67da0476d1
updates: bz#789278
Signed-off-by: Arjun <arjsharm@redhat.com>
---
Problem: In a brick-mux environment, when a user enables
         brick-log-level for any one volume, it is automatically
         enabled for the other volumes attached to the same brick
         as well.
Solution: The log-level option is automatically enabled for other
          volumes because the log level is saved in glusterfsd_ctx
          and the ctx is common for all volumes attached to the same
          brick. To resolve it, set the log level for all children
          xlators at the time of graph reconfigure in the io-stats
          xlator.
Change-Id: Id9a6efa05d286e0bea2d47f49292d084e7bb2fcf
fixes: bz#1640495
Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
---
dict_get_str_boolean expects an integer, so we need
to set all the boolean variables as integers to
avoid log messages like the one below:
[2018-09-10 03:55:19.236387] I [dict.c:2838:dict_get_str_boolean] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_reconnect+0xc2) [0x7ff7a83d0452] -->/usr/local/lib/glusterfs/4.2dev/rpc-transport/socket.so(+0x65b0) [0x7ff7a06cf5b0] -->/usr/local/lib/libglusterfs.so.0(dict_get_str_boolean+0xcf) [0x7ff7a85fc58f] ) 0-dict: key transport.socket.ignore-enoent, integer type asked, has string type [Invalid argument]
This patch addresses all such instances in glusterd.
Change-Id: I7e1979fcf381363943f4d09b94c3901c403727da
updates: bz#1193929
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
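In sketch form, using the libglusterfs dict API the message refers to
(call signatures and header path quoted from memory; treat them as
assumptions):

```c
/* Assumed libglusterfs dict API -- verify against your tree; the
 * header path varies across versions. */
#include <glusterfs/dict.h>

/* Storing a boolean option as a string makes dict_get_str_boolean()
 * log "integer type asked, has string type"; storing it as an
 * integer keeps the log quiet. */
static int
set_ignore_enoent(dict_t *options)
{
    int ret = dict_set_int32(options, "transport.socket.ignore-enoent", 1);

    if (ret)
        return ret;
    /* Reads back cleanly now, no warning in the log: */
    return dict_get_str_boolean(options, "transport.socket.ignore-enoent", 0);
}
```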
---
GF_MALLOC was being used to allocate memory that is not
initialized. strcat is used on it, which could result in a
buffer overflow if the buffer contains garbage before '\0'.
So changed it to GF_CALLOC.
Traceback:
==23427==ERROR: AddressSanitizer: heap-buffer-overflow ...
WRITE of size 5 at 0x6080000083fe thread T3
#0 0x7fb60966991c in __interceptor_strcat ...
#1 0x48adc0 in config_parse ...
#2 0x48cde8 in cli_cmd_gsync_set_parse ...
...
Updates: bz#1633930
Change-Id: I3710f011d8139984b1898265d84d150c9bdc962b
Signed-off-by: Kotresh HR <khiremat@redhat.com>
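The underlying pitfall, shown in plain libc terms (GF_MALLOC and
GF_CALLOC are accounting wrappers over malloc and calloc):

```c
#include <stdlib.h>
#include <string.h>

/* strcat() must find a terminating '\0' in the destination. malloc'd
 * memory is uninitialized, so strcat() can run past the buffer --
 * the ASan heap-buffer-overflow above. calloc zero-fills, giving
 * strcat() a valid empty string to append to. */
char *
join_strings(const char *a, const char *b)
{
    size_t len = strlen(a) + strlen(b) + 1;
    char *buf = calloc(1, len); /* buf[0] == '\0' from the start */

    if (buf == NULL)
        return NULL;
    strcat(buf, a);
    strcat(buf, b);
    return buf;
}
```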
---
Problem:
Currently for a replica volume, even if only one brick is UP,
SHD will keep crawling index entries even though it cannot
heal anything.
In a thin-arbiter volume, which is also a replica 2 volume,
this causes inode lock contention, which in turn sends
upcalls to all the clients to release notify locks, even
though it cannot do anything for healing.
This slows down client performance and defeats the
purpose of keeping in-memory information about the bad brick.
Solution: Before starting heal or even crawling, check if a
sufficient number of children are UP and available to check
and heal entries.
Change-Id: I011c9da3b37cae275f791affd56b8f1c1ac9255d
updates: bz#1640581
Signed-off-by: Ashish Pandey <aspandey@redhat.com>
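An illustrative version of the guard (not the actual SHD symbols):

```c
#include <stdio.h>

/* Count the children that are UP before crawling the index; with too
 * few up, no heal can happen and the crawl only creates lock
 * contention and upcalls. */
static int
shd_should_crawl(const int child_up[], int n_children, int min_needed)
{
    int up = 0;

    for (int i = 0; i < n_children; i++)
        if (child_up[i])
            up++;

    return up >= min_needed;
}

int
main(void)
{
    int child_up[2] = {1, 0}; /* replica 2 with one brick down */

    if (!shd_should_crawl(child_up, 2, 2))
        puts("skip index crawl: nothing can be healed");
    return 0;
}
```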
---
Change-Id: I5f0667a47ddd24cb00949c875c19f3d1dbd8d603
fixes: bz#1605077
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
---
trav->saved_at.tv_sec is not initialized.
Calling "list_empty" function before initializing "trav".
Updates: bz#1622665
Change-Id: Ib5c2703a07a9c56ccd115001aca500f7a23c4a2e
Signed-off-by: Harpreet Lalwani <hlalwani@redhat.com>
---
Ganesha always operates on files by filehandle, which translates
to many glusterfs stat/fstat calls.
Change-Id: Idd0dc33c31131331ac948754c8b7f898777c31d3
Updates: bz#1634220
Signed-off-by: Kinglong Mee <mijinlong@open-fs.com>
---
Problem:
https://review.gluster.org/#/c/glusterfs/+/21427/ seems to be failing
this .t spuriously. On checking one of the failure logs, I see:
22:05:44 Launching heal operation to perform index self heal on volume patchy has been unsuccessful:
22:05:44 Self-heal daemon is not running. Check self-heal daemon log file.
22:05:44 not ok 20 , LINENUM:38
In glusterd log:
[2018-10-18 22:05:44.298832] E [MSGID: 106301] [glusterd-syncop.c:1352:gd_stage_op_phase] 0-management: Staging of operation 'Volume Heal' failed on localhost : Self-heal daemon is not running. Check self-heal daemon log file
But the tests which precede this one check, via a statedump, whether
the shd is connected to the bricks, and they have succeeded and even
started healing. From glustershd.log:
[2018-10-18 22:05:40.975268] I [MSGID: 108026] [afr-self-heal-common.c:1732:afr_log_selfheal] 0-patchy-replicate-0: Completed data selfheal on 3b83d2dd-4cf2-4ea3-a33e-4275be40f440. sources=[0] 1 sinks=2
So the only reason I can see for the heal launch via CLI failing is a
race where shd has been spawned but glusterd has not yet updated
in-memory that it is up, and hence fails the CLI.
Fix:
Check for shd up status before launching heal via CLI
Change-Id: Ic88abf14ad3d51c89cb438db601fae4df179e8f4
fixes: bz#1641344
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
---
Change-Id: I0730a037f96c4386c72ecf2f61c71ec17ffbc1b0
Updates: bz#1634220
Signed-off-by: Kinglong Mee <mijinlong@open-fs.com>
---
Change-Id: I52f8e13e68528ba9679537ffdddf58ec08f9fd0c
Updates: bz#1634220
Signed-off-by: Kinglong Mee <mijinlong@open-fs.com>
---
Fixes: 124759 1288787
Change-Id: Ib8999242fc3ea5f4ea80246659899d2d4f06c506
updates: bz#789278
Signed-off-by: Bhumika Goyal <bgoyal@redhat.com>
---
This patch fixes the following coverity issues:
CID: 1396101, 1396102 - Dereference null return value.
Change-Id: I7ec783a61c06a1378863e974ff6e0baae418aec2
updates: bz#789278
Signed-off-by: Varsha Rao <varao@redhat.com>
---
This patch fixes CID 1124651
Change-Id: I6f33954f08cfdd7cb4236f9a81ec7980f81d19e7
updates: bz#789278
Signed-off-by: Arjun <arjsharm@redhat.com>
---
Fixes CID: 1382442 1382415 1382379 1382355
Change-Id: Ia712e37cb5a6db452d3178386394f87f83b85d38
updates: bz#789278
Signed-off-by: Bhumika Goyal <bgoyal@redhat.com>