author | Jeff Darcy <jdarcy@redhat.com> | 2014-06-17 13:42:45 +0000 |
---|---|---|
committer | Vijay Bellur <vbellur@redhat.com> | 2014-07-12 09:20:52 -0700 |
commit | 99685f18f190a73f2a46478cac0b09f4c59834b1 (patch) | |
tree | f5e787ace3038b97876425c5397e91c0a37df04d /xlators/cluster/dht/src/dht-diskusage.c | |
parent | d5ec66032ff96d7d417b5838a6bd1a047d52204c (diff) | |
dht: support heterogeneous brick sizes
Calculation of layouts now considers the size of each brick, so that
smaller bricks don't get an "unfair" share of allocations and start
returning ENOSPC while the larger bricks still have plenty of space.
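
As a rough illustration of what weighting by size means here, the sketch below divides the 32-bit hash space among bricks in proportion to their size in 1MB chunks. The names (brick_t, assign_weighted_ranges) and the structure are hypothetical stand-ins, not the actual DHT layout code:

```c
#include <stdint.h>

typedef struct {
        uint32_t chunks;   /* brick size in 1MB chunks, as in du_stats */
        uint32_t start;    /* assigned hash range [start, stop] */
        uint32_t stop;
} brick_t;

void
assign_weighted_ranges (brick_t *bricks, int count)
{
        uint64_t space  = 1ULL << 32;   /* full 32-bit hash space */
        uint64_t total  = 0;
        uint64_t cursor = 0;
        int      i;

        for (i = 0; i < count; i++)
                total += bricks[i].chunks;
        if (total == 0)
                return;

        for (i = 0; i < count; i++) {
                /* each brick's share is proportional to its chunk count */
                uint64_t share = (space * bricks[i].chunks) / total;

                bricks[i].start = (uint32_t) cursor;
                if (i == count - 1)
                        /* last brick absorbs rounding so 2^32 is covered */
                        bricks[i].stop = UINT32_MAX;
                else
                        bricks[i].stop = (uint32_t) (cursor + share - 1);
                cursor += share;
        }
}
```

A non-weighted layout would instead give every brick the same share, which is exactly how a small brick ends up returning ENOSPC early.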
The observation has been made that some clients might get ENOTCONN when
trying to fetch disk-size information, and end up calculating layouts
differently. The following meta-observations can be made.
(1) This scenario is extremely unlikely in configurations with AFR.
(2) The most likely consequence of this scenario is that some files will
be placed sub-optimally by the client with the obsolete (non-weighted)
layout. They'll still be found anyway, so this isn't a show stopper.
(3) Without this patch it's *guaranteed* that some files will be placed
sub-optimally, because any layout that fails to account for brick sizes
is sub-optimal.
(4) We shouldn't be doing fix-layout from two nodes simultaneously
anyway. That's inefficient at best. Any instances of such behavior are
separate bugs, which should be fixed separately.
(5) In the most extreme edge case, two nodes doing weighted and
non-weighted layout fixes could race and end up creating an internally
inconsistent layout. This condition is still transient; it will be
detected and repaired automatically the next time anyone fetches the
layout. (If it's not, that's also a preexisting bug that can show up in
other contexts.)
In conclusion, it's not the purpose of this patch to fix bugs elsewhere
in DHT. Its purpose is to make life incrementally better for users who
add new hardware with larger disks than the older equipment had. It's
only one part of an ongoing process to improve layout management and
repair, all the way up to support for multiple hash rings or tiering.
Change-Id: I05eb6f9eface9cdaf8622e0260c8c7f29020447f
BUG: 1114680
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: http://review.gluster.org/8093
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Diffstat (limited to 'xlators/cluster/dht/src/dht-diskusage.c')
-rw-r--r-- | xlators/cluster/dht/src/dht-diskusage.c | 27
1 file changed, 21 insertions, 6 deletions
```diff
diff --git a/xlators/cluster/dht/src/dht-diskusage.c b/xlators/cluster/dht/src/dht-diskusage.c
index 8664f550ba2..a2dc43c32aa 100644
--- a/xlators/cluster/dht/src/dht-diskusage.c
+++ b/xlators/cluster/dht/src/dht-diskusage.c
@@ -37,6 +37,8 @@ dht_du_info_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
         double percent = 0;
         double percent_inodes = 0;
         uint64_t bytes = 0;
+        uint32_t bpc;   /* blocks per chunk */
+        uint32_t chunks = 0;
 
         conf = this->private;
         prev = cookie;
@@ -50,17 +52,28 @@ dht_du_info_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
         if (statvfs && statvfs->f_blocks) {
                 percent = (statvfs->f_bavail * 100) / statvfs->f_blocks;
                 bytes = (statvfs->f_bavail * statvfs->f_frsize);
+                /*
+                 * A 32-bit count of 1MB chunks allows a maximum brick size of
+                 * ~4PB. It's possible that we could see a single local FS
+                 * bigger than that some day, but this code is likely to be
+                 * irrelevant by then. Meanwhile, it's more important to keep
+                 * the chunk size small so the layout-calculation code that
+                 * uses this value can be tested on normal machines.
+                 */
+                bpc = (1 << 20) / statvfs->f_bsize;
+                chunks = (statvfs->f_blocks + bpc - 1) / bpc;
         }
 
         if (statvfs && statvfs->f_files) {
                 percent_inodes = (statvfs->f_ffree * 100) / statvfs->f_files;
         } else {
-                /* set percent inodes to 100 for dynamically allocated inode filesystems
-                   this logic holds good so that, distribute has nothing to worry about
-                   total inodes rather let the 'create()' to be scheduled on the hashed
-                   subvol regardless of the total inodes. since we have no awareness on
-                   loosing inodes this logic fits well
-                 */
+                /*
+                 * Set percent inodes to 100 for dynamically allocated inode
+                 * filesystems. The rationale is that distribute need not
+                 * worry about total inodes; rather, let the 'create()' be
+                 * scheduled on the hashed subvol regardless of the total
+                 * inodes.
+                 */
                 percent_inodes = 100;
         }
 
@@ -71,6 +84,7 @@ dht_du_info_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
                         conf->du_stats[i].avail_percent = percent;
                         conf->du_stats[i].avail_space = bytes;
                         conf->du_stats[i].avail_inodes = percent_inodes;
+                        conf->du_stats[i].chunks = chunks;
                         gf_msg_debug (this->name, 0, "subvolume '%s': avail_percent "
                                       "is: %.2f and avail_space "
@@ -80,6 +94,7 @@ dht_du_info_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
                                       conf->du_stats[i].avail_percent,
                                       conf->du_stats[i].avail_space,
                                       conf->du_stats[i].avail_inodes);
+                        break;  /* no point in looping further */
                 }
         }
         UNLOCK (&conf->subvolume_lock);
```
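
For reference, the arithmetic behind the new comment: with 1MB chunks, a 32-bit chunk count covers up to 2^32 × 2^20 bytes ≈ 4PiB per brick. Below is a small standalone sketch of the same rounding-up calculation; statvfs on the current directory is used purely as an example, and f_bsize ≤ 1MB is assumed, as the patch assumes:

```c
#include <stdint.h>
#include <stdio.h>
#include <sys/statvfs.h>

int
main (void)
{
        struct statvfs vfs;
        uint32_t       bpc;
        uint32_t       chunks;

        if (statvfs (".", &vfs) != 0)
                return 1;

        /* Blocks per 1MB chunk; assumes f_bsize <= 1MB, as the patch does. */
        bpc = (1 << 20) / vfs.f_bsize;
        /* Round the block count up to whole chunks (ceiling divide). */
        chunks = (vfs.f_blocks + bpc - 1) / bpc;

        printf ("%llu blocks of %lu bytes => %u one-MB chunks\n",
                (unsigned long long) vfs.f_blocks,
                (unsigned long) vfs.f_bsize, (unsigned) chunks);
        return 0;
}
```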