mirror of
https://github.com/urbit/shrub.git
synced 2024-12-24 11:24:21 +03:00
vere: increase LMDB mapsize to 1TB on macOS, linux
Many ships have been observed bumping up against the existing mapsize limits. This results in a Vere crash via LMDB, which necessitates compiling a new binary with a higher mapsize if one wants to relaunch. There doesn't seem to be any serious penalty to setting this somewhere in the terabyte range, though. [1]

In cases where the mapsize exceeds the size of the disk, I infer from the LMDB docs that the database may simply be permitted to grow until it runs up against the disk limitations, which feels acceptable.

I've tested this on macOS and Linux and the binary runs without issue, despite the mapsize being set far in excess of the disks I'm running on.

[1]: https://lmdb.readthedocs.io/en/release/
parent 9241ab2ef5
commit 6e0cd4ef1a
@@ -41,15 +41,12 @@ MDB_env* u3_lmdb_init(const char* log_path)
     return 0;
   }
 
-  // TODO: Start with forty gigabytes on macOS and sixty otherwise for the
-  // maximum event log size. We'll need to do something more sophisticated for
-  // real in the long term, though.
-#ifdef U3_OS_osx
-  const size_t lmdb_mapsize = 42949672960;
-#else
-  const size_t lmdb_mapsize = 64424509440;;
-#endif
+  // Arbitrarily choosing 1TB as a "large enough" mapsize per the LMDB docs:
+  //
+  // "[..] on 64-bit there is no penalty for making this huge (say 1TB)."
+  //
+  const size_t lmdb_mapsize = 1099511627776;
 
   ret_w = mdb_env_set_mapsize(env, lmdb_mapsize);
   if (ret_w != 0) {
     u3l_log("lmdb: failed to set database size: %s\n", mdb_strerror(ret_w));