Summary:
Add benchmarks about index sizes, and a benchmark of insertion using key
references.
An example `cargo bench` result running on my devserver looks like:
index insertion (owned key) 3.551 ms
index insertion (referred key) 3.713 ms
index flush 20.648 ms
index lookup (memory) 1.087 ms
index lookup (disk, no verify) 2.041 ms
index lookup (disk, verified) 4.347 ms
index size (owned key) 886010
index size (referred key) 534298
Reviewed By: markbt
Differential Revision: D13027879
fbshipit-source-id: 70644c504026ffee2122d857d5035f5b7eea4f42
Summary:
For checksum values like xxhash, there is no benefit to using big endian. Switch
to little endian so it's slightly faster on the major platforms we care about.
This is a breaking change. However, the format is not used in production yet.
So there is no migration code.
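The switch amounts to choosing the little-endian byte-order conversion when serializing the checksum. A minimal sketch using only std (illustrative, not the actual crate code):

```rust
// Serialize and deserialize a checksum in little-endian byte order.
fn write_checksum_le(buf: &mut Vec<u8>, xxhash: u64) {
    // On x86-64 and aarch64 (both little-endian), to_le_bytes is a plain
    // copy; to_be_bytes would require a byte swap.
    buf.extend_from_slice(&xxhash.to_le_bytes());
}

fn read_checksum_le(buf: &[u8]) -> u64 {
    let mut bytes = [0u8; 8];
    bytes.copy_from_slice(&buf[..8]);
    u64::from_le_bytes(bytes)
}

fn main() {
    let mut buf = Vec::new();
    write_checksum_le(&mut buf, 0x1122_3344_5566_7788);
    assert_eq!(buf[0], 0x88); // least-significant byte comes first
    assert_eq!(read_checksum_le(&buf), 0x1122_3344_5566_7788);
}
```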
Reviewed By: markbt
Differential Revision: D13015465
fbshipit-source-id: ca83d19b3328370d089b03a33e848e64b728ef2a
Summary:
Previously, the format of a Log entry was hard-coded: length, xxhash, and
content. The xxhash always takes 8 bytes.
For small (ex. 40-byte) entries, xxhash32 is actually faster and takes less
disk space.
Introduce the "entry flags" concept so we can store some metadata about which
checksum function to use. The concept could potentially be used to support
other per-entry format changes in the future.
While we're here, also support data without checksums. That can be useful for
content with its own checksum, like a blob store with its own SHA1 integrity
check.
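The flag idea can be sketched as follows. The flag names and values here are illustrative assumptions, not the actual on-disk format:

```rust
// Hypothetical per-entry flag bits selecting the checksum function.
const ENTRY_FLAG_HAS_XXHASH64: u8 = 1;
const ENTRY_FLAG_HAS_XXHASH32: u8 = 2;

/// Number of checksum bytes an entry stores, derived from its flags.
/// Flags == 0 means no checksum (ex. content with its own SHA1 check).
fn checksum_len(flags: u8) -> usize {
    if flags & ENTRY_FLAG_HAS_XXHASH64 != 0 {
        8
    } else if flags & ENTRY_FLAG_HAS_XXHASH32 != 0 {
        4
    } else {
        0
    }
}

fn main() {
    // Small entries can pick the cheaper 4-byte xxhash32.
    assert_eq!(checksum_len(ENTRY_FLAG_HAS_XXHASH32), 4);
    assert_eq!(checksum_len(ENTRY_FLAG_HAS_XXHASH64), 8);
    assert_eq!(checksum_len(0), 0); // checksum-free entries
}
```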
Performance-wise, log insertion is slower (but the majority of the insertion
overhead is on the index part), and iteration is a little bit faster, perhaps
because the log uses less data.
Before:
log insertion 15.874 ms
log iteration (memory) 6.778 ms
log iteration (disk) 6.830 ms
After:
log insertion 18.114 ms
log iteration (memory) 6.403 ms
log iteration (disk) 6.307 ms
Reviewed By: DurhamG, markbt
Differential Revision: D13051386
fbshipit-source-id: 629c251633ecf85058ee7c3ce7a9f576dfac7bdf
Summary:
Xxhash results won't usually have leading zeros, so VLQ encoding is not an
efficient choice. Use non-VLQ encoding instead.
Performance-wise, this is noticeably faster than before:
log insertion 14.161 ms
log insertion with index 102.724 ms
log flush 11.336 ms
log iteration (memory) 6.351 ms
log iteration (disk) 7.922 ms
10.18s user 3.66s system 97% cpu 14.218 total
log insertion 13.377 ms
log insertion with index 97.422 ms
log flush 11.792 ms
log iteration (memory) 6.890 ms
log iteration (disk) 7.139 ms
10.20s user 3.56s system 97% cpu 14.117 total
log insertion 14.573 ms
log insertion with index 94.216 ms
log flush 18.993 ms
log iteration (memory) 7.867 ms
log iteration (disk) 7.567 ms
9.85s user 3.73s system 96% cpu 14.073 total
log insertion 15.526 ms
log insertion with index 98.868 ms
log flush 19.600 ms
log iteration (memory) 7.533 ms
log iteration (disk) 7.150 ms
10.13s user 4.02s system 96% cpu 14.647 total
log insertion 14.629 ms
log insertion with index 100.449 ms
log flush 20.997 ms
log iteration (memory) 7.299 ms
log iteration (disk) 7.518 ms
10.14s user 3.65s system 96% cpu 14.274 total
This is a format-breaking change. Fortunately we haven't really used the old
format in production yet.
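The reason VLQ loses on hash values: it spends 7 payload bits per byte, so a value with no leading zero bits needs 10 bytes instead of a fixed-width 8. A minimal VLQ encoder sketch (not the crate's actual code) demonstrates this:

```rust
// Minimal little-endian-group VLQ encoder: 7 payload bits per byte,
// high bit set on all but the last byte.
fn vlq_encode(mut v: u64) -> Vec<u8> {
    let mut out = Vec::new();
    loop {
        let byte = (v & 0x7f) as u8;
        v >>= 7;
        if v == 0 {
            out.push(byte);
            break;
        } else {
            out.push(byte | 0x80);
        }
    }
    out
}

fn main() {
    // A typical xxhash output has its high bits set: 64 significant bits
    // need ceil(64 / 7) = 10 VLQ bytes, versus 8 bytes fixed-width.
    let hash: u64 = 0xfedc_ba98_7654_3210;
    assert_eq!(vlq_encode(hash).len(), 10);
    // VLQ only wins for small values.
    assert_eq!(vlq_encode(0x7f).len(), 1);
}
```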
Reviewed By: DurhamG, markbt
Differential Revision: D13015463
fbshipit-source-id: 6e7e4f7a845ea8dbf0904b3902740b65cc7467d5
Summary:
Some simple benchmarks for "log". The initial result running on my devserver
looks like:
log insertion 33.146 ms
log insertion with index 106.449 ms
log flush 9.623 ms
log iteration (memory) 10.644 ms
log iteration (disk) 11.517 ms
13.75s user 3.61s system 97% cpu 17.778 total
log insertion 27.906 ms
log insertion with index 107.683 ms
log flush 19.204 ms
log iteration (memory) 10.239 ms
log iteration (disk) 11.118 ms
12.89s user 3.55s system 97% cpu 16.924 total
log insertion 31.645 ms
log insertion with index 109.403 ms
log flush 9.416 ms
log iteration (memory) 10.226 ms
log iteration (disk) 10.757 ms
13.07s user 3.02s system 97% cpu 16.423 total
log insertion 31.848 ms
log insertion with index 109.332 ms
log flush 18.345 ms
log iteration (memory) 10.709 ms
log iteration (disk) 11.346 ms
13.12s user 3.70s system 97% cpu 17.276 total
log insertion 29.665 ms
log insertion with index 106.041 ms
log flush 16.159 ms
log iteration (memory) 10.367 ms
log iteration (disk) 11.110 ms
12.99s user 3.27s system 97% cpu 16.717 total
Reviewed By: markbt
Differential Revision: D13015464
fbshipit-source-id: 035fee6c8b6d0bea4cfe194eed3d58ba4b5ebcb8
Summary:
An upcoming diff will need the ability to iterate over all the keys in
the store. So let's expose that functionality.
Reviewed By: quark-zju
Differential Revision: D13062575
fbshipit-source-id: a173fcdbbf44e2d3f09f7229266cca6f3e67944b
Summary:
You can currently iterate over indexlog entries, but there's no way to
iterate over the keys without keeping a copy of the index function with you.
Let's add a key iterator function.
Reviewed By: quark-zju
Differential Revision: D13010744
fbshipit-source-id: 1fcaf959ae82417e5cbafae7c1927c3ae8f8e76a
Summary:
Turn the BookmarkStore Rust implementation into an indexed-log backed one.
Note that this no longer matches the existing mercurial bookmark store
disk representation.
Reviewed By: DurhamG
Differential Revision: D13133605
fbshipit-source-id: 2e0a27738bcec607892b0edab6f759116929c8e1
Summary:
This is done by running `fix-code.py`. Note that those strings are semvers,
so they do not pin down the exact version. An API-compatible upgrade is still
possible.
Reviewed By: ikostia
Differential Revision: D10213073
fbshipit-source-id: 82f90766fb7e02cdeb6615ae3cb7212d928ed48d
Summary:
The "misc" benchmark requires the base16 module to be public. It was made
private in a previous change. Let's make it public again so the benchmark can
run.
Reviewed By: singhsrb
Differential Revision: D13015031
fbshipit-source-id: 0dc1542803aae290de26651e367898eebfc95e83
Summary: It needs to be Send to be used in cpython.
Reviewed By: ikostia
Differential Revision: D10250289
fbshipit-source-id: ea57e356a0752764e50db9b6872b5cc4a456303f
Summary:
Make the documentation more detailed for public APIs. Hide overly detailed
information (file format).
Reviewed By: DurhamG
Differential Revision: D10250140
fbshipit-source-id: d9d9af9d67984b80f07db13e69bbffdf77e6a30e
Summary:
The log module is the "entry point" of other features. Update it so things are
more detailed. I tried to make it more friendly for people without knowledge
about the implementation details.
This could probably be further improved by adding some examples. For now, I'm
focusing on the plain English parts.
To reviewers: let me know how it reads assuming no prior knowledge of the
implementation. Suggestions to make sentences shorter and more natural to
native speakers, without losing important information, are also very welcome.
Reviewed By: DurhamG
Differential Revision: D10250141
fbshipit-source-id: 35258c7197c1ce0a1d3d0554fab2f2d2866e123c
Summary:
Make important modules public. Make internal utility (base16) private. Add
some text to the crate-level document. It just refers to important structures.
Will revise document of those structures.
Reviewed By: DurhamG, kulshrax
Differential Revision: D10250143
fbshipit-source-id: c79859ee7d3d9cc4ee9a093ef5d12ec6599f2a42
Summary: This is just the result of running `./contrib/fix-code.py $(hg files .)`
Reviewed By: ikostia
Differential Revision: D10213075
fbshipit-source-id: 88577c9b9588a5b44fcf1fe6f0082815dfeb363a
Summary:
The code block is not a valid Rust program. Mark it as "plain".
This fixes `cargo doc`.
Reviewed By: markbt
Differential Revision: D10137806
fbshipit-source-id: 1197d3a2ebc1450a0738686fa6cfa7c7b79dcb0d
Summary:
The primary log and indexes could be out of sync when mutating the indexes
errors out. In that case, mark the indexes as "corrupted" and refuse to
perform index read (lookup) operations, for correctness.
Reviewed By: DurhamG
Differential Revision: D8337689
fbshipit-source-id: 3db9006ea03cfcaba52391f189aa697944b616e5
Summary:
This demonstrates that index definitions can be in different orders; as long
as their names do not change, things still work.
Reviewed By: DurhamG
Differential Revision: D8337688
fbshipit-source-id: 2fbbdf711d8edc10fc6d3314532390ea712aca6c
Summary:
This allows us to store arbitrary metadata in the root node. It will be used
by the `Log` structure to store how many bytes the index covers.
Reviewed By: DurhamG
Differential Revision: D8337687
fbshipit-source-id: 159a89d66765fc251a486fd62c1ffd01f625b503
Summary: Implement the dependencies of the "open" public API.
Reviewed By: DurhamG
Differential Revision: D8156518
fbshipit-source-id: 9fed441f520a3b74cbef5bfb815c82943c615fdf
Summary:
The read_entry function takes care of reading an entry from a given offset,
and returns internal stats like the real data offset (skipping the length and
checksum metadata) and the next entry offset.
It does integrity checks and handles offsets for both in-memory and on-disk
buffers. The offsets to in-memory entries are fairly simple - they start
from "meta.primary_len" instead of a fixed reserved value. This makes
"next_offset" work seamlessly.
The public API won't have "offset" exposed, so the API is private.
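The single offset space described above can be sketched as follows (illustrative, not the actual implementation): offsets below `primary_len` point into the on-disk buffer, offsets at or above it point into the in-memory tail awaiting flush.

```rust
// Resolve one byte from an offset that spans both buffers.
fn resolve(disk: &[u8], mem: &[u8], primary_len: usize, offset: usize) -> u8 {
    if offset < primary_len {
        disk[offset] // flushed, on-disk (mmap-ed) data
    } else {
        mem[offset - primary_len] // appended since the last flush
    }
}

fn main() {
    let disk = [10u8, 11, 12]; // flushed bytes
    let mem = [20u8, 21];      // in-memory bytes
    let primary_len = disk.len(); // plays the role of meta.primary_len
    assert_eq!(resolve(&disk, &mem, primary_len, 1), 11); // on-disk read
    assert_eq!(resolve(&disk, &mem, primary_len, 3), 20); // in-memory read
}
```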
Reviewed By: DurhamG
Differential Revision: D8156513
fbshipit-source-id: 8661f2f2757de6f3f94defc64f4a8dd5261973b2
Summary:
Partially implement the open, append, flush, and lookup APIs. This shows how
things work in general: how locking works, what's in-memory and what's
on-disk, etc.
Reviewed By: DurhamG
Differential Revision: D8156514
fbshipit-source-id: 2de23dcde2f63895f3f3e4f67057aa9520fdfa34
Summary: Implemented as the file format specification added by the previous diff.
Reviewed By: DurhamG
Differential Revision: D8156516
fbshipit-source-id: 7153932b9442b3ab5bdb81490f88c40346128afc
Summary: The public interface and its dependencies.
Reviewed By: DurhamG
Differential Revision: D8156509
fbshipit-source-id: c6f3e4b88851683a5d8804b80f689282e3f582d4
Summary:
Without this change, code doing `index.get(...).values().collect()` might
end up in an infinite loop.
Reviewed By: DurhamG
Differential Revision: D8156510
fbshipit-source-id: 5497aa354de7d49cfc4308a025856608ce981a1e
Summary:
Previously, the index API optionally took a root offset. This is
inconvenient for the caller, who would probably need to record both the
valid file length and the root offset. Since root nodes are always at
the end of the index, let's just simplify the API to take a logical
file length instead of a root offset.
Reviewed By: DurhamG
Differential Revision: D8156512
fbshipit-source-id: 7029272a61c9990e6484bca7ebbff64e2233c6cd
Summary:
Previously, `mmap_readonly` always read the file length and used that as the
mmap length. In many cases we do know the desired file length, and it's
cleaner to not `mmap` unused bytes. So let's add a parameter for that.
Note: the `stat` call is still needed, since `mmap` wouldn't return an error
if the requested length is greater than the file length.
Reviewed By: DurhamG
Differential Revision: D8156523
fbshipit-source-id: 991aa28f3542eaff24387dcc6a7302122fb6962f
Summary: The function will be reused in another module.
Reviewed By: DurhamG
Differential Revision: D8156522
fbshipit-source-id: 2aff6f2e4b8fc9b5d2c000e12ac2d940f7fab407
Summary:
`rand` 0.5 has too many breaking changes that the code is not ready to
migrate to yet. So let's pin rand to 0.4. Ideally all dependencies in
Cargo.toml should avoid using "*", but for now `rand` is the only
troublemaker.
Note `rand 0.4` is a dependency of `quickcheck 0.6.2` so it's available.
Reviewed By: phillco, singhsrb
Differential Revision: D8158406
fbshipit-source-id: 417ae6807a2efc650acb8d82370964fab6531fdb
Summary:
Add a test that bit-flips the index content and makes sure that reading the
index triggers an error.
Due to the run-time performance difference, the release version tests a
2-byte key while the debug version only tests a 1-byte key.
The header byte was not verified before. Now it is.
Reviewed By: DurhamG
Differential Revision: D7517134
fbshipit-source-id: b3d8665ff4ac08c1a70db8d21122ba241913a2ed
Summary:
In the "split_leaf" "Example 3" case, the old leaf entry (and its key)
becomes unused. Writing them to disk is unnecessary. This patch adds an
"unused" marker so they can be marked and skipped inside flush().
No visible performance change:
index insertion 3.710 ms
index flush 3.717 ms
index lookup (memory) 1.128 ms
index lookup (disk, no verify) 1.993 ms
index lookup (disk, verified) 7.866 ms
Reviewed By: DurhamG
Differential Revision: D7517139
fbshipit-source-id: 253c878bc4b3762382c424777dfa779b3868e851
Summary: Since we now have the ability to store multiple values, add a test.
Reviewed By: DurhamG
Differential Revision: D7472880
fbshipit-source-id: 85b1c69245ac7f0c4702daf22a02f5e5072f0924
Summary:
The value type is a linked list of u64 integers. Add an API to expose that.
Using the iterator framework has flexibility benefits - the caller can easily
take the first value, convert the values to a vector, count them, etc.
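Exposing the linked list through `Iterator` can be sketched like this (field and type names are illustrative, not the crate's actual definitions):

```rust
/// Iterator over a linked list of u64 values stored as (value, next index)
/// pairs, with usize::MAX terminating the list.
struct LinkIter<'a> {
    links: &'a [(u64, usize)],
    cur: usize,
}

impl<'a> Iterator for LinkIter<'a> {
    type Item = u64;
    fn next(&mut self) -> Option<u64> {
        if self.cur == usize::MAX {
            return None;
        }
        let (value, next) = self.links[self.cur];
        self.cur = next;
        Some(value)
    }
}

fn main() {
    // List starting at index 2: 10 -> 20 -> 30.
    let links = [(30u64, usize::MAX), (20u64, 0), (10u64, 1)];
    let iter = LinkIter { links: &links, cur: 2 };
    // Thanks to Iterator, the caller can collect, count, take the first...
    assert_eq!(iter.collect::<Vec<u64>>(), vec![10, 20, 30]);
}
```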
Reviewed By: DurhamG
Differential Revision: D7472881
fbshipit-source-id: d31e81770e069734b54fa08729c0cd45a699aae2
Summary:
This is caught by a later test. Looking up a non-existent child (jumptable
value is 0) returned an InvalidData error, while it should return Offset(0).
The added if condition does not seem to have noticeable performance impact:
index insertion 3.840 ms
index flush 3.740 ms
index lookup (memory) 1.085 ms
index lookup (disk, no verify) 1.972 ms
index lookup (disk, verified) 7.752 ms
Reviewed By: DurhamG
Differential Revision: D7472882
fbshipit-source-id: 1cc51e9afa248e123cca9c561d7bb2128fd898b1
Summary:
Previously, the code was focused on getting the hardest (index) part right,
and less on the value part. There was no way yet to get all values in the
linked list, as designed. This diff starts that work.
Similar to `KeyOffset::key_and_link_offset`, change the internal API of
LinkOffset to return both value and the next link offset.
Reviewed By: DurhamG
Differential Revision: D7472879
fbshipit-source-id: 4a4512d7c63abbb667146de582e0f8cd04c9c04a
Summary:
`Index::open` now takes too many parameters, which is not very convenient to
use. Inspired by `fs::OpenOptions`, use a dedicated struct for specifying
open options.
Motivation: to test the checksum ability more confidently, I'd like to write
something that randomly mutates 1 byte of a sane index. To make sure the
checksum coverage is "correct", the checksum chunk size is another parameter.
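A minimal sketch of the `fs::OpenOptions`-inspired builder. The method and field names are illustrative assumptions, not the crate's actual API:

```rust
/// Hypothetical builder carrying the options `Index::open` used to take
/// as positional parameters.
struct OpenOptions {
    checksum_chunk_size: u64,
}

impl OpenOptions {
    fn new() -> Self {
        OpenOptions { checksum_chunk_size: 1 << 20 }
    }

    /// Checksum chunk size, exposed so tests can shrink it and verify
    /// that checksum coverage is correct.
    fn checksum_chunk_size(mut self, size: u64) -> Self {
        self.checksum_chunk_size = size;
        self
    }

    fn open(self, path: &str) -> Index {
        Index {
            path: path.to_string(),
            checksum_chunk_size: self.checksum_chunk_size,
        }
    }
}

struct Index {
    path: String,
    checksum_chunk_size: u64,
}

fn main() {
    let index = OpenOptions::new().checksum_chunk_size(256).open("/tmp/idx");
    assert_eq!(index.checksum_chunk_size, 256);
    assert_eq!(index.path, "/tmp/idx");
}
```

Adding a new option later only touches the builder, not every call site.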
Reviewed By: DurhamG
Differential Revision: D7464182
fbshipit-source-id: 469ce7d1cfa5de3946028418567a9f3e2bc303fa
Summary:
Address DurhamG's review comment on D7422832.
Previously, `OffsetMap::get` expected a dirty offset. That's because it was
changed from `HashMap` and we don't control `HashMap::get`. It's cleaner to
let `OffsetMap` do the `is_dirty` check.
Reviewed By: DurhamG
Differential Revision: D7461707
fbshipit-source-id: 9f2abdf6c6f993d98d9443f16bafcc6154ee0dbb
Summary:
The new test covers the `else` branch inside `LeafOffset::set_link`
previously not covered.
Coverage was checked by the following script:
```
from __future__ import absolute_import
import glob
import os
import shutil
os.system('cargo rustc --lib --profile test -- -Ccodegen-units=1 -Clink-dead-code -Zno-landing-pads')
path = max((os.stat(path).st_mtime, path) for path in glob.glob('./target/debug/*-????????????????'))[1]
shutil.rmtree('target/kcov')
os.system('kcov --include-path $PWD/src --verify target/kcov %s' % path)
```
Reviewed By: DurhamG
Differential Revision: D7446902
fbshipit-source-id: 293da2ff53b83c8f11534f0f8e5e7fd102216a01
Summary:
Change `insert_advanced` to accept an enum that could be either a key, or an
(offset, len) that refers to the external key buffer.
Insertion becomes slower due to the new flexibility overhead. For some
reason, "index lookup (no verify)" becomes faster (restoring pre-D7440248
performance):
index insertion 6.434 ms
index flush 3.757 ms
index lookup (memory) 1.068 ms
index lookup (disk, no verify) 1.969 ms
index lookup (disk, verified) 7.805 ms
With 2M 20-byte keys, the non-external key version generates a 105MB index:
seconds operation
1.247 insert
0.622 flush
1.859 flush done
0.702 lookup (without checksum)
1.395 lookup (with checksum)
Using external keys, the index is 70MB, and the time for each operation:
seconds operation
1.086 insert
0.702 flush
0.665 lookup (without checksums)
1.602 lookup (with checksums)
The external key approach will save even more space for longer keys, ex. file
paths.
The `Index` module was made public so the `InsertKey` type is usable.
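The enum shape can be sketched as follows. Only `InsertKey` is named in the diff; the variant layout and the helper below are illustrative assumptions:

```rust
/// A key is either embedded bytes, or an (offset, len) reference into the
/// external key buffer, so the index need not store the key bytes again.
enum InsertKey<'a> {
    Embed(&'a [u8]),
    Reference((u64, u64)), // (offset, len) into the external key buffer
}

/// Hypothetical helper resolving either variant to the key bytes.
fn resolve_key<'a>(key: &InsertKey<'a>, key_buf: &'a [u8]) -> &'a [u8] {
    match *key {
        InsertKey::Embed(bytes) => bytes,
        InsertKey::Reference((offset, len)) => {
            &key_buf[offset as usize..(offset + len) as usize]
        }
    }
}

fn main() {
    let key_buf = b"aabbccdd"; // shared external key buffer
    assert_eq!(resolve_key(&InsertKey::Embed(b"key"), key_buf), b"key");
    // Refer to "bbcc" inside the shared buffer instead of copying it.
    assert_eq!(resolve_key(&InsertKey::Reference((2, 4)), key_buf), b"bbcc");
}
```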
Reviewed By: DurhamG
Differential Revision: D7444907
fbshipit-source-id: b89d95246845799c2c55fb73ad203a7e6724b85e
Summary:
Previously, a leaf entry can only have a `KeyOffset`. This diff makes it
possible to be either `KeyOffset`, or `ExtKeyOffset`. The API didn't change
much since `LeafOffset::key_and_link_offset` handles the difference
transparently.
Latest benchmark result:
index insertion 4.879 ms
index flush 3.620 ms
index lookup (memory) 1.827 ms
index lookup (disk, no verify) 3.508 ms
index lookup (disk, verified) 7.861 ms
Reviewed By: DurhamG
Differential Revision: D7444909
fbshipit-source-id: 5441e1ae187d42931377d7213dcb77156b2af714
Summary:
The leaf entry has a `key_and_link_offset` method. Previously it returned a
`KeyOffset`. Since we now have `ExtKeyOffset`, it's friendlier to handle the
key entry type difference at the leaf entry level, instead of requiring the
caller to handle it.
Reviewed By: DurhamG
Differential Revision: D7444905
fbshipit-source-id: 56d87641a2a5a50ddca8b1e4c74c9aaa3891b542
Summary:
Previously, I thought there was only one index that would use "commit hash"
as keys, namely the nodemap, and that other indexes (like the childmap) would
just use shorter integer keys (ex. revision numbers, or offsets). So the
space overhead of storing full keys would only apply to one index and seemed
acceptable.
But that implies a strict topo order for the source-of-truth data (ex. to use
integers as keys in the childmap, you have to know how to translate parent
revisions from hashes to integers at the time of writing the revision).
Thinking about it again, it seems the topo-order requirement would make a lot
of things less flexible. It's much easier to just use hashes as keys in the
index. Then it's worthwhile to address the space efficiency problem by
introducing an "external key buffer" concept. That's actually what `radixbuf`
does.
This is the start. It adds the type to the struct. The feature is not
complete yet.
Reviewed By: DurhamG
Differential Revision: D7444904
fbshipit-source-id: 60a83c9e6e8b0734450f0c5827928a7c5bd111d5
Summary:
It further slows down lookups, even when the checksum is disabled, since
even an `is_none()` check is not free:
index insertion 4.697 ms
index flush 3.764 ms
index lookup (memory) 2.878 ms
index lookup (disk, no verify) 3.564 ms
index lookup (disk, verified) 7.788 ms
The "verified" version basically needs 2x time due to more memory lookups.
Unfortunately this means eventual lookup performance will be slower than
gdbm, but insertion is still much faster. And the index still has better
locking properties (lock-free reads) that gdbm does not have.
With correct time complexity (no O(len(changelog)) index-only operations for
example), I'd expect it's rare for the overall performance to be bounded by
index performance. Data integrity is more important.
With a larger number of nodes, ex. 2M 20-byte strings: inserting to memory
takes 1.4 seconds, flushing to disk takes 0.9 seconds, looking up without
checksum takes 0.9 seconds, looking up with checksum takes 1.7 seconds.
Reviewed By: DurhamG
Differential Revision: D7440248
fbshipit-source-id: 020e5204606f9f0a4f68843a491009a6a6f75751
Summary:
This is in the critical path for lookup, and has very visible performance
penalty:
index insertion 3.923 ms
index flush 3.921 ms
index lookup (memory) 1.070 ms
index lookup (disk, no verify) 1.980 ms
index lookup (disk, verified) 5.206 ms
Reviewed By: DurhamG
Differential Revision: D7440252
fbshipit-source-id: 49540f974faff1cdd0603a72328f141ccd054ee2
Summary:
Previously the checksum was only for `MemRoot`; now it's for all `Mem` structs.
Since `Mem*` structs are not frequently used in the normal lookup code path,
there is no visible performance change.
Reviewed By: DurhamG
Differential Revision: D7440253
fbshipit-source-id: 945f5a8c38d228f59190a487b0cf6dbc5daac4f7
Summary:
The type will be used all over the place and may make `rustfmt` wrap lines.
Use a shorter type to make it slightly cleaner.
Reviewed By: DurhamG
Differential Revision: D7436338
fbshipit-source-id: ecaada23916a22658f65669b748632a077e60df2
Summary:
This only affects `Index::open` right now. So it's a one-time check and does
not affect performance.
Reviewed By: DurhamG
Differential Revision: D7436341
fbshipit-source-id: 30313064bf2ea50320ac744fc18c03bff4b12c89
Summary:
Add `ChecksumTable` to the `Index` struct. But it's not functional yet.
The checksum will mainly affect the "index lookup (disk)" case. Add another
benchmark to show the difference with checksum on and off. There is not much
difference right now:
index insertion 3.756 ms
index flush 3.469 ms
index lookup (memory) 0.990 ms
index lookup (disk, no verify) 1.768 ms
index lookup (disk, verified) 1.766 ms
Reviewed By: DurhamG
Differential Revision: D7436339
fbshipit-source-id: 60a6554a2c96067a53ce9e1753cd51d0d61c0bea
Summary:
The minibench framework does not provide benchmark filtering. So let's
separate benchmarks using different entry points.
Reviewed By: DurhamG
Differential Revision: D7440250
fbshipit-source-id: 11e7790a5074ebf4c08e33c312a490a66a921926
Summary:
The "clone" benchmarks were added to be subtracted from "lookup" to work
around a test framework limitation.
The new minibench framework makes it easier to exclude preparation cost.
Therefore the clone benchmarks are no longer needed.
index insertion 3.881 ms
index flush 3.286 ms
index lookup (memory) 0.928 ms
index lookup (disk) 1.685 ms
"index lookup (memory)" is basically "index lookup (memory)" minus
"index clone (memory)" in previous benchmarks.
Reviewed By: DurhamG
Differential Revision: D7440251
fbshipit-source-id: 0e6a1fb7ee64f9a393ee9ada4db6e6eb052e20bf
Summary:
See the previous minibench diff for the motivation.
"failure" was removed from build dependencies since it's not used yet.
Run the benchmark a few times. It seems the first several items are less
stable, possibly due to warm-up issues. Otherwise the result looks good
enough.
The test also compiles and runs much faster.
```
base16 iterating 1M bytes 0.921 ms
index insertion 4.804 ms
index flush 5.104 ms
index lookup (memory) 2.929 ms
index lookup (disk) 1.767 ms
index clone (memory) 2.036 ms
index clone (disk) 0.010 ms
base16 iterating 1M bytes 0.853 ms
index insertion 4.512 ms
index flush 4.717 ms
index lookup (memory) 2.907 ms
index lookup (disk) 1.755 ms
index clone (memory) 1.856 ms
index clone (disk) 0.010 ms
base16 iterating 1M bytes 1.525 ms
index insertion 4.577 ms
index flush 4.901 ms
index lookup (memory) 2.800 ms
index lookup (disk) 1.790 ms
index clone (memory) 1.794 ms
index clone (disk) 0.010 ms
base16 iterating 1M bytes 0.768 ms
index insertion 4.486 ms
index flush 4.918 ms
index lookup (memory) 2.658 ms
index lookup (disk) 1.721 ms
index clone (memory) 1.763 ms
index clone (disk) 0.010 ms
base16 iterating 1M bytes 0.732 ms
index insertion 4.489 ms
index flush 4.792 ms
index lookup (memory) 2.689 ms
index lookup (disk) 1.739 ms
index clone (memory) 1.850 ms
index clone (disk) 0.009 ms
base16 iterating 1M bytes 1.124 ms
index insertion 7.188 ms
index flush 4.888 ms
index lookup (memory) 2.829 ms
index lookup (disk) 1.609 ms
index clone (memory) 2.642 ms
index clone (disk) 0.010 ms
base16 iterating 1M bytes 1.055 ms
index insertion 4.683 ms
index flush 4.996 ms
index lookup (memory) 2.782 ms
index lookup (disk) 1.710 ms
index clone (memory) 1.802 ms
index clone (disk) 0.009 ms
```
Reviewed By: DurhamG
Differential Revision: D7440249
fbshipit-source-id: 0f946ab184455acd40c5a38cf46ff94d9e3755c8
Summary:
The dirty -> non-dirty offset mapping can be optimized using a dedicated
"map" type that is backed by `vec`s, because dirty offsets are contiguous
per type.
This makes "flush" significantly faster:
```
index flush time: [5.8808 ms 6.1800 ms 6.4813 ms]
change: [-62.250% -59.481% -56.325%] (p = 0.00 < 0.05)
Performance has improved.
```
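A minimal sketch of the vec-backed map idea (names are illustrative): because dirty offsets of one entry type are handed out contiguously, "dirty offset -> on-disk offset" can be a `Vec` indexed by `dirty - base` instead of a `HashMap`, avoiding hashing entirely during flush.

```rust
/// Hypothetical vec-backed replacement for a HashMap<u64, u64>.
struct VecOffsetMap {
    base: u64,          // first dirty offset of this entry type
    resolved: Vec<u64>, // on-disk offsets, pushed in allocation order
}

impl VecOffsetMap {
    fn get(&self, dirty_offset: u64) -> u64 {
        // Plain index arithmetic; no hashing, no probing.
        self.resolved[(dirty_offset - self.base) as usize]
    }
}

fn main() {
    let map = VecOffsetMap { base: 1000, resolved: vec![8, 24, 80] };
    assert_eq!(map.get(1000), 8);
    assert_eq!(map.get(1002), 80);
}
```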
Reviewed By: DurhamG
Differential Revision: D7422832
fbshipit-source-id: 9ab8a70d1663155941dae5b4f02f7452f5e3cadf
Summary:
It seems to improve the performance a bit:
```
index insertion time: [5.4643 ms 5.6818 ms 5.9188 ms]
change: [-24.526% -17.384% -10.315%] (p = 0.00 < 0.05)
Performance has improved.
```
Reviewed By: DurhamG
Differential Revision: D7422831
fbshipit-source-id: fc1c72f402258db7e189cd8724583757d48affb7
Summary:
For key entries, the key is immutable once stored. So just use `Box<[u8]>`.
It saves a `usize` per entry. On 64-bit platform, that's a lot.
Performance is slightly improved, and it now catches up with D7404532
(before the typed offset refactoring):
index insertion time: [6.1852 ms 6.6598 ms 7.2433 ms]
index flush time: [15.814 ms 16.538 ms 17.235 ms]
index lookup (memory) time: [3.7636 ms 3.9403 ms 4.1424 ms]
index lookup (disk) time: [1.9413 ms 2.0366 ms 2.1325 ms]
index clone (memory) time: [2.6952 ms 2.9221 ms 3.0968 ms]
index clone (disk) time: [5.0296 us 5.2862 us 5.5629 us]
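The saved `usize` comes from the standard layout of these types: `Vec<u8>` stores a pointer, a length, and a capacity (three words), while `Box<[u8]>` is a fat pointer storing only a pointer and a length (two words), which suffices because an immutable key never needs spare capacity:

```rust
use std::mem::size_of;

fn main() {
    // Box<[u8]> is a fat pointer: (data pointer, length) = 2 words.
    assert_eq!(size_of::<Box<[u8]>>(), 2 * size_of::<usize>());
    // Vec<u8> additionally tracks capacity: 3 words.
    assert_eq!(size_of::<Vec<u8>>(), 3 * size_of::<usize>());
}
```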
Reviewed By: DurhamG
Differential Revision: D7422837
fbshipit-source-id: 4aabfdc028aefb8e796803e103f0b2e4965f84e6
Summary:
Previously, both `value` and `link` are optional in `insert_advanced`.
This diff makes `value` required.
`maybe_create_link_entry` becomes unused and removed.
No visible performance change.
Reviewed By: DurhamG
Differential Revision: D7422838
fbshipit-source-id: 8d7d3cc1cc325f6fea7e8ce996d0a43d3ee49839
Summary:
This is a large refactoring that replaces `u64` offsets with strong typed
ones.
Tests about serialization are removed since they generate illegal data that
cannot pass type check.
It seems to slow down the code a bit, compared with D7404532. But there is
still room to improve.
index insertion time: [6.9395 ms 7.3863 ms 7.7620 ms]
index flush time: [15.949 ms 17.965 ms 20.246 ms]
index lookup (memory) time: [3.6212 ms 3.8855 ms 4.1923 ms]
index lookup (disk) time: [2.2496 ms 2.4649 ms 2.8090 ms]
index clone (memory) time: [2.7292 ms 2.9399 ms 3.2055 ms]
index clone (disk) time: [4.9239 us 5.5928 us 6.3167 us]
Reviewed By: DurhamG
Differential Revision: D7422833
fbshipit-source-id: 7357cb0f4f573f620e829c5e300cd423619dbd62
Summary: This makes it clear the code has different code paths for on-disk entries.
Reviewed By: DurhamG
Differential Revision: D7422836
fbshipit-source-id: 018fa0e2c20682d4e1beba99f3307550e1f40388
Summary:
Add benchmarks inserting / looking up 20K entries.
Benchmark results on my laptop are:
index insertion time: [6.5339 ms 6.8174 ms 7.1805 ms]
index flush time: [15.651 ms 16.103 ms 16.537 ms]
index lookup (memory) time: [3.6995 ms 4.0252 ms 4.3046 ms]
index lookup (disk) time: [1.9986 ms 2.1224 ms 2.2464 ms]
index clone (memory) time: [2.5943 ms 2.6866 ms 2.7749 ms]
index clone (disk) time: [5.2302 us 5.5477 us 5.9518 us]
Comparing with highly optimized radixbuf:
index insertion time: [991.89 us 1.1708 ms 1.3844 ms]
index lookup time: [863.83 us 945.69 us 1.0304 ms]
Insertion takes 6x time. Lookup from memory takes 1.4x time, from disk takes
2.2x time. Flushing is the slowest - it needs 16x radixbuf insertion time.
Note: need to subtract "clone" time from "lookup" to get meaningful values
about "lookup". This cannot be done automatically due to the limitation of the
benchmark framework.
Although it's slower than radixbuf, the index is still faster than gdbm and
rocksdb. Note: the index does less than gdbm/rocksdb since it does not return
a `[u8]`-ish which requires extra lookups. So it's not a very fair comparison.
gdbm insertion time: [69.607 ms 75.102 ms 79.334 ms]
gdbm lookup time: [9.0855 ms 9.8480 ms 10.637 ms]
gdbm prepare time: [110.35 us 120.40 us 135.63 us]
rocksdb insertion time: [117.96 ms 123.42 ms 127.85 ms]
rocksdb lookup time: [24.413 ms 26.147 ms 28.153 ms]
rocksdb prepare time: [3.8316 ms 4.1776 ms 4.5039 ms]
Note: Subtract "prepare" from "insertion" to get meaningful values.
Code to benchmark rocksdb and gdbm:
```
#[macro_use]
extern crate criterion;
extern crate gnudbm;
extern crate rand;
extern crate rocksdb;
extern crate tempdir;

use criterion::Criterion;
use gnudbm::GdbmOpener;
use rand::{ChaChaRng, Rng};
use rocksdb::DB;
use tempdir::TempDir;

const N: usize = 20480;

/// Generate a random buffer.
fn gen_buf(size: usize) -> Vec<u8> {
    let mut buf = vec![0u8; size];
    ChaChaRng::new_unseeded().fill_bytes(buf.as_mut());
    buf
}

fn criterion_benchmark(c: &mut Criterion) {
    c.bench_function("rocksdb prepare", |b| {
        b.iter(move || {
            let dir = TempDir::new("index").expect("TempDir::new");
            let _db = DB::open_default(dir.path().join("a")).unwrap();
        });
    });
    c.bench_function("rocksdb insertion", |b| {
        let buf = gen_buf(N * 20);
        b.iter(move || {
            let dir = TempDir::new("index").expect("TempDir::new");
            let db = DB::open_default(dir.path().join("a")).unwrap();
            for i in 0..N {
                db.put(&buf[20 * i..20 * (i + 1)], b"v").unwrap();
            }
        });
    });
    c.bench_function("rocksdb lookup", |b| {
        let dir = TempDir::new("index").expect("TempDir::new");
        let db = DB::open_default(dir.path().join("a")).unwrap();
        let buf = gen_buf(N * 20);
        for i in 0..N {
            db.put(&buf[20 * i..20 * (i + 1)], b"v").unwrap();
        }
        b.iter(move || {
            for i in 0..N {
                db.get(&buf[20 * i..20 * (i + 1)]).unwrap();
            }
        });
    });
    c.bench_function("gdbm prepare", |b| {
        b.iter(move || {
            let dir = TempDir::new("index").expect("TempDir::new");
            let _db = GdbmOpener::new()
                .create(true)
                .readwrite(dir.path().join("a"))
                .unwrap();
        });
    });
    c.bench_function("gdbm insertion", |b| {
        let buf = gen_buf(N * 20);
        b.iter(move || {
            let dir = TempDir::new("index").expect("TempDir::new");
            let mut db = GdbmOpener::new()
                .create(true)
                .readwrite(dir.path().join("a"))
                .unwrap();
            for i in 0..N {
                db.store(&buf[20 * i..20 * (i + 1)], b"v").unwrap();
            }
        });
    });
    c.bench_function("gdbm lookup", |b| {
        let dir = TempDir::new("index").expect("TempDir::new");
        let mut db = GdbmOpener::new()
            .create(true)
            .readwrite(dir.path().join("a"))
            .unwrap();
        let buf = gen_buf(N * 20);
        for i in 0..N {
            db.store(&buf[20 * i..20 * (i + 1)], b"v").unwrap();
        }
        b.iter(move || {
            for i in 0..N {
                db.fetch(&buf[20 * i..20 * (i + 1)]).unwrap();
            }
        });
    });
}

criterion_group! {
    name = benches;
    config = Criterion::default().sample_size(20);
    targets = criterion_benchmark
}
criterion_main!(benches);
```
Reviewed By: DurhamG
Differential Revision: D7404532
fbshipit-source-id: ff39f520b78ad1b71eb36970506b313bb2ff426b
Summary:
This will be useful for benchmarks - prepare an index as a template, and
clone it in the tests.
Reviewed By: DurhamG
Differential Revision: D7422835
fbshipit-source-id: 190bbdee7cb7c1526274b4d4dab07af4984b5df6
Summary:
The latest rustfmt disagrees about the order of `std::io` imports. Move the
troublesome line to a separate group so that both the old and new rustfmt
agree on the format.
Reviewed By: DurhamG
Differential Revision: D7422834
fbshipit-source-id: 9f5289ef2af1a691559fe691e121190f6d845162
Summary:
Radix entries need to be written in the reverse of the order in which they
are added to the vector.
Reviewed By: DurhamG
Differential Revision: D7404530
fbshipit-source-id: 403189b5c0fa6f21183e62eea04ce4ce7c4e1129
Summary: Those little read and write helpers are used in the next diff.
Reviewed By: DurhamG
Differential Revision: D7377214
fbshipit-source-id: c6e2d240334c11a0b08b15cd7d5c114b6f4d8ace
Summary:
Add a helper function `peek_key_entry_content` that checks the key type and
returns the key content.
Reviewed By: DurhamG
Differential Revision: D7377211
fbshipit-source-id: 0ce509aba30309373a709cf5fbcb909dd80471dc
Summary:
Implement insertion when there is no need to split a leaf entry.
The API may be subject to change if we want other value types. For now, it's
better to get something working that can be benchmarked, so we have data about
the performance impact of new format changes.
Reviewed By: DurhamG
Differential Revision: D7343423
fbshipit-source-id: 9761f72168046dbafcb00883634aa7ad513a522b
Summary:
Like the `peek_` family of helper methods, these methods handle writing
data for both dirty (in-memory) and non-dirty (on-disk) cases. They will
be used in the next diff.
Reviewed By: DurhamG
Differential Revision: D7377208
fbshipit-source-id: f458a20da4bb7808f37daeed3077be2f7e90a9df
Summary:
Add code to print out the Index's on-disk and in-memory entries in a
human-friendly form. This is useful for explaining its internal state, so it
could be used in tests.
Reviewed By: DurhamG
Differential Revision: D7343427
fbshipit-source-id: 706a35404ea42c413657b389166729f8dd1315a3
Summary:
The offset stored in it needs to be translated, as is done for other types of
entries. I forgot that.
Reviewed By: DurhamG
Differential Revision: D7404528
fbshipit-source-id: fb09a9c3052ddfe8f8016440290062084d5d8b03
Summary:
This is a low-level API that follows the base16 sequence of a key and
returns the potentially matched `LinkOffset`.
Reviewed By: DurhamG
Differential Revision: D7343424
fbshipit-source-id: 38f260064d1a23695a28dda6f7dc921f88c7fccc
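A hedged sketch of the lookup walk: follow one child per base16 digit of the key, starting at the root radix entry; a completed walk yields the stored link value, while a dead end yields nothing. The tree representation here is a simplified in-memory stand-in, not the on-disk format:

```rust
use std::collections::HashMap;

// Simplified radix node: 16-way fan-out keyed by nibble, plus an optional
// link value standing in for LinkOffset.
#[derive(Default)]
struct Radix {
    children: HashMap<u8, Radix>,
    link: Option<u64>,
}

impl Radix {
    fn insert(&mut self, key: &[u8], link: u64) {
        let mut node = self;
        for digit in key.iter().flat_map(|&b| [b >> 4, b & 0xf]) {
            node = node.children.entry(digit).or_default();
        }
        node.link = Some(link);
    }

    // Follow the base16 sequence of the key; None on any dead end.
    fn lookup(&self, key: &[u8]) -> Option<u64> {
        let mut node = self;
        for digit in key.iter().flat_map(|&b| [b >> 4, b & 0xf]) {
            node = node.children.get(&digit)?;
        }
        node.link
    }
}

fn main() {
    let mut root = Radix::default();
    root.insert(b"key1", 42);
    assert_eq!(root.lookup(b"key1"), Some(42));
    assert_eq!(root.lookup(b"key2"), None);
    println!("ok");
}
```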
Summary:
Add a bunch of helper methods to "peek" data inside all kinds of entries.
They will be used in the next diff.
The benefit of these helper methods is that they handle both dirty and
non-dirty offsets transparently. Previously I tried always parsing on-disk
entries into in-memory ones and storing them in a hashmap cache, but that
turned out to have too much overhead, so always reading from disk is more
desirable. It provided at least a 2x perf improvement in my previous quick
test.
Reviewed By: DurhamG
Differential Revision: D7377207
fbshipit-source-id: 1b393f1fe64c1d54b986ba7c3b03c790adb694d4
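A hedged sketch of the transparency these helpers provide; the dirty-bit tagging scheme and the field layout below are assumptions for illustration, with a plain `Vec<u8>` standing in for the mmap'ed file:

```rust
// Assumed tagging scheme: dirty (in-memory) offsets carry the top bit.
const DIRTY_BIT: u64 = 1 << 63;

struct Index {
    disk: Vec<u8>,       // stand-in for the mmap'ed on-disk buffer
    dirty: Vec<Vec<u8>>, // in-memory entries not yet flushed
}

impl Index {
    // Peek one byte through either kind of offset, without parsing on-disk
    // entries into a cache first.
    fn peek_byte(&self, offset: u64) -> Option<u8> {
        if offset & DIRTY_BIT != 0 {
            let i = (offset & !DIRTY_BIT) as usize;
            self.dirty.get(i).and_then(|e| e.first().copied())
        } else {
            self.disk.get(offset as usize).copied()
        }
    }
}

fn main() {
    let index = Index {
        disk: vec![10, 20, 30],
        dirty: vec![vec![99]],
    };
    assert_eq!(index.peek_byte(1), Some(20)); // on-disk read
    assert_eq!(index.peek_byte(DIRTY_BIT), Some(99)); // in-memory read
    println!("ok");
}
```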
Summary:
The `non_dirty` helper method enforces that an offset is non-dirty.
It will be used frequently for checking offsets read from disk, since
on-disk offsets shouldn't reference dirty (in-memory) entries.
Reviewed By: DurhamG
Differential Revision: D7377209
fbshipit-source-id: c6c381c065d3ba8aaa65698224e4778b86edbc4a
Summary:
The flush method will write buffered data to disk.
A mistake in Root entry serialization is fixed: it needs to translate dirty
offsets to non-dirty ones.
Reviewed By: DurhamG
Differential Revision: D7223729
fbshipit-source-id: baeaab27627d6cfb7c5798d3a39be4d2b8811e5f
Summary:
Add the main `Index` structure and its constructor.
The structure focuses on the index logic itself. It does not have the checksum
part yet.
Some notes about choices made:
- The use of mmap: mmap is good for random I/O, and has the benefit of
sharing buffers between processes reading the same file. We may be able to
do good user-space caching for the random I/O part. But it's harder to
share the buffers between processes.
- The "read_only" auto decision. The common "open" pattern requires the caller
to specify whether they want to read or write. The index makes the decision
for the caller for convenience (ex. running "hg log" on somebody else's
repo).
- The "load root entry from the end of the file" feature. It's just for
convenience for users wanting to use the Index in a standalone way. We
probably
Reviewed By: DurhamG
Differential Revision: D7208358
fbshipit-source-id: 14b74d7e32ef28bd5bc3483fd560c489d36bf8e5
Summary:
`mmap_readonly` will be reused in `index.rs`, so let's move it to a shared
utils module.
Reviewed By: DurhamG
Differential Revision: D7208359
fbshipit-source-id: d98779e4e21765ce0e185281c9560245b59b174c
Summary:
Add ScopedFileLock. This is similar to Python's contextmanager.
It's easier to use than the raw fs2 API, since it guarantees the file is
unlocked when the lock goes out of scope.
Reviewed By: jsgf
Differential Revision: D7203684
fbshipit-source-id: 5d7beed99ff992466ab7bf1fbea0353de4dfe4f9
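The RAII idea behind the guarantee can be sketched without fs2: the real type wraps fs2's advisory file locks, while a `Cell<bool>` stands in for the lock state here, showing how `Drop` releases it even on early return or panic unwinding:

```rust
use std::cell::Cell;

// Stand-in scoped lock: "acquires" on construction, "releases" in Drop.
struct ScopedLock<'a> {
    locked: &'a Cell<bool>,
}

impl<'a> ScopedLock<'a> {
    fn new(locked: &'a Cell<bool>) -> ScopedLock<'a> {
        locked.set(true);
        ScopedLock { locked }
    }
}

impl Drop for ScopedLock<'_> {
    fn drop(&mut self) {
        // Runs when the lock leaves scope, mirroring Python's contextmanager.
        self.locked.set(false);
    }
}

fn main() {
    let flag = Cell::new(false);
    {
        let _lock = ScopedLock::new(&flag);
        assert!(flag.get()); // held inside the scope
    }
    assert!(!flag.get()); // released automatically
    println!("ok");
}
```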
Summary: They are simpler than, but similar to, the radix entry.
Reviewed By: DurhamG
Differential Revision: D7191652
fbshipit-source-id: b516663567267a2e354748396b44c2ac8ebb691f
Summary: These are Rust structures that map to the file format.
Reviewed By: DurhamG
Differential Revision: D7191366
fbshipit-source-id: 23a4431383be9713e955b74306cd68108eb80536
Summary: Document the format. Actual implementation in later diffs.
Reviewed By: DurhamG
Differential Revision: D7190575
fbshipit-source-id: 243992fd052ca7a9688d54d20694e65daebb9660
Summary:
The append-only index is too different, so it's cleaner to cherry-pick code
from radixbuf instead of modifying radixbuf, which would break code
depending on it.
Started by picking the base16 iterator part.
`rustc-test` does not work with buck, and seems to be in an unmaintained
state, so benchmark tests are migrated to criterion.
Reviewed By: DurhamG
Differential Revision: D7189143
fbshipit-source-id: 459a79b4cf16f35d2ff86f11a5980ba1fc627951
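The core of a base16 iterator fits in a few lines (a sketch, not the actual crate code): each input byte yields its high nibble then its low nibble, which is the digit order used when walking a 16-way radix tree.

```rust
// Yield base16 digits (nibbles) of a byte slice, high nibble first.
fn base16(bytes: &[u8]) -> impl Iterator<Item = u8> + '_ {
    bytes.iter().flat_map(|&b| [b >> 4, b & 0xf])
}

fn main() {
    let digits: Vec<u8> = base16(&[0x12, 0xab]).collect();
    assert_eq!(digits, vec![1, 2, 10, 11]);
    println!("ok");
}
```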
Summary:
Filesystems are hard. Append-only sounds like a safe way to write files, but it
only really helps with process crashes. If the OS crashes, it's possible that
other parts of the file get corrupted. For source control, data integrity
checks are important, so bytes not logically touched by appending also need to
be checked.
Implement a `ChecksumTable` which adds integrity check ability to append-only
files. It's intended to be used by future append-only indexes.
Reviewed By: DurhamG
Differential Revision: D7108433
fbshipit-source-id: 16daf6b8d04bba464f1ee9221716beba69c1d47b
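A hedged sketch of the chunked-checksum idea, with std's `DefaultHasher` standing in for xxhash and a tiny chunk size for demonstration; verifying a byte range only re-hashes the chunks that range touches:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

const CHUNK: usize = 4; // tiny for demonstration; real code would use KBs

fn checksum(chunk: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    h.write(chunk);
    h.finish()
}

// One checksum per fixed-size chunk of the append-only file.
fn build_table(data: &[u8]) -> Vec<u64> {
    data.chunks(CHUNK).map(checksum).collect()
}

// Verify only the chunks overlapping [start, end).
fn verify_range(data: &[u8], table: &[u64], start: usize, end: usize) -> bool {
    let first = start / CHUNK;
    let last = (end + CHUNK - 1) / CHUNK;
    (first..last).all(|i| {
        let lo = i * CHUNK;
        let hi = data.len().min(lo + CHUNK);
        table.get(i).map_or(false, |&sum| checksum(&data[lo..hi]) == sum)
    })
}

fn main() {
    let mut data = b"append-only file".to_vec();
    let table = build_table(&data);
    assert!(verify_range(&data, &table, 0, data.len()));
    data[5] ^= 0xff; // simulate on-disk corruption
    assert!(!verify_range(&data, &table, 4, 8));
    println!("ok");
}
```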
Summary:
First step of a storage-related building block written in Rust. The goal is
to use it to replace revlog, obsstore and packfiles.
Extern crates that are likely useful are added to reduce future churn.
Reviewed By: DurhamG
Differential Revision: D7108434
fbshipit-source-id: 97ebd9ba69547d876dcecc05e604acdf9088877e