Summary:
Move expected_item_size_byte into CachelibSettings, where it seems to belong.
To enable its use, also expose a parse_and_init_cachelib method for callers whose defaults differ from the default cachelib settings.
Reviewed By: krallin
Differential Revision: D24761024
fbshipit-source-id: 440082ab77b5b9f879c99b8f764e71bec68d270e
Summary:
We moved to APFS subvolumes a while back, so it's time
to kill this bit.
Reviewed By: chadaustin
Differential Revision: D25260825
fbshipit-source-id: 7487fb6d477f2650e2aedb0c4dfc4f45a10c9807
Summary:
Introduce `edenapithin` crate, which offers C bindings to EdenAPI Client.
There are two top-level `owned` types, `EdenApiClient` and `TreeEntryFetch`, which represent `Box`ed results from calls to EdenAPI's blocking client. The C / C++ code is responsible for calling the associated `free` functions for these types, as well as `OwnedString`, which is used to represent error variants of a `Result` as a string.
Most functionality is provided through functions which operate on and return references into these top-level owned types, providing access into Rust `Result`s and `Vec`s (manually-monomorphized), and EdenApi's `TreeEntry` and `TreeChildEntry`.
Additional non-pointer types are defined in the `types` module, which do not require manual memory management.
C++ bindings are not included currently, but will be introduced soon.
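The ownership convention described above (boxed owned types freed by the C side) can be sketched as follows. This is an illustrative Box::into_raw / Box::from_raw pattern, not the actual edenapithin API; `OwnedString` and its functions are hypothetical stand-ins.

```rust
// Illustrative sketch of the owned-type FFI pattern; names are hypothetical.
pub struct OwnedString {
    data: String,
}

/// Returns an owned pointer; the C / C++ side must call
/// `owned_string_free` exactly once on it.
#[no_mangle]
pub extern "C" fn owned_string_new(len: usize) -> *mut OwnedString {
    Box::into_raw(Box::new(OwnedString { data: "x".repeat(len) }))
}

/// Borrows the value through the pointer without transferring ownership.
#[no_mangle]
pub extern "C" fn owned_string_len(s: *const OwnedString) -> usize {
    unsafe { (*s).data.len() }
}

/// Reclaims the Box so Rust drops the allocation.
#[no_mangle]
pub extern "C" fn owned_string_free(s: *mut OwnedString) {
    if !s.is_null() {
        unsafe { drop(Box::from_raw(s)) };
    }
}
```

Reference-returning accessors like `owned_string_len` are what lets the C++ side peek into Rust `Result`s and `Vec`s without taking ownership.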
Reviewed By: fanzeyi
Differential Revision: D24866065
fbshipit-source-id: bb15127b84cdbc6487b2d0e1798f37ef62e5b32d
Summary:
Introduce a new API type, `TreeAttributes`, corresponding to the existing type `WireTreeAttributesRequest`, which exposes which optional attributes are available for fetching. An `Option<TreeAttributes>` parameter is added to the `trees` API, and if set to `None`, the client will make a request with TreeAttributes::default().
The Python bindings accept a dictionary for the attributes parameter, and any fields present will overwrite the default settings from TreeAttributes::default(). Unrecognized attributes will be silently ignored.
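The `Option<TreeAttributes>` defaulting behaviour can be sketched as below; the field names are hypothetical illustrations, not the real attribute set.

```rust
// Hypothetical sketch of Option<TreeAttributes> defaulting; field names
// are illustrative only.
#[derive(Clone, Copy, Debug, PartialEq)]
struct TreeAttributes {
    manifest_blob: bool,
    parents: bool,
    child_metadata: bool,
}

impl Default for TreeAttributes {
    fn default() -> Self {
        // Assumed defaults, for illustration only.
        TreeAttributes {
            manifest_blob: true,
            parents: true,
            child_metadata: false,
        }
    }
}

// A `None` argument falls back to the defaults, as described above.
fn trees(attributes: Option<TreeAttributes>) -> TreeAttributes {
    attributes.unwrap_or_default()
}
```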
Reviewed By: kulshrax
Differential Revision: D25041255
fbshipit-source-id: 5c581c20aac06eeb0428fff42bfd93f6aecbb629
Summary: Basic observability for how the segmented changelog update process is performing.
Reviewed By: krallin
Differential Revision: D25108739
fbshipit-source-id: b1f406eb0c862464b186f933d126e0f3a08144e4
Summary:
The update of the segmented changelog is lightweight enough that we can
consider all repositories sharing a common tailer process. With all
repositories sharing a single tailer, the maintenance burden will be lower.
Things that I am particularly unsure about are: tailer configuration setup and
tailer structure. With regards to setup, I am not sure if this is more or less
than what production servers do to instantiate. With regards to structure, I
think that it makes a lot of sense to have a function that takes a single repo
name as parameter but the configuration setup has an influence on the details.
I am also unsure how important it is to parallelize the instantiation of the
blobrepos.
Finally, it is worth mentioning that the SegmentedChangelogTailer waits for
`delay` after an update finishes rather than on a period. The benefit is that
we don't have large updates taking down a process because we schedule the same
large repo update too many times. The drawback is that scheduling gets messed
up over time and multiple repo updates can end up starting at the same time.
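The delay-after-finish scheduling can be illustrated with a small simulation; `run_update` and `run_iterations` are hypothetical stand-ins for the tailer loop, not the real SegmentedChangelogTailer code.

```rust
use std::time::Duration;

// Sketch of "wait `delay` after an update finishes" scheduling.
// `run_update` stands in for one segmented changelog update and returns
// how long that update took.
fn run_iterations(
    mut run_update: impl FnMut() -> Duration,
    delay: Duration,
    iters: u32,
) -> Duration {
    let mut elapsed = Duration::ZERO;
    for _ in 0..iters {
        // The next wait starts only once the update completes, so a slow
        // update can never cause overlapping runs for the same repo...
        elapsed += run_update();
        // ...but total wall time drifts: it is update time + delay, not a
        // fixed period, which is the scheduling skew noted above.
        elapsed += delay;
    }
    elapsed
}
```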
Reviewed By: farnz
Differential Revision: D25100839
fbshipit-source-id: 5fff9f87ba4dc44a17c4a7aaa715d0698b04f5c3
Summary:
Stop implementing the legacy glob() Thrift function, since it's
deprecated, and just noise at this point.
Reviewed By: xavierd
Differential Revision: D25247641
fbshipit-source-id: a022fee169ad54c886d8f300b57bef233453fe8b
Summary: This makes it easier to enable all derived data for scrubbing
Reviewed By: ikostia
Differential Revision: D25188963
fbshipit-source-id: e9c981e33273d6b2eeadcce0d0a341b33e91e42d
Summary:
Show the options for specifying node and edge types in the --help output.
These changes removed the last use of lazy_static in the walker, so TARGETS is updated as well.
Reviewed By: krallin
Differential Revision: D25188964
fbshipit-source-id: c5ccb4f5a0f3be1b8cb7d51cd5f99236d60d3029
Summary:
It has a build() method, and later in the stack it will build a Mononoke-specific
type rather than the clap::App
Differential Revision: D25216827
fbshipit-source-id: 24a531856405a702e7fecf54d60be1ea3d2aa6e7
Summary:
Mercurial is incompatible with TSAN at the moment, due to Rust/C++
compilation issues, Python multiprocess, and Tokio false positives. So
disable our HG tests when running under TSAN.
Reviewed By: fanzeyi
Differential Revision: D25109413
fbshipit-source-id: 86e51ebd287e10f92d6e3b8e7bf191e7946c709a
Summary: Adding `full_idmap_clone` to edenapi and using that in `debugsegmentclone`.
Reviewed By: quark-zju
Differential Revision: D25139730
fbshipit-source-id: 682055f7c30a94a941acd16e2b8e61b9ea1d0aef
Summary:
This method reconstructs a dag from clone data.
At the moment we only have a clone data construction method in Mononoke. It's
the Dag's job to construct and import the clone_data. We'll consolidate that at
a later time.
Reviewed By: quark-zju
Differential Revision: D24954823
fbshipit-source-id: fe92179ec80f71234fc8f1cf7709f5104aabb4fb
Summary:
The server expects POST requests. At this time we don't want to cache this data,
so POST it is.
Reviewed By: singhsrb
Differential Revision: D24954824
fbshipit-source-id: 433672189ad97100eee7679894a894ab1d8cff7b
Summary:
The config is something that makes sense for all commands to have access to.
Commands that don't use a repo don't have access to the config that is prepared
by the dispatcher. This change is a stop-gap to allow new commands that don't
require a repository to receive the config as an argument.
The construction of the config is something that we should iterate on. I see
the current implementation as a workaround.
Reviewed By: quark-zju
Differential Revision: D24954822
fbshipit-source-id: 42254bb201ba8838e7cc107394e8fab53a1a95c7
Summary:
At the moment, if we try to sync a bookmark entry but the entry's from_cs_id
doesn't match the value of the bookmark on the hg servers, the sync will
fail.
Let's add an option that in the case of this mismatch sets from_cs_id to the
current value on hg servers.
Reviewed By: krallin
Differential Revision: D25242172
fbshipit-source-id: 91180fb86f25d10c9ba2b78d7aa18ed0a52d13a5
Summary: convert mercurial_derived_data to new type futures
Reviewed By: ahornby
Differential Revision: D25220329
fbshipit-source-id: c2532a12e915b315fe6eb72f122dbc37822bbb2a
Summary:
This diff prepares the Mononoke codebase for composition-based extendability of
`ScubaSampleBuilder`. Specifically, in the near future I will add:
- new methods for verbose scuba logging
- new data field (`ObservabilityContext`) to check if verbose logging should
be enabled or disabled
The higher-level goal here is to be able to enable/disable verbose Scuba
logging (either overall or for certain slices of logs, like for a certain
session id) in real time, without restarting Mononoke. To do so, I plan to
expose the aforementioned verbose logging methods, which will run a check
against the stored `ObservabilityContext` and make a decision of whether the
logging is enabled or not. `ObservabilityContext` will of course hide
implementation details from the renamed `ScubaSampleBuilderExt`, and just provide a yes/no
answer based on the current config and sample fields.
At the moment this should be a completely harmless change.
Reviewed By: krallin
Differential Revision: D25211089
fbshipit-source-id: ea03dda82fadb7fc91a2433e12e220582ede5fb8
Summary:
The cross repo bookmark validation alarm fired a few times, and it looks like it
fired because of the following:
1) find_bookmark_diff compared bookmarks and found an inconsistency for bookmark
BM, which points to commit A in the large repo. The next step is to check the bookmark history.
2) While find_bookmark_diff was running, a new commit B landed in the large repo
and was backsynced to the small repo, so BM now points to commit B.
3) check_large_bookmark_history is called and it fetches latest bookmark log entries, and
it gets entries for commit A and commit B. check_large_bookmark_history checks
if any of the fetched entries points to a commit in the small repo and if yes then
it also checks if this bookmark update happened not so long ago. And the
problem is in the way it checks the "not so long ago" part. It does so by
finding the time difference between latest bookmark update log entry and any
other bookmark update log entry.
Now, if time difference between these two log entries (for commit B and for
commit A) is more than max_delay_secs (which happens only
if commit rate is low e.g. during the weekends), then the alarm would fire
because the delay between latest bookmark update log entry (the one that moved
BM to commit B) and previous log entry (the one that moved BM to commit A) is too large.
This diff fixes this race by skipping the newest entries until we find a
bookmark update log entry that points to the large commit that
find_bookmark_diff returned.
Reviewed By: ikostia
Differential Revision: D25196760
fbshipit-source-id: dfa0dca0001b1c38759ec9f4f790cfa3197ae2cf
Summary: remove dependency on old futures from derived data filenodes
Reviewed By: ahornby
Differential Revision: D25218521
fbshipit-source-id: 4d7eaf42c3ba15ea09276a7f3567128d5216e814
Summary: `derived_data:derived_data` had already been almost converted; I've cleaned up some tests so it's possible to completely remove the old futures dependency
Reviewed By: StanislavGlebik
Differential Revision: D25197406
fbshipit-source-id: 064439f42a15f715befc019e5350dda0a2975f2b
Summary:
- convert save_bonsai_changesets to new type futures
- `blobrepo:blobrepo` is free from old futures deps
Reviewed By: StanislavGlebik
Differential Revision: D25197060
fbshipit-source-id: 910bd3f9674094b56e1133d7799cefea56c84123
Summary: Add support for walking fastlog entries for directories so we can scrub and inspect them
Differential Revision: D25187551
fbshipit-source-id: 812f9fd82459ef49dcd781c318fbe5c398daad21
Summary: Add FastlogFile to walker so it can be inspected and scrubbed. Directory and batch components of fastlog covered in following diffs.
Differential Revision: D25187549
fbshipit-source-id: c046cbf2561cdbbc9563497119e34d1b09d0ebef
Summary:
On Python 3 we only support utf8 files. Python 3 has a way of
representing non-utf8 strings in utf8 format by utilizing surrogateescape, but
these strings cause issues in other parts of the codebase that don't expect
surrogateescaped strings (like the Rust code). Since we simply do not support
these paths, let's filter them out as soon as we get them from Watchman.
Reviewed By: quark-zju
Differential Revision: D25134079
fbshipit-source-id: 8be6893a6114b816097422f4469ac317fa3795d1
Summary: As HG<->Mononoke traffic will go through Proxygen, and testing showed that Proxygen forces us to use 'Upgrade: websocket' and complete the WebSocket handshake, adjust accordingly.
Reviewed By: ahornby
Differential Revision: D25197395
fbshipit-source-id: ca00ac31be92817c6f1a99d7d492469b17b46286
Summary:
This diff adds bundle combining to hg sync job. See motivation for doing that in D25168877 (cebde43d3f).
The main struct here is BookmarkLogEntryBatch. It holds the vector of BookmarkUpdateLogEntry items that were combined (used mostly for logging) and the parameters necessary for producing the bundle, notably the from/to changeset ids and the bookmark. The struct has a try_append method that decides whether it's possible to combine bundles or not.
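The batching logic can be sketched roughly like this; the type and field names below are illustrative stand-ins, not the real hg sync job code.

```rust
// Illustrative sketch of combining adjacent bookmark log entries into a
// batch; names are hypothetical.
#[derive(Clone, Debug, PartialEq)]
struct LogEntry {
    bookmark: String,
    from_cs: Option<u64>,
    to_cs: Option<u64>,
}

struct Batch {
    entries: Vec<LogEntry>,
    from_cs: Option<u64>,
    to_cs: Option<u64>,
}

impl Batch {
    fn new(entry: LogEntry) -> Self {
        let (from_cs, to_cs) = (entry.from_cs, entry.to_cs);
        Batch { entries: vec![entry], from_cs, to_cs }
    }

    /// Append `entry` if it continues this batch (same bookmark, and its
    /// from_cs matches our current to_cs); otherwise hand the entry back
    /// so the caller can start a new batch.
    fn try_append(&mut self, entry: LogEntry) -> Result<(), LogEntry> {
        let same_bookmark =
            self.entries.last().map(|e| &e.bookmark) == Some(&entry.bookmark);
        if same_bookmark && entry.from_cs == self.to_cs {
            self.to_cs = entry.to_cs;
            self.entries.push(entry);
            Ok(())
        } else {
            Err(entry)
        }
    }
}
```

One combined batch then produces a single bundle spanning from the batch's first from_cs to its final to_cs.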
Reviewed By: mitrandir77
Differential Revision: D25186110
fbshipit-source-id: 77ce91915f460db73d8a996efe415954eeea2476
Summary:
Use of `RootSkeletonManifest::batch_derive` is wrong, as it doesn't take into
account manifests that are already derived. Just use the normal derivation
method.
Reviewed By: StanislavGlebik
Differential Revision: D25218197
fbshipit-source-id: 60f2faad610d507a9659dc37ba72516431a9c036
Summary: The verify-manifests command can take a while. Add logging to show progress is being made.
Reviewed By: StanislavGlebik
Differential Revision: D25099918
fbshipit-source-id: 2b659f1a6921ed111441b2808ce29998bd2285dd
Summary:
If files are only modified, and not added or deleted, the skeleton manifest
doesn't change. Avoid duplicate writes by skipping the write if the skeleton
manifest we derive has the same ID as one of its parents.
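A minimal sketch of the skip-if-unchanged check, with plain integers standing in for manifest IDs (the real code compares skeleton manifest blob IDs):

```rust
// Sketch: skip the blobstore write when the freshly derived skeleton
// manifest is identical to one of its parents' manifests.
fn should_write(derived_id: u64, parent_ids: &[u64]) -> bool {
    // If files were only modified (none added or deleted), the skeleton
    // manifest ID matches a parent's and re-writing the blob is redundant.
    !parent_ids.contains(&derived_id)
}
```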
Reviewed By: ahornby
Differential Revision: D25099917
fbshipit-source-id: 62900406711becea88491e706a09c6032109c964
Summary:
Improve backfill_derived_data logging to make it clear when a chunk is started
and when already-derived changesets have been skipped.
Fix the counters to take into account skipped changesets, and make the time
remaining human-readable.
Reviewed By: StanislavGlebik
Differential Revision: D25099919
fbshipit-source-id: b39d4b99ef0e79d253de2aa0c91f93276b561680
Summary:
Add skeleton manifests to `derived-data verify-manifests` and add a
`skeleton-manifests` subcommand that allows listing and traversing of skeleton
manifests.
Since skeleton manifests don't contain the content id or file type, they are
valid simply if they contain the same set of files as the manifest we are
comparing against.
To allow testing of these items while backfilling is still in progress, add an
`--if-derived` option to these commands that works even if derivation is
disabled, provided the manifest has already been derived.
Reviewed By: ikostia
Differential Revision: D25064785
fbshipit-source-id: a6f923bfc53262a5b2118917f8fdd3e99407e036
Summary:
Avoid clients needing to make a secondary look-up for the commit title or message, which
we already have, by returning it in the blame response if the clients request it.
To do this, we add `format_options` to the blame request parameters, which allows customization
of the returned response. The default value if not specified corresponds to the old
behaviour for backwards compatibility.
Reviewed By: mitrandir77
Differential Revision: D25057231
fbshipit-source-id: 84faad7b2db1f684beb94d9fa799c8cef1a89ce8
Summary: Use these in the next diff from the walker so we can start to scrub fastlog
Reviewed By: aslpavel
Differential Revision: D25187548
fbshipit-source-id: 4188a0c9bdb51a898817b6f5801fe473b6712385
Summary: The path isn't needed to load the UnodeManifest, it's only needed as route information for compression etc
Reviewed By: aslpavel
Differential Revision: D25187547
fbshipit-source-id: b8199a81c5dae4caceed5d455fa6a9bbc3f037ac
Summary: The path is not needed to load the unode, only for route tracking
Reviewed By: aslpavel
Differential Revision: D25187553
fbshipit-source-id: baf6df3b7f56f5ca5a0ba086097acba4ecd82e75
Summary: Change from the walk_blame boolean to a bitflags struct so it can also represent fastlog. Using the bitflags crate so as not to grow the node size.
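The boolean-to-bitflags change can be illustrated with a hand-rolled single-byte flags type; the real code uses the `bitflags` crate's macro, and the names here are hypothetical.

```rust
// Hand-rolled stand-in for a bitflags type: two flags packed into one
// byte, so adding fastlog does not grow the node size.
#[derive(Clone, Copy, PartialEq)]
struct StepFlags(u8);

impl StepFlags {
    const WALK_BLAME: StepFlags = StepFlags(1 << 0);
    const WALK_FASTLOG: StepFlags = StepFlags(1 << 1);

    // True if every bit of `other` is set in `self`.
    fn contains(self, other: StepFlags) -> bool {
        self.0 & other.0 == other.0
    }
}

impl std::ops::BitOr for StepFlags {
    type Output = StepFlags;
    fn bitor(self, rhs: StepFlags) -> StepFlags {
        StepFlags(self.0 | rhs.0)
    }
}
```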
Reviewed By: aslpavel
Differential Revision: D25185267
fbshipit-source-id: b45574cd6c4d048aec321357ae54e2ec404f84fc
Summary: A few things were not in lexical order; clean them up. This makes it easier to read the next diff without this mixed in.
Reviewed By: aslpavel
Differential Revision: D25188965
fbshipit-source-id: 38b7ea6f0e2c0583e6701550d6db16d85ba1a455
Summary:
Previously, the commitcloud reverse filler would only run for a single
repo. This is fine for some of the high throughput repos (e.g. fbsource).
However, for small repos it doesn't make sense to use a server per repo.
Most of the support was already completed when the forward filler moved to
multirepo jobs, but the bundle prefix could only be set once per job, which
prevented the reverse filler from using it. For each Mononoke repo the bundles
have a different prefix (the repo id is in the prefix), which is why we now take
a list of bundle ids together with a list of repos.
NOTE: If I were implementing this functionality from scratch, I'd probably teach
the filler to just read the right bundle prefixes from Mononoke configs, but
this way of doing it is faster to pull off and is backwards compatible with the
current tw jobs, so the rollout will be easy.
Reviewed By: ikostia
Differential Revision: D25188994
fbshipit-source-id: 1c25e52404d5ec45d721f047c8d71b002cdf0994