Summary:
Adds a `WireTreeQuery` enum for the query method, with a single variant `ByKeys(WireTreeKeyQuery)` currently available, to request a specific set of keys.
Leaves the API struct alone for now.
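As a rough illustration, the shape described above might look like the following sketch; the internals of `WireTreeKeyQuery` and the helper function are assumptions, not the actual definitions.

```rust
// Hypothetical sketch of a query enum that can grow new variants later.
#[derive(Debug, PartialEq)]
pub struct WireTreeKeyQuery {
    pub keys: Vec<String>, // stand-in for the real Key type
}

#[derive(Debug, PartialEq)]
pub enum WireTreeQuery {
    ByKeys(WireTreeKeyQuery),
    // Future query kinds (e.g. by prefix) can be added as new variants
    // without breaking existing callers that match exhaustively here.
}

pub fn requested_keys(q: &WireTreeQuery) -> &[String] {
    match q {
        WireTreeQuery::ByKeys(k) => &k.keys,
    }
}
```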
Reviewed By: kulshrax
Differential Revision: D23402366
fbshipit-source-id: 19cd8066afd9f14c7e5f718f7583d1e2b9ffac02
Summary:
This diff introduces the MySQL client for Rust to Mononoke as one more backend, alongside raw xdb connections and MyRouter. Mononoke can now use the new MySQL client connections instead of MyRouter.
To run Mononoke with the new backend, pass the `--use-mysql-client` option (conflicts with `--myrouter-port`).
I also added a new target for integration tests, which runs the MySQL tests using the MySQL client.
To run the MySQL tests with raw xdb connections, use `mononoke/tests/integration:integration-mysql-raw-xdb`; to run them with the MySQL client, use `mononoke/tests/integration:integration-mysql`.
Reviewed By: ahornby
Differential Revision: D23213228
fbshipit-source-id: c124ccb15747edb17ed94cdad2c6f7703d3bf1a2
Summary: We use hyphens in other bypass strings. Make this consistent in `test-hooks.t` to avoid confusion.
Reviewed By: mitrandir77
Differential Revision: D23964799
fbshipit-source-id: e300bad091aa6c50f5921507117c1019b9863bd5
Summary:
Let's start actually fixing which commit sync config version is used to remap a
commit: we should use the commit sync config version that was used to remap the
parent, instead of always using the current version. See more details in
https://fb.quip.com/VYqAArwP0nr1
This diff fixes one particular case and also leaves a few TODOs that we need to
address later.
Reviewed By: krallin
Differential Revision: D23953213
fbshipit-source-id: 021da04b0f431767fec5d1c4408287870cb83de1
Summary: Instead of always fetching the current version name to verify the working copy, let's fetch whichever version was actually used to create the commit.
Reviewed By: krallin
Differential Revision: D23936503
fbshipit-source-id: 811e427eb62741401b866970b4a0de0c1753edb3
Summary:
It turned out that validation didn't report an error if the source repo contained
an entry that was supposed to be present in the target repo but was actually missing.
This diff fixes that.
Reviewed By: krallin
Differential Revision: D23949909
fbshipit-source-id: 17813b4ad924470c2e8dcd9d3dc0852c79473c61
Summary:
This diff makes the blobstore healer use MyAdmin to get the replication lag for a DB shard, and removes the "laggable" interface for connections.
The old "laggable" API worked this way: we maintained potential connections to each possible region, then tried to query replica status on all of them. If there were no replica hosts in some region, we ignored it by handling a specific error type.
This is legacy behavior and complicates the logic. The new code uses MyAdmin instead.
Reviewed By: krallin
Differential Revision: D23767442
fbshipit-source-id: 9f85f07bd318ad020d203d2bcd1c8898061f7572
Summary:
Our configerator configs store both the "current" mapping version and all
versions that were used before.
These integration tests were updating just the current version, but weren't
updating the "all versions" section. This is incorrect, but it worked by accident.
It would stop working in the next diffs, so this diff fixes it.
Reviewed By: ikostia
Differential Revision: D23908970
fbshipit-source-id: 10f96bd02987d9195aff4855241efbd9a065a761
Summary:
The remaining test failures are mostly around bundle support, which
I'll fix in a later diff.
Reviewed By: quark-zju
Differential Revision: D23664037
fbshipit-source-id: 2bdde3cb4fcded6e0cf3afdc23269662544821df
Summary:
ConnectionSecurityChecker now supports per-repository ACL checking.
PermissionCheckers are created in the constructor for each repo.
Later, when permissions need to be checked, the checkers are retrieved from a hash map.
Reviewed By: HarveyHunt
Differential Revision: D23678515
fbshipit-source-id: 3d2880fc9df137872ea64a47636f1142d0b36fc1
Summary:
In D23845720 (5de500bb99) I described the changes we need to make in our commit syncer. One
part of that is removing the get_mover() method, as it always uses the current
version of the commit sync map, which is incorrect.
This diff removes it from the commit validator.
Reviewed By: ikostia
Differential Revision: D23864350
fbshipit-source-id: 3f650a32835dda9f82949002d63b52cc36cf04e0
Summary:
CommitSyncer is a struct that we use to remap a commit from one repo to another. It uses a commit sync mapping to figure out which paths need to be changed. The commit sync mapping might change over time, and each mapping has a version associated with it.
At the moment CommitSyncer doesn't work correctly if the commit sync mapping changes. Consider the following DAG:
```
large repo
A' <- remapped with mapping V1
|
O B' <- remapped with mapping V1
| /
...
small repo
A
|
O B
| /
...
```
We have commits A and B from a small repo remapped into a large repo as commits A' and B'. They were remapped with commit sync mapping V1, which, for example, remaps files in "dir/" into "smallrepo/dir".
Now let's say we start to use a new mapping V2, which remaps "dir/" into "otherdir/". After this point every commit will be created with the new mapping. But this is incorrect: if we create a commit on top of B in the small repo that touches "dir/file.txt", it will be remapped into "otherdir/file.txt" in the large repo, even though every other file is still in "smallrepo/dir"!
The fix for this issue is to always use the same mapping the commit's parent used (there are a few tricky cases with merge commits and commits with no parents, but those will be dealt with separately).
This diff is the first step: it threads LiveCommitSyncConfig all the way through to the CommitSyncer object, so that CommitSyncer can always fetch whatever mapping it needs.
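The divergence described above can be sketched with a toy version-dependent mover; the mapping rules mirror the example, while the function name and signature are hypothetical.

```rust
// Minimal sketch of version-dependent path remapping. The real Mover and
// CommitSyncConfig types are far more involved; this only models the
// "dir/" example from the summary.
fn remap(version: &str, small_repo_path: &str) -> String {
    match version {
        // Mapping V1: "dir/" in the small repo lands in "smallrepo/dir/".
        "V1" => small_repo_path.replacen("dir/", "smallrepo/dir/", 1),
        // Mapping V2: "dir/" lands in "otherdir/" instead.
        "V2" => small_repo_path.replacen("dir/", "otherdir/", 1),
        _ => small_repo_path.to_string(),
    }
}
```

Remapping a child of B with V2 while its parent used V1 is exactly the inconsistency: the same small-repo file ends up under two different large-repo prefixes.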
Reviewed By: ikostia
Differential Revision: D23845720
fbshipit-source-id: 555cc31fd4ce09f0a6fa2869bfcee2c7cdfbcc61
Summary:
Introduce separate wire types to allow protocol evolution and client API changes to happen independently.
* Duplicate `*Request`, `*Entry`, `Key`, `Parents`, `RepoPathBuf`, `HgId`, and `revisionstore_types::Metadata` types into the `wire` module. The versions in the `wire` module are required to have proper `serde` annotations, `Serialize` / `Deserialize` implementations, etc. These have been removed from the original structs.
* Introduce infallible conversions from "API types" to "wire types" with the `ToWire` trait, and fallible conversions from "wire types" to "API types" with the `ToApi` trait. API -> wire conversions should never fail in a binary that builds successfully, but wire -> API conversions can fail when the server and client are using different versions of the library. For instance, a newly introduced enum variant used by the client will be deserialized into the catch-all `Unknown` variant on the server, which generally won't have a corresponding representation in the API type.
* Cleanup: remove `*Response` types, which are no longer used anywhere.
* Introduce a `map` method on `Fetch` struct which allows a fallible conversion function to be used to convert a `Fetch<T>` to a `Fetch<U>`. This function is used in the edenapi client implementation to convert from wire types to API types.
* Modify `edenapi_server` to convert from API types to wire types.
* Modify `edenapi_cli` to convert back to wire types before serializing responses to disk.
* Modify `make_req` to use `ToWire` for converting API structs from the `json` module to wire structs.
* Modify `read_res` to use `ToApi` to convert deserialized wire types to API types with the necessary methods for investigating the contents (`.data()`, primarily). It will print an error message to stderr if it encounters a wire type which cannot be converted into the corresponding API type.
* Add some documentation about protocol conventions to the root of the `wire` module.
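A minimal sketch of the `ToWire` / `ToApi` pattern from the first two bullets, assuming simplified trait shapes and a toy enum; the real traits live in the Eden API types crates and cover many more types.

```rust
// Infallible API -> wire conversion.
pub trait ToWire {
    type Wire;
    fn to_wire(self) -> Self::Wire;
}

// Fallible wire -> API conversion (peers may run different versions).
pub trait ToApi {
    type Api;
    fn to_api(self) -> Result<Self::Api, String>;
}

// API-side type: no catch-all variant needed.
pub enum ApiQuery {
    ByKey(String),
}

// Wire-side type: `Unknown` absorbs variants an older peer doesn't know.
pub enum WireQuery {
    ByKey(String),
    Unknown,
}

impl ToWire for ApiQuery {
    type Wire = WireQuery;
    fn to_wire(self) -> WireQuery {
        match self {
            ApiQuery::ByKey(k) => WireQuery::ByKey(k),
        }
    }
}

impl ToApi for WireQuery {
    type Api = ApiQuery;
    fn to_api(self) -> Result<ApiQuery, String> {
        match self {
            WireQuery::ByKey(k) => Ok(ApiQuery::ByKey(k)),
            // An unrecognized wire variant has no API representation.
            WireQuery::Unknown => Err("unknown wire variant".to_string()),
        }
    }
}
```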
Reviewed By: kulshrax
Differential Revision: D23224705
fbshipit-source-id: 88f8addc403f3a8da3cde2aeee765899a826446d
Summary:
At the moment CommitSyncConfig can be set in two ways:
1) Set in the normal mononoke config. That means the config can be updated
only after a service restart. This is the outdated way, and won't be used
in prod.
2) Set in a separate config which can be updated on demand. This is what we are
planning to use in production.
create_commit_syncer_from_matches was used to build a CommitSyncer object via
the normal mononoke config (i.e. the outdated option #1). According to the comments
this was done so that we could read configs from disk instead of configerator, but
that doesn't make sense because you can already read configerator configs
from disk. So create_commit_syncer_from_matches doesn't look particularly
useful, and it also makes further refactoring harder. Let's remove it.
Reviewed By: ikostia
Differential Revision: D23811090
fbshipit-source-id: 114d88d9d9207c831d98dfa1cbb9e8ede5adeb1d
Summary:
When running the repo import tool, it's possible that we need to do additional setup steps before being able to run the tool, which we would otherwise only discover mid-run.
Firstly, if the repo we import into doesn't have a callsign (e.g. FBS, WWW...) but we want to check Phabricator, the tool would hang during that check, because the callsign is required for it. Therefore, we need to inform the user to set a callsign for the repo.
Secondly, in case the repo push-redirects to a larger repo, we generate a bookmark for the commits imported into the large repo. However, we need to ask the Phabricator team to include the large repo's bookmark before we can import the commits, because this bookmark publishes the imported commits on Phabricator.
This diff adds a subcommand that checks these additional steps, so we don't find out about them during the actual import run.
Reviewed By: StanislavGlebik
Differential Revision: D23783462
fbshipit-source-id: 3cdf4035548213d8cee9717fb985c22741a6749b
Summary:
Pull Request resolved: https://github.com/facebookexperimental/eden/pull/60
For the tests that output different data to stdout in OSS vs FB, create helpers that remove the differences.
Reviewed By: farnz
Differential Revision: D23814134
fbshipit-source-id: c6656528021c9a90b98e3c89a9bbe8c5178c6919
Summary:
Add `scsc land-stack` to facilitate testing of stack landing via the source control service.
Use this to test that landing of stacks works.
Reviewed By: aslpavel
Differential Revision: D23813366
fbshipit-source-id: 1f7b682fa5e33a232cb1da5c702a703223658942
Summary:
Update the conversion of `BookmarkMovementError` to `MononokeError` to reflect
that most movement errors are caused by invalid requests.
Reviewed By: aslpavel
Differential Revision: D23814794
fbshipit-source-id: 48503353aaae7b3cd03e5221a8ad014eef2e9414
Summary:
I saw this throw the LFS server into an infinite loop when I tested it. We're
not using this right now, so instead of investing time into root-causing the
issue, let's just take it out.
Reviewed By: StanislavGlebik
Differential Revision: D23782757
fbshipit-source-id: f320fc72c3ff279042c2fe9fcb9c4904e9e1bfdf
Summary:
Pull Request resolved: https://github.com/facebookexperimental/eden/pull/51
This diff extends the capabilities of CargoBuilder in getdeps so that individual manifests can be built even without workspaces. Thanks to that, a build for edenapi/tools can be made and its artifacts can be used in Mononoke integration tests.
Reviewed By: StanislavGlebik
Differential Revision: D23574887
fbshipit-source-id: 8a974a6b5235d36a44fe082aad55cd380d84dd09
Summary:
Pull Request resolved: https://github.com/facebookexperimental/eden/pull/58
This makes test-bookmarks-filler.t pass. Additionally, remove a few tests from the exclusion lists as they started to pass.
Reviewed By: ikostia
Differential Revision: D23757401
fbshipit-source-id: eddcda5fd1806d77d0046b6ced3695df6b3d775d
Summary:
We are running out of space on integration test runs on Linux. To avoid that, this change adds some cleanups:
1. Adding `docker rmi $(docker image ls -aq)` frees up 4 GB.
2. Cleaning up the `eden_scm` build directory frees up 3 GB.
3. Cleaning up the `mononoke` build directory frees up 1 GB.
This diff also includes a fix for run_tests_getdeps.py, where we ran all the "PASSING" tests when the --rerun flag was passed, instead of only the failed ones.
Pull Request resolved: https://github.com/facebookexperimental/eden/pull/57
Reviewed By: krallin
Differential Revision: D23742159
Pulled By: lukaspiatkowski
fbshipit-source-id: 3b5e89ad29c753d585c1a6f01a9a1d6c1e616fbf
Summary: Fixes build and test errors in OSS introduced by D23596262 (deb57a25ed)
Reviewed By: ikostia
Differential Revision: D23757086
fbshipit-source-id: 7973ce36b3589cbe21590bd7e19a9828be72128f
Summary: Since the repo_import tool is automated, we need a way to recover the process when the tool breaks, without restarting the whole process. To do this, I defined a new struct (RecoveryFields) that allows us to keep track of the state. The most important fields are the import_stage (ImportStage), which we need to keep track of the process and to indicate the first stage of recovery, and the cs_ids we use throughout the process. For each importing stage, we save the state after we have finished that part. This way we can also recover from interrupts. To recover a process, we only need to use the `recover-process <file_path>` subcommand, where file_path stores the saved state of the import. For a normal run we use the `import [<args>]` subcommand.
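A rough sketch of stage-based recovery under these assumptions; the stage names and fields below are illustrative, not the tool's actual definitions.

```rust
// Hypothetical recovery state: the stage to resume from, plus the
// changeset ids produced so far. The real struct would be persisted to
// the recovery file after every completed stage.
#[derive(Clone, Copy, PartialEq, PartialOrd, Debug)]
enum ImportStage {
    GitImport,
    RewritePaths,
    DeriveData,
    MoveBookmark,
}

struct RecoveryFields {
    import_stage: ImportStage,
    cs_ids: Vec<String>,
}

// On `recover-process`, run only the stages at or after the saved one;
// a fresh `import` run starts from the first stage.
fn stages_to_run(saved: &RecoveryFields) -> Vec<ImportStage> {
    vec![
        ImportStage::GitImport,
        ImportStage::RewritePaths,
        ImportStage::DeriveData,
        ImportStage::MoveBookmark,
    ]
    .into_iter()
    .filter(|s| *s >= saved.import_stage)
    .collect()
}
```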
Reviewed By: krallin
Differential Revision: D23678367
fbshipit-source-id: c0e0b270ea2ccc499368e54f37550cfa58c03970
Summary:
When sending trees and files we try to avoid sending trees that are
available from the main server. To do so, we currently check to see if the
tree/file is from the local store (i.e. .hg/store instead of $HGCACHE).
In a future diff we'll be moving trees to use the Rust store, which doesn't
expose the difference between shared and local stores. So we need to stop
depending on that logic to test for the local store.
Instead we can test whether the commit is public, and only send the tree/file
if the commit is not public. This is technically a revert of the 2018 diff D7992502 (5e95b0e32e),
which stopped depending on phases because we'd receive public commits from
svn that were not public on the server yet. Since svn is gone, I think it's
safe to go back to that way.
This code was mostly there to help when one client was further ahead than another,
and in some commit cloud edge cases, but 1) we don't do much/any p2p
exchange anymore, and 2) we did some work this year to ensure clients have more
up-to-date remote bookmarks during exchange (as a way of making phases and
discovery more reliable), so hopefully we can rely on phases more now.
Reviewed By: quark-zju
Differential Revision: D23639017
fbshipit-source-id: 34c13aa2b5ef728ea53ffe692081ef443e7e57b8
Summary:
There was a bug: if an entry was skipped, we didn't update the counter.
That means we might skip the same entry over and over again.
Let's fix it.
Reviewed By: ikostia
Differential Revision: D23728790
fbshipit-source-id: f323d14c4deba5736ceb8ada7cb7ee48a69c1272
Summary:
In preparation for moving away from SSH as an intermediate entry point for
Mononoke, let Mononoke work with the newly introduced Metadata. This removes any
assumptions we make about how certain data is presented to us, making the
current "ssh preamble" no longer central.
Metadata is primarily based around identities and provides some
backwards-compatible entry points to make sure we can satisfy downstream
consumers of commits, like hooks and logs.
Similarly, we now do our own reverse DNS resolution instead of relying on what's
been provided by the client. This is done asynchronously, and we don't rely
on the result, so Mononoke can keep functioning in case DNS is offline.
Reviewed By: farnz
Differential Revision: D23596262
fbshipit-source-id: 3a4e97a429b13bae76ae1cdf428de0246e684a27
Summary:
As it says in the title, this adds support for receiving compressed responses
in the revisionstore LFS client. This is controlled by a flag, which I'll
roll out through dynamicconfig.
The hope is that this should greatly improve our throughput to corp, where
our bandwidth is fairly scarce.
Reviewed By: StanislavGlebik
Differential Revision: D23652306
fbshipit-source-id: 53bf86d194657564bc3bd532e1a62208d39666df
Summary:
This adds support for compressing responses in the LFS server, based on what
the client sent in `Accept-Encoding`. The compression changes themselves are fairly
simple; most of the code changes deal with the fact that when we compress,
we don't send a Content-Length (because we don't know how long the content will
be).
Note that this is largely implemented in `StreamBody`, which means the EdenAPI
server gets it for free as well. The reason it lives there is that we
need to avoid setting the Content-Length when compression is going to be used
(`StreamBody` is what takes charge of doing this). This also exposes a
callback for accessing the stream post-compression, which likewise needs to be
exposed in `StreamBody`, since that's where compression happens.
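The Content-Length decision described above can be sketched as follows. This is a simplified model: the header names are standard HTTP, but the choice of supported encoding here is an assumption, not the server's actual list.

```rust
// Toy model of the header logic: if the client accepts an encoding we
// support, compress and omit Content-Length (the compressed size isn't
// known before streaming); otherwise send the body as-is with its length.
fn response_headers(accept_encoding: &str, body_len: usize) -> Vec<(String, String)> {
    let mut headers: Vec<(String, String)> = Vec::new();
    let client_accepts_zstd = accept_encoding
        .split(',')
        .any(|enc| enc.trim() == "zstd");
    if client_accepts_zstd {
        headers.push(("Content-Encoding".into(), "zstd".into()));
        // No Content-Length here: the transfer falls back to chunked
        // encoding, since the compressed size is unknown up front.
    } else {
        headers.push(("Content-Length".into(), body_len.to_string()));
    }
    headers
}
```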
Reviewed By: aslpavel
Differential Revision: D23652334
fbshipit-source-id: 8f462d69139991c3e1d37f392d448904206ec0d2
Summary:
This would let us allow only certain bookmarks to be remapped from a small
repo to a large repo.
Reviewed By: krallin
Differential Revision: D23701341
fbshipit-source-id: cf17a1a21b7594a94c5fb117065f7d9298c8d1af
Summary:
This completely converts mercurial changeset to be an instance of derived data:
- Custom lease logic is removed
- Custom changeset traversal logic is removed
The naming scheme of lease keys has been changed to conform with other derived data types. This might cause a temporary spike of CPU usage during rollout.
Reviewed By: farnz
Differential Revision: D23575777
fbshipit-source-id: 8eb878b2b0a57312c69f865f4c5395d98df7141c
Summary:
Pull Request resolved: https://github.com/facebookexperimental/eden/pull/56
In addition to adding the flag from the commit title, the "test-scs*" tests are now excluded via a pattern rather than listed out. This will help in the future when more tests using scs_server are added, as authors won't have to add them to the exclusion list.
Reviewed By: farnz
Differential Revision: D23647298
fbshipit-source-id: f5c263b9f68c59f4abf9f672c7fe73b63fa74102
Summary:
lookup has special support for requests that start with _gitlookup: it remaps
an hg commit to a git commit and vice versa. Let's support that in Mononoke as well.
Reviewed By: krallin
Differential Revision: D23646778
fbshipit-source-id: fcde58500b5956201e718b0a609fc3fee1bbdd28
Summary:
One test was fixed earlier by switching macOS to use a modern version of bash; the other is fixed here by installing "nmap" and using "ncat" from within it, on both Linux and Mac.
Pull Request resolved: https://github.com/facebookexperimental/eden/pull/50
Reviewed By: krallin
Differential Revision: D23599695
Pulled By: lukaspiatkowski
fbshipit-source-id: e2736cee62e82d1e9da6eaf16ef0f2c65d3d8930
Summary:
Fixes a few issues with Mononoke tests in Python 3.
1. We need to use different APIs to account for the unicode vs bytes difference
for path hash encoding.
2. We need to set the language environment for tests that create utf8 file
paths.
3. We need the redaction message and marker to be bytes. Oddly this test still
fails with jq CLI errors, but it makes it past the original error.
Reviewed By: quark-zju
Differential Revision: D23582976
fbshipit-source-id: 44959903aedc5dc9c492ec09a17b9c8e3bdf9457
Summary:
The Mac integration test workflow already installs a modern curl that fixes https://github.com/curl/curl/issues/4801, but it does so after "hg" is built, so "hg" uses the system curl libraries, which fail when used with a certificate not present in the keychain.
Pull Request resolved: https://github.com/facebookexperimental/eden/pull/53
Reviewed By: krallin
Differential Revision: D23597285
Pulled By: lukaspiatkowski
fbshipit-source-id: a7b8b6ae55ce338bfb9946a852cbb6b929e73203
Summary:
There are blobs that fail to scrub and terminate the process early for a variety of reasons; when this is running as a background task, it'd be nice to get the remaining keys scrubbed, so that you don't have a large number of keys to fix up later.
Instead of simply outputting to stdout, write keys to one of three files in the format accepted on stdin:
1. Success: you can use `sort` and `comm -3` to remove these keys from the input data, thus ensuring that you can continue scrubbing.
2. Missing: you can look at these keys to determine which blobs are genuinely lost from all blobstores, and fix them up.
3. Error: these will need running through scrub again to determine what's broken.
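The three-way routing above might be sketched like this; the outcome variants and output file names are hypothetical.

```rust
// Toy model of routing scrub results: each key lands in one of three
// output files depending on its outcome, instead of everything going
// to stdout.
enum ScrubOutcome {
    Success,          // scrubbed fine; can be removed from the input set
    MissingFromAll,   // absent from every blobstore; needs manual fixup
    Error(String),    // transient or unknown failure; re-run scrub later
}

fn output_file(outcome: &ScrubOutcome) -> &'static str {
    match outcome {
        ScrubOutcome::Success => "scrub-success.txt",
        ScrubOutcome::MissingFromAll => "scrub-missing.txt",
        ScrubOutcome::Error(_) => "scrub-error.txt",
    }
}
```

The success file can then be fed through `sort` and `comm -3` against the sorted input, as the summary suggests, to resume with only the unprocessed keys.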
Reviewed By: krallin
Differential Revision: D23574855
fbshipit-source-id: a613e93a38dc7c3465550963c3b1c757b7371a3b
Summary:
test-blobimport.t creates a few files whose names conflict on a case-insensitive file system, so make them differ by changing the number of underscores in one of the files.
test-pushrebase-block-casefolding.t directly tests a feature of case-sensitive file systems, so it can't really be tested on macOS.
Pull Request resolved: https://github.com/facebookexperimental/eden/pull/49
Reviewed By: farnz
Differential Revision: D23573165
Pulled By: lukaspiatkowski
fbshipit-source-id: fc16092d307005b6f0c8764c1ce80c81912c603b
Summary:
See D23538897 for context. This adds a killswitch so we can roll out client
certs gradually through dynamicconfig.
Reviewed By: StanislavGlebik
Differential Revision: D23563905
fbshipit-source-id: 52141365d89c3892ad749800db36af08b79c3d0c
Summary:
Like it says in the title, this updates remotefilelog to present client
certificates when connecting to LFS (this was historically the case in the
previous LFS extension). This has a few upsides:
- It lets us understand who is connecting, which makes debugging easier;
- It lets us enforce ACLs;
- It lets us apply different rate limits to different use cases.
Config-wise, these certs were historically set up for Ovrsource, and the auth
mechanism will ignore them if not found, so this should be safe. That said, I'd
like to add a killswitch for this nonetheless. I'll reach out to Durham to see if I
can use dynamic config for that.
Also, while I was in there, I cleaned up a few functions that were taking
ownership of things but didn't need to.
Reviewed By: DurhamG
Differential Revision: D23538897
fbshipit-source-id: 5658e7ae9f74d385fb134b88d40add0531b6fd10
Summary:
This test fails on sandcastle because the last two lines are not showing up.
I have a hunch that the last two lines just weren't flushed, and this diff
attempts to fix that.
Reviewed By: krallin
Differential Revision: D23570321
fbshipit-source-id: fd7a3315582c313a05e9f46b404e811384bd2a50
Summary:
Hooks have recently been made public. Remove the tests that were blocked by missing hooks from the list of excluded tests, and fix them up.
Pull Request resolved: https://github.com/facebookexperimental/eden/pull/48
Reviewed By: farnz
Differential Revision: D23564883
Pulled By: lukaspiatkowski
fbshipit-source-id: 101dd093eb11003b8a4b4aa4c5ce242d9a9b9462
Summary:
The "meaningfulparents" concept is coupled with rev numbers.
Remove it. This changes the default templates to not show parents, and the
`{parents}` template to show parents.
Reviewed By: DurhamG
Differential Revision: D23408970
fbshipit-source-id: f1a8060122ee6655d9f64147b35a321af839266e
Summary:
Those are new tests that use functionality not compatible yet with OSS.
Pull Request resolved: https://github.com/facebookexperimental/eden/pull/47
Reviewed By: chadaustin
Differential Revision: D23538921
Pulled By: lukaspiatkowski
fbshipit-source-id: c512a1b2359f9ff772d0e66d2e6a66f91e00f95c
Summary:
A few things here:
- The heads must be bytes.
- The arguments to wireproto must be strings (we used to encode / decode them,
but we shouldn't).
- The bookmark must be a string (otherwise it gets serialized as `"b\"foo\""`
and then it deserializes to that instead of `foo`).
Reviewed By: StanislavGlebik
Differential Revision: D23499846
fbshipit-source-id: c8a657f24c161080c2d829eb214d17bc1c3d13ef