Commit Graph

129 Commits

Author SHA1 Message Date
Kostia Balytskyi
7ee657f124 mononoke: asyncify signatures of two fns in getbundle_response
Summary:
## Wider goal
See D20068839

## This diff
Asyncifying only the signatures allows us to work on the function bodies independently, without touching the callsites later in the diff.
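
To illustrate the shape of this change, here is a minimal sketch (the function, types, and the `futures_old` crate alias are hypothetical stand-ins, not the actual getbundle_response code): only the signature becomes `async`, while the body still drives the existing old-style future through the compat layer.

```
use anyhow::Error;
use futures::compat::Future01CompatExt; // futures 0.3 "compat" feature
use futures_old::Future as OldFuture;   // assumed alias for futures 0.1

// Stand-in for an existing old-style (futures 0.1) function body.
fn fetch_data_old(id: u64) -> impl OldFuture<Item = String, Error = Error> {
    futures_old::future::ok(format!("data-{}", id))
}

// Only the signature is asyncified: callers can `.await` it already, while the
// body keeps delegating to the old future until it is rewritten later.
async fn fetch_data(id: u64) -> Result<String, Error> {
    fetch_data_old(id).compat().await
}
```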

Reviewed By: StanislavGlebik

Differential Revision: D20097804

fbshipit-source-id: f1391a055947c7802f719bc99b9eae71a4ac39cd
2020-02-27 05:01:52 -08:00
Kostia Balytskyi
bd90a843a7 mononoke: asyncify diff_with_parents in getbundle_response
Summary:
## Wider goal
See D20068839

## This diff
Let's modernize this particular function.

Reviewed By: StanislavGlebik

Differential Revision: D20097800

fbshipit-source-id: a919b5ad1b544a7b784668ca265e24c375100fa3
2020-02-27 05:01:51 -08:00
Kostia Balytskyi
90b03f5a0d mononoke: call old-style Future OldFuture in getbundle_response
Summary:
## Wider goal
See D20068839

## This diff
This file contains a mix of old- and new-style futures. It even has futures
whose items are themselves futures. To be able to convert one of these levels
without converting the other, we need to deal with the confusion.

Let's have old things have `Old` in the name.
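
As a rough sketch of the convention (the alias names below are assumptions, not the exact imports used in the file), the old-style traits get an `Old` prefix so both generations of futures can coexist in one module:

```
// Assumed aliasing convention: futures 0.1 traits carry an `Old` prefix so
// that new-style futures can be introduced alongside them without ambiguity.
use futures_old::Future as OldFuture;
use futures::future::Future;

fn old_style() -> impl OldFuture<Item = u32, Error = ()> {
    futures_old::future::ok(42)
}

fn new_style() -> impl Future<Output = u32> {
    futures::future::ready(42)
}
```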

Reviewed By: StanislavGlebik

Differential Revision: D20097803

fbshipit-source-id: fedb3669ef34a8328ec389a30ff2c512ab363818
2020-02-27 05:01:51 -08:00
Kostia Balytskyi
4f2993c765 mononoke: move bundle generation bits from hg_sync_job into getbundle_response
Summary:
## Wider goal
We want the flexibility to return hydrated responses for `getbundle` wireproto
requests for draft commits. This means that the responses will contain not
only the commit data (as they do now), but also trees and files.
For context, when an "unhydrated" response is returned for the `getbundle`
request for a draft commit, we expect one of two things to happen later
in the e2e scenario:
- either `hg` client would immediately make another wireproto request
  (`gettreepack`, `getpackv1`) within the same client `hg` command execution
- or a subsequent `hg update` call will cause another wireproto request

In any case, another request is needed before the pulled commit can be used.
This request can hit a different server; sometimes it can even be served by
Mercurial instead of Mononoke. Specifically, it can hit Mercurial instead of
Mononoke if the `fallback` path markers are configured incorrectly. In that case we have a
problem, as Mercurial is incapable of serving `gettreepack` or `getpackv1` for
infinitepush commits.

One way to deal with this is to always have correct path markers, but that is
prone to human error. Another way is to guarantee that Mononoke returns
everything in the original `getbundle` request. We don't want to do this for
public commits, as `pull`s of public commits typically fetch thousands of those
commits and never care about tree or file data for all but one of them. Draft
commits are different however, as they are usually exactly what the client
intends to use, so hydrating those is fine. Still, we want this behavior to
be gated behind a config flag.

## This diff
A lot of the needed code is already implemented in the hg-sync job, bundle
generating variant. So prior to implementing the actual behavior described
above, let's move the relevant bits to `getbundle_response`. Later we can clean
them up a bit (asyncify) and use them to implement the needed behavior.

Reviewed By: StanislavGlebik

Differential Revision: D20068839

fbshipit-source-id: 0ab63d57b2d167401b7ee8864fe7760f5f65f8ec
2020-02-27 05:01:51 -08:00
Kostia Balytskyi
aac7bff59d mononoke: pull config schema changes from configerator
Summary:
This is the moral equivalent of D20115877 in fbcode. See that diff for
motivation.

Reviewed By: StanislavGlebik

Differential Revision: D20118575

fbshipit-source-id: 8f77f572068e611003b1344be3434f2d04ec56ca
2020-02-27 05:01:50 -08:00
Stanislau Hlebik
d5d3061168 mononoke: distinguish derived data waits with derived data generation
Summary:
Previously it was hard to tell whether the process was actually responsible
for generating derived data or was just waiting for it to be generated.

Let's make this distinction clearer.

Reviewed By: johansglock

Differential Revision: D20138284

fbshipit-source-id: 52ae12679db2f61869f048baf2a603b456710a71
2020-02-27 03:15:39 -08:00
David Tolnay
de96589260 autocargo: Strip line comments
Summary:
These comments end up being a source of churn as we roll out D20125635, and anyway are not particularly meaningful after the transformations performed by autocargo. For example:

```
bytes = { version = "0.4", features = ["serde"] } # todo: remove
```

^ This doesn't mean the generated Cargo.toml intends to drop its bytes dependency altogether, just that it will be migrated to a different version that is present in the third-party/rust/Cargo.toml but not visible in the generated Cargo.toml.

Reviewed By: jsgf

Differential Revision: D20128612

fbshipit-source-id: a9e7b29ddc4b26bc47a626dd73bdaa4771ee7b18
2020-02-26 16:31:52 -08:00
Stanislau Hlebik
98f6d5d1a8 mononoke: fix walker filenode walks
Summary:
Since Mononoke's filenodes were migrated to the derived data framework, the
hg_linknode_populated alarm has been firing. The main reason is that there's
now a delay between an hg changeset being generated and its filenodes being generated.

This diff fixes it by making sure the walker won't visit hg changesets without
generated filenodes (note that the walker will visit these changesets later, after their filenodes have been
generated).

Reviewed By: ahornby

Differential Revision: D20067615

fbshipit-source-id: 285e9a3d8c89b85441491c889a8458c86ca0e3a8
2020-02-26 15:21:53 -08:00
Aida Getoeva
585899f419 mononoke/scs: use last change in file history
Summary:
There is no need to generate an expensive file history stream if only one node is requested.

I refactored the code that generates the stream of history commits so that it first yields the nodes and only then prefetches their parents. That helps solve the latency problem for a history request for a single commit.

I removed the BFS queue and added two state variables: ready nodes and already-processed nodes:
* The already-processed nodes are those that were returned as part of the history stream on the last iteration; they can now be used to construct the next BFS layer: prefetch fastlog batches, fill the commit graph, and take parents in BFS order to form the next batch of nodes.
* The ready nodes are used on the first iteration: there are no processed nodes yet, but there are some nodes ready to be returned.

I believe that by removing the queue I simplified the code and logic a little bit.

Reviewed By: StanislavGlebik

Differential Revision: D19818100

fbshipit-source-id: c30d28c623464ba3552a00e8542552f7655076ef
2020-02-26 08:09:12 -08:00
Alex Hornby
04e011525a mononoke: walker: test validate scuba logging for non-public commits
Summary: add test for scuba logging for non-public commits

Reviewed By: StanislavGlebik

Differential Revision: D20093721

fbshipit-source-id: eb0792bcae8ea27c11709181390efb0ac0c817ee
2020-02-26 06:16:29 -08:00
Stanislau Hlebik
7076fac933 mononoke: add exponential backoff
Summary:
During our tests we noticed that we can send too many blobstore read requests to the
mapping. Let's add exponential backoff to prevent that.
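
As a rough illustration of the idea (the helper name, retry cap, and base delay below are assumptions, not the actual derived data code), each failed read sleeps for an exponentially growing interval before retrying:

```
use std::time::Duration;
use tokio::time::delay_for; // tokio 0.2 timer API

// Stand-in for the actual blobstore/mapping read.
async fn try_read(key: &str) -> Result<Vec<u8>, anyhow::Error> {
    Ok(key.as_bytes().to_vec())
}

// Retry with exponential backoff: 100ms, 200ms, 400ms, ... capped by a fixed
// number of attempts (all values here are illustrative).
async fn read_with_backoff(key: &str) -> Result<Vec<u8>, anyhow::Error> {
    let max_attempts: u32 = 5;
    for attempt in 0..max_attempts {
        match try_read(key).await {
            Ok(value) => return Ok(value),
            Err(_) if attempt + 1 < max_attempts => {
                delay_for(Duration::from_millis(100u64 << attempt)).await;
            }
            Err(err) => return Err(err),
        }
    }
    unreachable!("the final attempt always returns")
}
```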

Reviewed By: ikostia

Differential Revision: D20116043

fbshipit-source-id: 6fecbda4c36a5065b77ba9df561c6d9c6a969089
2020-02-26 05:05:33 -08:00
Thomas Orozco
4ca1333b8a mononoke/hooks: use a smaller test group for faster tests
Reviewed By: ikostia

Differential Revision: D20115985

fbshipit-source-id: 4f69fc84eee352bcc689918527c6d460fcf672ba
2020-02-26 04:44:39 -08:00
Thomas Orozco
c14a88bbef mononoke: convert places that talk to Memcache to Bytes 0.5
Summary:
Memcache doesn't care (because both old and new Bytes implement `Into<IOBuf>`), but
Thrift is on Bytes 0.5. We have our caching ext layer in the middle, which wants
Bytes 0.4. This means we end up copying things we don't need to copy.

Let's update to do fewer copies. I didn't update apiserver, because a) it's going
away, and b) those bytes go into Actix, and Actix isn't upgrading to Bytes 0.5
any time soon! Besides, it doesn't actually need updating beyond tests anyway.

Reviewed By: dtolnay

Differential Revision: D20006062

fbshipit-source-id: 42766363a0ff8494f18349bcc822b5238e1ec0cd
2020-02-26 03:30:47 -08:00
Jeff Zhang
33140b117c Push compat down one level in eden/mononoke/cmds/admin/main.rs
Summary: Moving `compat` one level down to the call sites of subcommand functions.

Reviewed By: farnz

Differential Revision: D20085398

fbshipit-source-id: 461e147d2ae6e560b3a75fb92fa6b23f9f54d13e
2020-02-25 10:22:03 -08:00
Stanislau Hlebik
19e1e94984 mononoke: add lease renewing to derived data
Summary:
During S196197 a lease expired and we were rederiving the same derived data over and over again for a big commit.
This diff adds lease renewal, which should help with this problem.
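
A minimal sketch of the idea, assuming hypothetical helper names (this is not the actual derived data framework code): while the derivation future runs, keep renewing the lease on a fixed interval so it cannot expire mid-derivation.

```
use std::time::Duration;
use futures::future::{select, Either};
use tokio::time::delay_for; // tokio 0.2 timer API

// Race the derivation against a renewal timer; whenever the timer wins,
// renew the lease and keep waiting for the derivation to finish.
async fn derive_with_lease_renewal<D, T, R, RFut>(mut derivation: D, renew_lease: R) -> T
where
    D: std::future::Future<Output = T> + Unpin,
    R: Fn() -> RFut,
    RFut: std::future::Future<Output = ()>,
{
    loop {
        let renew_timer = delay_for(Duration::from_secs(10)); // illustrative interval
        match select(derivation, renew_timer).await {
            // Derivation finished before the renewal interval elapsed.
            Either::Left((result, _timer)) => return result,
            // Renewal interval elapsed first: renew the lease and keep waiting.
            Either::Right((_, unfinished_derivation)) => {
                renew_lease().await;
                derivation = unfinished_derivation;
            }
        }
    }
}
```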

Reviewed By: HarveyHunt

Differential Revision: D20093323

fbshipit-source-id: d139abf6659722f47ea40d9b2f279daa03623ff4
2020-02-25 09:22:46 -08:00
Stanislau Hlebik
4bd758289b mononoke: async/await derive_may_panic() function
Reviewed By: HarveyHunt

Differential Revision: D20092945

fbshipit-source-id: 70ec1a8e5b9c99f3853a13bebe3657ece5ff9e9e
2020-02-25 09:22:46 -08:00
Stanislau Hlebik
3418318883 mononoke: do not generate hgchangesets unnecessarily in FilenodesOnlyPublicMapping
Summary:
fetch_root_filenode is called by FilenodesOnlyPublicMapping to figure out whether
filenodes were already derived. Previously it first derived the hg changeset and
then looked up the root manifest in the db. However, if the hg changeset is not
derived then the filenodes couldn't possibly be derived either, so we can return an
answer faster.

This is useful in the next diff, where I change the walker.

Reviewed By: ahornby

Differential Revision: D20068819

fbshipit-source-id: 17f066c437e0b1f7bbeb8f6e247eadc9afe94f90
2020-02-25 08:07:07 -08:00
Thomas Orozco
f8fcbc9723 mononoke/blobstore_healer: wait for MyRouter properly
Summary:
The blobstore_healer has never waited for MyRouter before querying for slave
status, but it ended up implicitly working because creating a blobstore
required a SQL factory, and creating a SQL factory would result in waiting for
MyRouter.

Now that creating a blobstore doesn't require a SQL factory unless you're going
to actually use it (which the healer isn't: it doesn't use a multiplexblob, it
uses the underlying blobstores instead), we no longer wait properly for
MyRouter, so if MyRouter isn't there when we boot, we crash.

This fixes that.

Reviewed By: ahornby

Differential Revision: D20094829

fbshipit-source-id: 82b7e8d893a01049d1f434ee8dff36a877a0d2f4
2020-02-25 07:03:28 -08:00
Alex Hornby
693e8dee0a mononoke: walker: add support for loading by GitSha1 Aliases
Summary:
Add support for loading by GitSha1 Aliases.  This relies on the change to
Alias::GitSha1 earlier in the stack.

Reviewed By: ikostia

Differential Revision: D19903577

fbshipit-source-id: 73cdccc04af61fa524c3683851d8af9ae90d31dc
2020-02-25 03:36:06 -08:00
Thomas Orozco
2a12e2beb6 mononoke/derived_data: log when we start deriving
Summary:
This should give us a slightly better idea of what hosts are doing, which
should help troubleshoot duplicate derivation.

Also, let's make the logging a bit less confusing.

Reviewed By: StanislavGlebik

Differential Revision: D20070619

fbshipit-source-id: 91cc264b7043b8fc8c21c007832fba328ef0017d
2020-02-24 12:03:41 -08:00
Thomas Orozco
b3bebee0b4 mononoke: include DB config in multiplexed blobstore configuration
Summary:
This updates our multiplexed blobstore configuration to carry its own DB
config. The upshot of this change is that we can move the blobstore sync queue
(a fairly unruly table) to its own DB.

Another nice side effect of this is that it cleans up a bunch of other code by
finally decoupling the blobstore config from the DB config. For example,
places that need to instantiate a blobstore can now do so even without a DB
config (such as wireproto logging).

Obviously, this cannot land until we update the configs to include this. I'll
do so in Configerator prior to landing the diff.

Reviewed By: HarveyHunt

Differential Revision: D19973905

fbshipit-source-id: 79e4ff92cdb989aab4532decd3fe4fd6c55e2bb2
2020-02-24 11:54:45 -08:00
Thomas Orozco
b7185f0f13 mononoke/metaconfig: tidy up blobstore creation
Summary:
I'd like to refactor our multiplex blob to store its DB using a different
shard. In preparation of doing so, let's:

- Extract parsing DB configs from storage configs
- Tidy up some related places that take a reference when they actually need
  ownership (which is sort of wasteful).

Reviewed By: StanislavGlebik

Differential Revision: D19973906

fbshipit-source-id: 82baceb892e9e24e5fd0349ffa5503884c177a7a
2020-02-24 11:54:44 -08:00
Xavier Deguillard
401d44916b add lfs_protocol to autocargo
Summary: Now that it no longer depends on mononoke_types, we can build it with cargo.

Reviewed By: krallin

Differential Revision: D20070438

fbshipit-source-id: 1b2f9cc3640c58fd38e962c7c738d08cbb22a71d
2020-02-24 11:12:45 -08:00
Xavier Deguillard
934b64397b convert to bytes 0.5
Summary:
Bytes 0.5 is a dependency of newer tokio; it's also newer, and thus better.
Staying on 0.4 means that copies between Bytes 0.4 and 0.5 need to be made,
which will be especially bad in the LFS code since 10+MB buffers would have to be
copied...

One main API change is in the configparser. The code used to take Into<Bytes>
for the keys; I switched it to AsRef<[u8]>.

For hg_memcache_client, an extra copy is performed to build a Delta. Since this
code uses an old tokio and is being replaced right now, the effort of
switching to a new tokio and new bytes was not deemed worth it; the copy will
do for now.
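
For illustration, a sketch of what the boundary looks like (the `bytes_old` crate alias and the function names are assumptions, not the actual code): crossing the 0.4/0.5 boundary goes through a plain byte slice, i.e. a copy, and keys are now accepted as `AsRef<[u8]>`.

```
// Crossing the Bytes 0.4 / 0.5 boundary requires a copy through a plain slice.
fn old_to_new(old: bytes_old::Bytes) -> bytes::Bytes {
    bytes::Bytes::copy_from_slice(&old[..])
}

// Sketch of the configparser-style signature change: accept any byte-slice-like
// key instead of requiring Into<Bytes> (names are hypothetical).
fn get_config_value(key: impl AsRef<[u8]>) -> Option<Vec<u8>> {
    let _key: &[u8] = key.as_ref();
    None // stand-in: no actual lookup here
}
```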

Reviewed By: dtolnay

Differential Revision: D20043137

fbshipit-source-id: 395bfc3749a3b1bdfea652262019ac6a086e61e0
2020-02-24 10:28:46 -08:00
Lukas Piatkowski
4aea99df4e mononoke/blobstore: remove rocksdb blobstore and replace its usages with sqliteblob
Summary:
This is the second (and last) step in removing RocksDB as a blobstore.
See the task for more details.

Context for OSS:
> The issue with rocksblob (and to some extent sqlite) is that unless we
> introduce a blobstore tier/thrift api (which is something I'm hoping to avoid
> for xdb blobstore) we'd have to combine all the mononoke functions like hg,
> scs, LFS etc into one binary for it to have access to rocksdb, which would be
> quite a big difference to how we deploy internally

(Note: this ignores all push blocking failures!)

Reviewed By: farnz

Differential Revision: D20001261

fbshipit-source-id: c4b2b2a393b918d17680ad483aa1d77356f1d07c
2020-02-24 05:23:07 -08:00
Lukas Piatkowski
278ac5e1f9 mononoke: make mononoke_types OSS-buildable
Summary: (Note: this ignores all push blocking failures!)

Reviewed By: farnz

Differential Revision: D19948740

fbshipit-source-id: 9d0cfc4ccbcb3c08bb969f23229ed3096470fa86
2020-02-24 05:23:07 -08:00
Alex Hornby
87112798b7 mononoke: walker: add option to start from non-bookmarks
Summary:
Add an option to start the roots of the walk from any graph node, rather than just bookmarks.

This is useful when reproducing issues loading a key, validating a changeset/filenode, etc., or for getting consistent results on things like sizing, where specifying the root by bookmark would result in changes between runs.

Reviewed By: farnz

Differential Revision: D19886707

fbshipit-source-id: b7361cbec894aba08b6f702ff0731b9b201224d3
2020-02-24 03:49:23 -08:00
Mark Thomas
70ffdc7293 add export
Summary:
Add `scsc export`.  Analogous to `svn export`, this exports the contents of a
directory within a commit to files on disk, without a local checkout.

Reviewed By: mitrandir77

Differential Revision: D20006307

fbshipit-source-id: 5870712172cd8a030e85dbff75273c28ab0c332c
2020-02-24 03:00:22 -08:00
Thomas Orozco
5b07c8285e mononoke: test-mononoke-admin.t: fixup replication lag match
Summary: It's not always 0! (sometimes it's 1)

Reviewed By: farnz

Differential Revision: D20065610

fbshipit-source-id: b546befbf824713811fd7c011bbf4c246d3c696d
2020-02-24 02:57:18 -08:00
Stanislau Hlebik
ec76ba93c6 mononoke: convert some fastlog functions to async/await
Reviewed By: farnz

Differential Revision: D20059447

fbshipit-source-id: fa4a70b238ebc85ad5e589b06ee8a1ca6c0ea509
2020-02-24 00:53:56 -08:00
Xavier Deguillard
33020829b1 lfs_protocol: remove dependency on mononoke_types
Summary:
Mercurial wishes to use this crate, but pulling in mononoke_types brings way
too many dependencies. Since the only reason mononoke_types is brought in is
for the Sha256 type, let's just hardcode it to [u8; 32].
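
A minimal sketch of the substitution (the alias name is an assumption; the crate may instead define a small wrapper type):

```
// A plain 32-byte array stands in for the previous mononoke_types Sha256 type,
// dropping the heavy dependency while keeping the hash size identical.
pub type Sha256 = [u8; 32];
```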

Reviewed By: krallin

Differential Revision: D20003596

fbshipit-source-id: 53434143c61cd1a1275027200e1149040d30beae
2020-02-21 12:26:19 -08:00
Harvey Hunt
0ecac65ac4 mononoke: Remove restrict_users hook
Summary:
This hook was implemented to prevent unauthorized users from moving a
bookmark. However, it doesn't work, and the functionality is now implemented by
`is_allowed_user` in the pushrebase pipeline.

Remove the unused hook.

Reviewed By: johansglock

Differential Revision: D20030479

fbshipit-source-id: bcbc9508eebe77cffbc7936382ba4d345b76f74f
2020-02-21 09:46:38 -08:00
Thomas Orozco
8086dc29c7 mononoke: add a limit_commit_message_length hook
Summary:
We're working towards sharding Bonsais. Let's make them easier to cache by also
not allowing arbitrarily large commit messages.
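
As a rough sketch of the check itself (the actual hook plugs into Mononoke's hook framework; the limit and names below are assumptions for illustration):

```
// Illustrative limit; the real value would come from hook configuration.
const MAX_COMMIT_MESSAGE_LENGTH: usize = 16 * 1024;

enum HookOutcome {
    Accepted,
    Rejected(String),
}

fn check_commit_message_length(message: &str) -> HookOutcome {
    if message.len() > MAX_COMMIT_MESSAGE_LENGTH {
        HookOutcome::Rejected(format!(
            "commit message is too long: {} bytes (limit is {})",
            message.len(),
            MAX_COMMIT_MESSAGE_LENGTH,
        ))
    } else {
        HookOutcome::Accepted
    }
}
```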

Reviewed By: StanislavGlebik

Differential Revision: D20002994

fbshipit-source-id: b2319ac9d5709e968121d4299396e03a90df4a06
2020-02-21 07:18:15 -08:00
Mateusz Kwapich
42bfba7c99 add git mappings import option
Summary: Let's import the info about corresponding git commits on blobimport whenever possible.

Reviewed By: ikostia

Differential Revision: D19877929

fbshipit-source-id: ba03d5de8ae8a9bd80084a8e858cd05e8f621193
2020-02-21 05:41:46 -08:00
Mateusz Kwapich
6111067524 add git mapping pushrebase hook
Summary:
Let's populate the bonsai<->git mapping on pushrebase of commits that are
coming from git. By making this a pushrebase hook we can have accurate mappings
available as soon as the bonsai commit is available.

Corresponding configerator change: D19951607

Reviewed By: krallin

Differential Revision: D19949472

fbshipit-source-id: b957cbcdd0f14450ceb090539814952db9872576
2020-02-21 05:41:45 -08:00
Mateusz Kwapich
38f7a24364 add a way to update git mappings inside SQL transaction
Summary: During the pushrebase hook phase we'll need to reuse the existing transaction.

Reviewed By: krallin

Differential Revision: D19949473

fbshipit-source-id: 7c53308724bec6df6d40933405f703c86be15a7a
2020-02-21 05:41:45 -08:00
Mateusz Kwapich
c2be00c45e add git mappings to blobrepo
Summary:
By having it in blobrepo we can ensure that all parts of mononoke can access it
easily

Reviewed By: StanislavGlebik

Differential Revision: D19949474

fbshipit-source-id: ac3831d61177c4ef0ad7db248f2a0cc5edb933b1
2020-02-21 05:41:44 -08:00
Mateusz Kwapich
5a53415bcb add git mapping crate
Summary:
We need a table to store git<->bonsai mappings and a crate that abstracts operations on it:
 * it's going to be useful immediately to store git hashes for configerator
   commits and to do the hash translations via SCS.
 * it's going to be useful further down the line for real git support.

NOTE: I'm explicitly using the name `SHA1` all over the place to minimize
confusion if we ever want to support other hashing schemes for git commits.
(The Git community is working on SHA256 support nowadays.)

The corresponding AOSC diff: D19835975

Reviewed By: krallin

Differential Revision: D19835974

fbshipit-source-id: 113640f4db9681b060892a8cedd93092799ab732
2020-02-21 05:41:44 -08:00
Mark Thomas
a9490441b2 add blame --parent
Summary:
Add the `--parent` flag to `scsc blame`.  This runs blame against the first
parent of the specified commit, rather than the commit itself.  This allows
users to copy and paste commit hashes from previous blame output in order to
skip the commit, rather than having to look up the parent commit hash
themselves.

Reviewed By: StanislavGlebik

Differential Revision: D20006308

fbshipit-source-id: d1c25aad8f236fe27e467e29f6a96c957b6c8c8f
2020-02-20 13:03:54 -08:00
Thomas Orozco
4a29fe400d mononoke/blobstore_healer: migrate replication lag polling to async / await
Summary:
The former implementation here was a little difficult to work with, and
resulted in a whole lot of cloning of closures, etc.

This updates the implementation to be a little simpler on the whole (async /
await is nicer for while loops, since you can use, well, loops)

It does slightly change a few parts of the behavior:

- The old implementation would wait for the replication lag duration. That's
  not really correct. As we've observed several times this week, replication
  lag usually drops quickly once it starts dropping. I.e. if the replication
  lag is 10 seconds, it doesn't take 10 seconds to catch up. This gets more
  important with big lag durations.
- I updated replication lag to be u64 instead of usize. usize doesn't really
  make sense for something that has absolutely nothing to do with our pointer
  size.

I also split out the logic for calculating how long we wait into a part that
cares about whether we are busy and one that cares about replication lag
(whereas the older one kinda mixed the two together). We wait for our own
throttling (i.e. sleep for a sec if we didn't do anything) before we wait for
replication lag, so the new implementation should have the desired behavior:

- If we don't have much work to do, we sleep 1 second between each iteration
  (but if we do have work, we don't).
- No matter what, if we have replication lag, we wait until that passes before
  doing any work.

The old one did that too, but it mixed the two calculations together, and was
(at least in my opinion) kinda hard to reason about as a result.
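
A minimal sketch of that shape, assuming hypothetical helper names and intervals (this is not the actual healer code): self-throttling happens first, and replication lag is checked in its own loop before the next batch of work.

```
use std::time::Duration;
use tokio::time::delay_for; // tokio 0.2 timer API

// Stand-ins for the real healing iteration and the lag query.
async fn heal_one_batch() -> Result<usize, anyhow::Error> { Ok(0) }
async fn fetch_replication_lag_secs() -> Result<u64, anyhow::Error> { Ok(0) }

async fn run_healer_loop() -> Result<(), anyhow::Error> {
    loop {
        let processed = heal_one_batch().await?;

        // Self-throttling: if there was nothing to do, sleep for a second.
        if processed == 0 {
            delay_for(Duration::from_secs(1)).await;
        }

        // Independently: never start new work while the replica is lagging.
        // Poll frequently instead of sleeping for the full reported lag, since
        // lag usually drops much faster than its nominal value.
        while fetch_replication_lag_secs().await? > 0 {
            delay_for(Duration::from_secs(1)).await;
        }
    }
}
```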

Reviewed By: StanislavGlebik

Differential Revision: D19997587

fbshipit-source-id: 1de6a9f9c1ecb56e26c304d32b907103b47b4728
2020-02-20 12:26:51 -08:00
Thomas Orozco
be5d7343ce mononoke/blobstore_healer: check for replication lag _before_ starting work
Summary:
We had crashloops on this (which I'm fixing earlier in this stack), which
resulted in overloading our queue as we repeatedly tried to clear out 100K
entries at a time, rebooted, and tried again.

We can fix the root cause of the crash, but we should also make sure
crashloops don't result in ignoring lag altogether.

Also, while in there, convert some of this code to async / await to make it
easier to work on.

Reviewed By: HarveyHunt

Differential Revision: D19997589

fbshipit-source-id: 20747e5a37758aee68b8af2e95786430de55f7b1
2020-02-20 12:26:51 -08:00
Thomas Orozco
6da3dc939a mononoke/blobstore_sync_queue: delete in smaller batches
Summary:
Our blobstore_sync_queue selects entries with a limit on the number of unique
keys it's going to load. Then, it tries to delete them. However, the number of
entries might be (much) bigger than the number of keys. When we try to delete
them, we time out waiting for MySQL because deleting 100K entries at once isn't
OK.

This results in crashlooping in the healer, where we start, delete 100K
entries, then time out.

This is actually doubly bad, because when we come back up we just go without
checking replication lag first, so if we're crashlooping, we disregard the
damage we're doing in MySQL (I'm fixing this later in this stack).

So, let's be a bit more disciplined, and delete keys 10K at a time, at most.
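
A sketch of the chunking approach, assuming hypothetical helper names (not the actual blobstore_sync_queue code):

```
const DELETE_CHUNK_SIZE: usize = 10_000;

// Stand-in for the SQL statement deleting one bounded chunk of queue entries.
async fn delete_chunk(chunk: &[u64]) -> Result<(), anyhow::Error> {
    let _ = chunk;
    Ok(())
}

// Instead of one statement covering every selected entry (possibly 100K+ rows),
// issue deletes for at most 10K entries at a time.
async fn delete_entries_in_chunks(entry_ids: &[u64]) -> Result<(), anyhow::Error> {
    for chunk in entry_ids.chunks(DELETE_CHUNK_SIZE) {
        delete_chunk(chunk).await?;
    }
    Ok(())
}
```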

Reviewed By: HarveyHunt

Differential Revision: D19997588

fbshipit-source-id: 2262f9ba3f7d3493d0845796ad8f841855510180
2020-02-20 12:26:50 -08:00
Thomas Orozco
ef1ffa31e8 mononoke/sql_ext: log which shard we are waiting for in myrouter
Summary:
MyRouter needs to be told which shards to watch. Since I'm adding a new shard,
it'll be easier for everyone to know that they need to update their MyRouter
configuration if we start logging the shard name we're trying to hit.

Reviewed By: ikostia

Differential Revision: D20001704

fbshipit-source-id: 8a9ff3521bc7e3c9b7ed39c6ae33d0ddc1d467b7
2020-02-20 07:55:04 -08:00
Thomas Orozco
614fa33af2 mononoke: add a limit_path_length hook
Summary:
This adds a file hook to limit the path length we are willing to allow in
commits. This is necessary for now since Mercurial does have a limit on its
end, and we shouldn't allow commits that we cannot sync to Mercurial.

Reviewed By: HarveyHunt

Differential Revision: D19969689

fbshipit-source-id: 1da8a62d54e98b047d381a9d073ac148c9af84b0
2020-02-20 02:49:38 -08:00
Thomas Orozco
58126d90d6 mononoke: log input size
Summary:
This adds some basic logging for input size for Gettreepack and Getpack. This
might make it easier to understand "poison pill" requests that take out the
host before it has a chance to finish the request.

Reviewed By: StanislavGlebik

Differential Revision: D19974661

fbshipit-source-id: deae13428ae2d1857872185de2b6c0a8bcaf3334
2020-02-20 02:24:10 -08:00
Stanislau Hlebik
74a8eb4968 fastlog: convert derive_parents to async/await
Summary:
I'm going to modify it in the next diff, so let's make it async.

Note that we used `spawn_future()` before, which I replaced with tokio::spawn()
here. It's not really clear if we need it at all - I'll experiment with that later.
Removing it would make the code cleaner.

Reviewed By: krallin

Differential Revision: D19973315

fbshipit-source-id: cbbb9a88f4424e6e717caf1face6807ab6c32438
2020-02-19 21:28:21 -08:00
Kostia Balytskyi
02cafa9997 mononoke: fix blake2 error formatting
Summary: The error message is not very valuable if it just prints the constant name.

Reviewed By: StanislavGlebik

Differential Revision: D19978690

fbshipit-source-id: ae2b648f50098b479cb3719fd9b9d4b82bac3d3c
2020-02-19 15:22:06 -08:00
Thomas Orozco
c899ed7249 test-gitimport-octopus: don't expect a specific number of commits to verify
Summary:
bonsai_verify occasionally visits the same commit twice (I found this out by adding
logging and observing the duplicate visits). Let's
allow for this here.

Reviewed By: StanislavGlebik

Differential Revision: D19951390

fbshipit-source-id: 3e470476c6bc43ffd62cf24c3486dfcc7133de6c
2020-02-19 10:16:38 -08:00
Arun Kulshreshtha
9ec04f9639 edenapi_server: move handlers to submodule
Summary: We're about to start adding more handlers to the server. Rather than putting them all in the same file, let's create a submodule for them.

Reviewed By: krallin

Differential Revision: D19957012

fbshipit-source-id: 38192664371f0b0ef5eadb4969739f7cb6e5c54c
2020-02-19 09:59:14 -08:00
Arun Kulshreshtha
44ded80beb edenapi_server: Add request context middleware
Summary: Add a `RequestContext` type that stores per-request state, along with a `Middleware` implementation that injects a `RequestContext` into Gotham's `State`  object for each request. This is essentially a stripped-down version of the `RequestContextMiddleware` used in the LFS server. Given that the RequestContext contains application-specific functionality, this Middleware lives alongside the rest of the EdenAPI server code rather than in the `gotham_ext` crate (where all of the generic Middleware lives).

Reviewed By: krallin

Differential Revision: D19957013

fbshipit-source-id: 6fad2b92aea0b3662403a69e6a6598e4cd26f083
2020-02-19 09:59:14 -08:00