Commit Graph

62772 Commits

Author SHA1 Message Date
Xavier Deguillard
c2236c3bc2 nfs: plumbing for running an RPC server
Summary:
The NFS protocol comprises several different RPC "programs": the mount, nlm
and nfs programs. Since all three of these need to independently register with
rpcbind, let's have some common scaffolding to read from and write to the
socket and to simplify the implementation of these programs.

This code was written by wez.

Reviewed By: chadaustin

Differential Revision: D25986691

fbshipit-source-id: 15c5fdc68323fd964ed79aa06392a83bf964ab4a
2021-02-01 09:28:40 -08:00
Xavier Deguillard
1b0d345774 nfs: add a portmap client
Summary:
The portmap protocol allows for service discovery and registration against the
per-host rpcbind daemon. An NFS server will need to register against it to be
mountable.

The portmap_util binary is here for testing purposes and will not be used in
EdenFS.

This code was written by wez.

Reviewed By: kmancini

Differential Revision: D25986694

fbshipit-source-id: 1eee7238fdf70c8c4937e685da91ad08d46befe4
2021-02-01 09:28:40 -08:00
Xavier Deguillard
7c7f50ef9d nfs: add basic RPC infrastructure
Summary:
Built on top of XDR, the Remote Procedure Call (RPC) protocol allows for
structured client/server communication. Since NFS is built on top of this
protocol, this diff adds some basic infrastructure and the types defined in the
RPC RFC.

A basic client is also added.
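For context, the RPC call message defined in the RPC RFC (RFC 5531) is a sequence of big-endian 32-bit XDR fields. A minimal sketch of encoding one with null authentication, in Python purely for illustration (the actual infrastructure here is C++):

```python
import struct

RPC_VERSION = 2
CALL = 0          # msg_type for a call message
AUTH_NONE = 0     # null authentication flavor

def encode_rpc_call(xid, prog, vers, proc):
    """Encode an RPC call header per RFC 5531: all fields are
    big-endian unsigned 32-bit integers (XDR's basic unit)."""
    header = struct.pack(">6I", xid, CALL, RPC_VERSION, prog, vers, proc)
    # Opaque auth: credential and verifier, both AUTH_NONE with empty bodies.
    auth = struct.pack(">2I", AUTH_NONE, 0) * 2
    return header + auth

# Program 100000 is the portmapper; procedure 0 is the NULL procedure.
msg = encode_rpc_call(xid=1, prog=100000, vers=2, proc=0)
```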

This code was written by wez.

Reviewed By: chadaustin

Differential Revision: D25986693

fbshipit-source-id: a5feffcb22607bcd2c7fa2cede1b70dd8aa48caf
2021-02-01 09:28:40 -08:00
svcscm
05e323f2a5 Updating submodules
Summary:
GitHub commits:

649ded0472

Reviewed By: wittgenst

fbshipit-source-id: 1f463d031daecb5556e745c136ab822105a440c3
2021-02-01 09:18:45 -08:00
Stanislau Hlebik
956270fa33 mononoke: fix warm bookmark cache bug when no bookmark history is found
Summary:
It's possible that a bookmark has no history in its bookmark update log. It
shouldn't happen normally, but it might happen in a few cases:
1) if bookmarks were imported before the bookmark update log was added to
mononoke (unusual, but it happens);
2) if the bookmark update log was cleaned for some reason.

So if a bookmark wasn't warm at start, single_bookmark_updater would never warm
it unless the bookmark got a new entry in the bookmark update log. This diff
fixes that.

Reviewed By: krallin

Differential Revision: D26149435

fbshipit-source-id: 5bba8764050349adf106c0e68488981cf21055c4
2021-02-01 08:32:45 -08:00
Thomas Orozco
90e4bbca0a mononoke/server: add a trivial control API and use it to fix some tests
Summary:
We have some tests that are a bit racy because they write bookmarks from
one process then look at them from another process, but that can fail because
we have a cache on bookmarks that holds them for 2 seconds.

This is normally not a huge issue because we don't access said bookmarks, but
now that, as of my earlier diff, we run a warm bookmarks cache, it *is* a
problem. This fixes that. We can expand it later to do things like reloading
tunables, but for now this satisfies one basic use case.

Reviewed By: ahornby

Differential Revision: D26149371

fbshipit-source-id: 11c7f64b1ae45f6a0de142be25ab367cb25df567
2021-02-01 07:53:17 -08:00
Thomas Orozco
e1dca89dca mononoke/mononoke_api: make WBC configurable + take fewer parameters
Summary:
Right now, if you enable Mononoke API, you always get a WBC, for all derived
data kinds, and with a delay. This isn't great for a few reasons:

- Some places don't really care about bookmarks being warm to begin with. This
  is the case in the bookmarks filler (note that right now, it still does
  polling to satisfy the WBC API, it's just not "warm").
- Some places don't want a delay, or don't want all kinds. This is the case for
  Mononoke Server (which doesn't use Mononoke API right now, but that's what
  I'm working towards), or EdenAPI, which uses a WBC sort-of-de-facto but
  doesn't really care (but likely will in the future, and will want to follow
  Mononoke Server's behavior).

As of this diff, we can now configure this when initializing `Mononoke`. I also
split out all those args into a `MononokeEnvironment` because the args list
was getting out of hand. One thing I did not do is make a way to instantiate
`MononokeEnvironment` from args (but we should do it at some point in the
future).

Reviewed By: StanislavGlebik

Differential Revision: D26100706

fbshipit-source-id: 1daa6335f3ce2b297929a84788bc5b4d9ad6432f
2021-02-01 07:53:17 -08:00
Thomas Orozco
d907878221 mononoke/repo_client: bring back mod tests
Summary:
This test module accidentally got lost when I added a `mod tests { ... }` in
the containing module. This brings it back and modernizes the tests where
possible. The push redirection test has way too much boilerplate to be
manageable, so for now I removed it. I'll see if I can bring it back after some
refactoring I'm doing.

I'll try to see if there's a way we can try to lint / warn against inline
modules shadowing other files.

Reviewed By: ahornby

Differential Revision: D26124354

fbshipit-source-id: 7b24c4fe635bf8197142ab9ee087631ed49f10be
2021-02-01 07:53:17 -08:00
Thomas Orozco
05d54fcb46 mononoke/mononoke_api: split hg parts into mononoke_api_hg
Summary:
I'd like to be able to use mononoke_api from repo_client, but the hg parts
depend on repo_client, so that's a circular dependency.

This splits out the hg parts of Mononoke API so that places that don't want
them don't have to import them. This is similar to what we did with blobrepo.

Reviewed By: StanislavGlebik

Differential Revision: D26099495

fbshipit-source-id: 73a9c7b5dc95feceb35b5eabccf697e9aa0a27de
2021-02-01 07:53:16 -08:00
Thomas Orozco
b5d8d5697d mononoke/repo_listener: default to allowlist_checker if no ACL
Summary:
Right now, if we have no Hipster ACL in a repo listener, we default to denying
all access.

This is kind of annoying when using Mononoke locally but behind a trusted HTTP
proxy, because that means you cannot access the repo at all (if you're not
behind a proxy, then your TLS identities are used instead and everything is
fine).

If we trust a given proxy to impersonate literally anyone, we probably trust
them to access the repo to begin with (since the set of people with access is
not empty and therefore there is always at least someone they could impersonate
that has access), so this is what this diff does.

Reviewed By: johansglock

Differential Revision: D26073274

fbshipit-source-id: 0ef06cb6283d7f69072b712d3cb5a8383a493998
2021-02-01 07:53:16 -08:00
Thomas Orozco
012fd73d79 mononoke/repo_listener: ignore errors in poll_shutdown
Summary:
If the socket is already shutdown for writes, then not being able to shut it
down is fine.

Reviewed By: ahornby

Differential Revision: D26052499

fbshipit-source-id: 2da6c34f657317419df00a0b7ba615e0eb351e0d
2021-02-01 07:53:16 -08:00
Thomas Orozco
ae8f56a799 mononoke/server: convert HTTP stack to Hyper
Summary:
Like it says in the title, this updates our HTTP stack to Hyper. There are a
few reasons to do this here:

- We can remove all the manual parsing & generation of HTTP, and instead let
  Hyper (i.e. an HTTP framework) handle HTTP for us.
- We can use / reuse more pre-existing abstractions for things where we have to
  implement HTTP handling (rather than just try to upgrade to a websocket
  ASAP), like the net speed test.
- And finally, my main motivation for this is that this will make it much
  easier to load EdenAPI into Mononoke Server as a service. At this point, we
  have a `Request` to pass into a function that returns a `Response`, which is
  exactly what EdenAPI is, so hooking it in will be trivial.

There's a lot going on in this diff, so here is an overview. Overall, our
connection handling process is:

- Accept connection
- Perform TLS handshake
- Check if the remote is trusted
- Check ALPN:
  - If hgcli, then read the preamble then run wireproto over the socket.
  - If http, hand off the socket to Hyper. Hyper will call into our
    MononokeHttpService (which is a Hyper Service) once per request.
    - If websocket upgrade, accept the upgrade, then run wireproto over the
      resulting I/O (i.e. the upgraded connection). An upgrade takes over the
      connection, so implicitly this means there won't be further requests.
    - If health check or net speed test, handle it. There might be multiple
      requests here via connection reuse.
    - This is where hooking EdenAPI will happen. We can instantiate Gotham
      here: it also is a Hyper Service, so we just need to call it.

While in there, I've modelled those various states using structs instead of
passing a handful of arguments here or there.
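The branching described above can be sketched as a plain dispatch function (route and return names are hypothetical; the real code is a Rust Hyper Service):

```python
def dispatch(alpn, request=None):
    """Route a freshly accepted, TLS-authenticated connection the way the
    summary describes: hgcli goes straight to wireproto, everything else is
    handed to the HTTP framework one request at a time."""
    if alpn == "hgcli":
        return "wireproto"                      # read preamble, speak wireproto
    if alpn == "http":
        request = request or {}
        # The HTTP service is invoked once per request; connection reuse
        # means it may be called several times.
        if request.get("upgrade") == "websocket":
            return "wireproto-over-websocket"   # upgrade takes over the connection
        if request.get("path") in ("/health_check", "/netspeedtest"):
            return "builtin-handler"
        return "edenapi"                        # where an EdenAPI service would hook in
    raise ValueError("unknown ALPN protocol: %s" % alpn)
```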

Reviewed By: johansglock

Differential Revision: D26018641

fbshipit-source-id: dd757d72fe0f17f7c98c1948a6aa34d0c4d95cbf
2021-02-01 07:53:15 -08:00
Thomas Orozco
2f47e9263e mononoke: allow pushes in globalrev repos to ancestors of globalrev bookmark
Summary:
Like it says in the title, this updates our implementation of Globalrevs to
be a little more relaxed, and allows you to create and move bookmarks as long as
they are ancestors of the "main" Globalrevs bookmark (but NOT to pushrebase to
them later, because we only want to allow ONE globalrevs-publishing bookmark
per repo).
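The relaxed rule amounts to an ancestry check against the globalrevs-publishing bookmark. A minimal sketch over a parent map (representation and names hypothetical, not the actual Mononoke code):

```python
def ancestors(parents, node):
    """All ancestors of `node` (inclusive) in a commit graph given as a
    child -> list-of-parents map."""
    seen, stack = set(), [node]
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        stack.extend(parents.get(n, []))
    return seen

def may_move_bookmark(parents, globalrev_head, target):
    # Creating/moving a bookmark is allowed only onto ancestors of the
    # single globalrevs-publishing bookmark; pushrebasing to the moved
    # bookmark stays forbidden.
    return target in ancestors(parents, globalrev_head)

# A -> B -> C (globalrevs bookmark), with D a sibling of C.
parents = {"B": ["A"], "C": ["B"], "D": ["B"]}
```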

While in there, I also deduplicated how we instantiate pushrebase hooks a
little bit. If anything, this could be better in the pushrebase crate, but
that'd be a circular dependency between pushrebase & bookmarks movement.
Eventually, the callsites should probably be using bookmarks movement anyway,
so leaving pushrebase as the low-level crate and bookmarks movement as the high
level one seems reasonable.

Reviewed By: StanislavGlebik

Differential Revision: D26020274

fbshipit-source-id: 5ff6c1a852800b491a16d16f632462ce9453c89a
2021-02-01 05:30:57 -08:00
Stanislau Hlebik
da87622777 mononoke: make bg session class tunable per-repo
Reviewed By: ahornby

Differential Revision: D26164309

fbshipit-source-id: bf489bbc75fdd9bcaeb07eb9d9f27249577a64df
2021-02-01 04:34:22 -08:00
Kostia Balytskyi
43d406a808 make get_scuba_sample_builder use observability_context
Summary:
Prior to this diff, only the mononoke server initialized
`MononokeScubaSampleBuilder` in a way that used the observability context, and
therefore respected verbosity settings.

Let's make the generic sample-initializing function use this config too.

Reviewed By: ahornby

Differential Revision: D26156986

fbshipit-source-id: 632bda279e7f3905367b82db5b36f81264156de3
2021-02-01 03:57:06 -08:00
Kostia Balytskyi
ae344fe043 tunables: make per-repo getters take &str instead of &String
Summary: This is more flexible: `&str` accepts string literals and slices as well as borrowed `String`s.

Reviewed By: StanislavGlebik

Differential Revision: D26168559

fbshipit-source-id: 5946b8b06b3a577f1a8398a228467925f748acf7
2021-02-01 02:29:09 -08:00
Kostia Balytskyi
7d23e203a0 tunables: add support for per-repo strings and ints
Summary: Just as we have global strings/ints, let's have per-repo ones.
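The shape of the lookup, sketched in Python (the real implementation is Rust; names and the fall-back-to-global behavior here are assumptions for illustration):

```python
class Tunables:
    """Global values plus per-repo overrides, as the summary describes
    for strings and ints."""
    def __init__(self):
        self.globals = {}
        self.by_repo = {}   # name -> {repo -> value}

    def set_global(self, name, value):
        self.globals[name] = value

    def set_by_repo(self, name, repo, value):
        self.by_repo.setdefault(name, {})[repo] = value

    def get(self, name, repo=None):
        # A per-repo value wins; otherwise fall back to the global one.
        per_repo = self.by_repo.get(name, {})
        if repo is not None and repo in per_repo:
            return per_repo[repo]
        return self.globals.get(name)

t = Tunables()
t.set_global("max_commits", 100)
t.set_by_repo("max_commits", "fbsource", 500)
```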

Reviewed By: StanislavGlebik

Differential Revision: D26168541

fbshipit-source-id: f31cb4d556231d8f13f7d7dd521086497d52288b
2021-02-01 02:29:08 -08:00
Kostia Balytskyi
f0f9bc10ba tunables: fix ByRepoBool to allow more than 1 tunable
Summary:
Please see the added test. Without this diff, such a test does not even
compile, as `new_values_by_repo` is moved out by
`self.#names.swap(Arc::new(new_values_by_repo));` after processing the first
tunable (line 202).

Reviewed By: StanislavGlebik

Differential Revision: D26168371

fbshipit-source-id: 3cd9d77b72554eb97927662bc631611fa91eaecb
2021-01-31 23:31:28 -08:00
svcscm
1b71b6af3d Updating submodules
Summary:
GitHub commits:

f6c7a03315
4c70ed1c40
d13fba361b
e537b6b8d5
2c64e3f12c
43dbb06f25

Reviewed By: wittgenst

fbshipit-source-id: 7d4651e3bd55d613ca6e98f724c944729c9d9d29
2021-01-31 23:31:28 -08:00
svcscm
fe40bd3cae Updating submodules
Summary:
GitHub commits:

beb247efdf

Reviewed By: wittgenst

fbshipit-source-id: c77644617f52b35e7c313a8e1f679f40d1a62a87
2021-01-31 13:20:13 -08:00
svcscm
93d5f0eaa4 Updating submodules
Summary:
GitHub commits:

7c9f69f9c7
e4de812b8b
7921e9706f

Reviewed By: wittgenst

fbshipit-source-id: 152b9f798f1eea6206e5498808c1d7bab587ed18
2021-01-30 13:19:48 -08:00
svcscm
11db30db2a Updating submodules
Summary:
GitHub commits:

01db58986a

Reviewed By: wittgenst

fbshipit-source-id: 24c36a995e2cdbb6253604e66eadc79fed9f89e6
2021-01-30 13:19:47 -08:00
svcscm
2b445d36dd Updating submodules
Summary:
GitHub commits:

2ac24ae7e4

Reviewed By: wittgenst

fbshipit-source-id: 23ce7266be6b003d43e71629dc0c81c7fe7144b8
2021-01-29 22:36:29 -08:00
Chad Austin
ae16da6f5a create subvolumes on disk backing repo
Summary:
If your disk1 is an external HFS-formatted disk, then
eden_apfs_mount_helper will fail to create apfs subvolumes on
it. Instead, use the disk backing the mount.

Reviewed By: fanzeyi

Differential Revision: D26096296

fbshipit-source-id: baa45181afb6610a095c864eb3183e5af76ec4e0
2021-01-29 20:43:23 -08:00
svcscm
34d5f849e2 Updating submodules
Summary:
GitHub commits:

19a842075f

Reviewed By: wittgenst

fbshipit-source-id: 3dad1c440f7788591b0829e830ee517f822c41c6
2021-01-29 20:36:10 -08:00
Jun Wu
59f4d938b4 revset: remove branchpoint()
Summary:
It is already broken with segmented changelog (it assumes 0..len(repo) are
valid revs). It is super slow and cannot be optimized efficiently. The _only_
non-zero-exit-code usage in the past month is like:

  hg log -r 'reverse(children(ancestors(remote/master) and branchpoint()) and draft() and age("<4d"))'

which takes 40 to 100s and can be rewritten using more efficient queries like `parents(roots(draft()))`.
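On a small DAG the suggested replacement is easy to check: `roots(draft())` are the draft commits with no draft parent, and their parents are the public commits that draft stacks branch off from. A sketch (the parent-map representation is illustrative, not the revset engine):

```python
def roots_of_draft(parents, draft):
    """Draft commits none of whose parents are draft: roots(draft())."""
    return {c for c in draft
            if not any(p in draft for p in parents.get(c, []))}

def parents_of_roots(parents, draft):
    # The cheap equivalent suggested above: parents(roots(draft())), i.e.
    # the commits that local stacks were started from.
    out = set()
    for c in roots_of_draft(parents, draft):
        out.update(parents.get(c, []))
    return out

# public: P1 -> P2; two draft stacks off P2: D1 -> D2, and E1.
parents = {"P2": ["P1"], "D1": ["P2"], "D2": ["D1"], "E1": ["P2"]}
draft = {"D1", "D2", "E1"}
```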

Reviewed By: singhsrb

Differential Revision: D26158011

fbshipit-source-id: 7957710f27af8a83920021a228e4fa00439b6f3d
2021-01-29 20:21:29 -08:00
svcscm
a302af8879 Updating submodules
Summary:
GitHub commits:

cd33bbb0bb

Reviewed By: wittgenst

fbshipit-source-id: 0d2f78a4752124c4d019eccc060d989d5c57186f
2021-01-29 18:21:53 -08:00
Jun Wu
1753f5403d lib: upgrade most crates to tokio 1.0
Summary:
Migrate most crates to tokio 1.0. The exception is edenfs-client, which has
some dependencies on `//common/rust/shed/fbthrift_ext` and seems non-trivial
to upgrade. It creates a separate tokio runtime so it shouldn't be affected
feature-wise.

Reviewed By: singhsrb

Differential Revision: D26152862

fbshipit-source-id: c84c43b1b1423eabe3543bccde34cc489b7805be
2021-01-29 18:18:17 -08:00
svcscm
152373391f Updating submodules
Summary:
GitHub commits:

e53c2672e0
a7020e896b
dbb9f08cac
29fe6a5b25

Reviewed By: wittgenst

fbshipit-source-id: 31d047f8fb716df4c62ed6af496c85b96c75357b
2021-01-29 17:40:31 -08:00
Stefan Filip
549ba8cac3 segmented_changelog: setup caching at the application layer
Summary: Configure segmented changelog to use caching when caching is requested.

Reviewed By: krallin

Differential Revision: D26121496

fbshipit-source-id: d0711a5939b5178b3a93d081019cfab47996da40
2021-01-29 16:41:42 -08:00
Stefan Filip
5bf8012412 segmented_changelog: add caching to IdMap
Summary:
Add caching to the IdMap to speed things up. Values for a given key never
change, and the IdMap is versioned per repository, so we use the IdMap version
when generating the cache keys (the "site version" is set to the IdMap
version).

Reviewed By: krallin

Differential Revision: D26121498

fbshipit-source-id: 7e82e40b818d1132a7e86f4cd7365dd38056348e
2021-01-29 16:41:42 -08:00
Stefan Filip
be35ea0a6c dag: add Abomonation implementation for Id
Summary: This implementation is used for all things that are cached in Mononoke.

Reviewed By: quark-zju

Differential Revision: D26121497

fbshipit-source-id: a0088b539f3c3656921ab9a7a25c6442996aed18
2021-01-29 16:41:42 -08:00
svcscm
aee4dab76a Updating submodules
Summary:
GitHub commits:

0c4e92a54d
056afa94d7
a74e767dc7

Reviewed By: wittgenst

fbshipit-source-id: bc226f71fc5db9e8614193171e29857aa23a6e48
2021-01-29 16:41:42 -08:00
svcscm
259efc72c8 Updating submodules
Summary:
GitHub commits:

7bd8fe32bb
339e7e6dc5
1d77c70962

Reviewed By: wittgenst

fbshipit-source-id: 74589c5c8d2e32c3cccafd18256105d56ff685d6
2021-01-29 14:37:32 -08:00
Katie Mancini
757ac0028e rate limit logging
Summary:
Logging all these throttling notifications is not necessary. There can
sometimes be big batches of fetches (hundreds of thousands). Let's reduce this
by a factor of 1000.
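Reducing "by a factor of 1000" is plain sampling: keep one notification in every thousand. A counter-based sketch (the real code may sample differently):

```python
class SampledLogger:
    """Log only every `rate`-th message, dropping the rest."""
    def __init__(self, rate=1000):
        self.rate = rate
        self.count = 0
        self.logged = []

    def log(self, message):
        self.count += 1
        if self.count % self.rate == 1:   # keep the 1st, 1001st, 2001st, ...
            self.logged.append(message)

logger = SampledLogger(rate=1000)
for i in range(2500):
    logger.log("throttled fetch #%d" % i)
```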

Note we would also like to log what process triggered these fetches, what
endpoint they used, etc. That will help us identify the workflows causing
them, so we can address them or skip aux data fetching in those code paths.
But this requires some fiddling with ObjectFetchContext and the logging code,
so it's going to take a bit longer :(

Reviewed By: genevievehelsel

Differential Revision: D25505654

fbshipit-source-id: e7c40164db86fadf4baf0afd0c52879e0cb2568b
2021-01-29 14:34:23 -08:00
svcscm
019056eeb8 Updating submodules
Summary:
GitHub commits:

0af2ca9e8e
a62df07564
e5311a8ea4

Reviewed By: wittgenst

fbshipit-source-id: de9c4cf2208e72dcb9be8305d936a7cbaf26fc42
2021-01-29 14:34:23 -08:00
svcscm
b3998ea8f9 Updating submodules
Summary:
GitHub commits:

2e2c039b54
0690fe6bf2

Reviewed By: wittgenst

fbshipit-source-id: 75189c3da51211273e7a7d07f7c743eaba32136c
2021-01-29 12:40:04 -08:00
Jun Wu
358466ee0f transaction: record transaction name to metalog
Summary:
For `repo.transaction("tr-name")`, this records `Transaction: tr-name` in the
metalog commit description.

It can be helpful to narrow things down for commands with multiple
transactions.

In the future we might want to attach more data to the logging (e.g. the
commit cloud local and remote states at the time of syncing). However, I didn't
do it now since metalog is designed to hold repository data, not large amounts
of logging data. With better logging infra we might want to move `config` out
of metalog, associating it with metalog root ids.

Reviewed By: DurhamG

Differential Revision: D25984805

fbshipit-source-id: 59c074272cff555c6ff11dd755f7e3ce9a292eb6
2021-01-29 12:36:08 -08:00
svcscm
41faf9dcad Updating submodules
Summary:
GitHub commits:

8bf8c86c25
755786ed42
32ac55363f
7c23fc75cc

Reviewed By: wittgenst

fbshipit-source-id: b8364766a851d15932da7e2bea8dfbd866603909
2021-01-29 12:36:08 -08:00
svcscm
f13f040805 Updating submodules
Summary:
GitHub commits:

88aff37df5
f3b7cf269d

Reviewed By: wittgenst

fbshipit-source-id: 0f6a840c406b8562f91253bee79509346b5927cb
2021-01-29 10:53:58 -08:00
Aida Getoeva
db3dbff5d3 mononoke/skiplists: spawn skiplist index fetching
Summary:
On setup, SCS initializes the repos concurrently, warming up derived data for
each repo, warming the bookmark cache and fetching skiplists.

Fetching skiplists is an expensive operation that includes two steps: an async
get of a large blob from the blobstore, followed by sync deserialization of the
blob. While running on the same task that warms the bookmark cache, it takes
all the CPU, so the other futures have to wait and can't process results
returned by MySQL queries or connect to the DB. Thus SCS eventually fails to
acquire a new connection or to perform a query in a reasonable time, and
terminates.

Spawning the skiplist fetch in a separate task unblocks the thread where the
warming is running.
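The same pattern exists in Python's asyncio: a CPU-bound step run inline starves the event loop, so hand it to a worker thread (the real fix spawns a tokio task; this sketch is only an analogy):

```python
import asyncio
import time

def deserialize_skiplist(blob):
    # Stand-in for the expensive synchronous deserialization step.
    time.sleep(0.05)
    return len(blob)

async def init_repo(blob):
    # Running the CPU-bound step in a worker thread keeps the event loop
    # free for the other warmers (bookmark cache, DB connections, ...).
    return await asyncio.to_thread(deserialize_skiplist, blob)

async def main():
    # The heartbeat below can only tick if the loop isn't blocked.
    ticks = []
    async def heartbeat():
        for _ in range(3):
            ticks.append(1)
            await asyncio.sleep(0.01)
    size, _ = await asyncio.gather(init_repo(b"x" * 1000), heartbeat())
    return size, len(ticks)

result = asyncio.run(main())
```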

This was first noticed in TW tasks because, after the MySQL rollout, some of
the SCS tasks started to take an hour to start. To debug and localize the
issue, we added debug output to see what exactly blocks the repo
initialization; it turned out that once skiplist fetching started, the rest was
blocked.

Reviewed By: StanislavGlebik

Differential Revision: D26128171

fbshipit-source-id: fe9e1882af898950cf16d8e939dc6bc6be56510e
2021-01-29 10:40:41 -08:00
Jun Wu
b6b68257be setup: fix make local build
Summary:
`async_common.py` needs to be ignored for the Python 2 build since it cannot
be parsed by Python 2.

Reviewed By: xavierd

Differential Revision: D26142682

fbshipit-source-id: f921e7a35781b3336ba745e886380afc26d5ca36
2021-01-29 10:26:07 -08:00
Kostia Balytskyi
5bc36ed39c blobstore_healer: use buffered_weight_limited to avoid OOMs
Summary:
`blobstore_healer` works by healing blobs in turn, with some level of
concurrency. Healing each blob consumes at least `O(blob_size)` of memory, so
healing multiple blobs consumes their combined size of memory. Because blob
sizes are not distributed uniformly, we cannot just calculate the desired
concurrency level once and for all. Prior to this diff, that is what we did,
and whenever a few multi-GB blobs ended up in the same concurrently-healed
batch, the healer OOMed. To help with this problem, this diff starts using
dynamic concurrency: it assigns a weight to each healed blob and only
concurrently heals up to a certain total weight of blobs. This way, we can
limit the total amount of memory consumed by the healer.

This solution is not perfect for a variety of reasons:
- if a single blob is larger than the total allowed weight, we'll still let it
  through. It's better than never healing it, but it means that OOMs are still
  possible in theory.
- we do not yet know the sizes of all the blobs in the queue. To mitigate that,
  I took a look at the known size distribution and saw that 0 to 2KB is the
  most common range, so I defaulted to 1KB for blobs of unknown size.
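Async machinery aside, the weight-limiting idea is batching by total size, with a default for unknown sizes and oversized blobs still let through alone. A sketch (the real `buffered_weight_limited` is a Rust futures combinator; constants and batching shape here are illustrative):

```python
DEFAULT_WEIGHT = 1024          # assumed size (~1KB) for blobs of unknown size

def weight_limited_batches(blobs, max_total_weight):
    """Group (name, size) pairs into batches healed concurrently, keeping
    each batch's combined weight under the limit. A single blob over the
    limit still goes through, alone, so it is at least healed."""
    batches, current, weight = [], [], 0
    for name, size in blobs:
        w = size if size is not None else DEFAULT_WEIGHT
        if current and weight + w > max_total_weight:
            batches.append(current)
            current, weight = [], 0
        current.append(name)
        weight += w
    if current:
        batches.append(current)
    return batches

batches = weight_limited_batches(
    [("a", 500), ("b", None), ("c", 3_000_000_000), ("d", 200)],
    max_total_weight=1_000_000_000,
)
```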

Note 1: I had to make `heal_blob` consume its args instead of borrowing them because `buffered_weight_limited` needs a `'static` lifetime for the futures.

Note 2: When using `futures_ext`, I explicitly rename it to `futures_03_ext`, despite the fact that `blobstore_healer` does not depend on the older version. This is because `Cargo.toml` uses the same `[dependencies]` section for the combined dependencies of all the targets in the same `TARGETS` file. As other targets claim the name `futures_ext` for the 0.1 version, I decided it's easier to just use the `_03_` name here than to fix it in the other places. We can always change that, of course.

Reviewed By: krallin

Differential Revision: D26106044

fbshipit-source-id: 4931d86d6e85d055ed0eefdd357b9ba6266a1c37
2021-01-29 10:12:26 -08:00
Alex Hornby
eb566b5157 mononoke: remove open_sql_with_config_and_mysql_options
Summary: This was just a thin wrapper around with_metadata_database_config and it was using old futures, so remove it.

Differential Revision: D26100512

fbshipit-source-id: 22aa40ed73df2555645ba1d639fee3ae3dd38a09
2021-01-29 10:01:15 -08:00
Durham Goode
7f555d2d06 http: improve error messages from http failures
Summary:
Currently the data layer eats all errors from remote stores and treats
them as KeyErrors. This hides connection issues from users behind obscure
KeyErrors. Let's make it so that any non-key error reported by the remote store
is propagated up as a legitimate error.

This diff makes Http errors from EdenApi show up with a nicer error message,
suggesting that the user run fixmyserver.

Further fixes will probably be necessary to categorize other errors from the
remote store more nicely.

Reviewed By: quark-zju

Differential Revision: D26117726

fbshipit-source-id: 7d7dee6ec101c6a1d226185bb27423d977096050
2021-01-29 09:40:19 -08:00
Jan Mazur
c2fcf857bd identity set throttling
Summary:
Before this change, we could throttle only based on a single identity matching one of the identities in the user's identity set.

Now we'll be able to match a subset of the user's identities.
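The new matching rule is a subset check: a throttle limit applies when every identity it names is present in the client's identity set. Sketched (identity strings hypothetical):

```python
def limit_applies(limit_identities, client_identities):
    """Before: a limit matched if any single identity matched.
    Now: a limit can name several identities and applies only when the
    client presents all of them (subset match)."""
    return set(limit_identities) <= set(client_identities)

client = {"MACHINE:devbox123", "USER:alice", "TIER:dev"}
```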

Depends on D26125638.

Reviewed By: krallin

Differential Revision: D26125637

fbshipit-source-id: 534326264b9093e46fbdda846516fdaceb40c931
2021-01-29 07:43:56 -08:00
Mark Juggurnauth-Thomas
f0eb35b86f derived_data: support gaps in derivation
Summary:
For fsnodes and skeleton manifests it should be possible to allow gaps in the
commits that are backfilled.  Any access to a commit in one of these gaps can
be quickly derived from the nearest ancestor that is derived.  Since each
commit's derived data is independent, there is no loss from omitting them.

Add the `--gap-size` option to `backfill_derived_data`.  This allows `fsnodes`,
`skeleton_manifests` and other derived data types that implement it in the
future to skip some of the commits during backfill.  This will speed up
backfilling and reduce the amount of data that is stored for infrequently
accessed historical commits.
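For a linear history the idea can be sketched like this: backfill only every `gap_size`-th commit, and serve a commit inside a gap by walking back to the nearest derived ancestor (names hypothetical; the real code works on a DAG):

```python
def backfill(commits, gap_size):
    """Derive only every gap_size-th commit (plus the tip), leaving gaps
    that can be filled on demand."""
    derived = {c for i, c in enumerate(commits) if i % gap_size == 0}
    derived.add(commits[-1])
    return derived

def nearest_derived_ancestor(commits, derived, commit):
    # On access, a commit in a gap is quickly derived from the closest
    # backfilled ancestor; since each commit's derived data is
    # independent, the skipped commits lose nothing.
    i = commits.index(commit)
    while commits[i] not in derived:
        i -= 1
    return commits[i]

history = ["c0", "c1", "c2", "c3", "c4", "c5", "c6"]
derived = backfill(history, gap_size=3)
```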

Reviewed By: StanislavGlebik

Differential Revision: D25997854

fbshipit-source-id: bf4df3f5c16a913c13f732f6837fc4c817664e29
2021-01-29 06:36:20 -08:00
Stanislau Hlebik
9a62b227a3 mononoke: pre-allocate buffer in sqlblob get method
Summary:
While looking into scs performance we noticed that fetching skiplist from
blobstore takes a lot of cpu. And looks like the slow part comes from lots of
allocation that we are doing in sql blob. Even though we might have successfully
fetched the skiplist from manifold, we'd still waste cpu time trying to assemble
object in xdb blobstore.

This diff fixes it by pre-allocating. Note, however, that we don't have the
precise size we need to allocate, only an upper bound (number of chunks * max
chunk size). I opted to allocate less than the bound so that we don't waste
memory on small requests (i.e. it's not great to allocate a 1MB buffer for a
100-byte object).
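The sizing logic is just "upper bound, capped": chunks × max-chunk-size over-allocates badly for small objects, so the preallocation is clamped. Sketched with a hypothetical cap value:

```python
PREALLOC_CAP = 64 * 1024   # hypothetical cap; avoids a 1MB buffer for a 100-byte blob

def prealloc_size(num_chunks, max_chunk_size):
    """We only know an upper bound on the assembled blob's size, so
    preallocate the smaller of that bound and a fixed cap: large blobs
    may still grow the buffer once or twice, small ones waste no memory."""
    return min(num_chunks * max_chunk_size, PREALLOC_CAP)
```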

Note that I suspect the underlying problem might be in the BytesMut
extend_from_slice() implementation - it might be unoptimized. However, this is
just a hunch; I haven't investigated it. For now, let's do the preallocation on
our side.

Reviewed By: krallin

Differential Revision: D26145078

fbshipit-source-id: a50ba72656ffe6053af993fdec07ce55ddddacf3
2021-01-29 05:40:29 -08:00
svcscm
c9fc880923 Updating submodules
Summary:
GitHub commits:

46fd91f29e
952a53ff27

Reviewed By: wittgenst

fbshipit-source-id: 4a021687f1eca2202fa399e4054a968943c2d653
2021-01-29 05:26:14 -08:00
Iván Yesid Castellanos
e58c8e819c Removed static lifetime constants
Summary: removed the static lifetime constants in the mononoke source code base

Reviewed By: krallin

Differential Revision: D26123507

fbshipit-source-id: 9e1689c42603bd17d44924f92219378340ab082b
2021-01-29 04:40:27 -08:00