Commit Graph

65858 Commits

Author SHA1 Message Date
Katie Mancini
2e6ea72e9a thread ObjectFetchContext thrift applyToInodes
Summary:
There are a few remaining holes where we are not passing a full fetch context.
We will need a full fetch context to do all data fetch logging needed for the
intern project. Additionally, we generally should not be using these singletons
in our production code.

Some thrift methods that watchman uses to get metadata are not yet threaded. These
can implicitly cause fetches, so let's thread the fetch context through here too.
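
A language-agnostic sketch of the pattern (written in Rust here with made-up types; the
real EdenFS code is C++): pass the caller's fetch context down the call chain instead of
reaching for a singleton/null context at the fetch site, so fetch logging can attribute
the fetch to its cause.

```rust
// Illustrative only: `ObjectFetchContext` and the functions below are
// stand-ins, not the real EdenFS API.
struct ObjectFetchContext {
    cause: &'static str, // e.g. "thrift" or "fuse", used for data fetch logging
}

// Before: the callee fabricates a context, so the fetch cause is lost.
fn get_metadata_with_singleton(path: &str) -> String {
    let null_ctx = ObjectFetchContext { cause: "unknown" };
    fetch(&null_ctx, path)
}

// After: the caller's context is threaded through to the fetch.
fn get_metadata(ctx: &ObjectFetchContext, path: &str) -> String {
    fetch(ctx, path)
}

fn fetch(ctx: &ObjectFetchContext, path: &str) -> String {
    format!("fetched {} (cause: {})", path, ctx.cause)
}
```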

Reviewed By: genevievehelsel

Differential Revision: D28842300

fbshipit-source-id: b1e4b3aea879d6ed7b92afa26184616dedad5935
2021-06-04 14:57:46 -07:00
Katie Mancini
96da8df402 thread ObjectFetchContext symlink
Summary:
There are a few remaining holes where we are not passing a full fetch context.
We will need a full fetch context to do all data fetch logging needed for the
intern project. Additionally, we generally should not be using these singletons
in our production code.

This change is for symlink

Reviewed By: genevievehelsel

Differential Revision: D28841453

fbshipit-source-id: 080eb62f0b562f8e0995c34e9a8302238fc59ed8
2021-06-04 14:57:46 -07:00
Katie Mancini
f2133297b6 thread ObjectFetchContext remaining parts of rm
Summary:
There are a few remaining holes where we are not passing a full fetch context.
We will need a full fetch context to do all data fetch logging needed for the
intern project. Additionally we generally should not be using these singletons
in our production code.

Most of rmdir is already threaded. I'm not sure whether this case can actually cause
fetches in production, but we might as well thread it.

Reviewed By: genevievehelsel

Differential Revision: D28840211

fbshipit-source-id: 8dea08e775be470dd1730e2d32750a6912650ee0
2021-06-04 14:57:46 -07:00
Katie Mancini
526ced1f54 thread ObjectFetchContext rename
Summary:
There are a few remaining holes where we are not passing a full fetch context.
We will need a full fetch context to do all data fetch logging needed for the
intern project. Additionally, we generally should not be using these singletons
in our production code.

This change is for rename

Reviewed By: genevievehelsel

Differential Revision: D23467437

fbshipit-source-id: e9d79c65fb5c4d686f0597550e43a0e87c4792cb
2021-06-04 14:57:46 -07:00
Katie Mancini
82a43119ba update eden doctor for macFUSE
Summary:
We now want macFUSE to be installed instead of osxfuse. eden doctor spews
errors if osxfuse is not installed or loaded, so it perpetually complains.
Update these checks to look for macFUSE instead.

Reviewed By: xavierd

Differential Revision: D28751766

fbshipit-source-id: 4bc61349e33492aebe888a4e869ef7620c74768e
2021-06-04 14:04:50 -07:00
Liubov Dmitrieva
f5007a93d9 implement filenode lookup and tree lookup in blobstore
Summary:
Implement filenode and tree lookup in edenapi.

This is a simple lookup in the blobstore without any additional validation of the blob
content; all the validation is assumed to happen inside the upload logic. In other words,
if a key is known, we assume the content of the blob is valid.
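
A minimal sketch of what such a lookup amounts to, using a hypothetical `Blobstore`
trait rather than the real Mononoke one: presence of the key is the only check, with
no validation of the blob contents.

```rust
use async_trait::async_trait;

// Hypothetical trait; the real Mononoke Blobstore API differs.
#[async_trait]
trait Blobstore {
    async fn get(&self, key: &str) -> anyhow::Result<Option<Vec<u8>>>;
}

// Lookup: "is this key present?", nothing more. Validation is assumed to
// have happened at upload time.
async fn lookup(store: &dyn Blobstore, key: &str) -> anyhow::Result<bool> {
    Ok(store.get(key).await?.is_some())
}
```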

Reviewed By: markbt

Differential Revision: D28868028

fbshipit-source-id: 590cc404f33adbec69f8adafd33365a0249d3241
2021-06-04 10:11:25 -07:00
Liubov Dmitrieva
7fc42817cb edenapi: create a call to the lookup API for different types
Summary:
Create end-to-end integration for the lookup API on the client.

Start prototyping the `hg cloud upload` command.

Currently, it just performs a lookup for the existing heads.

This way we can test the new APIs end to end.

Reviewed By: markbt

Differential Revision: D28848205

fbshipit-source-id: 730c1ed4a21c1559d5d9b54d533b0cf551c41b9c
2021-06-04 10:11:25 -07:00
Liubov Dmitrieva
e32102a1f1 skeleton lookup and upload API for files
Summary:
File upload will be executed in two stages:

* check whether the content is already present
* upload the missing files

The check API is generic and could be used for any id type; it is called the 'lookup' API.
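
A rough sketch of the two stages, with hypothetical client methods (`lookup_batch`,
`upload_file`) standing in for the real edenapi calls:

```rust
use std::collections::HashSet;

type ContentId = [u8; 32];

// Hypothetical client; the method names and signatures are illustrative only.
struct Client;

impl Client {
    async fn lookup_batch(&self, ids: Vec<ContentId>) -> anyhow::Result<Vec<ContentId>> {
        Ok(ids) // placeholder: the real call returns the ids the server already has
    }
    async fn upload_file(&self, _id: ContentId, _data: Vec<u8>) -> anyhow::Result<()> {
        Ok(()) // placeholder for the actual upload request
    }
}

async fn upload_files(client: &Client, files: Vec<(ContentId, Vec<u8>)>) -> anyhow::Result<()> {
    // Stage 1: check which content is already present on the server.
    let ids: Vec<ContentId> = files.iter().map(|(id, _)| *id).collect();
    let present: HashSet<ContentId> = client.lookup_batch(ids).await?.into_iter().collect();

    // Stage 2: upload only the missing files.
    for (id, data) in files {
        if !present.contains(&id) {
            client.upload_file(id, data).await?;
        }
    }
    Ok(())
}
```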

Reviewed By: markbt

Differential Revision: D28708934

fbshipit-source-id: 654c73b054790d5a4c6e76f7dac6c97091a4311f
2021-06-04 10:11:25 -07:00
Durham Goode
9d994c93aa build: set SDKROOT via make local as well
Summary:
Previously we set this in the rpm spec, but we need to set it in make
local as well, since sometimes hgbuild invokes make local directly.

Ideally we'd put this in setup.py, since both make and the rpm spec go through that, but
we also need this environment variable set for the dulwich build, whose setup.py we don't
really control.

Reviewed By: singhsrb

Differential Revision: D28902015

fbshipit-source-id: bfc170c3027cc43b24c6a517512a63a71f433d23
2021-06-04 09:51:48 -07:00
Egor Tkachenko
47f7da159d Make copy_blobstore_keys tool copy keys between inner blobstores
Summary:
Recently we had an issue with the `connectivity-lab` repo where three keys (P416141335) had different values because of parent ordering (P416094337).
The walker can detect differences between keys across the multiplexed inner blobstores and repair them, but it has no notion of copying keys (there is no concept of a source and a target). We have a copy_blobstore_keys tool, which is used for restoring keys from backups; with a small modification it can handle copying between inner blobstores.

Reviewed By: StanislavGlebik

Differential Revision: D28707364

fbshipit-source-id: 3d5a4f39999623023539b9159fa7310d430f0ee4
2021-06-04 09:27:26 -07:00
Stanislau Hlebik
5394be2939 mononoke: fix typos
Reviewed By: HarveyHunt

Differential Revision: D28896946

fbshipit-source-id: 770671c5103bcab883b58bd96cdd8d1672c43c8f
2021-06-04 08:57:20 -07:00
Stanislau Hlebik
8f52b55042 mononoke: check if write is allowed before calling megarepo methods
Summary:
All megarepo methods write to a repository, so we need to check whether writes are
allowed to the given repo; previously we weren't checking that.
Let's start doing so now by trying to get a RepoWriteContext
for the target repo. If writes are not allowed, obtaining the RepoWriteContext fails.
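
A small sketch of the shape of the check, with simplified types (the real
RepoContext/RepoWriteContext are richer): acquiring the write context up front makes
the method fail before it mutates anything.

```rust
use anyhow::bail;

// Simplified stand-ins for the real types.
struct RepoContext {
    writes_allowed: bool,
}
struct RepoWriteContext;

impl RepoContext {
    fn write(self) -> anyhow::Result<RepoWriteContext> {
        if self.writes_allowed {
            Ok(RepoWriteContext)
        } else {
            bail!("writes are not allowed for this repo")
        }
    }
}

fn add_sync_target(repo: RepoContext) -> anyhow::Result<()> {
    // Fail early if the repo is read-only, before any megarepo writes happen.
    let _write_ctx = repo.write()?;
    // ... perform the megarepo changes using _write_ctx ...
    Ok(())
}
```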

Reviewed By: farnz

Differential Revision: D28838994

fbshipit-source-id: e45d4fe72603e7fe2755141874fc4125998bfed8
2021-06-04 08:28:08 -07:00
Katie Mancini
f1d0de859f thread ObjectFetchContext mkdir
Summary:
There are a few remaining holes where we are not passing a full fetch context.
We will need a full fetch context to do all data fetch logging needed for the
intern project. Additionally, we generally should not be using these singletons
in our production code.

This change is for mkdir

Reviewed By: genevievehelsel

Differential Revision: D23458622

fbshipit-source-id: f3914a4f692490434882143664a5d5f1701e93ba
2021-06-03 16:33:35 -07:00
Katie Mancini
88cb4ec5ba thread ObjectFetchContext create
Summary:
There are a few remaining holes where we are not passing a full fetch context.
We will need a full fetch context to do all data fetch logging needed for the
intern project. Additionally we generally should not be using these singletons
in our production code.

This change is for create

Reviewed By: genevievehelsel

Differential Revision: D23457862

fbshipit-source-id: d4c9cc658c26b3119b2b2a1da061e299eaf510c9
2021-06-03 16:33:35 -07:00
Katie Mancini
ee923324d2 thread ObjectFetchContext lookup
Summary:
There are a few remaining holes where we are not passing a full fetch context.
We will need a full fetch context to do all data fetch logging needed for the
intern project. Additionally we generally should not be using these singletons
in our production code.

Most of lookup is already threaded. This finishes the threading for lookup.

Reviewed By: xavierd

Differential Revision: D23456910

fbshipit-source-id: fab7397caeee19f921d8fba1fb6528baa5cf2960
2021-06-03 16:33:35 -07:00
Durham Goode
ffa6c61481 run-tests: fix allow/deny list of tests with multiple cases
Summary:
The recent change to make run-tests work with Python 3 broke the
allow/deny list functionality because it started testing the full test name
instead of the base. This fixes that.

Reviewed By: quark-zju

Differential Revision: D28885125

fbshipit-source-id: 586a71e66e0f094b79e6a3e07e27813db6f662d3
2021-06-03 15:54:03 -07:00
Eric Chen (swe)
0ba2c44d90 uncopy: new command to mark files as not copied
Summary: Create the `uncopy` command to unmark files that were copied using `copy`.

Reviewed By: quark-zju

Differential Revision: D28821574

fbshipit-source-id: c1c15f6fb2837cec529860aba70b516ddd794f10
2021-06-03 13:54:48 -07:00
Jeremy Fitzhardinge
c652f9a11f third-party/rust: update time to 0.2
Summary:
Time 0.2 is current, and 0.1 is long obsolete. Unfortunately there's a
large 0.1 -> 0.2 API change, so I preserved 0.1 and updated the targets of its
users. It's also unfortunate that `chrono` has `oldtime` as a default feature, which
makes it use `time-0.1`'s `Duration` type. Excluding it from our features
doesn't help, because every other user specifies it by default.

Reviewed By: dtolnay

Differential Revision: D28854148

fbshipit-source-id: 0c41ac6b998dfbdcddc85a22178aadb05e2b2f2b
2021-06-03 13:52:54 -07:00
Mateusz Kwapich
d2530263b3 obtain repo without doing ACL checks
Summary:
The worker should be able to process requests from the queue no matter
which repo they are for and what its ACLs are. It's during request scheduling that
we should check the identity of the entity scheduling the request.

Reviewed By: StanislavGlebik

Differential Revision: D28866807

fbshipit-source-id: 5d57eb9ba86e10d477be5cfc51dfb8f62ea16b9e
2021-06-03 12:32:54 -07:00
Chad Austin
8a44512e1f remove FakeBackingStore::getTreeForManifest
Summary:
BackingStore and LocalStore are no longer joined at the hip, so decouple
FakeBackingStore from LocalStore.

Reviewed By: kmancini

Differential Revision: D28615431

fbshipit-source-id: ee6bc807da6de4ed8fba8ab6d52ff5aeff34e8ae
2021-06-03 11:07:14 -07:00
Chad Austin
894eaa9840 move root ID parsing and rendering into BackingStore
Summary:
The meaning of the root ID is defined by the BackingStore, so move
parsing and rendering into the BackingStore interface.

Reviewed By: xavierd

Differential Revision: D28560426

fbshipit-source-id: 7cfed4870d48016811b604348742754f6cdbd842
2021-06-03 11:07:14 -07:00
Chad Austin
ebbcef7605 start writing the v2 SNAPSHOT
Summary:
Start writing the v2 format (single, variable-width parent) into the
SNAPSHOT file.

Reviewed By: xavierd

Differential Revision: D28528419

fbshipit-source-id: 245bd4f749c45f4de2912b0406fc0d1b52278987
2021-06-03 11:07:14 -07:00
Yan Soares Couto
f005f37d74 Use InnerRepo on WarmBookmarksCache instead of BlobRepo
Summary:
Currently bookmark warmers receive a `BlobRepo`. This diff makes them receive an `InnerRepo` instead, but makes no logic changes.

This will be useful as fields are moved from `BlobRepo` to `InnerRepo`, and will also be useful for accessing skiplists from warmers, as I plan to do in the next diff.

Reviewed By: StanislavGlebik

Differential Revision: D28796543

fbshipit-source-id: dbe5bec9fc34da3ae51e645ea09b03e2bb620445
2021-06-03 10:53:24 -07:00
Yan Soares Couto
c5666c4db8 Move InnerRepo to different target
Summary:
This diff simply extracts the `InnerRepo` object type into a separate target.

This will be used in the next diff, where we need to access `InnerRepo` from `BookmarkWarmer`; it had to be split into a different target to avoid creating a circular dependency in buck.

Reviewed By: StanislavGlebik

Differential Revision: D28796406

fbshipit-source-id: 4fdadbbde31719b809abb6b8a9ba8fa24b426299
2021-06-03 10:53:24 -07:00
Yan Soares Couto
e4b6fd3751 Create InnerRepo container
Summary:
This diff creates a new `InnerRepo` container, which contains `BlobRepo` as well as the skiplist index (a rough shape is sketched below).

The plan here is:
- In terms of code organisation, `InnerRepo` will eventually contain most of the fields currently in `Repo`, as well as the fields of `BlobRepo` that are only used in binaries that use `Repo`. This way each binary will only build the "attribute fields" it needs, but the code to build them can still be neatly shared.
- As for `SkiplistIndex`, the plan is to be able to modify it inside `WarmBookmarksCache`; that's why I'm moving it into `InnerRepo` as well. I'll make bookmark warmers receive `InnerRepo` instead of `BlobRepo`, so they can access the skiplist index if desired and then modify it (this is an attempt to make skiplists faster on bookmarks).
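
The rough shape of the container, with the field set reduced to what this summary
mentions (placeholder types; the real container has more):

```rust
// Placeholder types; only the overall shape is the point here.
struct BlobRepo;
struct SkiplistIndex;

// InnerRepo bundles BlobRepo with extra per-repo attributes such as the
// skiplist index, so consumers like bookmark warmers can take one container
// instead of threading each attribute separately.
struct InnerRepo {
    blob_repo: BlobRepo,
    skiplist_index: SkiplistIndex,
}

impl InnerRepo {
    fn blob_repo(&self) -> &BlobRepo {
        &self.blob_repo
    }

    fn skiplist_index(&self) -> &SkiplistIndex {
        &self.skiplist_index
    }
}
```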

Reviewed By: StanislavGlebik

Differential Revision: D28748221

fbshipit-source-id: bca31c14a6789a715a215cc69ad0a69b5e73404c
2021-06-03 10:53:24 -07:00
Katie Mancini
5a16819fb8 thread ObjectFetchContext mknod
Summary:
There are a few remaining holes where we are not passing a full fetch context.
We will need a full fetch context to do all data fetch logging needed for the
intern project. Additionally we generally should not be using these singletons
in our production code.

This change is for mknod

Reviewed By: chadaustin

Differential Revision: D23452153

fbshipit-source-id: 7b9bc6b624fbe81b91770bc65a0d27bc9d397032
2021-06-03 09:46:25 -07:00
Katie Mancini
4594776ada thread ObjectFetchContext getxattr
Summary:
There are a few remaining holes where we are not passing a full fetch context.
We will need a full fetch context to do all data fetch logging needed for the
intern project. Additionally we generally should not be using these singletons
in our production code.

This change is for getxattr

Reviewed By: chadaustin

Differential Revision: D23451954

fbshipit-source-id: bae73878754d59661cddf7c0b001e506bbc88d13
2021-06-03 09:46:25 -07:00
Katie Mancini
c98620da53 thread ObjectFetchContext readlink
Summary:
There are a few remaining holes where we are not passing a full fetch context.
We will need a full fetch context to do all data fetch logging needed for the
intern project. Additionally we generally should not be using these singletons
in our production code.

This change is for readlink

Reviewed By: chadaustin

Differential Revision: D23451821

fbshipit-source-id: 1f8ee369a992ab3489a9366f9a972f67461970de
2021-06-03 09:46:25 -07:00
Alex Hornby
ad2c2a02e5 mononoke: log checkpoint_name in walker pack info logging
Summary: When checkpointing, the run_start in the pack info logging is the run start for a given checkpoint name, so let's include the checkpoint name to provide context.

Reviewed By: farnz

Differential Revision: D28867962

fbshipit-source-id: 113b9e10f5b8e1869702b3ea83374d0d08a8792e
2021-06-03 08:27:50 -07:00
Stanislau Hlebik
530f8279b8 mononoke: put the name of the source in the move and merge commits
Summary: This makes it easier to understand what each move and merge commit is for.

Reviewed By: mitrandir77

Differential Revision: D28839677

fbshipit-source-id: 1a42205c164224b64c773cff80b690b251a48381
2021-06-03 05:24:40 -07:00
Simon Farnsworth
7156f5cd9f Don't look at filenode history when they're both the same
Summary:
When deriving `hgchangesets` for merge commits, we try to get filenodes right.

Speed this up a little by recognising that history checks are unnecessary if both parents have the same filenode.

Reviewed By: StanislavGlebik

Differential Revision: D28866024

fbshipit-source-id: 6da4c162abce5b426269630f82e9e0b84eea2b33
2021-06-03 04:11:04 -07:00
Ilia Medianikov
a743d3d484 mononoke/hooks: add bonsai hash logging
Summary: Sometimes it is useful to search/group by commit hash.

Reviewed By: krallin

Differential Revision: D28834644

fbshipit-source-id: 93f650e19ae512450e33542cf74b8aa3333c6c35
2021-06-03 02:51:11 -07:00
Simon Farnsworth
103b3116d4 Use filenodes where available to speed up derived data for merges
Summary:
Fetching all the history of both filenodes to see if there's common history on either side of a merge is wasteful, and in some megarepo work it is causing long delays when deriving merge changesets.

Where we have already derived filenodes for a given merge's ancestors, we can go faster; we can use the linknodes to determine the older of the two filenodes, and fetch only history for the newer of the two.

This is imperfect for the import use case, since filenodes depend on hgchangesets, and the batching in use at the moment prefers to generate all derived data of a given type before moving onto another type, but it's an improvement for cases where some filenodes are already derived (e.g. due to import of a repo with a similar history).
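
A very rough sketch of the fast path described above, with hypothetical inputs
(generation numbers taken from the linknodes, and a pre-fetched ancestor list for the
newer filenode only); this is an interpretation of the summary, not the actual code.

```rust
type FilenodeId = u64; // stand-in for a real filenode hash
type Generation = u64; // derived from the filenode's linknode

// If the older filenode is an ancestor of the newer one, the merge can reuse
// the newer filenode; otherwise fall back to the slow two-sided history walk.
fn fast_path_merge_filenode(
    p1: (FilenodeId, Generation),
    p2: (FilenodeId, Generation),
    newer_history: &[FilenodeId], // ancestors of the newer of the two filenodes
) -> Option<FilenodeId> {
    let (older, newer) = if p1.1 <= p2.1 { (p1.0, p2.0) } else { (p2.0, p1.0) };
    if newer_history.contains(&older) {
        Some(newer)
    } else {
        None
    }
}
```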

Reviewed By: StanislavGlebik

Differential Revision: D28796253

fbshipit-source-id: 5384b5d2841844794a518c321dbf995891374d3a
2021-06-03 02:33:40 -07:00
Stanislau Hlebik
12969d5738 mononoke: derive data in add_sync_target
Summary:
This is a precaution. add_sync_target can create a very branchy repository, and
I'm not sure how well Mononoke is going to handle deriving these commits on
all Mononoke hosts at once (in particular, I'm worried about the traversal that has
to be done by all hosts in parallel). In theory it should work fine, but
deriving data during the add_sync_target call seems like a reasonable thing to do
anyway.

While working on this I noticed that "git" derived data is not supported by derived data utils, and it was causing test failures. I don't think there's any reason not to have TreeHandle in derived data utils, so I'm adding it now.

Reviewed By: farnz

Differential Revision: D28831052

fbshipit-source-id: 5f60ac5b93d2c38b4afa0f725c6908edc8b98b18
2021-06-03 02:23:15 -07:00
Stanislau Hlebik
1458f5a7f4 mononoke: allow deriving data in parallel for independent commits
Summary:
Previously, even if two commits were unrelated (e.g. they were in two branches
with no relation to each other), we'd still derive them sequentially. This
diff makes it possible to derive them in parallel to speed up derivation.

A few notes:
1) There's a killswitch to disable parallel derivation.
2) By default we derive at most 10 commits at the same time, but this can be
configured.
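
A minimal sketch of the bounded parallelism described above (the `derive_one` function
and the concrete types are placeholders; the killswitch is omitted):

```rust
use futures::stream::{self, StreamExt, TryStreamExt};

type ChangesetId = u64; // stand-in for a real changeset id

async fn derive_one(_csid: ChangesetId) -> anyhow::Result<()> {
    Ok(()) // placeholder for deriving one commit's data
}

// Derive independent commits with at most `max_parallel` derivations in flight.
async fn derive_independent(csids: Vec<ChangesetId>, max_parallel: usize) -> anyhow::Result<()> {
    stream::iter(csids)
        .map(|csid| derive_one(csid))
        .buffer_unordered(max_parallel)
        .try_collect::<Vec<_>>()
        .await?;
    Ok(())
}
```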

Reviewed By: farnz

Differential Revision: D28830311

fbshipit-source-id: b5499ad5ac179f73dc94ca09927ec9c906592460
2021-06-03 02:23:15 -07:00
Jun Wu
94c1c510d7 tests: remove some hgsql tests
Summary:
They are breaking, and hgsql is no longer relevant (the hg server repo was forked), so
let's just remove the tests.

Reviewed By: andll

Differential Revision: D28852159

fbshipit-source-id: 04a47ea489b3f190cffe7f714a9f4161847a2c86
2021-06-02 16:08:06 -07:00
Jun Wu
13ab29ce76 runtests: fix remaining issues
Summary:
Fix remaining issues like encoding and the `bname` vs `name` difference
(`bname` was deleted by a previous change, but it differs from `name` by more than
encoding: `bname` does not have the " (case x)" suffix).

Differential Revision: D28852092

fbshipit-source-id: df013b284414600deb6f20a5c0883f09906bf976
2021-06-02 16:08:06 -07:00
Jun Wu
be797a960b test-run-tests: remove pygments test
Summary:
The test might fail due to incorrect pygments detection (e.g. hghave runs with a
different Python than the one that runs run-tests.py). It's not critical, so remove it.

Example failure:

   #if no-windows pygments
     $ rt test-failure.t --color always
  +  warning: --color=always ignored because pygments is not installed

  -  \x1b[38;5;124m--- test-failure.t\x1b[39m (esc)
  -  \x1b[38;5;34m+++ test-failure.t.err\x1b[39m (esc)
  -  \x1b[38;5;90;01m@@ -1,4 +1,4 @@\x1b[39;00m (esc)
  +  --- test-failure.t
  +  +++ test-failure.t.err
  +  @@ -1,4 +1,4 @@
        $ echo "bar-baz"; echo "bar-bad"; echo foo
  -  \x1b[38;5;34m+  bar*baz (glob)\x1b[39m (esc)
  +  +  bar*baz (glob)
        bar*bad (glob)
  -  \x1b[38;5;124m-  bar*baz (glob)\x1b[39m (esc)
  -  \x1b[38;5;124m-  | fo (re)\x1b[39m (esc)
  -  \x1b[38;5;34m+  foo\x1b[39m (esc)
  +  -  bar*baz (glob)
  +  -  | fo (re)
  +  +  foo

Differential Revision: D28852093

fbshipit-source-id: 98397cb79bedc0c605131462483b78eb5bc2671a
2021-06-02 16:08:06 -07:00
Robin Håkanson
31fb330fc7 Consistent and unique source_name
Summary:
Make source_name unique even if the same git repo and project name is mapped several times in the same manifest.

source_name needs to match between all the different places we use it (`add_target_config`, `sync_changeset`, `changesets_to_merge`); we were not consistent.

Reviewed By: StanislavGlebik

Differential Revision: D28845869

fbshipit-source-id: 54e96dcdeaf22ec68f626e9c30e5e60c54ec149b
2021-06-02 15:03:59 -07:00
Meyer Jacobs
a1b4aa8117 scmstore: add tracing logging
Summary:
Instrument the file scmstore with tracing logging. There's more we should add here, but this is a good starting place - I've already discovered some issues from looking at the log output. (Why does drop run twice? How does it run twice?)

It'd also probably be nice to support formatting the output like https://crates.io/crates/tracing-tree, which would be a lot less cluttered by the logged fields (like `attrs` on `fetch`).
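
For flavor, the kind of instrumentation involved (the function and field names here are
illustrative, not the actual scmstore ones):

```rust
use tracing::instrument;

// A span is opened per call, recording the number of keys; events inside the
// function attach to that span.
#[instrument(skip(keys), fields(num_keys = keys.len()))]
fn fetch(keys: Vec<String>) -> usize {
    tracing::debug!("starting fetch");
    keys.len()
}
```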

Reviewed By: DurhamG

Differential Revision: D28750954

fbshipit-source-id: 63baa602f7147d24ac3e34defa969a70a92f96a4
2021-06-02 14:15:40 -07:00
Leander Schulten
457faa0d1d FindSodium: Do not create target unconditionally (#430)
Summary:
This fixes https://github.com/facebook/fbthrift/issues/429

Pull Request resolved: https://github.com/facebook/fbthrift/pull/430

Reviewed By: iahs

Differential Revision: D28832985

Pulled By: vitaut

fbshipit-source-id: 0719f27207d11bb7970cc43c621640c425d8f55d
2021-06-02 11:05:54 -07:00
Robert Grosse
17246b7e40 histroy -> history
Summary: histroy -> history

Differential Revision: D28808613

fbshipit-source-id: b5368fdaf428cc717de39d3ca3b61b746ebb149b
2021-06-02 10:26:41 -07:00
Stanislau Hlebik
56107af712 mononoke: create move commits in parallel
Summary:
We are going to create quite a lot of move commits at the same time, and it can
be slow. Let's instead create them in parallel, and then call
`save_bonsai_changesets` for all the commits in one go.
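
A sketch of the shape of the change, with placeholder helpers (`create_one_move_commit`
and the `save_bonsai_changesets` signature here are simplified stand-ins):

```rust
use futures::future::try_join_all;

type BonsaiChangeset = String; // stand-in for a real bonsai changeset

async fn create_one_move_commit(source: &'static str) -> anyhow::Result<BonsaiChangeset> {
    Ok(format!("move commit for {}", source)) // placeholder
}

async fn save_bonsai_changesets(_commits: Vec<BonsaiChangeset>) -> anyhow::Result<()> {
    Ok(()) // placeholder: one batched write instead of one write per commit
}

// Create all move commits concurrently, then persist them in a single call.
async fn create_move_commits(sources: Vec<&'static str>) -> anyhow::Result<()> {
    let commits = try_join_all(sources.into_iter().map(create_one_move_commit)).await?;
    save_bonsai_changesets(commits).await
}
```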

Reviewed By: mitrandir77

Differential Revision: D28795525

fbshipit-source-id: f6b6420c2fe30bb98680ac7e25412c55c99883e0
2021-06-02 07:59:55 -07:00
Stanislau Hlebik
631d21ec95 mononoke: create a stack of merge commits
Summary:
Previously the add_sync_target method was creating a single merge commit. That
meant we might create a bonsai commit with hundreds of parents. This is
not ideal, because mercurial can only work correctly with 2 parents: for a
bonsai changeset with 3 or more parents, mercurial file histories might be lost.

So instead of creating a single giant merge commit, let's create a stack of
merge commits.
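
A toy sketch of the resulting shape (ids are just integers here; the real code builds
bonsai changesets): each commit in the stack merges the previous result with one more
source head, so no commit has more than two parents.

```rust
type ChangesetId = u64; // stand-in for a real bonsai changeset id

// Returns (new_commit, [parent1, parent2]) for each merge in the stack.
fn merge_stack(
    source_heads: &[ChangesetId],
    mut next_id: ChangesetId,
) -> Vec<(ChangesetId, [ChangesetId; 2])> {
    let mut stack = Vec::new();
    let mut current = match source_heads.first() {
        Some(&head) => head,
        None => return stack,
    };
    for &head in &source_heads[1..] {
        let merge_id = next_id;
        next_id += 1;
        stack.push((merge_id, [current, head]));
        current = merge_id; // the next merge builds on top of this one
    }
    stack
}
```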

Reviewed By: mitrandir77

Differential Revision: D28792581

fbshipit-source-id: 2f8ff6b49db29c4692b7385f1d1ab57986075d57
2021-06-02 07:59:55 -07:00
Stanislau Hlebik
2c63981029 aosp megarepo: validate commits that are passed to add_sync_target
Summary:
Basically, do the validations that are described in the [source_control.thrift
file](https://www.internalfb.com/code/fbsource/[9f147c38ea74f1c6482b99cfd1edd6103d5bd3db]/fbcode/eden/mononoke/scs/if/source_control.thrift?lines=965-975).

Reviewed By: mitrandir77

Differential Revision: D28712729

fbshipit-source-id: 7c55116e4ac875961ded82d6708af56d14a1bf79
2021-06-02 07:59:55 -07:00
CodemodService FBSourceClangFormatLinterBot
9e789df34c Daily arc lint --take CLANGFORMAT
Reviewed By: zertosh

Differential Revision: D28826571

fbshipit-source-id: e3929280917dffcb00707823963ab2ba0e786bd1
2021-06-02 04:06:31 -07:00
Mateusz Kwapich
1089e0bfec allow the async worker to process requests concurrently
Reviewed By: StanislavGlebik

Differential Revision: D28796403

fbshipit-source-id: 38f8cc1071ff80aeb61ddfa160d41c68dc685ca2
2021-06-02 01:40:52 -07:00
Mateusz Kwapich
342298d6c9 return the newly created changeteset id from megarepo methods
Summary:
This fixes a potential race condition in megarepo methods where the bookmark
is updated between executing the actual method and returning the current
bookmark value.

Reviewed By: StanislavGlebik

Differential Revision: D28796099

fbshipit-source-id: 472f9bbcfafe1c2eb78c62ccf88d149b5157b646
2021-06-02 01:40:52 -07:00
Mateusz Kwapich
51fc0e5027 use real worker instead of completing requests in SCS
Summary: Let's stop executing requests synchronously when we can do it asynchronously.

Reviewed By: farnz

Differential Revision: D28759935

fbshipit-source-id: ae3db711d3cd466cc142fd9c7f4c3a988150042c
2021-06-02 01:40:52 -07:00
Mateusz Kwapich
3a33d340e1 return bookmark value from megarepo thrift methods
Summary: This replicates D28679134 (f9a712d618) in the actual worker methods.

Reviewed By: StanislavGlebik

Differential Revision: D28759936

fbshipit-source-id: 5d834a9e9e19126788f66c9eff11e0bc1fd06180
2021-06-02 01:40:52 -07:00