Summary:
Running `setup.py` with Python 3 for a Python 2 build causes issues, as
`setup.py` writes `.pyc` files in the Python 3 format.
Reviewed By: chadaustin
Differential Revision: D23717661
fbshipit-source-id: 38cfabdfdf20424a21f8a5bdaf826e74da2304ac
Summary:
tpx doesn't support heavyweight tags or rate limiting, and integration
tests regularly fail with timeouts on my devbig, so bump the process
start and process stop timeouts.
Reviewed By: genevievehelsel
Differential Revision: D23553924
fbshipit-source-id: fa9b8710395d61b087963d18718137e4525ae03d
Summary:
30 seconds is not enough time on heavily contended systems, including
CI. Bump the shutdown timeout to 120 seconds.
Also, correctly send SIGKILL to the daemon process when it's been
started with sudo.
Reviewed By: simpkins
Differential Revision: D22422784
fbshipit-source-id: dc7be0962705f1feb9643990309f570e352b68a0
Summary:
This is the function the repo_import tool used to wait until hg sync
has processed all of the entries in the queue. Let's move it to the hg sync
helper lib so that it can be used in other places, e.g. I'd like to use it in
the next diffs in mononoke_x_repo_sync_job.
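For context, the shape of such a helper is roughly the following (a minimal sketch assuming Tokio 1.x and the anyhow crate; the trait and function names are hypothetical, not the actual Mononoke API):
```rust
use std::time::Duration;

use anyhow::{anyhow, Result};

/// Hypothetical stand-in for the hg sync queue; the real helper queries
/// Mononoke's bookmark update log instead.
trait SyncQueue {
    fn entries_remaining(&self) -> Result<u64>;
}

/// Poll until the queue is drained, giving up after `max_attempts` polls.
async fn wait_for_hg_sync(queue: &dyn SyncQueue, max_attempts: u32) -> Result<()> {
    for _ in 0..max_attempts {
        if queue.entries_remaining()? == 0 {
            return Ok(());
        }
        tokio::time::sleep(Duration::from_secs(1)).await;
    }
    Err(anyhow!("hg sync did not drain the queue in time"))
}
```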
Reviewed By: krallin
Differential Revision: D23708280
fbshipit-source-id: ea846081d89b55b0d2f5407c971e13869cedfd8b
Summary:
This stack updates eden to be able to check all of the locations where a
user's certificate may reside.
THRIFT_TLS_CL_CERT_PATH is usually set to the location of the user's x509
certs, so it seems best to check this location. In order to do that, we need
to be able to resolve the environment variable during our config parsing.
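As a rough illustration of the idea only (this is not the EdenFS C++ parser; the expansion helper below is a hedged Rust sketch):
```rust
use std::env;

/// Expand a leading `${VAR}` reference in a configured cert path, e.g.
/// "${THRIFT_TLS_CL_CERT_PATH}" becomes the value of that environment variable.
fn expand_env_var(raw: &str) -> Option<String> {
    if let Some(rest) = raw.strip_prefix("${") {
        let (name, tail) = rest.split_once('}')?;
        let value = env::var(name).ok()?;
        Some(format!("{}{}", value, tail))
    } else {
        Some(raw.to_string())
    }
}
```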
Reviewed By: wez, genevievehelsel
Differential Revision: D23359815
fbshipit-source-id: 2008cc52ab64d23dbcfda41292a60a4bf77a80df
Summary:
In preparation for moving away from SSH as an intermediate entry point for
Mononoke, let Mononoke work with the newly introduced Metadata. This removes
the assumptions we currently make about how certain data is presented to us,
making the current "ssh preamble" no longer central.
Metadata is primarily based around identities and provides some
backwards-compatible entry points to make sure we can satisfy downstream
consumers of commits like hooks and logs.
Similarly, we now do our own reverse DNS resolution instead of relying on
what's been provided by the client. This is done asynchronously and we don't
rely on the result, so Mononoke can keep functioning in case DNS is offline.
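A hedged sketch of the fire-and-forget idea (using the `dns_lookup` crate for illustration; this is not the actual Mononoke code):
```rust
use std::net::IpAddr;

use tokio::task::JoinHandle;

/// Kick off a reverse DNS lookup without blocking the request path.
/// Callers may await the handle later or simply drop it; if DNS is
/// unavailable the lookup fails and we fall back to `None`.
fn spawn_reverse_dns(ip: IpAddr) -> JoinHandle<Option<String>> {
    tokio::task::spawn_blocking(move || dns_lookup::lookup_addr(&ip).ok())
}
```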
Reviewed By: farnz
Differential Revision: D23596262
fbshipit-source-id: 3a4e97a429b13bae76ae1cdf428de0246e684a27
Summary:
ghostbooleans
Apparently I didn't test for the positive case in my previous diff that introduced this check :(
Reviewed By: xavierd
Differential Revision: D23698179
fbshipit-source-id: 95a28cc13bff5e325214b6a398e19c821b5ae17f
Summary: We only care about the files we need when recording prefetch profiles (since we don't want to fetch top level directories). So let's skip recording `Tree` object types.
Reviewed By: kmancini
Differential Revision: D23693533
fbshipit-source-id: 9af5437ff6571a34597425ca5f657e7126671ba9
Summary: Support multiple heads in `BonsaiDerived::find_all_underived_ancestors`. This change will be needed to remove the manual step of fetching all changesets in the `backfill_derived_data` utility.
Reviewed By: StanislavGlebik
Differential Revision: D23705295
fbshipit-source-id: 32aa97a77f0a4461cbe4bf1864477e3e121e1879
Summary:
As it says in the title, this adds support for receiving compressed responses
in the revisionstore LFS client. This is controlled by a flag, which I'll
roll out through dynamicconfig.
The hope is that this should greatly improve our throughput to corp, where
our bandwidth is fairly scarce.
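In practice the client-side switch amounts to conditionally advertising the encoding (a hedged sketch using the `http` crate; the flag and the choice of `zstd` here are assumptions, not the actual revisionstore code):
```rust
use http::Request;

/// Build an LFS download request, advertising compression support only
/// when the (hypothetical) rollout flag is enabled.
fn build_request(url: &str, accept_zstd: bool) -> http::Result<Request<()>> {
    let mut builder = Request::get(url);
    if accept_zstd {
        builder = builder.header("Accept-Encoding", "zstd");
    }
    builder.body(())
}
```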
Reviewed By: StanislavGlebik
Differential Revision: D23652306
fbshipit-source-id: 53bf86d194657564bc3bd532e1a62208d39666df
Summary:
This adds support for compressing responses in the LFS Server, based on what
the client sent in `Accept-Encoding`. The compression changes are fairly
simple. Most of the code changes are around the fact that when we compress,
we don't send a Content-Length (because we don't know how long the content will
be).
Note that this is largely implemented in StreamBody. This means it can be used
for free by the EdenAPI server as well. The reason it's in there is that we
need to avoid setting the Content-Length when compression is going to be used
(`StreamBody` is what takes charge of doing this). This also exposes a
callback to get access to the stream post-compression, which also needs to be
exposed in `StreamBody`, since that's where compression happens.
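The key decision can be sketched as follows (a hypothetical helper, not the actual `StreamBody` code): if the body will be compressed we set `Content-Encoding` and omit `Content-Length`, otherwise we set `Content-Length` as usual.
```rust
use http::header::{HeaderMap, HeaderValue, CONTENT_ENCODING, CONTENT_LENGTH};

/// Set response headers depending on whether the body will be compressed.
/// When compressing we cannot know the final size up front, so no
/// Content-Length is sent and the body is streamed instead.
fn set_body_headers(headers: &mut HeaderMap, uncompressed_len: u64, encoding: Option<&str>) {
    match encoding {
        Some(enc) => {
            headers.insert(CONTENT_ENCODING, HeaderValue::from_str(enc).unwrap());
        }
        None => {
            headers.insert(CONTENT_LENGTH, HeaderValue::from(uncompressed_len));
        }
    }
}
```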
Reviewed By: aslpavel
Differential Revision: D23652334
fbshipit-source-id: 8f462d69139991c3e1d37f392d448904206ec0d2
Summary:
This imports the async-compression crate. We have an equivalent-ish in
common/rust, but it targets Tokio 0.1, whereas this community-supported crate
targets Tokio 0.2 (it offers a richer API, notably in the sense that we
can use it for Streams, whereas the async-compression crate we have is only for
AsyncWrite).
In the immediate term, I'd like to use this for transfer compression in
Mononoke's LFS Server. In the future, we might also use it in Mononoke where we
currently use our own async compression crate when all that stuff moves to
Tokio 0.2.
Finally, this also updates zstd: the version we link to from tp2 is actually
zstd 1.4.5, so it's a good idea to just get the same version of the zstd crate.
The zstd crate doesn't keep a great changelog, so it's hard to tell what has changed.
At a glance, it looks like the answer is not much, but I'm going to look to Sandcastle
to root out potential issues here.
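For reference, a minimal example of the kind of usage this enables (module paths follow a recent `async-compression` release with the `tokio` and `zstd` features enabled; the Tokio 0.2-era feature layout differed):
```rust
use async_compression::tokio::bufread::ZstdEncoder;
use tokio::io::{AsyncReadExt, BufReader};

/// Compress a byte slice by treating it as an async reader and pulling
/// the compressed output back out; the encoder itself is an AsyncRead.
async fn zstd_compress(data: &[u8]) -> std::io::Result<Vec<u8>> {
    let mut encoder = ZstdEncoder::new(BufReader::new(data));
    let mut compressed = Vec::new();
    encoder.read_to_end(&mut compressed).await?;
    Ok(compressed)
}
```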
Reviewed By: StanislavGlebik
Differential Revision: D23652335
fbshipit-source-id: e250cef7a52d640bbbcccd72448fd2d4f548a48a
Summary: That might be used to pass more data to the server
Reviewed By: markbt
Differential Revision: D23704722
fbshipit-source-id: a6e41d615f6548f2f8fd036814c59573a45f93bc
Summary: New-style async/await code can mutate variables directly, so we no longer need synchronization for these counters.
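For example, a counter owned by the async function can now be a plain integer mutated in place (a sketch, not the actual code):
```rust
use std::future::Future;

/// With async/await the future owns its state across await points, so a
/// plain u64 works where an AtomicU64 or a mutex was needed with old-style
/// futures combinators.
async fn count_successes<F: Future<Output = bool>>(jobs: Vec<F>) -> u64 {
    let mut successes = 0u64;
    for job in jobs {
        if job.await {
            successes += 1;
        }
    }
    successes
}
```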
Reviewed By: ikostia
Differential Revision: D23704765
fbshipit-source-id: eb2341cb0c82b8a49c28ad3c8fd811ed3af73436
Summary:
This would let us allow only certain bookmarks to be remapped from a small
repo to a large repo.
Reviewed By: krallin
Differential Revision: D23701341
fbshipit-source-id: cf17a1a21b7594a94c5fb117065f7d9298c8d1af
Summary:
Previously we used the target repo for a commit from a source repo. This diff
fixes that.
Reviewed By: krallin
Differential Revision: D23685171
fbshipit-source-id: 4aa105aec244ebcff92b7b71a6cb22dd8a10d2e5
Summary: Add a test to detect any unexpected changes in MPathElement size.
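A size-regression test of this kind is typically just an assertion on `std::mem::size_of` (a sketch; the expected value below is a placeholder, not the real size):
```rust
#[cfg(test)]
mod tests {
    use super::MPathElement;

    #[test]
    fn mpath_element_size() {
        // Fails loudly if a field change unexpectedly grows the type.
        // 32 is a placeholder; the real test pins the actual size.
        assert_eq!(std::mem::size_of::<MPathElement>(), 32);
    }
}
```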
Reviewed By: farnz
Differential Revision: D23703345
fbshipit-source-id: 74354f0861b048ee4611304fc99f0289bce4a7a5
Summary:
Facebook
We need them since we are going to sync ovrsource commits into fbsource
Reviewed By: krallin
Differential Revision: D23701667
fbshipit-source-id: 61db00c7205d5d4047a4040992e7195f769005d3
Summary: Noticed there had been an upstream 3.11.10 release with a fix for a performance regression in 3.11.9; the PR was https://github.com/xacrimon/dashmap/issues/100.
Reviewed By: krallin
Differential Revision: D23690797
fbshipit-source-id: aff3951e5b7dbb7c21d6259e357d06654c6a6923
Summary:
In getdeps we currently don't build and run the tests.
There are a few issues:
1. We also need to build tests for fizz, wangle, and mvfst, since proxygen tests include headers that are only exported when building tests in dependencies.
2. We use `ExternalProject_add` for gtest/gmock, but it doesn't seem to play nicely with getdeps.
Reviewed By: dddmello, mjoras
Differential Revision: D16934955
fbshipit-source-id: fb1c52237f9f0c71da86643409972c94d16e6a71
Summary: Properly find the required GMock version (1.8.0) and allow building tests in getdeps.
Reviewed By: mjoras
Differential Revision: D16935741
fbshipit-source-id: 46f62511e2feaf553d028e286a862aa5b30393c6
Summary: Also always install fizz test headers so that mvfst and proxygen tests can consume them without needing to build the fizz tests.
Reviewed By: yfeldblum
Differential Revision: D23676344
fbshipit-source-id: 7ae78c81c2d67bb8da135fcd69d4be119b50a27e
Summary: They were all transitively pulling it in from folly.
Reviewed By: mjoras
Differential Revision: D23683292
fbshipit-source-id: 2085a580584891b3fd0960c14505c0f675a11bd5
Summary:
EdenFS is adding a Python 3 Thrift client intended for use by other
projects, and the Mercurial Python 2 build doesn't understand Python 3
syntax files, so switch the default getdeps build to Python 3.
Reviewed By: quark-zju
Differential Revision: D23587932
fbshipit-source-id: 6f47f1605987f9b37f888d29b49a848370d2eb0e
Summary: These headers aren't needed and at best slow down compile times, so remove them.
Reviewed By: chadaustin
Differential Revision: D23693491
fbshipit-source-id: 4aebdfbbe56897623f62017bd498dc5c90ea6532
Summary:
This was only used in EdenMount.h to declare a method that is not compiled on
Windows; let's ifdef that method instead.
Reviewed By: chadaustin
Differential Revision: D23693494
fbshipit-source-id: 1eda62f2ae3a38a30aa0b517911635ef3d3896c2
Summary:
The ProcessNameCache code is compiled on Windows now, so this definition could
cause issues with different cpp files compiling different versions of
ProcessNameCache. To avoid this, let's remove it from Stub.h, which also
removes a bunch of #ifdefs.
Reviewed By: chadaustin
Differential Revision: D23693490
fbshipit-source-id: 8f3f7b1128235b9a60f850e688b9e98910c202fc
Summary: This is not needed, remove it.
Reviewed By: chadaustin
Differential Revision: D23693489
fbshipit-source-id: 0d7674f3001410b2d9ff02ef95049c5391d8528c
Summary: This code is the same as service/oss/main.cpp, so there's no need to keep this one around.
Reviewed By: chadaustin
Differential Revision: D23689607
fbshipit-source-id: bb72a0623dcdb36beca40c3766e8d6817b99dea2
Summary:
This stack updates eden to be able to check all of the locations where a
user's certificate may reside.
There can be multiple places where a cert may reside (we can't always
definitively choose one place to look based on the platform). Thus we need
to be able to configure multiple locations for certs in our eden config,
which means we need to be able to parse a list of values for a key in our
config parsing.
**Disclaimer this is really icky**
Our `FieldConverter` interface takes a string to parse. So this means
that after parsing the config file for each value we have to re-serialize it
into a string to pass it in here. Previously we only supported string and
bool values so this re-serialization was not too terrible. Now that we want
to support arrays this re-serialization is extra gross. To minimize the grossness,
I am reusing cpptoml for serializing / deserializing around the `FieldConverter`
interface.
Long term it would be better if FieldConverter took a cpptoml::base or
something more generic instead of a string so we don't have to do this.
But that will be a big refactor, and I don't currently have the bandwidth for it :(
Reviewed By: wez
Differential Revision: D23359928
fbshipit-source-id: 7c89de485706dd13a05adf19df28425d2c1756a8
Summary:
This test can't be made non-flaky, because it relies on the kernel deciding
when to drop inodes from its cache, and we've investigated it multiple
times. Given that it tests a rarely used function that would be better
expressed as a unit test in C++, just remove it for now.
Reviewed By: wez
Differential Revision: D23665455
fbshipit-source-id: 522e47113857eff399be4f2bb60e26e801d61e9a
Summary: For ease of consumption, remove the descriptive line and the extra newline at the bottom of the generated prefetch profile. Also, sort the files for smaller generated diffs upon iteration.
Reviewed By: kmancini
Differential Revision: D23683153
fbshipit-source-id: e2bd510d5fbd7095f199e70b2556b84e0984a914
Summary:
We've often had cases where we need to nuke people's caches for various
reasons. It's a huge pain since we don't have a way to communicate with all hg
clients. Now that we have configerator dynamicconfigs, we can use that to reach
all clients.
This diff adds support for configs like:
```
[hgcache-purge]
foo=2020-08-20
```
The key, 'foo' in this case, is an identifier used to run this purge only once.
The value is a date after which this purge will no longer run. This is useful
for bounding the damage if we forget about a purge and it keeps deleting caches
over and over in the future for new repos, or for repos where the run-once
marker file is deleted for some reason.
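The resulting decision logic amounts to: purge only if the run-once marker for this key does not exist yet and today is on or before the configured date. A hedged sketch of that check (not the actual hg extension, which works against hg's own config and store APIs):
```rust
use std::path::Path;

use chrono::NaiveDate;

/// Decide whether a configured purge entry should run now.
fn should_purge(marker_dir: &Path, key: &str, expiry: &str, today: NaiveDate) -> bool {
    // Run-once marker: skip if this purge already ran on this client.
    if marker_dir.join(key).exists() {
        return false;
    }
    // Bounded by date: never run after the configured expiry date.
    match NaiveDate::parse_from_str(expiry, "%Y-%m-%d") {
        Ok(date) => today <= date,
        Err(_) => false,
    }
}
```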
Reviewed By: quark-zju
Differential Revision: D23044205
fbshipit-source-id: 8394fcf9ba6df09f391b5317bad134f369e9b416