Summary:
Now that makeImmediateFutureWith exists, we can use it directly instead of
constructing an ImmediateFuture<folly::Unit> and calling thenValue on it.
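As a Python analogy (folly's actual API is C++; these names are illustrative), the helper wraps the callable directly and captures any thrown exception as an errored result, instead of seeding a unit future and chaining a continuation onto it:

```python
# Illustrative Python sketch of the makeImmediateFutureWith pattern;
# not folly's real C++ API. The callable runs immediately and any
# exception it throws becomes the "error" state of the result.
def make_immediate_future_with(fn):
    try:
        return ("value", fn())
    except Exception as e:
        return ("error", e)

# Success case: the value is captured directly.
assert make_immediate_future_with(lambda: 42) == ("value", 42)
# Failure case: the exception is captured, not propagated.
tag, err = make_immediate_future_with(lambda: 1 // 0)
assert tag == "error" and isinstance(err, ZeroDivisionError)
```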
Reviewed By: chadaustin
Differential Revision: D28518059
fbshipit-source-id: 0041cf863fb32efab274f11c77c76109ca9b454f
Summary: There are some unused warnings on Windows; they should be an easy fix.
Reviewed By: xavierd
Differential Revision: D28665465
fbshipit-source-id: 9281de7fd62f38e09d91435bb53c819bb98fb4ec
Summary: There are some unused warnings on Windows; they should be an easy fix.
Reviewed By: xavierd
Differential Revision: D28595085
fbshipit-source-id: abc03d210b2e9c5aa19a8925be6d4c426311e826
Summary: There are some unused warnings on Windows; they should be an easy fix.
Reviewed By: xavierd
Differential Revision: D28594404
fbshipit-source-id: f3dec92403739d67df3ecd091f2d8283a11ea0db
Summary: This will be used to replace calls to folly::makeFutureWith.
Reviewed By: chadaustin
Differential Revision: D28515786
fbshipit-source-id: 2c2c542392e8e57b8f865173d6878cb9d00ba376
Summary:
For debugging and better error messages it would be nice to know who worked on
a given request.
I'll do the AOSC schema change once this is accepted.
Reviewed By: krallin
Differential Revision: D28441673
fbshipit-source-id: ba146d7f43dde26d9433f76af7fe982da14b5b82
Summary:
Those will be used for two purposes:
* to limit the scope of a given tailer to just a given repo (this way we could
  use different tailer binaries for different repos, disable processing for a
  single repo, etc.)
* to enforce a single in-flight request per repo (to prevent a client from
  accidentally scheduling duplicate requests)
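A minimal Python sketch of the second point, assuming a simple in-memory gate (this is not the actual Mononoke implementation, which would enforce this at the storage layer):

```python
# Illustrative sketch: allow at most one in-flight async request per
# repo by tracking which repo ids are currently busy.
class PerRepoGate:
    def __init__(self):
        self._in_flight = set()

    def try_acquire(self, repo_id):
        """Return True if this repo had no in-flight request."""
        if repo_id in self._in_flight:
            return False  # duplicate request for this repo: reject it
        self._in_flight.add(repo_id)
        return True

    def release(self, repo_id):
        self._in_flight.discard(repo_id)

gate = PerRepoGate()
assert gate.try_acquire("repo1")
assert not gate.try_acquire("repo1")  # second request is blocked
gate.release("repo1")
assert gate.try_acquire("repo1")      # allowed again after completion
```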
I'll do the AOSC schema change once this is accepted.
Reviewed By: krallin
Differential Revision: D28441670
fbshipit-source-id: 7b35a1c7034707d7cf54220c559edd6e03f430c3
Summary: I want to put more things in the async_requests crate.
Reviewed By: krallin
Differential Revision: D28441671
fbshipit-source-id: 19233c2c5b697cc1e27107cd9904666baf8f10b7
Summary: I have modified the places that raised most of the errors which users reported and which were resolved by renewing certificates.
Reviewed By: krallin
Differential Revision: D28568561
fbshipit-source-id: 44fb127a49bde83efee1c934e0435b31f8602a8d
Summary: This diff removes the gflag-based batch size option and promotes it to an EdenFS configuration so we can experiment with different batch sizes easily.
Reviewed By: chadaustin
Differential Revision: D28555280
fbshipit-source-id: 6d3a7be3cd880f0aaa3f427c0328222efa2d37ea
Summary:
The telemetry wrapper didn't validate the SNAPSHOT header, which makes
migrating to a new format harder. Fortunately, it doesn't even need to
read the SNAPSHOT file. The dirstate file is maintained even in EdenFS
checkouts.
Reviewed By: quark-zju
Differential Revision: D28650333
fbshipit-source-id: 174cf7039adcbb28224ec528c2462e0a9232b6cd
Summary: Upcoming changes will force enable metalog so there will be no way to migrate down.
Reviewed By: DurhamG
Differential Revision: D28595290
fbshipit-source-id: a130b3c60c5b553d024868f28a28e48c50d44783
Summary:
It was added by D8527475 (72c3d8afc1) to work around hgsql with no-fncache and
long file names synced from svn. Upcoming changes will force fncache to
simplify configuration, and the hgsql server code was forked, so let's just
delete the test.
Reviewed By: DurhamG
Differential Revision: D28595291
fbshipit-source-id: 60d2449cca7af46b8b5b3c3b557a36507ff1576e
Summary: This will be used by fbclone to ship lazy commit hash backend.
Reviewed By: DurhamG
Differential Revision: D28554445
fbshipit-source-id: a263ae7683124b3b86f4025b02c7de20dcb9813e
Summary:
Add a Dockerfile build for the openr.thrift Python module.
The python module is built by:
1. Building and installing Facebook libraries with fbcode_builder
2. Building Open/R
3. Generating Cython files from thrift files with the FB thrift compiler
4. Generating C++ files from the Cython modules with the Cython compiler
5. Compiling the C++ modules into shared objects
Future work for building and distributing Breeze:
- Fix the hacks in build_breeze.sh, see comments therein
- Use a staged Dockerfile build for the Open/R and Breeze build
- Install openr.thrift. The openr.thrift shared objects are built and
stored in the Docker image generated by the Dockerfile, but are unused.
- Install all the openr python submodules in a single openr site-package
- Add cross-compilation to the openr.thrift build. This is needed for
Terragraph
- Upload the openr Python package to PyPI
Reviewed By: saifhhasan
Differential Revision: D28614443
fbshipit-source-id: 38b7e7c5594fd4bb5a338f19c69e5fc3b3b95863
Summary: This makes it possible to use non-debugshell to compact the metalog.
Reviewed By: DurhamG
Differential Revision: D28550902
fbshipit-source-id: 789830ba35243d248397e6a52ee343584c1e01a9
Summary:
The "compact" API rebuilds the metalog by removing older history. It could be
useful to reduce the size overhead of the metalog.
This is also useful if we're doing other "rebuild" work, such as rebuilding the
changelog.
Reviewed By: DurhamG
Differential Revision: D28550903
fbshipit-source-id: 56f875bd955247181236a976dcce6163d126a4b6
Summary: I'm going to reuse this for AOSP import logic speedups, and I do not want my low QPS limit overridden by a higher QPS limit set for backfilling. Push the rate limiter out.
Reviewed By: StanislavGlebik
Differential Revision: D28638180
fbshipit-source-id: ef3a783d4b1993614a146f534337f719958a1f36
Summary: We don't need to load the chunk data or update the chunk generation if the blobstore key is already present in IfAbsent mode.
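A hedged Python sketch of the fast path (the real blobstore is not a dict, and these names are illustrative): in IfAbsent mode, an existing key lets us skip loading the chunk data entirely:

```python
# Illustrative sketch of the IfAbsent optimization: check for presence
# first, and only load/materialize the value when the key is missing.
def put_if_absent(store, key, load_value):
    if key in store:
        # Already present: no chunk load, no generation update.
        return False
    store[key] = load_value()  # only pay the load cost when needed
    return True

store = {"k1": b"old"}
loads = []

def loader():
    loads.append(1)
    return b"new"

assert put_if_absent(store, "k1", loader) is False
assert loads == []            # chunk data was never loaded
assert put_if_absent(store, "k2", loader) is True
assert store["k2"] == b"new"
```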
Reviewed By: farnz
Differential Revision: D28640820
fbshipit-source-id: 3eab255ebfc896d4950935e3d7350b19f9a280b9
Summary:
The zipimport logic requires the pyc mtime to match its source. However, the
Windows system time zone can invalidate it and cause slow startups.
Work around it by making the zipimport mtime function return a fallback value
so that the mtime check is bypassed.
  # zipimport.py, _unmarshal_code
  source_mtime, source_size = \
      _get_mtime_and_size_of_source(self, fullpath)
  if source_mtime:  # if source_mtime is false, then the check is bypassed.
      # We don't use _bootstrap_external._validate_timestamp_pyc
      # to allow for a more lenient timestamp check.
      if (not _eq_mtime(_unpack_uint32(data[8:12]), source_mtime) or
              _unpack_uint32(data[12:16]) != source_size):
          _bootstrap._verbose_message(
              f'bytecode is stale for {fullname!r}')
          return None
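A minimal, self-contained sketch of why the fallback works (this mirrors the `if source_mtime:` guard above; it is not the actual zipimport patch):

```python
# Illustrative reduction of zipimport's staleness guard: a falsy
# source mtime skips the comparison entirely, so a wrong system time
# zone can no longer mark the bytecode stale.
def bytecode_is_stale(source_mtime, recorded_mtime):
    if source_mtime:
        # Normal path: compare the source mtime against the pyc header.
        return source_mtime != recorded_mtime
    # Fallback path: falsy mtime => trust the bytecode unconditionally.
    return False

assert bytecode_is_stale(1000, 999)       # mismatch => stale
assert not bytecode_is_stale(1000, 1000)  # match => fresh
assert not bytecode_is_stale(0, 999)      # fallback 0 => check bypassed
```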
Changed my Windows time zone from GMT-7 to GMT-4, set PYTHONVERBOSE and
PYTHONDEBUG to 1, ran `hg init -h`, and checked its stderr. Before this change
it printed lines like:
  # bytecode is stale for 'edenscm.traceimport'
and no longer does after replacing the `__init__.py` in the zip with the new
version.
Reviewed By: DurhamG
Differential Revision: D28622287
fbshipit-source-id: bb3e8e378ea168e4f83f4b6aa9713103b2c90ef8
Summary:
Fix and add tests for two problems when running in OldestFirst mode:
1. The computed chunk bounds were incorrect when the walker chunk size was smaller than the bulkops fetch size.
2. When loading a checkpoint in the OldestFirst direction, the chunk bounds to continue from were reversed.
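For illustration, a hypothetical sketch of oldest-first chunking (the real walker operates over bulkops fetches, not plain integer ids): ids are walked from low to high and each chunk gets ascending, non-reversed bounds.

```python
# Hypothetical sketch: split the id range [lo, hi) into chunks walked
# oldest-first, emitting (lower, upper) bounds per chunk. The bugs
# described above produced incorrect or reversed bounds in this mode.
def chunk_bounds_oldest_first(lo, hi, chunk_size):
    bounds = []
    start = lo
    while start < hi:
        end = min(start + chunk_size, hi)
        bounds.append((start, end))  # bounds stay ascending
        start = end
    return bounds

assert chunk_bounds_oldest_first(0, 10, 4) == [(0, 4), (4, 8), (8, 10)]
```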
Reviewed By: farnz
Differential Revision: D28624622
fbshipit-source-id: 23d3a3505631447da6607bc472f625a34d0b8752
Summary:
In practice, if the client has disconnected, it's unlikely that the thing that
fails is a flush (instead, it will probably be the poll that checks whether we
can write beforehand). So, let's track that separately.
This isn't super important since we can already infer the timing from when the
error was logged, but it'll make the logs less ambiguous.
Reviewed By: johansglock
Differential Revision: D28637258
fbshipit-source-id: 3bc9c9aaa9fc8cf7a2d2514fb520cb1729f4c560
Summary:
Extend the `blame_v2` format to include metadata about the location in the
parent commit that a blamed line replaces. This can be used to implement
accurate "skip past this change" in clients.
Most ranges only need the range of lines that the original blame range
replaces. For ranges that are inserts, the parent range is of zero length and
the offset indicates the line that the range was inserted before.
For renames, we must include the path of the file before the rename, so that
the file can be found in the parent.
For merge commits, if the file is present in more than one parent, then lines
that are introduced in the merge commit itself have multiple possibilities for
the parent range. We select and record the first parent that contains the file
as the provider of the parent range for these lines. This favours the p1
history of the file, but allows "skip past this change" to work when files
are merged in.
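A hypothetical Python sketch of the per-range parent metadata described above (field names are illustrative, not the real blame_v2 schema):

```python
# Illustrative model of the parent metadata attached to each blame
# range: which parent provides the range, the pre-rename path if any,
# and the replaced line range in that parent.
from dataclasses import dataclass
from typing import Optional

@dataclass
class BlameRangeParent:
    parent_index: int            # which parent supplies the range (merges)
    renamed_from: Optional[str]  # path before a rename, if the file moved
    start: int                   # first replaced line in the parent
    length: int                  # 0 for pure inserts

# A pure insert replaces a zero-length parent range: `start` is the
# line the new lines were inserted before.
insert = BlameRangeParent(parent_index=0, renamed_from=None,
                          start=42, length=0)
assert insert.length == 0

# A range surviving a rename records the old path so the file can be
# found in the parent commit.
renamed = BlameRangeParent(parent_index=0, renamed_from="old/name.c",
                           start=3, length=5)
assert renamed.renamed_from == "old/name.c"
```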
Reviewed By: farnz
Differential Revision: D28546768
fbshipit-source-id: 2af1e95a0d27fb25aeea51682177fbac2c41b029
Summary:
The derived data tailers use batch derivation aided by a graph structure. This derives batches in memory, then writes out the result of each batch.
Use this mechanism in `branch_forest_updater`.
Reviewed By: mitrandir77
Differential Revision: D28614817
fbshipit-source-id: 351007a87302fb357e0f6db386e4493bb7879c78