Summary:
Pull Request resolved: https://github.com/facebookexperimental/eden/pull/67
With this change it will be possible to build the dependencies of, and run, the integration tests using getdeps.py.
This is the first goal of Q4 as per https://fb.quip.com/v8YzAYNSYgot: "Get Open Source version of integration tests running on Legocastle".
Before this diff:
The OSS integration tests currently run on GitHub by:
- Building some test dependencies with getdeps.py
- Building some test dependencies with homebrew/apt-get
- Running tests via python script
The OSS integration tests were not running on Sandcastle.
After this diff:
The OSS integration tests run on GitHub by:
- Building and executing tests via getdeps.py (execution of tests happens by getdeps.py calling Make calling python script)
The OSS integration tests run on Sandcastle using the same getdeps.py setup as GitHub.
Reviewed By: krallin
Differential Revision: D24253268
fbshipit-source-id: cae249b72d076222673b8bbe4ec21866dcdbb253
Summary:
Include a `User-Agent` header in EdenAPI requests from Mercurial. This will allow us to see the version in Scuba, and in the future, will allow us to distinguish between requests sent by Mercurial and those sent directly by EdenFS.
In keeping with the current output of `hg version`, the application is specified as "EdenSCM" rather than "Mercurial".
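A minimal sketch of attaching such a header to a request. The exact version-string format is an assumption (the commit only says the application name is "EdenSCM"); the URL and `build_user_agent` helper are hypothetical, not the EdenAPI client code.

```python
import urllib.request

def build_user_agent(version):
    # Hypothetical format: "EdenSCM/<version>". The real client may
    # format the version differently.
    return "EdenSCM/%s" % version

req = urllib.request.Request(
    "https://example.com/edenapi/files",  # placeholder URL
    headers={"User-Agent": build_user_agent("4.4.2")},
)
print(req.get_header("User-agent"))  # → EdenSCM/4.4.2
```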
Reviewed By: singhsrb
Differential Revision: D24347021
fbshipit-source-id: e323cfc945c9d95d8b2a0490e22c2b2505a620dc
Summary: This was completely undocumented, and thus undiscoverable. Add some documentation, so that it's at least clear the feature exists.
Reviewed By: singhsrb
Differential Revision: D24332961
fbshipit-source-id: 9e73163a9314ceb7f953a3b1ac0f58c9a6e6d4d9
Summary:
This updates multiplexedblob and logblob to capture perf counters for the
operations we run, and log them to Scuba. Along with the previous diffs in this
stack, this gives us the number of Manifold retries, total delay, and conflicts
all logged to blobstore trace on a per-operation basis.
Reviewed By: HarveyHunt
Differential Revision: D24333039
fbshipit-source-id: 9c7d0a467f8df08dcb2a0d3bb6b543cdb3ea1d90
Summary:
This updates ManifoldBlob to log the aforementioned data points to perf
counters. There's a bit of refactoring that also had to go into this to make
`ctx` available everywhere it's needed.
Reviewed By: aslpavel
Differential Revision: D24333040
fbshipit-source-id: 1b63bcd1e1ee36bae4dbbc1da053c7f1bdf96675
Summary:
This adds support for "forking" perf counters at a point in the stack, giving
you a CoreContext that logs to one or more sets of perf counters.
This is useful for low-level operations where we want to collect more granular
logging data, in particular blobstore operations, where we'd like to collect
the time spent waiting for Manifold retries or the number of Manifold retries
in blobstore trace for each individual blobstore operation (we can't do that
using the `CoreContext` we have, because that would be missing the
per-operation granularity).
The implementation supports a list of reference counted perf counters in the
CoreContext. When you want to add a new counter, we replace the list with a new
one, and give you a reference to the one you just added. When you write, we
write to all perf counters, and when you read, we read from the "top" perf
counter (which is always there). To read from one of the forked counters, you
use the reference you got when you created it.
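The fork/write-to-all/read-from-top scheme described above can be sketched as follows. This is a toy model with made-up names (`PerfCounters`, `fork_perf_counters`), not the Mononoke `CoreContext` API:

```python
class PerfCounters:
    def __init__(self):
        self.counts = {}

    def increment(self, name, delta=1):
        self.counts[name] = self.counts.get(name, 0) + delta

class Context:
    """Toy model of the forking scheme; the real code is Rust and
    reference-counts the per-counter sets."""
    def __init__(self, counters=None):
        # The "top" counter set is always present at index 0.
        self.counters = counters or [PerfCounters()]

    def fork_perf_counters(self):
        forked = PerfCounters()
        # Replace the list with a new one that includes the forked set...
        new_ctx = Context(self.counters + [forked])
        # ...and hand back a reference to the counters we just added.
        return new_ctx, forked

    def increment(self, name):
        # Writes go to every counter set in the list.
        for c in self.counters:
            c.increment(name)

    def get(self, name):
        # Reads come from the "top" counter set, which is always there.
        return self.counters[0].counts.get(name, 0)

ctx = Context()
ctx.increment("manifold_retries")
forked_ctx, op_counters = ctx.fork_perf_counters()
forked_ctx.increment("manifold_retries")
# Per-operation counters only see increments made after the fork:
print(op_counters.counts["manifold_retries"])  # → 1
print(forked_ctx.get("manifold_retries"))      # → 2
```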
Reviewed By: aslpavel
Differential Revision: D24333041
fbshipit-source-id: ce318dfc04a1ea435b2454b53df4cae93d57c0a5
Summary:
The x-repo sync usually has a "noop" mapping, i.e. a mapping that doesn't
change the paths (though it might have an arbitrary name).
It's useful for commits that are identical between small and large repo to be
able to backfill this mapping. This diff adds a command to do that.
Reviewed By: krallin
Differential Revision: D24337281
fbshipit-source-id: 89a058f95677e4a5c8686122a317eadf8b1bb995
Summary: It will be used in the next diff, so let's move it to a separate function.
Reviewed By: krallin
Differential Revision: D24334717
fbshipit-source-id: e50d13d45c633397504cf08954f2ced9ace8f570
Summary: Convert derived data utils to use new style futures
Reviewed By: StanislavGlebik
Differential Revision: D24331068
fbshipit-source-id: ad658b278802afa1e4ecd44c5a24164135748790
Summary:
This is needed to be able to use `has_redaction_root_cause()` with a metadata
rebuilding error.
Reviewed By: StanislavGlebik
Differential Revision: D24360816
fbshipit-source-id: 388df8cedb769ff001bfe4ff9cd5063ccd9de9f1
Summary:
This is in line with other changes we're making to map logic now. Note that
apart from checking in-repo prefix-free-ness of the map, this also checked the
same across many small repos. It probably does not make sense to do that either
now that we allow non-prefix-free maps within a repo.
Reviewed By: StanislavGlebik
Differential Revision: D24348161
fbshipit-source-id: caaa22953c8a15a08607157b99c9f0fd0edf633f
Summary:
Until we have the same standards for the native and push-redirected pushes,
these need to be automatically bypassed.
Reviewed By: krallin
Differential Revision: D24357372
fbshipit-source-id: f85459145f6a5217c07445d7017f3b11ed1284a7
Summary:
Besides the tail mode of x_repo_sync_job, which we use normally, there's also
a "once" mode, which means "sync a single commit". Previously it did just that -
synced a single commit and failed if parents of this commit weren't synced.
However this is very unpleasant to use - instead let's change the semantics to
sync the commit and all of its ancestors.
Also, I made `target_bookmark` an optional parameter - sometimes we just want to sync a commit without any bookmarks at all.
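The new "sync the commit and all of its ancestors" semantics can be sketched as a parents-first traversal that stops at already-synced commits. All names here are hypothetical, and a plain dict stands in for the real commit graph:

```python
def commits_to_sync(commit, parents, synced):
    """Return `commit` and its unsynced ancestors in parents-first order.

    `parents` maps a commit to its parent commits; `synced` is the set of
    commits already present in the target repo.
    """
    order = []
    seen = set()

    def visit(c):
        if c in seen or c in synced:
            return
        seen.add(c)
        for p in parents.get(c, []):
            visit(p)  # parents are synced before their children
        order.append(c)

    visit(commit)
    return order

parents = {"d": ["c"], "c": ["b"], "b": ["a"], "a": []}
print(commits_to_sync("d", parents, synced={"a"}))  # → ['b', 'c', 'd']
```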
Reviewed By: mitrandir77
Differential Revision: D24135771
fbshipit-source-id: 341c1808a44c58f89536b8c07394b77d8ced3f37
Summary: These were stabilized in 1.45.0 and 1.47.0 respectively.
Reviewed By: StanislavGlebik
Differential Revision: D24353680
fbshipit-source-id: f2afe906e5260b1b360455acc20d9a806c988c9c
Summary:
On Windows, computing the sha1 of a materialized file requires opening up the
file in the working copy, as the file is cached there. Interestingly, this
potentially means that for computing the sha1 of a file, EdenFS may receive a
callback from ProjectedFS about that file or a parent directory. At this point,
EdenFS just refuses to serve this callback, as doing so may trigger an infinite
loop, or simply deadlock. While this may sound weird, recursive callbacks are
not expected, as they signify that EdenFS's view of the working copy doesn't
match what it actually is.
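The "refuse to serve a recursive callback" behavior can be sketched as a reentrancy guard. This is purely illustrative Python with invented names (`serve_callback`, `RecursiveCallbackError`), not the EdenFS/ProjectedFS C++ code:

```python
import threading

_state = threading.local()

class RecursiveCallbackError(Exception):
    pass

def serve_callback(handler):
    """Refuse to serve a callback that re-enters the filesystem server.

    If computing a file's sha1 has to open the file in the working copy,
    the filesystem layer may call back in for that same file; serving it
    could loop or deadlock, so we fail fast instead.
    """
    if getattr(_state, "in_callback", False):
        raise RecursiveCallbackError("recursive callback refused")
    _state.in_callback = True
    try:
        return handler()
    finally:
        _state.in_callback = False

print(serve_callback(lambda: "sha1 computed"))  # → sha1 computed
```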
To close the loop, and from a code perspective, this means that computing the
sha1 of a file can fail and throw an exception. Unfortunately, the code
didn't reflect this fact and exceptions were simply ignored. When that happens
during a checkout operation, it can leave the working copy in a weird state,
further aggravating the mismatch between EdenFS's view of the working copy and
what it actually is.
Reviewed By: wez
Differential Revision: D24282048
fbshipit-source-id: 745af03189fe345150f0b1792ee1b37a1b8fb0d4
Summary:
The hide-before config was added to encourage people to actively hide unused
drafts for repo performance, instead of keeping unused drafts forever, since a
lot of code paths assume `len(draft)` is small. See D13993584 (28b4dfbb38) for more context.
Now that our hide-before date is set to 2.5 years ago (2018-2-25), this change
probably only affects a very small number of users.
Reviewed By: DurhamG
Differential Revision: D24298198
fbshipit-source-id: 938aca1222b55e09fdb058ff01bc063733f201dc
Summary:
Rust tests run in multiple threads. Setting environment variables affects other
tests running in other threads and causes random test failures.
Protect env vars using a lock.
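The idea is a process-wide lock around any environment mutation, so concurrent tests can't observe each other's half-set variables. A sketch in Python (the actual fix is in the Rust test code; `locked_env` is an invented helper):

```python
import os
import threading
from contextlib import contextmanager

# One process-wide lock: every test that touches the environment
# serializes on it.
_ENV_LOCK = threading.Lock()

@contextmanager
def locked_env(**values):
    with _ENV_LOCK:
        # Save the current state so we can restore it afterwards.
        saved = {k: os.environ.get(k) for k in values}
        try:
            os.environ.update(values)
            yield
        finally:
            for k, v in saved.items():
                if v is None:
                    os.environ.pop(k, None)
                else:
                    os.environ[k] = v

with locked_env(EDEN_TEST_VAR="1"):
    print(os.environ["EDEN_TEST_VAR"])  # → 1
```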
Reviewed By: DurhamG
Differential Revision: D24296639
fbshipit-source-id: db0bee85625a7b63e07b95ea76d96029487881d4
Summary:
The shell-script cargo tests seem very flaky. Use a dedicated Python script to
run the tests, with a more concise output that only includes failures, and run
tests in parallel.
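A toy version of such a runner: execute test commands in parallel and report only the failures. The real script drives the cargo test binaries; here `run_tests` just takes arbitrary argv lists:

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

def run_tests(commands, jobs=4):
    """Run each test command in parallel; collect and print only failures."""
    def run(cmd):
        proc = subprocess.run(cmd, capture_output=True, text=True)
        return cmd, proc.returncode, proc.stdout + proc.stderr

    failures = []
    with ThreadPoolExecutor(max_workers=jobs) as pool:
        for cmd, code, output in pool.map(run, commands):
            if code != 0:
                failures.append((cmd, output))
                print("FAIL:", " ".join(cmd))
    return failures

ok = [sys.executable, "-c", "pass"]
bad = [sys.executable, "-c", "raise SystemExit(1)"]
failures = run_tests([ok, bad, ok])
print(len(failures))  # → 1
```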
Reviewed By: DurhamG
Differential Revision: D24296433
fbshipit-source-id: 1d63146c6c84f1035dded24fcd3d79f116c2e740
Summary:
Ideally we'd just delete the p4 convert functionality, but I'm too lazy
to go through and extract it right now.
This was recently enabled when I enabled all the convert tests. We don't use the p4 logic, so it's safe to just turn it back off to get a release out.
Reviewed By: quark-zju
Differential Revision: D24352068
fbshipit-source-id: 6f3a1f88739b2e2348aff00e8cae333473bbe71a
Summary: My recent change accidentally returned early when reading the prompt input, which skipped the \r truncation needed for Windows.
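The `\r` truncation in question can be sketched as follows (a toy helper, not the actual Mercurial prompt code): Windows consoles terminate input with `\r\n`, so both characters must be stripped.

```python
def read_prompt_input(line):
    """Strip the trailing newline, including the \r that Windows
    consoles emit before \n."""
    if line.endswith("\n"):
        line = line[:-1]
    if line.endswith("\r"):
        line = line[:-1]
    return line

print(repr(read_prompt_input("yes\r\n")))  # → 'yes'
```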
Reviewed By: sfilipco
Differential Revision: D24350672
fbshipit-source-id: 4a589d76bf41cda7fda2518003ef272f9a6ead48
Summary:
While on Linux these can't fail (or, to be more precise: it doesn't matter),
they can on Windows. One such example is when a user locks a file and triggers
an update that modifies this file. The invalidation will fail, and thus the
update should keep track of that file not being updated properly.
Previously, the invalidation would raise an exception, but that proved to be
the wrong approach, as some state would need to be rolled back, which the
exception didn't help with. Instead, let's just return a Try and make sure that
we handle all the cases properly.
Reviewed By: chadaustin
Differential Revision: D24163672
fbshipit-source-id: ac881984138eefa65c053478a160e2a653fd3fdf
Summary:
Update from 1.44.0. Updates have been blocked because of a bad interaction
between platform007+mode/opt-clang-thinlto+gold. Now that the default linker is
lld, perhaps this is no longer an issue.
Various tweaks due to updates:
- `atomic_min_max`, `const_transmute`, `inner_deref`, `ptr_offset_from` and `str_strip` are now stable
- `asm` renamed to `llvm_asm`
- `intra_doc_link_resolution_failure` lint renamed to `broken_intra_doc_link` (ndmitchell I didn't fix Gazebo because you'd explicitly suppressed the warning - I'll let you work out what to do with that)
(This is caused by incompatibility between the llvm used by rustc in
platform007 and llvm-fb. In platform009, rustc uses llvm-fb directly so there's
no scope for incompatibility.)
Reviewed By: dtolnay
Differential Revision: D24288638
fbshipit-source-id: 5155d85c186fd79d3cc86cb0bb554ab77d76c12c
Summary: Rename futures01 types from Foo to Foo01 in the top level lib.rs and derive_impl.rs files in preparation for adding a trait method that returns new futures
Reviewed By: aslpavel
Differential Revision: D24311165
fbshipit-source-id: 4f3b12ba3eaf8023959d6d4bbb4568d191b1fffb
Summary: This allows the user to see how the mover of a particular version operates on any given path.
Reviewed By: StanislavGlebik
Differential Revision: D24335975
fbshipit-source-id: f67847112eb0d3c8c49584604e2f9d93579cdde4
Summary:
Those tests are kinda broken in a number of ways right now.
First, they try to connect to a prod DB to record what bundle they just pushed.
That's not ideal, so this adds a flag to have them not do this.
Second, they are racy by design, and they mostly don't pass at all in
mode/dev.
The way we make the tests run here is by having them forwardfill for 10 seconds
then give up, and we hope that during that time, they've fetched the bundles
they should fetch from SQL, and synced them. However, that's not really
sufficient because establishing your first connection to SQL from a mode/dev
binary is quite slow, so in the 10 seconds, you might pick up your first
bundle, start replaying it, then exit before you get to the second one.
To fix this, this diff updates the fillers to expect a specific number of
bundles to replay. We still have a limit on the number of total iterations to
avoid letting the tests hang if the number of bundles isn't the one we expect.
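The "expect N bundles, but cap the iterations" loop can be sketched as a bounded poll. Names here are hypothetical; `fetch_synced_count` stands in for querying the filler's progress:

```python
import time

def wait_for_bundles(fetch_synced_count, expected, max_iters, delay=0.01):
    """Poll until `expected` bundles have been replayed, with a hard cap
    on iterations so a wrong expectation can't hang the test forever."""
    for _ in range(max_iters):
        if fetch_synced_count() >= expected:
            return True
        time.sleep(delay)
    return False

synced = []
def fake_fetch():
    synced.append(1)  # pretend one bundle lands per poll
    return len(synced)

print(wait_for_bundles(fake_fetch, expected=3, max_iters=10))  # → True
```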
Fixing this revealed further breakage, which I solved earlier in this stack.
Unfortunately, it's not sufficient to make test-commitcloud-reversefiller.t
work on Python 3, because the infinitepush extension appears to be broken
server side. I'll file a task for the Mercurial oncall for that.
Reviewed By: mitrandir77
Differential Revision: D24277474
fbshipit-source-id: 0a5e1f7db8dc0c0068b0fc203abc0503226107ec
Summary:
We're seeing slow pack fetches in some cases. Let's add some extra
debug output to get more information.
Reviewed By: quark-zju
Differential Revision: D24295593
fbshipit-source-id: b5a5bdf169a8c05a3143da09d69646a7a742ef08
Summary:
We're seeing history fetching take quite a while during checkout and
rebase, but it's not really necessary for a checkout. In addition to it being
slow, if memcache doesn't have the history entry we'll fall back to a full
revision fetch from the server, which includes data. Let's disable prefetching
history during checkout.
Reviewed By: quark-zju
Differential Revision: D24295594
fbshipit-source-id: 70aa6e1925074b6546626a5192a7562d6da31f2b
Summary:
My recent diff changed these lines from `bytes(value)` to
`pycompat.decodeutf8(value)`, since we want these to be strings. Unfortunately,
on python 2 decodeutf8() just passes the value straight through, so in cases
where we're handed bytearray, we pass that through instead of converting it to
bytes. Some downstream consumers require it to be bytes.
Let's conditionally convert it to bytes.
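The conditional conversion amounts to the following (a sketch with an invented helper name, shown on Python 3 where `bytes(bytearray)` behaves the same way):

```python
def ensure_bytes(value):
    """Normalize bytearray to bytes before handing it downstream.

    On Python 2, decodeutf8() passes the value through unchanged, so a
    bytearray must be converted explicitly.
    """
    if isinstance(value, bytearray):
        return bytes(value)
    return value

print(type(ensure_bytes(bytearray(b"abc"))).__name__)  # → bytes
```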
Reviewed By: krallin
Differential Revision: D24307818
fbshipit-source-id: a0cc64b7e2cf7645586e633e7a4a382b69390e15
Summary:
Our case conflict checking is very inefficient on large changesets. The root
cause is that we traverse the parent manifest for every single file we are
modifying in the new changeset.
This results in very poor performance on large changes since we end up
reparsing manifests and doing case comparisons a lot more than we should. In
some pathological cases, it results in us taking several *minutes* to do a case
conflict check, with all of that time being spent on CPU lower-casing strings
and deserializing manifests.
This is actually a step we do after having uploaded all the data for a commit,
so this is pure overhead that is being added to the push process (but note it's
not part of the pushrebase critical section).
I ended up looking at this issue because it is contributing to the high
latencies we are seeing in commit cloud right now. Some of the bundles I
checked had 300+ seconds of on-CPU time being spent to check for case
conflicts. The hope is that with this change, we'll get fewer pathological
cases, and might be able to root cause remaining instances of latency (or have
that finally fixed).
This is pretty easy to repro.
I added a binary that runs case conflict checks on an arbitrary commit, and
tested it on `38c845c90d59ba65e7954be001c1eda1eb76a87d` (a commit that I noted
was slow to ingest in commit cloud, despite all its data being present already,
meaning it was basically a no-op). The old code takes ~3 minutes. The new one
takes a second.
I also backtested this by rigging up the hook tailer to do case conflict checks
instead (P145550763). It is about the same speed for most commits (perhaps
marginally slower on some, but we're talking microseconds here), but for some
pathological commits, it is indeed much faster.
This notably revealed one interesting case:
473b6e21e910fcdf7338df66ee0cbeb4b8d311989385745151fa7ac38d1b46ef (~8K files)
took 118329us in the new code (~0.1s), and 86676677us in the old (~87 seconds).
There are also commits with more files in recent history, but they're
deletions, so they are just as fast in both (< 0.1 s).
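The core idea is to lowercase and index the parent's paths once, then look each new path up in that index, instead of re-walking the parent manifest per modified file. A toy sketch on flat path lists (Mononoke actually works on manifests and MPath elements, and the function name is invented):

```python
def check_case_conflicts(new_files, parent_files):
    """Report case conflicts between new files and a parent's files,
    including which existing path each new path conflicts with."""
    # Build the lowercase index of the parent's paths exactly once.
    lowered = {}
    for p in parent_files:
        lowered.setdefault(p.lower(), p)
    conflicts = []
    for p in new_files:
        existing = lowered.get(p.lower())
        # The same path (identical case) is not a conflict.
        if existing is not None and existing != p:
            conflicts.append((p, existing))
    return conflicts

print(check_case_conflicts(["README.md"], ["readme.md", "src/lib.rs"]))
# → [('README.md', 'readme.md')]
```

This lookup-based shape is also what makes it cheap to report *what* each file conflicts with, rather than just that a conflict exists.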
Reviewed By: StanislavGlebik
Differential Revision: D24305563
fbshipit-source-id: eb548b54be14a846554fdf4c3194da8b8a466afe
Summary:
I'm reworking some of our case conflict handling, and as part of this, I'm
going to be using check_case_conflicts for all our checking of case conflicts,
and notably for the case where we introduce a new commit and check it against
its parent (which, right now, does not check for case conflicts).
To do this and provide a good user experience (i.e. indicate which files
conflicted and with what), I need `check_case_conflicts` to report what files
the change conflicts with. This is what this diff does.
This does mean a few more allocations, so I "paid those off" by updating our
case lowering to allocate one fewer Vec and one fewer String per MPathElement
being lowercased.
Reviewed By: StanislavGlebik
Differential Revision: D24305562
fbshipit-source-id: 8ac14466ba3e84a3ee3d9216a84c2d9125a51b86