Summary: As part of the effort to deprecate futures 0.1 in favor of 0.3, I want to create a new futures_ext crate that will contain some of the extensions from futures_01_ext that are still applicable. But first I need to reclaim this crate name by renaming the old futures_ext crate. This will also make it easier to track which parts of the codebase still use the old futures.
Reviewed By: farnz
Differential Revision: D24725776
fbshipit-source-id: 3574d2a0790f8212f6fad4106655cd41836ff74d
Summary:
In Mononoke, for a sharded DB we historically used a connection pool of size 1 per shard. With the Mysql FFI client this no longer makes sense, as the client's connection pool is smart enough and designed to work with sharded DBs, so currently we don't even benefit from having a pool.
In this diff I added an API to create sharded connections: a single pool is shared between all the shards.
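A minimal Python sketch of the idea (names hypothetical; the real Mononoke code is Rust): every shard draws connections from one shared pool instead of each shard owning a pool of size 1.

```python
from queue import Queue

class Pool:
    """Tiny stand-in for the client's connection pool."""
    def __init__(self, size):
        self.q = Queue()
        for i in range(size):
            self.q.put(f"conn-{i}")

    def get(self):
        conn = self.q.get()
        self.q.put(conn)  # immediately recycle for this toy example
        return conn

class ShardedConnections:
    """One pool shared across all shards."""
    def __init__(self, pool, shard_count):
        self.pool = pool
        self.shard_count = shard_count

    def connection(self, shard_id):
        assert 0 <= shard_id < self.shard_count
        return self.pool.get()  # same pool regardless of shard

pool = Pool(2)
sharded = ShardedConnections(pool, shard_count=100)
# All 100 shards draw from the same 2-connection pool.
assert sharded.connection(0) in {"conn-0", "conn-1"}
assert sharded.connection(99) in {"conn-0", "conn-1"}
```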
Reviewed By: farnz
Differential Revision: D24475317
fbshipit-source-id: b7142c030a10ccfde1d5a44943b38cfa70332c6a
Summary:
This diff makes "Calculating additional actions for sparse profile update" more
efficient by using xormatcher instead of unionmatcher. We are interested only
in files that changed their "state" after a sparse profile change, e.g. a file
that was included in the sparse profile and then became excluded, or vice versa.
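An xor matcher has symmetric-difference semantics; a toy Python illustration (hypothetical file sets, not the real matcher API):

```python
# Files included by the old and new sparse profiles.
old = {"a.txt", "dir/b.txt"}
new = {"dir/b.txt", "c.txt"}

# Symmetric difference == xor matcher semantics: only files whose
# inclusion state changed need any action.
changed = old ^ new
assert changed == {"a.txt", "c.txt"}  # became excluded / became included
assert "dir/b.txt" not in changed     # included before and after: skip
```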
Reviewed By: sfilipco
Differential Revision: D24725902
fbshipit-source-id: ee611e7c123b95937652ced828b5bea6d75a3daf
Summary:
At the moment differencematcher.visitdir never returns "all".
This diff changes it to return "all" when self._m2 doesn't visit the directory at all and
self.m1.visitdir(dir) returns "all". This makes sense: if m1 visits all files
in the directory and m2 doesn't exclude any file, then it's safe to return "all"
in this case.
This optimization will be used in the next diff.
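A simplified Python sketch of the decision described above (hypothetical signature; the real matcher's visitdir takes a directory path and consults both sub-matchers itself):

```python
def difference_visitdir(m1_result, m2_result):
    """m1_result/m2_result model what each sub-matcher's visitdir returns:
    False (skip), True (visit), or "all" (every file under dir matches)."""
    if m2_result == "all":
        return False          # everything under dir is excluded
    if not m2_result and m1_result == "all":
        return "all"          # the optimization added in this diff
    return bool(m1_result)

assert difference_visitdir("all", False) == "all"   # new fast path
assert difference_visitdir("all", True) is True     # m2 may exclude: recurse
assert difference_visitdir("all", "all") is False   # everything excluded
```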
Reviewed By: sfilipco
Differential Revision: D24725903
fbshipit-source-id: 2a049cfb1ea4878331e8640cbb20af74da86a1a1
Summary:
Whenever a sparse profile changes (e.g. we include or exclude a directory or a file) we do a full prefetch of all trees in the revision, and then for each file in the revision we check whether the file changed its state after the sparse profile change (i.e. whether it was included before the change and became excluded after it, or vice versa). This can be quite expensive for large repos, and it looks like checking all the files is unnecessary.
For example, there might be top-level directories that are excluded in sparse profile before and after the change. In that case there's no reason to check every file in this directory, and there's no reason to prefetch manifests for this directory.
More importantly, `mf.walk()` method is already smart enough to do manifest prefetches if treemanifest.ondemandfetch is set to True, so it looks like there's no reason to do any additional prefetching at all (at least in theory).
So this diff does a few things:
1) The default mode is to use the mf.walk() method with a union matcher to find all the files that are included in either the old or the new sparse profile. In order for it to prefetch efficiently, we force-enable the treemanifest.ondemandfetch config option.
2) It also adds a fallback option to do a full prefetch (i.e. the same thing we do right now). Hopefully this fallback won't be necessary and we'll delete it soon. I've added it only to be able to fall back to the current behaviour in case there are problems with the new one.
I think we can do an even more efficient fetch by using an xor matcher instead of a union matcher. I'll try to implement that in the next diffs.
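A toy Python comparison of the two matcher choices (hypothetical file sets): the union walks every file in either profile, while the xor would walk only the files whose state changed.

```python
old = {"a", "b", "c"}   # files in the old sparse profile
new = {"b", "c", "d"}   # files in the new sparse profile

union = old | new       # files the union matcher walks today
xor = old ^ new         # files whose state actually changed

assert union == {"a", "b", "c", "d"}
assert xor == {"a", "d"}
assert xor <= union     # xor never walks more than the union does
```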
Reviewed By: sfilipco
Differential Revision: D24705823
fbshipit-source-id: 2c232a66cc74ee95bdaa84201df46448412f087f
Summary:
This seems to trip up Cargo builds
```
error: expected one of `!`, `.`, `::`, `;`, `?`, `{`, `}`, or an operator, found `with`
--> src/lib.rs:365:3
|
7 | S with version V1
| ^^^^ expected one of 8 possible tokens
error: aborting due to previous error
```
Reviewed By: StanislavGlebik
Differential Revision: D24754708
fbshipit-source-id: 0dc5539acf340ac409bf7b6158313c8fec16a275
Summary:
This commit takes advantage of git 2.5.0 being able to fetch a
requested revision rather than relying on the desired revision being within the
depth limited fetch.
This relies on having git 2.5.0 on the server, which is true for all
of the projects we have manifests for; this shows zero matches:
```
$ rg repo_url opensource/fbcode_builder/manifests | grep -v github
```
We've had a couple of situations recently where folks have run into issues with
the commit rate in folly being higher than the fetch depth, so this should
address that.
Refs: https://github.com/facebook/watchman/issues/866
Reviewed By: fanzeyi
Differential Revision: D24747992
fbshipit-source-id: e9b67c61dddc9f55e05d8984e8d210e7d2faabcb
Summary: force-unmount-all.sh is a convenience script for edenfs, so move it into eden/fs/.
Reviewed By: fanzeyi
Differential Revision: D24745361
fbshipit-source-id: 661a6f09b73911411fbb8a00bc016757ad19eb2a
Summary: This is unnecessary; remove it.
Reviewed By: chadaustin
Differential Revision: D24743519
fbshipit-source-id: 5e10eafcd3f84d9ad053be35798df86b21f97d4f
Summary:
One of the issues that EdenFS on Windows is currently facing is around
invalidation during an update. In effect, EdenFS is over-invalidating, which
causes update to be slower than it should be, as well as EdenFS recursively
triggering ProjectedFS callbacks during invalidation. Both of these make for a
sub-par UX.
The reason this issue exists is multi-faceted. First, the update code follows
the "kPreciseInodeNumberMemory" path, which enforces that a directory that is
present in the overlay needs to be invalidated, even if it isn't materialized.
The second reason is that no reclamation is done for the overlay. Combine the
two and you get an update that both gets slower over time and issues
significantly more invalidations than needed.
Solving this is a bit involved. We could for instance start by reclaiming
inodes from the overlay, but this wouldn't be effective as we use the fact that
an inode is present in the overlay as a way to know that the file is cached in
the overlay. If we reclaim from the overlay we simply won't be invalidating
enough and some files will be out of date.
It turns out that we already have a mechanism to track what is cached by the
kernel: the fuse refcount. On Linux/macOS, every time an inode is returned to
the kernel, this refcount is incremented, and the kernel then notifies us when
it has forgotten about it, at which point the refcount is decremented. On
Windows, the rules are a bit different, and a simple flag is sufficient: set
when we write a placeholder to disk (either during a directory listing, or when
ProjectedFS asks for it), and unset at invalidation time during update. There
is however a small snag in this plan. On Linux, the refcount starts at 0 when
EdenFS starts, as a mount/unmount clears all the kernel references on the
inodes. On Windows, the placeholders don't disappear when EdenFS dies or is
stopped, so we need a way to scan the working copy when EdenFS starts to know
which inodes should be loaded (an UnloadedInode really).
The astute reader will have noticed that this last part is effectively an
O(materialized) operation that needs to happen at startup, which would be
fairly expensive in itself. It turns out that we really don't have a choice:
we need to do it regardless, since Windows doesn't prevent writes to the
working copy while EdenFS is stopped, and thus for EdenFS to be aware of the
actual state of the working copy, it needs to scan it at startup.
The first step in doing all of this is to simply rename the various places
that use "fuse refcount" to "fs refcount", which is what this diff does.
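A toy Python model (class and method names hypothetical) of how the renamed "fs refcount" behaves on the two platforms: a true counter on Linux/macOS, a 0/1 placeholder flag on Windows.

```python
class InodeFsRefcount:
    """Toy model of the 'fs refcount'. On Linux/macOS it counts kernel
    references; on Windows it is effectively a 0/1 flag marking an
    on-disk placeholder."""
    def __init__(self):
        self.count = 0

    def placeholder_written(self):
        # Windows: a placeholder was written to disk for this inode.
        self.count = 1

    def invalidated(self):
        # Windows: the placeholder was invalidated during update.
        self.count = 0

    def needs_invalidation(self):
        return self.count > 0

ino = InodeFsRefcount()
assert not ino.needs_invalidation()  # nothing on disk yet
ino.placeholder_written()
assert ino.needs_invalidation()      # update must invalidate this one
ino.invalidated()
assert not ino.needs_invalidation()
```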
Reviewed By: chadaustin
Differential Revision: D24716801
fbshipit-source-id: e9e6ccff14c454e9f2626fab23daeb3930554b1a
Summary:
There are two separate changes here.
### Use `find_package`
The old setup of "let's manually enumerate and order the libraries that Bistro depends on" worked fine, except:
- it was a bit brittle (requiring occasional patches as deps changed), and
- it garnered a lot of feedback to the effect of "your build is weird, so it's probably broken because of that."
Now I expect to have fewer breaks and more plausible deniability :)
More importantly, this should make it much easier to migrate to `getdeps.py`.
## Statically link `fmt`
After `fmt` was added as a `folly` dependency, and linked into Folly code used by Bistro, its tests would fail to run with this error: `test_sqlite_task_store: error while loading shared libraries: libfmt.so.6: cannot open shared object file: No such file or directory`.
Something was getting messed up in the dynamic linking, and it wasn't clear to me what -- the way that Bistro is linking its dependencies certainly seems sensible. Most likely one of the dependencies is incompatible with dynamic linking in a subtle way. I suspect Proxygen.
The `fmt.py` change in this diff addresses this problem by forcing static linking on the offending library.
Reviewed By: yfeldblum
Differential Revision: D24604309
fbshipit-source-id: 35ecbbb277b25907ecaee493e8b0081d9f20b865
Summary:
By putting this in `fizz-config.cmake`, we can depend on the `sodium` target without compromising our dependents' ability to find the library.
Put the search module in the common location under `fbcode_builder/CMake` to let dependents use it.
Reviewed By: yfeldblum
Differential Revision: D24686041
fbshipit-source-id: 942d1ab34feef6cadac2b584eb8cb2d999bae0ca
Summary:
The revlog changelog has incompatible rev numbers with changelog2 backends. Do
not construct it. Instead, just use the current changelog.
Reviewed By: DurhamG
Differential Revision: D24513444
fbshipit-source-id: 35d9326cd9fde4af8b98d628f6df66bd80883f92
Summary:
Previously we were choosing current version, and just as with backsyncer this
is not always correct. Let's instead choose not the current version but the
version of the bookmark you are importing to.
This diff also introduced an integration test for a repo import into a pushredirected repo, and turned out there were a few bugs in the repo_import code (open_source_sql was used instead of open_sql). This diff fixed them as well
Reviewed By: ikostia
Differential Revision: D24651849
fbshipit-source-id: bfe36e005170ae2f49fa3a6cb208bf6d2c351298
Summary:
This diff changes the semantics of the `sync_commit()` function to return an error when
trying to sync a commit with no parents. This is a small code change with a big change
in semantics, and because of that I had to change how the backsyncer and the
mononoke_x_repo_sync job work.
Instead of using the `unsafe_sync_commit()/sync_commit()` functions, both the backsyncer and
the `x_repo_sync_job` now use `unsafe_sync_commit_with_expected_version()`,
which forces them to specify which version to use for a commit with no parents.
In order to find this version, I changed find_toposorted_unsynced_ancestors
to not only return unsynced ancestors but also return the mapping versions of their
(i.e. of the unsynced ancestors') parents. Given this mapping we can figure out which
version is supposed to be used in `unsafe_sync_commit_with_expected_version()`.
This raises the question of what to do when a commit doesn't have any synced ancestor
and hence we can't decide which version to use to remap it. At the moment we use the
current version (i.e. preserving the existing behaviour).
However, this behaviour is incorrect, and so it will be changed in the next diffs.
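A hedged Python sketch (function and version names hypothetical; the real code is Rust) of the version-selection logic described above: take the version from the already-synced ancestors' mappings, and fall back to the current version only when there is no synced ancestor.

```python
def expected_version(parent_versions, current_version):
    """Pick the mapping version for a commit from its synced ancestors'
    versions; fall back to the current version for commits with no synced
    ancestor (existing behaviour, to be changed in later diffs)."""
    versions = set(parent_versions)
    if not versions:
        return current_version
    if len(versions) > 1:
        raise ValueError("parents were remapped with different versions")
    return versions.pop()

assert expected_version(["v1", "v1"], current_version="v2") == "v1"
assert expected_version([], current_version="v2") == "v2"
```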
Reviewed By: ikostia
Differential Revision: D24617936
fbshipit-source-id: 6de26c50e4dde0d054ed2cba3508e6e8568f9222
Summary:
Previously we were always choosing the current version for remapping via
pushrebase, but this is incorrect. Let's instead select the version based on
the version the parent commits were remapped with.
Reviewed By: ikostia
Differential Revision: D24621128
fbshipit-source-id: 2fedc34b706f090266cd43eaf3439f8fb0360d0d
Summary: Let the strum crate do this for us
Reviewed By: krallin
Differential Revision: D24680444
fbshipit-source-id: dbde0077c105d6cc572a0c863bcb4d043714d441
Summary:
Now that fsnodes is async, convert more functions to use references, and tidy
up imports and type names.
Reviewed By: krallin
Differential Revision: D24726145
fbshipit-source-id: 75a619777f19754daf494a3743d26fa2e77aef54
Summary:
Update `fsnodes::derive_fsnode` and its immediate utility functions to use new style
futures and `async`/`.await` syntax.
Reviewed By: krallin
Differential Revision: D24726146
fbshipit-source-id: 0b0d5b1162a73568ef5c47db6e8252267e760e7f
Summary:
The goal of this diff is to provide more visibility into how long the client
takes to create/upload an infinitepush bundle. This is done in two ways:
- by adding more `perftrace` calls (useful when investigating individual slow
pushes)
- by adding `ui.timesection` scopes (useful for aggregation purposes)
Two main things that are measured:
- creation of the bundle purely on the client
- sending of the bundle over the wire
In addition, in the perftrace recording, this measures how long it takes to
process the reply handlers, how many bytes are sent over the wire, and what the
part names and sizes are (when available). These changes mostly do not
distinguish whether the code is an infinitepush push or not, but they are
always related to some sort of wireproto scenario, which means that the
performance impact is negligible (writing things to thread-local storage is
*much* cheaper than sending them over the network).
Reviewed By: DurhamG
Differential Revision: D24683484
fbshipit-source-id: 53fdfb63dcdfcf38924237c59a1e8f5e24ff96c0
Summary: We're getting rid of old futures; remove them as a dep here
Reviewed By: StanislavGlebik
Differential Revision: D24705787
fbshipit-source-id: 83ae938be0c9f7f485c74d3e26d041e844e94a43
Summary:
We can have different bonsai changeset hashes for the same hg changeset. Consider a situation where we have this hg repo:
```
o B (Add file "b")
│
o A (Add file "a")
```
The correct bonsai changeset for B will have only the entry `(<Path_to_b>,Some(<hash_b>))` in `file_changes`. But we can also have a bonsai changeset for B with 2 entries: `(<Path_to_b>,Some(<hash_b>)), (<Path_to_a>,Some(<hash_a>))`. This diff provides the functionality to manually create such a situation. Later it will be used for verifying blobimport backups.
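A toy Python illustration (hypothetical dict representation, not the real bonsai types) of the two `file_changes` maps for commit B above: both correspond to the same hg changeset, but they are distinct bonsai changesets and thus hash differently.

```python
# Minimal, correct file_changes for B: only the file B actually added.
minimal_b = {"b": "hash_b"}

# Redundant variant: re-states the unchanged file "a" from commit A.
redundant_b = {"b": "hash_b", "a": "hash_a"}

assert minimal_b != redundant_b            # different bonsai content/hash
assert set(minimal_b) <= set(redundant_b)  # redundant one is a superset
```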
Reviewed By: StanislavGlebik
Differential Revision: D24589387
fbshipit-source-id: 89c56fca935dffe3cbfb282995efb287726a3ca9
Summary: We were incorrectly marking reverts as landed during pullcreatemarkers.
Reviewed By: quark-zju
Differential Revision: D24608217
fbshipit-source-id: f919f49469d6933c17894b3b0926ba2430a5947a
Summary:
As part of getting the buck build to work on OSX, we need procinfo to
include its OSX-specific library.
Reviewed By: sfilipco
Differential Revision: D24513234
fbshipit-source-id: 69d8dd546e28b4403718351ff7984ee6b2ed3d1d