Summary:
In D38786500, I introduced a `deep-sharded` flag that controls whether a repo needs to execute in deep-sharded mode or shallow-sharded mode. However, the flag was universal (i.e. it was applicable to all services). We currently have a situation where certain services need to execute repos in deep-sharded mode (e.g. `SCS`) while others need to execute them in shallow-sharded mode (e.g. `Eden API`). This diff introduces `ShardingModeConfig`, which determines if a repo is deep-sharded for a particular service.
For now, the `deep_sharded` flag is kept as is. Once the config + code changes to use this new property are landed, I will delete the old `deep_sharded` flag.
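A hypothetical sketch (all field names invented, not the actual config schema) of the per-service lookup a `ShardingModeConfig`-style setting enables: the same repo can be deep-sharded for `SCS` but shallow-sharded for `Eden API`, with the old universal `deep_sharded` flag as the fallback.

```python
# Hypothetical illustration: resolve the sharding mode per service,
# falling back to the old universal `deep_sharded` flag. The dict keys
# here are invented for the example.
def is_deep_sharded(repo_config, service):
    per_service = repo_config.get("sharding_mode_config", {})
    if service in per_service:
        return per_service[service]
    return repo_config.get("deep_sharded", False)
```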
Corresponding configerator diff: D4250164
Reviewed By: mitrandir77
Differential Revision: D42531443
fbshipit-source-id: 697496cc8351f0d9e686312fd8fff8d4e2fa010b
Summary:
In D38786500, I introduced a `deep-sharded` flag that controls whether a repo needs to execute in deep-sharded mode or shallow-sharded mode. However, the flag was universal (i.e. it was applicable to all services). We currently have a situation where certain services need to execute repos in deep-sharded mode (e.g. `SCS`) while others need to execute them in shallow-sharded mode (e.g. `Eden API`). This diff introduces `ShardingModeConfig`, which determines if a repo is deep-sharded for a particular service.
For now, the `deep_sharded` flag is kept as is. Once the config + code changes to use this new property are landed, I will delete the old `deep_sharded` flag.
Corresponding configerator diff: D42501648
Differential Revision: D42530372
fbshipit-source-id: 1bd46533ad18939da4465239dab1340a0e8219d2
Summary: This makes the test a bit stronger.
Reviewed By: zzl0
Differential Revision: D42493269
fbshipit-source-id: d51ef1f6c0e9c5d7d72dd2e27c1800f0e62c16e9
Summary:
Use locking internally to avoid racy concurrent writes that could cause
unrecoverable data corruption.
Previously:
Timeline: -------------------------->
Process 1: open(), append, append, ...
Process 2: open(), append, append, ...
Now:
Process 1: open(), lock { append, append }, ...
Process 2: open(), lock { append, append } ...
`append()` to the same file is exclusive to one instance.
Note we need to update `position` after obtaining the lock to pick up changes from
other instances.
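A minimal Python sketch of the locking scheme, assuming advisory file locks via `fcntl.flock`; the class name and structure are invented for illustration, not the actual implementation:

```python
import fcntl
import os

class AppendLog:
    """Toy stand-in for the file described above: appends are made
    exclusive across processes by holding a file lock."""

    def __init__(self, path):
        self.fd = os.open(path, os.O_RDWR | os.O_CREAT | os.O_APPEND)
        self.position = os.fstat(self.fd).st_size

    def append_many(self, chunks):
        fcntl.flock(self.fd, fcntl.LOCK_EX)  # open(), then lock { append, append }
        try:
            # Refresh `position` after obtaining the lock, so appends
            # made by other instances in the meantime are picked up.
            self.position = os.fstat(self.fd).st_size
            for chunk in chunks:
                os.write(self.fd, chunk)
                self.position += len(chunk)
        finally:
            fcntl.flock(self.fd, fcntl.LOCK_UN)
```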
Reviewed By: zzl0
Differential Revision: D42492667
fbshipit-source-id: 6a6184c945d23e16eb1619458abdb18afc64868c
Summary: It'll be used by the next change.
Reviewed By: zzl0
Differential Revision: D42492670
fbshipit-source-id: 676320fc4fd14411a6feca450b53a0b67e82f6ec
Summary:
It'll be used by the next change. Note `Rc<RefCell>` or `Arc<RefCell>` won't
compile for the rest of the project.
Reviewed By: zzl0
Differential Revision: D42492672
fbshipit-source-id: 28b167b1d1e61a715f86911ea1d314fc2420eafd
Summary:
trait FileReadWrite is used for both on-disk and in-memory files. Add locking
so the next change can use it to avoid races on the physical filesystem.
Reviewed By: zzl0
Differential Revision: D42492668
fbshipit-source-id: 06c02ea4fb52213c3a64fc17a336a5edba385dc4
Summary: Currently, 'hg rebase' rebases hidden commits and makes them visible to users. This is a regression caused by D42087621 (96b767efde).
Reviewed By: quark-zju
Differential Revision: D42608733
fbshipit-source-id: de4b7b231d42a45844174e4a417d3c07a6869b97
Summary:
Now that we have a notion of version, we should log it during startup. This makes our logs easier to read through.
I think including the platform name would also be worth doing.
Somewhat unfortunately, we have the platform name in the client platform, but we really want to log it on the server. So here I also duplicate the platform name into the server platform definition.
We only have 3 implementations right now:
- browser
- vscode
- android studio
Reviewed By: bolinfest
Differential Revision: D42589867
fbshipit-source-id: be2f3783b4d471fa9d503a9ff3d4464f7e5954ab
Summary:
Our analytics and bug reporting need to know the version number of a given instance of ISL.
The idea here is that we'll have a version number set on startup, which we'll keep constant. This was not a given before this stack!
(There may be situations where reloading an `sl web` page could load newer client javascript from disk than the server side was started with—we'll need to address this in the future for the assumption of a constant version number to really fit)
Version numbers depend on the platform/embedding we're working with:
- `sl web` is basically just the `sl` version, since it's built and deployed alongside `sl`
- for the VS Code extension, we want to use the vscode extension version (from the package.json), because it's updated separately from sl itself.
- Both of these may differ for internal versions vs external versions. Though analytics only happen on internal builds, so it shouldn't matter much (it should be clear from context whether a build is internal or OSS).
Note that these version numbers may be different from the version number of `sl` that we shell out to, even for `sl web`. Imagine spawning `sl web`, then some time passes, and the `sl` binary is updated on disk. Running `sl version` would give a different version than what ISL calls its version. But that's OK, since the API layer between ISL and `sl` is just command running, and we will need to be wary of backwards compatibility there at all times; we don't assume these versions match.
To start with, we won't log the actual `sl` version for commands we spawn in our analytics. We'll correlate ISL analytics with our other `sl` analytics instead, which should give that info.
Reviewed By: bolinfest
Differential Revision: D42589825
fbshipit-source-id: e51c9eac84a9825720885654854a2d773aa03a19
Summary: Add arguments to isl.py so that each ISL server is spawned with --slversion as expected
Reviewed By: bolinfest
Differential Revision: D42563631
fbshipit-source-id: 9a493c4ddf3f7529f032bc130cbd164fb9d4a668
Summary: The patched abomonation is required to build because of the transitive dependency on the new smallvec feature. We are separately working on removing this dependency, but for now we need to include the patch.
Reviewed By: bolinfest
Differential Revision: D42612033
fbshipit-source-id: 7991db74946cce8640795104623841f23abf82c6
Summary:
EdenFS assumes in several places that even materialized directories will
receive readdir requests. This assumption is slightly wrong on Windows in the
case where a directory hierarchy is completely removed and then reconstructed
identically, say by a filesystem revert operation. In that case, EdenFS
will not participate in any readdir requests in this directory hierarchy, as no
directories are placeholders.
During checkout, this assumption is used to dematerialize directories in
saveOverlayPostCheckout. In some cases, however, this can lead to dematerialized
directories being contained in a non-placeholder directory on Windows. This
case in particular is fraught with issues. For instance,
invalidateChildrenNotMaterialized may invalidate these dematerialized
directories, but doing so would make them disappear entirely, due to EdenFS not
participating in readdir requests for non-placeholder directories! I believe a
similar issue may also be possible with future checkout operations, although
the exact steps elude me.
To solve the above, one easy way is to simply force directories traversed by
checkout to always have a placeholder added to them. This is achieved by
invalidating the directory. Since it's unclear if we could also always
invalidate the directory on Linux and macOS, let's add a config so we can roll
this out slowly.
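In pseudocode terms (Python; the config key and function names below are invented, not EdenFS's actual API), the config-gated behavior amounts to:

```python
# Invented-name sketch: after checkout dematerializes a directory,
# optionally invalidate it so the filesystem re-creates a placeholder
# and future readdir requests in that hierarchy reach EdenFS again.
def save_overlay_post_checkout(dir_inode, config, invalidate_fn):
    dir_inode["materialized"] = False  # dematerialize as before
    if config.get("always-invalidate-checkout-dirs", False):
        invalidate_fn(dir_inode["path"])  # forces a placeholder
        return True
    return False
```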
Reviewed By: kmancini
Differential Revision: D42483525
fbshipit-source-id: 9d8c9fe3db8914ca2a46a77f10cf77f2ff871ad9
Summary: When reading copytrace-related code, I found we deprecated puredirs in favor of the Rust 'dirs' in D19634699 (8e673a1de0).
Reviewed By: markbt
Differential Revision: D42539556
fbshipit-source-id: 59ae23e5d92626cac11cc9ea4c41d196940796b0
Summary:
We hit an issue where `oldml["remotenames"]` can be None.
```
with progress.bar(ui, _("reading repo history"), "", len(roots)) as prog:
for rootid in reversed(roots):
prog.value += 1
oldml = ml.checkout(rootid)
node = bookmod.decoderemotenames(oldml["remotenames"]).get(mainfullname)
if node and node in repo:
found = node
break
```
And it will cause the TypeError below.
```
│ `hg doctor` stderr:
│ segments/v1: repaired
│ remote/master points to an unknown commit - trying to move it to a known commit
│ exception TypeError('Expected type that converts to PyBytes but received NoneType') ignored during checkmissingmaster
```
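A sketch (not the actual fix) of a defensive version of that loop, substituting empty bytes when the `remotenames` entry is missing; the wiring of the helpers is hypothetical:

```python
# Defensive version of the loop above: oldml["remotenames"] may be
# None, so substitute empty bytes before decoding instead of letting
# the decoder raise "Expected type that converts to PyBytes".
def find_known_master(roots, ml, repo, mainfullname, decoderemotenames):
    for rootid in reversed(roots):
        oldml = ml.checkout(rootid)
        data = oldml.get("remotenames") or b""
        node = decoderemotenames(data).get(mainfullname)
        if node and node in repo:
            return node
    return None
```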
Reviewed By: quark-zju
Differential Revision: D42588709
fbshipit-source-id: 79013cc311cfdb4ab033c20cdd7c508d3341c88d
Summary:
Use phases information returned from the server to speed up retrieval of mutations.
If we keep the phases calculation client side, it often makes a few single "commit_hash_to_location" requests and then a huge batched request with a few hundred hashes.
Reviewed By: quark-zju
Differential Revision: D42453084
fbshipit-source-id: bec211c41da61332ba324b6b934540d5a48bc65d
Summary:
This removes the `ps>` prefix in the Windows installation instructions. With `ps>` included, you can't directly copy and paste the commands into PowerShell.
Pull Request resolved: https://github.com/facebook/sapling/pull/454
Reviewed By: evangrayk
Differential Revision: D42594922
Pulled By: bolinfest
fbshipit-source-id: f798d9d74fdccfd088be9efb869857cfee8dead4
Summary:
Printing raw thrift params has one big drawback: commit hashes are unreadable.
This diff adds a function that prints the requests in a nicer way.
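For illustration only (function and field names invented, not the actual code), the kind of transformation such a printer performs is turning raw 20-byte hashes into hex:

```python
# Invented-name sketch: render thrift request params with binary
# commit hashes as hex strings so they are readable in logs.
def pretty_params(params):
    out = {}
    for key, value in params.items():
        if isinstance(value, bytes) and len(value) == 20:
            out[key] = value.hex()  # 20-byte hash -> readable hex
        else:
            out[key] = value
    return out
```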
Differential Revision: D42501626
fbshipit-source-id: a20d39ab7bf431b8242fec3fe8001ce6ef532fdd
Summary:
That CLI had zero test coverage so far. This adds a very basic integration
test that will catch some regressions.
I've had to modify the query to not filter by query status because I couldn't
get that to work with the sqlite backend.
Differential Revision: D42501627
fbshipit-source-id: 90d326045d7e262d6ca1eadd5c33965e0d7fba3c
Summary:
Before adding a random_reads benchmark, this diff enables the
random_writes benchmark on Windows and adds throughput numbers.
Reviewed By: kmancini
Differential Revision: D42461465
fbshipit-source-id: 165de212e061f9e2ad58222178422592f3bc688d
Summary:
Add tests for run-proxy. This whole area was previously not tested, despite being the main entry point for `sl web` and having quite complicated stuff going on.
These tests aren't perfect: some things aren't covered and there are lots of untested code paths. But it's better than nothing and lets us add more later.
I also had to move some bits and pieces around to make the tests happy (notably, for mocking I needed to import some functions differently, which is annoying).
Reviewed By: bolinfest
Differential Revision: D42561949
fbshipit-source-id: e5830038a502f84cd87bc436a7c1064c564de338
Summary: Refactoring run-proxy so that the implementation lives in a different file from the main entry point. This allows us to write tests that require the implementation of main() without actually running it during the require.
Reviewed By: bolinfest
Differential Revision: D42561950
fbshipit-source-id: 81f95eab232de3ef5e99771c2fae48911908d330
Summary:
Right now, our server re-use is a bit aggressive. A spawned ISL server will stick around until you close all ISL windows or kill the server yourself or use --force.
But if we updated sl in the background, then there's room for incompatibilities. The server you're already running would be using the old javascript, but it would serve the newer javascript to the client, which could easily cause bugs.
In such a situation, we should pretend the user passed `--force`.
The implementation of this is slightly annoying. We're not currently set up to check for the previous server information before we try spawning the server, so we won't be able to add `--force` directly. Instead, if we fail to spawn and then see that the existing instance uses a different version, we'll have to kill it and restart the entire process. This seems to work well in practice.
Even though we're recursively calling `main`, I don't see a way for it to loop forever, even in the pathological case of multiple ISL servers trying to spawn at the same time, though it is worth making sure that's the case.
TODO: we need to update the isl wrapper inside the python sl launcher to pass the version string in
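The restart flow can be sketched as a small self-contained simulation (every name below is invented; the real code spawns separate processes):

```python
class FakeServer:
    """Toy stand-in for a previously spawned ISL server."""
    def __init__(self, version):
        self.version = version
        self.alive = True

EXISTING = FakeServer(version="old-1.0")

def spawn(requested_version, force=False):
    # Reuse the existing server unless forced or it has been killed.
    if EXISTING.alive and not force:
        return {"reused": True, "version": EXISTING.version}
    return {"reused": False, "version": requested_version}

def main(requested_version, force=False, retried=False):
    info = spawn(requested_version, force=force)
    if info["reused"] and info["version"] != requested_version:
        if retried:
            # The retry flag bounds the recursion to a single restart.
            raise RuntimeError("still mismatched after restart")
        EXISTING.alive = False  # kill the stale server
        # Recurse once, as if the user had passed --force.
        return main(requested_version, force=True, retried=True)
    return info
```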
Reviewed By: bolinfest
Differential Revision: D42561951
fbshipit-source-id: bdd0a6e0fb9065ee5fb7cb3110a5a937fc40d9ce
Summary:
Change `test-ext-github-pr-submit.t` so it tests the
new `sl pr submit` behavior introduced in D42550052
while introducing `test-ext-github-pr-submit-placeholder-issue.t`
to verify the old behavior.
Reviewed By: sggutier
Differential Revision: D42554648
fbshipit-source-id: 79f4a876993ad3d85bb2a11be1460de55d5530c0
Summary:
The entry in the bulleted list in the pull request body
should match the pull request number argument
rather than hardcoding it to `1`.
Reviewed By: sggutier
Differential Revision: D42554647
fbshipit-source-id: d5ec178ad8981d92214300f8663e5bc13981357d
Summary:
D42050143 (e77e67bba7)
changed the default behavior for `sl pr submit` in hopes of
being more performant [by rearchitecting things so that more
work could be done in parallel], but ended up causing the following
issues:
- could not be used with repos that disabled GitHub issues: https://github.com/facebook/sapling/issues/371
- unexpected failures when trying to convert a placeholder issue
to a pull request (such as https://github.com/facebook/sapling/issues/384)
meant that users often found themselves with stale placeholder issues
- third-party tools, such as Slack apps, that have triggers based on the creation
of new GitHub issues, find this noisy: https://github.com/facebook/sapling/issues/383
This diff changes `sl pr submit` such that it defaults to the previous
behavior (in which pull requests are created in series, rather than in
parallel, and is subject to a race condition where pull request numbers
may not match branch numbers), though for now, it still makes the
old behavior available behind a boolean config, `github.placeholder-strategy`.
Because quite a few changes were made to
`eden/scm/edenscm/ext/github/submit.py` since D42050143 (e77e67bba7)
was committed, this diff is not a straight revert of the
original diff.
Reviewed By: muirdm
Differential Revision: D42550052
fbshipit-source-id: 68d6d1e00b5af2d166b7e3aab2dda959fae5de82
Summary:
To address the Thrift QueueTimeout issue we have been seeing in EdenFS Thrift server.
The underlying issue is that the `globFiles` Thrift method in EdenFS causes too much fanout, overloading the Thrift CPU workers. As a result, all new incoming requests wait in the Thrift queue too long before getting picked up, resulting in QueueTimeout exceptions on the client side.
This diff adds a `folly::SerialExecutor` for the `globFiles` method to limit concurrency when processing glob requests.
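A rough Python analogue of the `folly::SerialExecutor` idea (handler names invented): funnel all glob work through a single-worker executor so it cannot fan out across every CPU worker at once.

```python
from concurrent.futures import ThreadPoolExecutor

# One worker => glob requests execute one at a time, bounding fanout.
glob_executor = ThreadPoolExecutor(max_workers=1)

def run_glob(patterns):
    # Placeholder for the real (expensive) glob evaluation.
    return sorted(patterns)

def handle_glob_files(patterns):
    # Other request handlers keep using the shared CPU pool; only
    # globFiles is serialized onto this dedicated executor.
    return glob_executor.submit(run_glob, patterns).result()
```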
Reviewed By: chadaustin
Differential Revision: D41390586
fbshipit-source-id: 54d834bb370d4ae693089051f200b7103de2f85b
Summary: This diff adds a `TraceBus::hasSubscription` method so we can quickly check if a TraceBus has any subscriber. This way we can avoid paying the cost of constructing tracing events if there is no subscriber.
Reviewed By: chadaustin
Differential Revision: D41526562
fbshipit-source-id: f89f43ee7739149d51c6a483bd582c7c93e9fcd9
Summary: This diff adds `TaskTraceBlock` to the globbing code so we can understand which threads this code is being executed on.
Reviewed By: chadaustin
Differential Revision: D41527107
fbshipit-source-id: a001f473795e413d2bf4a8125389867a1629515b