Summary:
Support the `--history` flag (and other flags from `hg cloud sl`) on `hg cloud
ssl`. The only real difference between the two is the template.
Fetching the phabricator status for a large number of commits can take a while,
so show a progress spinner while we do it.
Reviewed By: liubov-dmitrieva
Differential Revision: D27460144
fbshipit-source-id: c80830b85fd0766ad9c56afa9afea0c2d21f9b36
Summary:
Replace the implementation of `hg cloud sl --history` with one that uses
streampager. This allows the user to scroll around the rendered smartlog while
also switching from one version to the next.
The old (broken) history is available by setting `commitcloud.useoldhistorytui` to true.
Reviewed By: quark-zju
Differential Revision: D27268251
fbshipit-source-id: 77501e4b7f3da9506cd5a9cabb9cba0388743723
Summary:
Add bindings for `pysptui`. This allows using `streampager` as a TUI, using
streampager's controlled file mode to manage the display.
Reviewed By: quark-zju
Differential Revision: D27268252
fbshipit-source-id: d191a09c44ca4ed013647feb81e6f031d553b2f2
Summary:
We weren't validating any hashes in the walker, so it was entirely possible to
have corrupt content in the repo. Let's start hash validation by validating hg
filenodes.
Reviewed By: ahornby
Differential Revision: D27459516
fbshipit-source-id: 495d59436773d76fcd2ed572e1e724b01012be93
Summary:
When going into a root unode we check if fastlog is derived or not. If it's not
derived we don't even try to check fastlog dirs/files, and that's correct.
However, that means that if the fastlog batch for a given commit is derived, then
we could (and should!) check that all fastlog batches referenced from this
commit exist.
Previously we weren't doing that; this diff fixes it. The same applies to hg
filenodes.
Reviewed By: ahornby
Differential Revision: D27401597
fbshipit-source-id: 71ad2744eee33208c44447163cf77bc95ffe98d0
Summary:
During an `hg update`, all the loaded inodes are unloaded and forgotten, except
the ones that had a positive refcount; these are forgotten when FUSE calls the
`forget` API. For NFS, this `forget` API doesn't exist, but the protocol relies
on file handles staying valid for some time. The duration for which a file
handle needs to stay alive depends on several factors. The root one needs to
always be valid; the rest rely on the lookup cache that the client keeps.
This lookup cache is invalidated when the mtime of the parent directory
changes, but mtime changes are often not immediately detected, as attributes
are cached client side per the `acdirmin` and `acregmin` settings for directories
and files.
For now, this diff doesn't attempt to deal with files being out of date right
after an `hg update`; it merely accounts for the NFS client being able to pass
old file handles to EdenFS. The astute reader will have noticed that
InodeNumbers are never reclaimed for now; a time-based mechanism will
need to be added to forget InodeNumbers that have expired.
Reviewed By: kmancini
Differential Revision: D27405295
fbshipit-source-id: af4a4ce9e31bfcc335608da91f0247b50ab87b3f
Summary: In a later diff, I plan to make this `pub` in order to parse HTTP versions from the user's config.
Reviewed By: quark-zju
Differential Revision: D27449576
fbshipit-source-id: 28a60080393eff73399c65b9e808647b39603719
Summary:
The production confidence team wants a simple tool to capture the current state of the repo and get a commit hash in commitcloud.
`ephemeralcommit` achieves that by making a hidden commit and pushing it to commitcloud.
The main purpose of this is for use with ephemeral fbpkg; this produces a relatively low volume of commits (compared to the total commit count).
This currently does not add untracked files, but it does remove missing files.
Usage:
```
# note that automated tools invoking this need to specify -q
$ hg debugephemeralcommit -q
<hash of the commit>
```
Reviewed By: quark-zju
Differential Revision: D27339320
fbshipit-source-id: 07a4ea8ff80b80ce620fb609096db97f46d383dc
Summary: WSAEACCES is only available on Windows, so only check it if the OS is Windows.
Reviewed By: quark-zju
Differential Revision: D27434606
fbshipit-source-id: 25eb8036363b42629fbd010f7637a404dccff236
Summary:
`isinstance(i, int)` is problematic on Python 2 on Windows:
```
In [1]: isinstance(72057594037927936, long)
Out[1]: True
In [2]: isinstance(72057594037927936, int)
Out[2]: False
```
Fix it by replacing `isinstance(_, int)` with a `pycompat` function.
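A minimal sketch of such a compatibility helper (the name `isint` is hypothetical; the real `pycompat` function may differ):

```python
import sys

# Hypothetical pycompat-style helper. On Python 2, integers larger than
# sys.maxint are `long`, so `isinstance(x, int)` is False for values like
# 2**56; checking both types fixes that. On Python 3 there is only `int`.
if sys.version_info[0] >= 3:
    def isint(value):
        return isinstance(value, int)
else:
    def isint(value):
        return isinstance(value, (int, long))  # noqa: F821
```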
Reviewed By: DurhamG
Differential Revision: D27436107
fbshipit-source-id: 9d9d9f9359546a91a564d948718078a9aa594420
Summary: Integration test and macOS tests did not run at diff time.
Reviewed By: DurhamG
Differential Revision: D27436108
fbshipit-source-id: ab94ec88bad8de42025f539023ab426002b9bed5
Summary: On macOS, the mount arguments are encoded with XDR; let's add them first before using them.
Reviewed By: genevievehelsel
Differential Revision: D27306770
fbshipit-source-id: 727824f05d3874119858af60c263267adfb3e61e
Summary:
The returned value was never used by any callers; let's simply not return any
value.
Reviewed By: kmancini
Differential Revision: D27418015
fbshipit-source-id: 2a6f15eee01052cdfa9ae334c34e69f2f0a74407
Summary: Expand the IdConvert implementation so we can change it.
Reviewed By: sfilipco
Differential Revision: D27339280
fbshipit-source-id: eb55c63529c895502a25bb279bcba3c13737452a
Summary:
Add an overlay IdMap field for NameDag to store temporary remote IdMap results.
This diff just adds the field. It's not used yet.
Reviewed By: sfilipco
Differential Revision: D27339274
fbshipit-source-id: dbbde227f26de15d10c84f5d7c61ca8054577752
Summary:
In a future diff, AbstractNameDag wants an "overlay" IdMap to store temporary
remote IdMap results. The MemIdMap is suitable but has extra features (next
free id, version). Extract the core in-memory IdMap logic for the "overlay"
purpose.
Reviewed By: sfilipco
Differential Revision: D27339277
fbshipit-source-id: 4e73032b8bc6670264e3fa1dd5515ea3bc853d10
Summary:
The `Process::process` contains logic to resolve Id <-> Vertex using remote
service. The remote service is async so let's make `Process` async.
Reviewed By: sfilipco
Differential Revision: D27308798
fbshipit-source-id: 30c2c3eda124d542d0867d278ce56a7a174f33e0
Summary:
The config turns on fsync for all indexedlog writes. It can be useful in places
where we want reliability more than performance, such as hosts with unknown
kernel issues.
Reviewed By: sfilipco
Differential Revision: D27347595
fbshipit-source-id: c0b31928684e8805a9e6441062f96b05ad311ea2
Summary:
Add a global flag that, when turned on, ensures all atomic files, indexes, and
primary logs use fsync.
Also enhance fsync so it syncs the directory too.
Reviewed By: sfilipco
Differential Revision: D27347596
fbshipit-source-id: 831e27e494cc343a33ca675619c030ead8023210
Summary: test-lfs-placeholders.t fails on Windows. The code isn't used on Windows, so mark the test as no-windows.
Reviewed By: sfilipco
Differential Revision: D27433793
fbshipit-source-id: 4cbf70efae655ca318d776f6a2d6b79e83c78cbc
Summary:
Currently we treat a missing blob just like any other error, and that makes it
hard to have an alarm on blobs that are missing, because any transient error
might make this alarm go off.
Let's instead differentiate between a blob being missing and failing to fetch a
blob because of any other error (i.e. manifold is unavailable). It turned out
this is not too difficult to do because a lot of the types implement the
Loadable trait, which has a Missing variant in its errors.
Side note:
It looks like we have 3 steps which treat missing blobs as not being an
error. These steps are:
1) fastlog_dir_step
2) fastlog_file_step
3) hg_filenode_step
I think this is wrong, and I plan to fix it in the next diff.
Reviewed By: ahornby
Differential Revision: D27400280
fbshipit-source-id: e79fff25c41e4d03d77b72b410d6d2f0822c28fd
Summary:
Add a way to disable revnum resolution for non-automation (non-HGPLAIN)
use cases. Automation (ex. nuclide-core) still uses revision numbers.
Some tests still use revnums. Put them in a list so this feature
does not break them.
Reviewed By: DurhamG
Differential Revision: D27144492
fbshipit-source-id: fba97fc90c7942e53914c29354938178b2637f44
Summary:
Manifest parent triggers an unimplemented tree history fetch path and is
generally prone to errors. See D9013996 (2b7e9e5e8b) and D9013995 (9e51fdef40).
Reviewed By: DurhamG
Differential Revision: D27411626
fbshipit-source-id: aee79f7928f0eb7fd39f68d12ec3ca33873f4e0b
Summary: Use Edenapi book request and response type in bookmarks edenapi endpoint. Serialize the response as cbor.
Reviewed By: kulshrax
Differential Revision: D27174122
fbshipit-source-id: 6bc7295c25bd355db4625da3c1f8c68349e7b0b7
Summary: Add edenapi types for the bookmarks endpoint. Now the endpoint can handle a request for a batch of bookmarks instead of a single bookmark. The request type still needs to be modified at some point to allow for bookmark prefix listing.
Reviewed By: kulshrax
Differential Revision: D27133284
fbshipit-source-id: c3960629cad76504e222f726a151eb3390850276
Summary: We have a number of small repos which don't have much traffic or load, so they can be combined into one single tailer job.
Reviewed By: StanislavGlebik
Differential Revision: D27361196
fbshipit-source-id: 3326a7445cf4f0f0cb27fe0f4545cf8ee3357ff2
Summary:
When EdenFS is mounted via FUSE, the changed directories and files are flushed
so the kernel is forced to re-read them to obtain new InodeNumbers. When NFS is
used, EdenFS can't directly invalidate the client's view of the FS; instead we
need to rely on when and how the client invalidates it. On Linux, a file handle
won't be refreshed if its parent directory hasn't changed, which is detected by
looking at the mtime of the directory via a GETATTR call. It is thus crucial
that the mtime is updated during an `hg update`.
Reviewed By: chadaustin
Differential Revision: D27403729
fbshipit-source-id: 1f7195c6c33e2a34c3bb73145f404d652302d828
Summary:
We can move the ifdef into the function itself. This makes the functions calling
these easier to read, and it's less likely for someone to forget to add the
proper ifdef. If we ever want to make these do something on Windows, it
will be easier to do too.
Reviewed By: chadaustin
Differential Revision: D27403730
fbshipit-source-id: c5b78ea7c7eb70eaf8d4974e5bec14296f91576f
Summary: This fixes various tests on Windows. Many of these fixes were ported from upstream.
Reviewed By: markbt
Differential Revision: D27174617
fbshipit-source-id: b9f36ad0714793f2b76db32c1d840284b744a841
Summary: This will cause a compile error if a new variant is added to `Value` - I have looked at the existing types, and none of the ones called out here would be used by MySQL for an integer field or an enum field.
Reviewed By: StanislavGlebik
Differential Revision: D27402174
fbshipit-source-id: e1fd8dd821f66094225f1ea4c38cd22626c8ab64
Summary:
The placeholder mode is meant to be used when talking to the LFS server is not
possible in a given environment. It allows the user to work in a degraded mode
where all of the LFS files are replaced with placeholders informing about their
original sizes and hashes.
Reviewed By: xavierd
Differential Revision: D27294603
fbshipit-source-id: 2bb8e2cb74ffccefcd90d618d6791ce5c45755d6
Summary:
This situation is not normally possible - every hg changeset should have a
corresponding bonsai. So let's return an error if that's not the case.
Reviewed By: farnz
Differential Revision: D27400281
fbshipit-source-id: 4b01b973eeef0e3336c187fb90dd2ab4853b5c02
Summary:
Non-list optional data can be present in some XDR descriptions; let's special-case
it so the intent is clear when declaring XDR data structures.
Reviewed By: fanzeyi
Differential Revision: D27306768
fbshipit-source-id: 9d4d18bf8deff16f859c6d28a2579341dac8ee6f
Summary:
After receiving a network packet, it's possible that more than one fragment
was received as part of it. We thus need to service all of them before
returning.
This would typically be seen when running `rg` in the repository, which would
cause hangs due to some requests not being serviced as they would stay in the
iobuf queue until a new packet was received.
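The drain-everything pattern can be sketched as a toy model (Python for illustration; EdenFS's actual code is C++, and this helper is an assumption). RPC-over-TCP record marking prefixes each fragment with a 4-byte mark whose low 31 bits are the fragment length:

```python
import struct

def drain_fragments(buf):
    """Toy model of servicing an NFS-over-TCP receive buffer.

    The bug described above is equivalent to returning after the first
    iteration: later fragments would sit in the buffer until more data
    arrived. Looping until no complete fragment remains fixes it.
    """
    fragments = []
    while len(buf) >= 4:
        (mark,) = struct.unpack_from(">I", buf)
        length = mark & 0x7FFFFFFF  # high bit is the last-fragment flag
        if len(buf) < 4 + length:
            break  # incomplete fragment; wait for more data
        fragments.append(bytes(buf[4 : 4 + length]))
        del buf[: 4 + length]
    return fragments
```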
Reviewed By: kmancini
Differential Revision: D27194038
fbshipit-source-id: 3d81c797b5be7d0466d4acad7208f6a82593b4ca
Summary:
Computing the length of an iobuf chain can be expensive due to having to walk
its entirety. Thankfully, IOBufQueue can cache the total length when data is
appended to it, which makes computing the length a constant-time operation.
Reviewed By: kmancini
Differential Revision: D27194037
fbshipit-source-id: af659c162ada61f2796bf407f419f5f15e918c02
Summary:
By moving the work to a background threadpool, we can more quickly go back to
servicing incoming NFS requests and thus allow more work to be done
concurrently. This allows tools like ripgrep to use multiple cores to search
the code base.
Reviewed By: genevievehelsel
Differential Revision: D27194040
fbshipit-source-id: 7f1775ddaaa7eaf8776a06d05951cb936cd3fbb5
Summary:
D26744888 added use of Value::UInt to the MySQL client. This breaks the ChunkingMethod conversion, which depended on it being supplied Value::Int.
Add UInt support to the list, and keep the extra debugging data used to track this down.
Differential Revision: D27396544
fbshipit-source-id: d9b8f1ef37f112d9a7d560e4b2a039480997ea0b
Summary:
Manual import from https://github.com/facebookexperimental/eden/pull/81 as the auto import mapped the .yml paths incorrectly
Updates eden/scm/Makefile to use python3 so we don't need to install multiple py versions
Adds hgext.convert.repo to setup3.py packages as mononoke tests showed it was missing
Updates github actions python versions
Reviewed By: quark-zju
Differential Revision: D27367568
fbshipit-source-id: 3817bdc1c48a8f7bfa8e29b5f7ec87d0eed579a9
Summary:
Like it says in the title. We don't use this anymore.
Context in D27268419
Reviewed By: markbt
Differential Revision: D27268635
fbshipit-source-id: 236adb5e68bc67612610d99f626344f4d592b5f9
Summary:
`create_mysql_connections_sharded` takes over a second to construct
4,000 shards (over 250 µs per shard). It's also blocking; run it in a blocking
thread so that other tasks are not waiting for it.
Reviewed By: StanislavGlebik
Differential Revision: D27360221
fbshipit-source-id: 79065bf4a8cd60dddbb5c1e8bf871872fd52f428
Summary: We don't use it, so let's remove it
Reviewed By: farnz
Differential Revision: D27359959
fbshipit-source-id: 42ce7da16fd0359bbceeab9d1f99712f45a80314
Summary: Can reduce the number of allocations and copies by sharing the underlying thrift buffer
Reviewed By: markbt
Differential Revision: D27043232
fbshipit-source-id: a6e58c53035cb07f7b205df465a9ba2f7a78d52e
Summary: Can reduce the number of allocations and copies by sharing the underlying thrift buffer
Reviewed By: krallin
Differential Revision: D27043231
fbshipit-source-id: 90731ac0a94d50ec28c9082f4e878c2ba24fcffd
Summary: Add a test target to the eden_scm getdeps manifest and underlying Makefile
Reviewed By: markbt
Differential Revision: D27336805
fbshipit-source-id: 07ec4be1ff03c6a384451ce138d88938dd4bf86e
Summary:
The `remotebookmarks` field in the local commit cloud state should always be one of:
* The empty set, if the previous sync was performed with remotebookmarks sync disabled; or
* The cloud workspace's remote bookmarks for that version.
Currently when processing remote bookmarks, we may store in the local state the
outcome of conflict resolution for the remote bookmarks. This is the wrong
thing to do, as it means we won't then upload those conflict resolutions as a
new cloud version, which means they may get lost and rolled back.
Change application of cloud remote bookmarks to store the cloud remote bookmarks
in the local state, even if we changed them through conflict resolution. This
means we will always upload the newly updated remote bookmarks to the server,
and things will stay more in sync.
Reviewed By: quark-zju
Differential Revision: D27291238
fbshipit-source-id: 8e6a0ab150da5907d32b8127aa0e6ccb17df4eea
Summary:
When connecting to a commit cloud workspace where there are no draft commits to
pull, no local bookmarks to sync, but the remote bookmarks in the local repo
are ahead of the ones in the commit cloud workspace, we fail to sync the remote
bookmarks to the server.
This results in the remote bookmark rewinding on the next sync.
Reviewed By: quark-zju
Differential Revision: D27291237
fbshipit-source-id: 8ba56542492fda26b9cecb6726ddd1b85ed5c180
Summary:
In the next diff I'd like to add support for syncing globalrevs to our darkstorm
repos. Doing it the same way we do it for hgsql isn't going to work, because
darkstorm repos store globalrevs the same way mononoke does (i.e. a per-commit
entry in mysql) and not the way hgsql does (i.e. one row per repo).
In this diff I do a small refactoring that remembers which bonsai commits were pushed
in a bundle, so that in the next diff we can start writing them to darkstorm
db.
Reviewed By: krallin
Differential Revision: D27268778
fbshipit-source-id: bbb39de233719c8435d11d00980f6eaf5b755ba6
Summary:
Original commit changeset: 0708a4b0dc37
It seems to be the reason for SQL timeouts on Mononoke startup.
Differential Revision: D27337030
fbshipit-source-id: 7b154c09397b0e297e18b186a6338ab801b1769d
Summary:
In production, all repos are instantiated at roughly the same time, so all
reload processes start at roughly the same time. A reload makes a bunch of
requests and could potentially cause load. Jitter spreads out the load of the
reloads. Avoiding the load spike will make overall server behavior more
predictable.
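The jitter idea can be sketched in a few lines (the helper name and the uniform distribution are assumptions for illustration, not the actual implementation):

```python
import random

def next_reload_delay(base_seconds, max_jitter_seconds):
    # Instead of every repo reloading exactly every `base_seconds`, each
    # waits base + U(0, jitter), spreading simultaneous reloads across a
    # window instead of producing a synchronized spike.
    return base_seconds + random.uniform(0, max_jitter_seconds)
```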
Reviewed By: krallin
Differential Revision: D27280117
fbshipit-source-id: 0727af2e7f231a5b6c948424022788a8e7071f82
Summary:
We would like to distribute the load of the update process when many repositories
have Segmented Changelog enabled. Without the jitter, all enabled repositories
start their update at roughly the same time. The jitter smooths out the load and
reduces variance in the update process.
Reviewed By: krallin
Differential Revision: D27280118
fbshipit-source-id: 41ad83b09700da1ef70c09dd5d284977e53a95a2
Summary:
`build_from_scratch` is only called in `run_with_idmap_version`, so we can inline
the code so that the seed process reads better.
This function used to be used as a shortcut towards getting a built dag but now
we prefer to fetch the dags from the store that the seeder writes to.
Reviewed By: krallin
Differential Revision: D27210036
fbshipit-source-id: 0b31ff1126a0f4904578da333cf6d34d69b2782c
Summary:
Removing the last callsite for SegmentedChangelogBuilder means that
the whole class goes away.
Reviewed By: krallin
Differential Revision: D27208339
fbshipit-source-id: 006aa91ed9656e4c33b082cbed82f9a497b0d267
Summary:
We are removing SegmentedChangelogBuilder.
Remove the last uses of Builder in the tests module.
Reviewed By: krallin
Differential Revision: D27208341
fbshipit-source-id: 00f1aaa2376ee5d68dbf7c1256b312cfe0b96d86
Summary:
Any function that returns a SegmentedChangelog is a valid argument for
reloading.
Reviewed By: krallin
Differential Revision: D27202520
fbshipit-source-id: fe903c6be4646c8ec98058d1a025829268c36619
Summary:
PeriodReloading is not fundamentally tied to the Manager. A future change will
update the load function.
Reviewed By: krallin
Differential Revision: D27202524
fbshipit-source-id: a0e4b08cb8605d071d5f30be8c3054f75321aa9c
Summary:
Let's look at these tests from a higher perspective. Right now the tests use
internal APIs because not all components were ready when they were added. We
now have components for all the parts of the lifecycle so we can set up tests
in the same form that we would set up the production workflow.
This simplifies the API structure of the components since they can be catered
to one workflow.
Reviewed By: krallin
Differential Revision: D27202530
fbshipit-source-id: 6ec10a0b1ae49da13cfbe803e120a4e754b35fc7
Summary:
The broad goal is to get rid of SegmentedChangelogBuilder.
We will have a new constructor for Seeder, one that uses non segmented_changelog
dependencies as input.
Reviewed By: krallin
Differential Revision: D27202523
fbshipit-source-id: d420507502925d4440d5c3058efef0a4d2dbe895
Summary:
For tests that don't care about the bookmarks specifically, we want to use the
default bookmark name that we defined in BOOKMARK_NAME.
FWIW, it might even make sense for this bookmark to be set by blobrepo... or at
least by the fixtures. They set a bookmark, and it makes sense for us to have a
reference to the bookmark that they set. Something to think about.
Reviewed By: krallin
Differential Revision: D27202522
fbshipit-source-id: 7615e4978dded491dd04ae44ce0b85134a252feb
Summary:
This gets rid of the odd builder for the Seeder.
We can get into design discussions with this one. What is a struct and what is
a function? For real structures that provide some behavior, I prefer to put
dependencies in owned data. Things that are part of the request go into
function parameters. In mononoke, RepositoryId is the common exception.
Anyway, IdMapVersion is part of the request for seeding. It is useful to have
that as a parameter when starting the seeder.
Reviewed By: krallin
Differential Revision: D27202528
fbshipit-source-id: a67b33493b20d2813fd0a144b9bb7f4510635ae8
Summary:
With the mix of an external pager and progress suspension, the progress might
be enabled by accident:
```
# pager: disable forever
disable_progress(True)
# suspension
with progress.suspend():
    ...
# on __exit__, re-enables progress
```
Update the pager-disabling logic to be nested to avoid the potential issue.
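Nesting-safe disabling is usually done with a depth counter; a hypothetical sketch (not the actual hg implementation):

```python
import threading

class NestedDisabler:
    """Re-enabling only happens when every disable() has been matched by
    an enable(), so an inner suspension cannot accidentally re-enable
    something an outer caller disabled "forever"."""

    def __init__(self):
        self._depth = 0
        self._lock = threading.Lock()

    def disable(self):
        with self._lock:
            self._depth += 1

    def enable(self):
        with self._lock:
            if self._depth > 0:
                self._depth -= 1

    @property
    def disabled(self):
        return self._depth > 0
```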
Reviewed By: andll
Differential Revision: D27275016
fbshipit-source-id: 35ca7aef1890a981e6a1f0e5313b6a482ed43368
Summary:
Currently, dropping the progress bar panics if `__exit__` returns an error.
This happens, for example, when handling interrupts. The best course of action is to just not panic in this case.
Reviewed By: sfilipco
Differential Revision: D27334897
fbshipit-source-id: c879fb14cfd4c16c0f9caede552129f8117defdc
Summary:
The progress total can be expensive to calculate for a generatorset. Do not
calculate it. This preserves the laziness of certain sets (ex. the generatorset
used by fastlog and average directory log).
Reviewed By: sfilipco
Differential Revision: D27327127
fbshipit-source-id: b0e3655e33b9e89ee2100941af18a769315f25bb
Summary:
Buffering the stream can provide suboptimal UX if the stream is slow.
Detect slow streams and avoid full buffering.
Reviewed By: sfilipco
Differential Revision: D27327128
fbshipit-source-id: a7b8037b7ba28fccc10661ffd15fd68f191d0048
Summary:
Remove use of dangerous_override from the repo client tests.
Previously this was used to override filestore config, so just use the existing
config override mechanism to set the filestore params this is generated from.
Reviewed By: ahornby
Differential Revision: D27169424
fbshipit-source-id: 7d17437f0e218d1cf19cc64d48e1efdd7012e927
Summary:
Remove use of dangerous_override in the pushrebase tests.
This was being used to ensure the bookmarks db in the test repo was shared with the mutable counters db. This is now directly possible by accessing the metadata database from the factory.
Reviewed By: ahornby
Differential Revision: D27169435
fbshipit-source-id: 1412231bdd9214bc869a3bfa7f63bf6c14db6836
Summary:
Remove uses of dangerous_override from derived data tests.
These were being used to override the derived data config or the lease object.
Extend the factory to allow customization of the config or the lease object,
and use that instead.
Reviewed By: StanislavGlebik
Differential Revision: D27169438
fbshipit-source-id: e8d0be248391d02bb054e19fdb9a90005db09c84
Summary:
Remove uses of dangerous_override in the commit rewriting tests.
Previously, the test was using dangerous_override to replace the bookmarks
attributes with ones that share a backing database with a `SqlMutableCounters`
instance.
With TestRepoFactory, all attributes share one metadata database. Obtain that
from the factory and use it to initialize the `SqlMutableCounters` instance.
Reviewed By: StanislavGlebik
Differential Revision: D27169429
fbshipit-source-id: 3c1b285db38a96deca7029d37e6692cb49356d31
Summary:
Remove dangerous overrides used in the tests in the bookmarks crate.
The test was using dangerous overrides to change the blobstore, which is now
supported directly by the test repo factory.
It was also using dangerous overrides to override the bookmark update log to
reset its database so it looks like the log is empty. Instead, clear out the
bookmark update log database as part of the test.
Reviewed By: StanislavGlebik
Differential Revision: D27169426
fbshipit-source-id: 64e1e89e31f62dcb585741ea728ebbe45f60fd38
Summary: This has been superseded by `test_repo_factory::TestRepoFactory`.
Reviewed By: StanislavGlebik
Differential Revision: D27169434
fbshipit-source-id: 97fcd400c5e3c6e8f86c9acfc5e979909e2eda31
Summary: Use the test factory for the remaining existing tests.
Reviewed By: StanislavGlebik
Differential Revision: D27169443
fbshipit-source-id: 00d62d7794b66f5d3b053e8079f09f2532d757e7
Summary: Use the test factory for existing bookmarks and pushrebase tests.
Reviewed By: StanislavGlebik
Differential Revision: D27169430
fbshipit-source-id: 65a7c87bc37cfa2b3b42873bc733cec177d8c1b0
Summary: Use the test factory for test fixtures that are used in many existing tests.
Reviewed By: StanislavGlebik
Differential Revision: D27169432
fbshipit-source-id: bb3cbfa95b330cf6572d1009c507271a7b008dec
Summary: Use the test factory for existing lfs_server tests.
Reviewed By: StanislavGlebik
Differential Revision: D27169427
fbshipit-source-id: 004867780ad6a41c3b17963006a7d3b0e5b82113
Summary: Use the test factory for existing commit_rewriting tests.
Reviewed By: StanislavGlebik
Differential Revision: D27169442
fbshipit-source-id: df2447b2b6423d172e684d7e702752ad717a2a4b
Summary: Use the test factory for existing repo_client and repo_import tests.
Reviewed By: StanislavGlebik
Differential Revision: D27169425
fbshipit-source-id: 2d0c34f129447232cec8faee42056d83613de179
Summary: Use the test factory for existing hooks and mapping tests.
Reviewed By: StanislavGlebik
Differential Revision: D27169436
fbshipit-source-id: f814e462bb2244e117c83bd355042016f5bfde8e
Summary: Use the test factory for existing mononoke_api tests.
Reviewed By: StanislavGlebik
Differential Revision: D27169444
fbshipit-source-id: d940bfa494dfe89fdb891d5248b73ba764f74faf
Summary: Use the test factory for existing derived data tests.
Reviewed By: farnz
Differential Revision: D27169433
fbshipit-source-id: b9b57a97f73aeb639162c359ab60c6fd92da14a8
Summary:
Create a factory that can be used to build repositories in tests.
The test repo factory will be kept in a separate crate to the production repo factory, so that it can depend on a smaller set of dependencies: just those needed for in-memory test repos. This should eventually help make compilation speeds faster for tests.
A notable difference between the test repos produced by this factory and the ones produced by `blobrepo_factory` is that the new repos share the in-memory metadata database. This is closer to what we use in production, and in a couple of places it is relied upon and existing tests must use `dangerous_override` to make it happen.
Reviewed By: ahornby
Differential Revision: D27169441
fbshipit-source-id: 82541a2ae71746f5e3b1a2a8a19c46bf29dd261c
Summary:
Convert `BlobRepo` to a `facet::container`. This will allow it to be built
from an appropriate facet factory.
This only changes the definition of the structure: we still use
`blobrepo_factory` to construct it. The main difference is in the types
of the attributes, which change from `Arc<dyn Trait>` to
`Arc<dyn Trait + Send + Sync + 'static>`, specified by the `ArcTrait` alias
generated by the `#[facet::facet]` macro.
Reviewed By: StanislavGlebik
Differential Revision: D27169437
fbshipit-source-id: 3496b6ee2f0d1e72a36c9e9eb9bd3d0bb7beba8b
Summary: To prepare for making `RepoBlobstore` a facet, convert it to a newtype wrapper.
Reviewed By: ahornby
Differential Revision: D27169439
fbshipit-source-id: ceefe307e962c03c3b89be660b5b6c18d79acf3e
Summary: We have support for backup-repo-id, but tw blobimport doesn't have an id and only has a source repo name to use. Let's add support similar to the other repo-id/source-repo-id options.
Reviewed By: StanislavGlebik
Differential Revision: D27325583
fbshipit-source-id: 44b5ec7f99005355b8eaa4c066cb7168ec858049
Summary:
I'm trying to track down an issue in SQLBlob construction, where it takes over 3 wall-clock seconds to construct.
This is indicative of a missing `blocking` annotation somewhere; to make it easier to add the correct annotation, switch construction to modern futures and `async`
Reviewed By: krallin
Differential Revision: D27275801
fbshipit-source-id: 2b516b4eca7143e4be17c50c6542a9da601d6ac6
Summary:
The pack key needs to be removed using unlink() after the target keys are pointed to it, so that old unused packs can be GCed. E.g.:
* packer run 1: keys A and B are packed to P1
* packer run 2: keys A and C are packed to P2
* packer run 3: keys B and D are packed to P3
If we don't unlink P1 during run 1, then GC can't collect the old pack after run 3, as the key will keep it live.
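The runs above can be replayed as a toy model (all names here are hypothetical; the store is modeled as a plain key-to-pack mapping, not the real blobstore):

```python
def pack_keys(store, keys, pack_name, unlink_pack_key=True):
    """Toy model of the packer behavior described above."""
    # The packer first writes the new pack under its own key...
    store["pack/" + pack_name] = pack_name
    # ...then points the target keys at it.
    for key in keys:
        store[key] = pack_name
    # Finally it must unlink() the pack's own key: otherwise that key
    # keeps the pack referenced forever and GC can never collect it.
    if unlink_pack_key:
        del store["pack/" + pack_name]

def unreferenced_packs(store, all_packs):
    """GC view: packs that no key points at are collectable."""
    return {p for p in all_packs if p not in set(store.values())}
```

Replaying runs 1-3 with `unlink_pack_key=True` leaves P1 collectable after run 3; with `False`, the stale `pack/P1` key keeps it live forever.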
Reviewed By: farnz
Differential Revision: D27188108
fbshipit-source-id: bbe22a011abbf85370a8c548413401ffb54b16b6
Summary:
Refactor `scmstore::types` into separate `file` and `tree` modules and introduce a new `StoreTree` type to represent trees in the scmstore API.
Introduce a minimal `StoreTree` type. Importantly, for now this type does not provide any methods for deserializing the tree manifest and inspecting its contents. This functionality won't be too hard to implement, though - it'll require some revisions to the `manifest-tree` crate and / or moving the `StoreTree` and `StoreFile` types to `revisionstore_types`.
Reviewed By: kulshrax
Differential Revision: D27310878
fbshipit-source-id: 712330fba87f33c49587fa895efea3601ce377af
Summary:
The revset is not optimized for the (to be migrated to segments) revlog backend.
Optimize it.
Reviewed By: jteosw
Differential Revision: D27317708
fbshipit-source-id: cec9d6aad0f6c30c69a931898f8e1cc7c904b3f8
Summary:
We've seen occasional index timeouts on inserts in MySQL. This is very
reminiscent of D19158550 (fef360b284).
aida points out this seems to have started to happen (occasionally) recently.
That would make sense: we used to insert one-by-one so we wouldn't have ordering
issues (because of a bug), but we also recently started inserting in batches again (because of a bugfix).
There's little reason to expect we couldn't run into the same bug there as well.
So let's give filenodes & copydata the same sorting treatment we give paths.
Worst case, this does nothing. Best case, it fixes the issue.
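The "sorting treatment" amounts to giving every writer the same lock-acquisition order; a sketch under assumed row shapes (not Mononoke's actual code):

```python
def prepare_batch(rows):
    # rows: iterable of (filenode, path) tuples destined for one batched
    # INSERT. Two writers inserting overlapping rows in different orders
    # can take index row locks in opposite order and deadlock / time out;
    # sorting every batch by its unique key makes the order deterministic.
    return sorted(rows, key=lambda row: (row[1], row[0]))
```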
Reviewed By: StanislavGlebik
Differential Revision: D27301588
fbshipit-source-id: 2b24ddd68e1a1c4e31fe33e03efcef47dad3657d
Summary: Previously, accessing both repo.filelog and repo.manifestlog in a test debug command (under buck builds only) would cause a "memcache singleton leaked" error to be printed after the command completed (see https://www.internalfb.com/intern/qa/87181/runtime-warnings-about-memcacheconnectionmanagerim ). There are still some unanswered questions about this issue, as noted in the QA post, but this fixes the main issue at hand.
Reviewed By: sfilipco
Differential Revision: D27297498
fbshipit-source-id: e19665333bae9f91e1c3c6db370962a3aea2727d