Summary: In this diff the configs are parsed from TOML and passed around to the hook's execution context. The actual usage of the configs will be introduced in a separate diff.
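For illustration only, a minimal sketch of what parsing hook configs from TOML into an execution context might look like; `HookConfig`, `HookContext` and the field names are hypothetical stand-ins, not the actual Mononoke types:
```
// Hypothetical sketch: parse hook configs from TOML and stash them in the
// context handed to a hook at execution time. Names are illustrative only.
use serde::Deserialize;
use std::collections::HashMap;

#[derive(Debug, Deserialize)]
struct HookConfig {
    // Arbitrary string options exposed to the hook at run time.
    #[serde(default)]
    options: HashMap<String, String>,
}

struct HookContext {
    config: HookConfig,
}

fn load_hook_config(raw: &str) -> Result<HookConfig, toml::de::Error> {
    toml::from_str(raw)
}

fn main() {
    let raw = r#"
        [options]
        max_file_size = "10485760"
    "#;
    let config = load_hook_config(raw).expect("invalid hook config");
    let ctx = HookContext { config };
    println!("{:?}", ctx.config.options.get("max_file_size"));
}
```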
Reviewed By: StanislavGlebik
Differential Revision: D13862837
fbshipit-source-id: 60ac10aa9c25d224e703e1e55bef13dc481ba07e
Summary:
D13853115 adds `edenscm/` to `sys.path` and code still uses `import mercurial`.
That causes nasty problems if both `import mercurial` and
`import edenscm.mercurial` are used, because Python treats `mercurial.foo`
and `edenscm.mercurial.foo` as different modules, so code like
`try: ... except mercurial.error.Foo: ...` or `isinstance(x, mercurial.foo.Bar)`
fails to handle the `edenscm.mercurial` version. There is also some
module-level state (ex. `extensions._extensions`) that causes trouble if
multiple copies exist in a single process.
Change imports to use the `edenscm` prefix so that ideally `mercurial` is no longer
imported at all. Add checks in extensions.py to catch unexpected extensions
importing modules from the old (wrong) locations when running tests.
Reviewed By: phillco
Differential Revision: D13868981
fbshipit-source-id: f4e2513766957fd81d85407994f7521a08e4de48
Summary: This is to take the guesswork out of revisions. MononokeRevision now states explicitly whether the value is a hash or a bookmark. To roll this out I will prepare an updated scmquery-proxy version and test everything on shadow, then use the sitevar to disable mononoke usage for a short time while both apiserver and scmquery-proxy are rolled out simultaneously.
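Roughly speaking (illustrative Rust only; the real change is in the thrift definition of MononokeRevision), the request goes from a bare string the server has to guess about to an explicit either-or:
```
// Illustrative only: an explicit hash-or-bookmark revision, so the server
// never has to guess what kind of identifier it was given.
enum Revision {
    CommitHash(String),
    Bookmark(String),
}

fn describe(rev: &Revision) -> String {
    match rev {
        Revision::CommitHash(h) => format!("hash {}", h),
        Revision::Bookmark(b) => format!("bookmark {}", b),
    }
}

fn main() {
    println!("{}", describe(&Revision::Bookmark("master".to_string())));
}
```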
Reviewed By: StanislavGlebik
Differential Revision: D13817040
fbshipit-source-id: d5eee7cf9ac972fb1313a10a17bf60c4545054af
Summary: The first request was way too slow because the cache was not yet warm. This speeds up the initial request substantially.
Reviewed By: StanislavGlebik
Differential Revision: D13817064
fbshipit-source-id: 0f2a01395743ef848e6bc4a5e71c0562b268c0cf
Summary: Ran rustfmt on the complete folder
Reviewed By: HarveyHunt
Differential Revision: D13803235
fbshipit-source-id: ccb41d7be8099055b8ea8dec57ce5e04b38b9461
Summary:
Previously, pushrebasing an empty commit failed because we assumed that the root
manifest of a commit is always sent in a bundle. This diff removes that
assumption.
Reviewed By: lukaspiatkowski
Differential Revision: D13818556
fbshipit-source-id: 44e96374ae343074f48e42a90c691b21e3c41386
Summary:
We've had this problem for almost a year now, and I've finally made some
progress.
The problem was tests failing randomly with errors like
```
- remote: * pushrebase failed * (glob)
- remote: msg: "pushrebase failed Conflicts([PushrebaseConflict { left: MPath(\"1\"), right: MPath(\"1\") }])"
+ remote: Jan 25 08:46:24.067 ERRO Error in hgcli proxy, error: Connection reset by peer (os error 104), root_cause: Os {
+ remote: code: 104,
+ remote: kind: ConnectionReset,
+ remote: message: "Connection reset by peer"
remote: * backtrace* (glob)
```
or
```
remote: * pushrebase failed * (glob)
remote: msg: "pushrebase failed Conflicts([PushrebaseConflict { left: MPath(\"1\"), right: MPath(\"1\") }])"
+ remote: Jan 25 08:47:59.966 ERRO Error in hgcli proxy, error: Connection reset by peer (os error 104), root_cause: Os {
+ remote: code: 104,
+ remote: kind: ConnectionReset,
+ remote: message: "Connection reset by peer"
+ remote: }, backtrace:
```
Note that the problems are slightly different. In the first case the actual error message is completely lost and we get an unnecessary
ConnectionReset message on top. In the second case it's just the extra `ConnectionReset`.
This diff fixes the lost error message (problem #1) and hides the `ConnectionReset` message (problem #2).
Problem #1 was due to a bug in streamfork. Before this diff, if streamfork hit
an error, it might not have sent already-received input to one of the
outputs. This diff fixes that.
This diff only hides Problem #2: if we see a ConnectionReset then the error
won't be reported. That's a hack which should be fixed, but at the moment
a) the bug is not easily debuggable
b) the problem is not urgent and shouldn't cause issues
In some cases the server actually does send a connection reset, but in that case
mercurial still gives us a self-explanatory message
```
abort: stream ended unexpectedly (got 0 bytes, expected 4)
```
Reviewed By: lukaspiatkowski
Differential Revision: D13818558
fbshipit-source-id: 7a2cba8cd0fcef8211451df3dea558fe2d60fa60
Summary:
Pushrebase wasn't returning a response to a pushkey part, so we got `server
ignored bookmark ... update` messages. This diff fixes it by returning a
reply to the pushkey part.
Note that the behaviour differs from mercurial's. Mercurial allows many pushkey
parts, while we allow only the one that moves the `onto` bookmark. That
shouldn't be restrictive, but we can change this behaviour later if needed.
Reviewed By: aslpavel
Differential Revision: D13781546
fbshipit-source-id: edb0fdc7dc10c7a5cf4c49157fce0887e71fcf8a
Summary:
This reduces the maximum potential number of bookmark fetches from O(len(longest stack)) to O(1) (that is, 0 or 1) if there are draft commits.
Previously, the refetching could happen on every iteration while searching
for public roots.
This diff allows us to reuse the bookmarks fetched the first time they were
needed throughout the whole phases calculation. Fetch just once!
If there are no draft commits, bookmarks were already fetched only once (and only if needed).
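A hedged sketch of the fetch-once idea; `PhasesCalculation`, `Bookmarks` and `fetch_bookmarks` are made-up stand-ins, not the real Mononoke types:
```
// Illustrative only: the first caller pays for the bookmark fetch, every later
// step of the phases calculation reuses the cached result.
use std::collections::HashMap;

type Bookmarks = HashMap<String, String>;

struct PhasesCalculation {
    cached_bookmarks: Option<Bookmarks>,
}

impl PhasesCalculation {
    fn new() -> Self {
        Self { cached_bookmarks: None }
    }

    fn bookmarks(&mut self) -> &Bookmarks {
        // The expensive call happens at most once per calculation.
        self.cached_bookmarks.get_or_insert_with(fetch_bookmarks)
    }
}

fn fetch_bookmarks() -> Bookmarks {
    let mut bookmarks = HashMap::new();
    bookmarks.insert("master".to_string(), "deadbeef".to_string());
    bookmarks
}

fn main() {
    let mut calc = PhasesCalculation::new();
    // Repeated lookups while walking the stack hit the cached copy.
    for _ in 0..3 {
        let _ = calc.bookmarks().get("master");
    }
}
```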
Reviewed By: StanislavGlebik
Differential Revision: D13753520
fbshipit-source-id: d96a6cf434cb4a1fe95e51ae734afb1671124336
Summary:
Resolving pushvars before commonheads breaks the hooks tests (they were disabled, so this didn't come up).
The order was changed in D13802402, so this diff restores the previous order and keeps the remaining logic.
Reviewed By: StanislavGlebik
Differential Revision: D13817434
fbshipit-source-id: 1443b18ade7161304f8b5359e7d49dab13b022cc
Summary: This allows us to run pushbackup and cloud sync commands for Read Only Mononoke repos.
Reviewed By: ikostia
Differential Revision: D13804545
fbshipit-source-id: 8026fc4668afc8bb5c2c0a9587ca024e3c6920da
Summary:
The test is ignored by default because it would probably be
flakey/slow.
Reviewed By: aslpavel
Differential Revision: D13795070
fbshipit-source-id: 8459d34126917e0fce3151ec2f4e68e639d2e7e6
Summary:
The next step is to change infinitepush / cloud sync to pass this pushvar to
Mononoke.
Reviewed By: StanislavGlebik
Differential Revision: D13802402
fbshipit-source-id: 25e3d699bd934e3e015b9784040bd2dc4b43188d
Summary: This hint is passed to many places, so this change reduces the amount of code.
Reviewed By: StanislavGlebik
Differential Revision: D13802159
fbshipit-source-id: 891eef00c236b2241571e24c50dc82b9862872cc
Summary: Added tw_handle to be able to distinguish the source of samples.
Reviewed By: ikostia
Differential Revision: D13800968
fbshipit-source-id: 6c8528fc69302b2d5c5fbd40ccdf729c9379a101
Summary: Simple stats for put/get/is_present so we can monitor basic functionality.
Reviewed By: StanislavGlebik
Differential Revision: D13736062
fbshipit-source-id: 1fd6def0d98d9d29dfbbf11430006b91936d35dc
Summary:
Use rustfix and some hand-editing to convert to Rust 2018 - the main
benefit is the elimination of a lot of `extern crate` lines.
Reviewed By: HarveyHunt
Differential Revision: D13736087
fbshipit-source-id: 50e4edfdff3e8ceea94b2beed36399681871a436
Summary: This is the cloud sync test with local (on disk) commit cloud backend and mononoke backend.
Reviewed By: markbt
Differential Revision: D13762527
fbshipit-source-id: c454dbf67999333e37a54e5e6ac84c3cebcf8c3b
Summary: This test covers a corner case (partially public stacks).
Reviewed By: StanislavGlebik
Differential Revision: D13750852
fbshipit-source-id: cd1a5a84dfb62951cb37f1fbdd6c510d825adb41
Summary:
This is required to cover corner cases where a client has some stacks and part of them became public.
The calculation of public roots happens for draft heads only, so it doesn't change the performance of hg pull.
Reviewed By: StanislavGlebik
Differential Revision: D13742685
fbshipit-source-id: d8c8bc357628b9b513bbfad4a82a7220d143f364
Summary: Create the corresponding endpoint for thrift calls to get_commit_v2. For now this only supports the non-optional fields.
Reviewed By: StanislavGlebik
Differential Revision: D13730602
fbshipit-source-id: fd9d845620c864bf7dade13d810e98270425ea00
Summary:
Previously we relied on CachingChangesetFetcher to quickly fetch all commits into
memory, but CachingChangesetFetcher was deleted in D13695201. Instead, let's use
the `get_many()` method from the Changesets trait to quickly fetch many changesets
into memory at once.
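A simplified, hypothetical sketch of the batching idea (not the actual Changesets trait or its signatures): one bulk call replaces a loop of per-commit lookups when warming an in-memory cache.
```
// Illustrative stand-in for a changeset store with point and bulk lookups.
use std::collections::HashMap;

type ChangesetId = u64;

#[derive(Clone, Debug)]
struct ChangesetEntry {
    id: ChangesetId,
    parents: Vec<ChangesetId>,
}

trait ChangesetsLike {
    // Point lookup: one round trip per commit.
    fn get(&self, id: ChangesetId) -> Option<ChangesetEntry>;
    // Bulk lookup: one round trip for the whole batch.
    fn get_many(&self, ids: &[ChangesetId]) -> Vec<ChangesetEntry>;
}

struct InMemoryChangesets {
    entries: HashMap<ChangesetId, ChangesetEntry>,
}

impl ChangesetsLike for InMemoryChangesets {
    fn get(&self, id: ChangesetId) -> Option<ChangesetEntry> {
        self.entries.get(&id).cloned()
    }

    fn get_many(&self, ids: &[ChangesetId]) -> Vec<ChangesetEntry> {
        ids.iter().filter_map(|id| self.entries.get(id).cloned()).collect()
    }
}

fn main() {
    let mut entries = HashMap::new();
    entries.insert(1, ChangesetEntry { id: 1, parents: vec![] });
    entries.insert(2, ChangesetEntry { id: 2, parents: vec![1] });
    let store = InMemoryChangesets { entries };

    let _single = store.get(1);
    // One batched call instead of a loop of `get` calls.
    let fetched = store.get_many(&[1, 2]);
    println!("{:?}", fetched);
}
```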
Reviewed By: lukaspiatkowski
Differential Revision: D13712783
fbshipit-source-id: 12e8fa148f7989028547ac8d374438e23b44b6d1
Summary:
The primary motivation for this method is to quickly build skiplist indexes
(see next diff). Building skiplists is much faster if all the data is in
memory.
Reviewed By: lukaspiatkowski
Differential Revision: D13712784
fbshipit-source-id: 716dc020a49cbffac273eb466e474ed86887927d
Summary:
We decided to populate the phases table in two places: blobimport and push-rebase
(already done).
This diff is for blobimport. We know the commits are public.
Reviewed By: lukaspiatkowski
Differential Revision: D13731900
fbshipit-source-id: b64e5643e7cffd9e8fb842e9929f4c1ee7a66197
Summary: Implemented a more advanced parser that parses the bundlecaps json object into a data structure that is more suitable to work with.
Reviewed By: aslpavel
Differential Revision: D13602738
fbshipit-source-id: 9a2f8e78d55a21e80229aae23e5a38f6cc14c7e8
Summary: Log all thrift calls to scuba. Since the params are in thrift format, we log them both in json and in a human-readable format (e.g. a path is binary and thus not really readable in json). In addition, both http and thrift now log the request type.
Reviewed By: StanislavGlebik
Differential Revision: D13730245
fbshipit-source-id: 3419ec067a9066e181210d184195fbd02980c1e0
Summary: `BlobRepo` is cheaply clonable and doesn't need to be `Arc`'d.
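Illustrative sketch only, with a made-up `BlobRepoLike` type: when the fields are already reference-counted, cloning the handle is cheap, so callers can take it by value (or by reference) instead of wrapping it in `Arc`.
```
// Illustrative only: a handle whose internals are shared pointers, so Clone
// just bumps refcounts and an outer Arc adds nothing.
use std::sync::Arc;

#[derive(Clone)]
struct BlobRepoLike {
    blobstore: Arc<Vec<u8>>,
}

// Before: callers received `Arc<BlobRepoLike>`.
fn process_arc(repo: Arc<BlobRepoLike>) -> usize {
    repo.blobstore.len()
}

// After: take the repo by value (or `&BlobRepoLike`) and clone where needed.
fn process(repo: BlobRepoLike) -> usize {
    repo.blobstore.len()
}

fn main() {
    let repo = BlobRepoLike { blobstore: Arc::new(vec![1, 2, 3]) };
    assert_eq!(process_arc(Arc::new(repo.clone())), 3);
    assert_eq!(process(repo), 3);
}
```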
Reviewed By: StanislavGlebik
Differential Revision: D13726727
fbshipit-source-id: 0b983c7c4625f47df8a11da5781b7777cc38d72f
Summary:
Previously we had a timeout per session, i.e. multiple wireproto commands would
share the same timeout. That had a few disadvantages:
1) The main disadvantage was that if a connection timed out we didn't log
stats such as the number of files, response size etc, and we didn't log parameters
to scribe. The latter is the bigger problem, because we usually want to
replay requests that were slow and timed out, not the requests that finished
quickly.
2) The less important disadvantage is that we have clients that make a small
request to the server and then keep the connection open for a long time.
Eventually we kill the connection and log it as an error. With this change
the connection stays open until the client closes it. That might potentially be
a problem, and if that's the case then we can reintroduce a per-connection
timeout.
Initially I was planning to use tokio::util::timer to implement all the
timeouts, but it behaves differently for streams - it only allows setting a
per-item timeout, while we want a timeout for the whole stream
(https://docs.rs/tokio/0.1/tokio/timer/struct.Timeout.html#futures-and-streams).
To overcome this I implemented a simple combinator, StreamWithTimeout, which does
exactly what I want.
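A hedged sketch of such a whole-stream timeout combinator, written against futures 0.3 / tokio 1.x for brevity; the actual StreamWithTimeout is built on the futures 0.1 APIs used here at the time, so its details likely differ. The point is that the deadline is armed once for the whole stream rather than reset per item.
```
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};
use std::time::Duration;

use futures::stream::{self, Stream, StreamExt};
use tokio::time::{sleep, Sleep};

// Bounds the lifetime of the *whole* stream, not the gap between items.
struct StreamWithTimeout<S> {
    inner: S,
    deadline: Pin<Box<Sleep>>,
    timed_out: bool,
}

impl<S> StreamWithTimeout<S> {
    fn new(inner: S, timeout: Duration) -> Self {
        Self {
            inner,
            deadline: Box::pin(sleep(timeout)),
            timed_out: false,
        }
    }
}

impl<S, T> Stream for StreamWithTimeout<S>
where
    S: Stream<Item = Result<T, std::io::Error>> + Unpin,
{
    type Item = Result<T, std::io::Error>;

    fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
        let this = self.get_mut();
        if this.timed_out {
            // The deadline already fired and was reported; end the stream.
            return Poll::Ready(None);
        }
        if this.deadline.as_mut().poll(cx).is_ready() {
            // Surface a single timeout error, then terminate on the next poll.
            this.timed_out = true;
            return Poll::Ready(Some(Err(std::io::Error::new(
                std::io::ErrorKind::TimedOut,
                "stream exceeded its overall deadline",
            ))));
        }
        Pin::new(&mut this.inner).poll_next(cx)
    }
}

#[tokio::main]
async fn main() {
    // A small, finite stream that finishes well within the deadline.
    let items = stream::iter((0..3).map(Ok::<_, std::io::Error>));
    let mut bounded = StreamWithTimeout::new(items, Duration::from_secs(5));
    while let Some(item) = bounded.next().await {
        println!("{:?}", item);
    }
}
```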
Reviewed By: HarveyHunt
Differential Revision: D13731966
fbshipit-source-id: 211240267c7568cedd18af08155d94bf9246ecc3
Summary:
The extension was not enabled for repo pull (the second repo).
hg pull -r was still working, but other things like hg up <commit cloud hash> were not,
which caused a bit of confusion.
It is cleaner to enable the extension for both sides.
Reviewed By: StanislavGlebik
Differential Revision: D13710518
fbshipit-source-id: 231aec1a71a5c13d707c2b361ce77158573b93f0