Summary:
The main issue is that `cargo test` fails, which prevents adding a Sandcastle
configuration that would run these tests on CI.
Reviewed By: singhsrb
Differential Revision: D14543988
fbshipit-source-id: c299148cce01316fad872b9cf8e15dea6633da48
Summary:
As we move toward smooth switching to Mononoke and back, we can start deprecating
Mercurial-specific optimizations to simplify the code.
Reviewed By: markbt
Differential Revision: D14131594
fbshipit-source-id: fa927011890ecdf0874a3a74b4910412b3c84b70
Summary:
Use the `Fallible` type alias provided by `failure` rather than defining our
own.
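For illustration, a minimal sketch of the pattern. A local stand-in is used for `failure::Error` so the snippet is self-contained; the real code uses the alias the `failure` crate itself provides, so crates no longer need to define their own.

```rust
use std::fmt;

// Stand-in for `failure::Error`, only to keep this sketch self-contained.
#[derive(Debug)]
struct Error(String);

impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}", self.0)
    }
}

// The alias provided by `failure` is shaped like this:
type Fallible<T> = Result<T, Error>;

// A fallible function now uses the shared alias instead of a crate-local one.
fn parse_port(s: &str) -> Fallible<u16> {
    s.parse::<u16>().map_err(|e| Error(e.to_string()))
}

fn main() {
    assert_eq!(parse_port("8081").unwrap(), 8081);
    assert!(parse_port("not-a-port").is_err());
}
```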
Differential Revision: D13657314
fbshipit-source-id: f1a379089972f7f0066c49ddedf606d36b7ac260
Summary:
This is done by running `fix-code.py`. Note that those strings are
semvers, so they do not pin down the exact version. An API-compatible upgrade
is still possible.
Reviewed By: ikostia
Differential Revision: D10213073
fbshipit-source-id: 82f90766fb7e02cdeb6615ae3cb7212d928ed48d
Summary: This is the final step to make CAT authentication work
Reviewed By: markbt
Differential Revision: D12975214
fbshipit-source-id: e445ca502f8abaac914140f3f30476d50b3c2fbc
Summary:
To reduce friction with OAuth tokens, we will also support CAT tokens in the Scm Daemon.
Icebreaker support was done in D12942971.
CAT tokens can be generated on dev servers without user interaction (via a tool based on TLS certs),
so we are going to use them in the next diff.
This will allow us to enable token-less cloud sync for everyone; the Scm Daemon will use CATs.
Reviewed By: markbt
Differential Revision: D12962342
fbshipit-source-id: 173301387ee446622bf77b2d6bed6934b5ced2c3
Summary:
If the daemon receives an Unauthorized response, it will re-read the token and restart all the subscriptions
rather than trying to reconnect with the same token in an infinite loop.
We know OAuth tokens can be invalidated at any time.
CAT tokens (which we are going to support as well) are only valid for a limited period, such as one day, so we need a smooth way to recover from Unauthorized and issue a fresh token.
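The recovery loop can be sketched as follows; all names here (`fetch_token`, `subscribe`, `SubscribeError`) are hypothetical, chosen only to illustrate the idea, not taken from the real API.

```rust
#[derive(Debug, PartialEq)]
enum SubscribeError {
    Unauthorized,
    Other,
}

// Retry subscribing, refreshing the token whenever it is rejected.
fn run_subscriptions(
    mut fetch_token: impl FnMut() -> String,
    mut subscribe: impl FnMut(&str) -> Result<(), SubscribeError>,
    max_attempts: usize,
) -> Result<(), SubscribeError> {
    let mut token = fetch_token();
    for _ in 0..max_attempts {
        match subscribe(&token) {
            Ok(()) => return Ok(()),
            // On Unauthorized, fetch a fresh token and restart all the
            // subscriptions instead of looping with the stale one.
            Err(SubscribeError::Unauthorized) => token = fetch_token(),
            Err(e) => return Err(e),
        }
    }
    Err(SubscribeError::Unauthorized)
}

fn main() {
    let mut issued = 0;
    let fetch = || {
        issued += 1;
        format!("token-{}", issued)
    };
    // The first token has expired; the second one works.
    let subscribe = |t: &str| {
        if t == "token-1" {
            Err(SubscribeError::Unauthorized)
        } else {
            Ok(())
        }
    };
    assert!(run_subscriptions(fetch, subscribe, 3).is_ok());
}
```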
Reviewed By: markbt
Differential Revision: D12960843
fbshipit-source-id: 630c446c490b0724df38c61507ee555dc7ed7241
Summary: This is just the result of running `./contrib/fix-code.py $(hg files .)`
Reviewed By: ikostia
Differential Revision: D10213075
fbshipit-source-id: 88577c9b9588a5b44fcf1fe6f0082815dfeb363a
Summary:
This may be useful as a backup for the regular path and also to speed up syncing.
The Scm Daemon knows the new and removed heads, so if there is, for example, one new head and one removed head, it is most probably just an amend. Depending on the information in the notification, the daemon can try the fast path first and fall back to the slow path if it fails.
This gives users a better experience before Mononoke: the fast path is much, much faster, and the daemon makes two attempts anyway.
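The heuristic above can be sketched like this; `SyncPath` and `choose_path` are illustrative names, not the daemon's real types.

```rust
#[derive(Debug, PartialEq)]
enum SyncPath {
    Fast, // e.g. an amend: one head replaced by another
    Slow, // full sync, always correct
}

// Pick a first attempt based on head counts from the notification;
// the caller falls back to the slow path if the fast attempt fails.
fn choose_path(new_heads: usize, removed_heads: usize) -> SyncPath {
    // One new head and one removed head most probably means an amend.
    if new_heads == 1 && removed_heads == 1 {
        SyncPath::Fast
    } else {
        SyncPath::Slow
    }
}

fn main() {
    assert_eq!(choose_path(1, 1), SyncPath::Fast);
    assert_eq!(choose_path(2, 0), SyncPath::Slow);
}
```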
Reviewed By: quark-zju
Differential Revision: D9309856
fbshipit-source-id: d59f498160a45fab11760b5c1397b48470feb7f8
Summary: If the daemon can't find the token file, it will try to read the token from secrets_tool on Unix-like systems. This integrates well for people who have enabled the secrets_token option, since their token file will have been deleted.
Reviewed By: liubov-dmitrieva
Differential Revision: D9029795
fbshipit-source-id: b364d9e8885ee0473b8d1effd6ee0b2e86a699f9
Summary:
This will reduce cloud sync errors and unnecessary cloud sync calls.
The daemon triggers cloud sync on service start/restart, which is not always a time when
the machine is online (and connected to the correct network), so we get cloud sync errors.
Reviewed By: markbt
Differential Revision: D8692972
fbshipit-source-id: 59033fd4c3e7c30100d82b908442bbf1ebea9322
Summary: Log the pid of the spawned cloud sync process; it might help with debugging if something is broken.
Reviewed By: markbt
Differential Revision: D8478566
fbshipit-source-id: fd9a9a228bc325056fb35d17ee93c865679e6e23
Summary:
Read the token only when it is actually needed, not in the constructor,
so the Scm Daemon can run for users who are not registered with Commit Cloud.
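A minimal sketch of the lazy-read pattern; `CommitCloudClient` and the placeholder token value are illustrative, not the daemon's real names.

```rust
struct CommitCloudClient {
    token: Option<String>,
}

impl CommitCloudClient {
    fn new() -> Self {
        // No token read here: construction succeeds even for users who
        // are not registered with Commit Cloud.
        CommitCloudClient { token: None }
    }

    // Read the token on first use and cache it afterwards.
    fn token(&mut self) -> &str {
        if self.token.is_none() {
            // In the real daemon this would read the token file.
            self.token = Some("read-on-first-use".to_string());
        }
        self.token.as_deref().unwrap()
    }
}

fn main() {
    let mut client = CommitCloudClient::new();
    assert_eq!(client.token(), "read-on-first-use");
}
```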
Reviewed By: markbt
Differential Revision: D8445923
fbshipit-source-id: b0d8c86729721037a02f93bbf7fa1fc88d7d7979
Summary:
This is needed because `hg cloud sync` can be triggered by external services like scm_daemon on behalf of the user,
so it should just fail rather than expect the user to type a password. We therefore change the ui ssh option to bgssh (background ssh), which is defined in the infinitepush section.
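As a sketch, the override might look like the following in an hgrc; the exact command value is an assumption (any ssh invocation that fails instead of prompting would do):

```ini
[infinitepush]
# background ssh: fail instead of prompting for a password
# (`BatchMode=yes` is an illustrative choice, not the required value)
bgssh = ssh -oBatchMode=yes
```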
Reviewed By: markbt
Differential Revision: D8331723
fbshipit-source-id: 28f9d007702e4f6ed5216114921375b76def3f93
Summary: The Ubuntu and Windows builders have an older rustc that doesn't support this syntax.
Reviewed By: DurhamG
Differential Revision: D8301570
fbshipit-source-id: 56990a804053a4dc78e41789c7b577bcf82868d7
Summary:
The Windows and Ubuntu builds don't have a version of rustc that
supports these features, so this breaks the build.
Reviewed By: phillco, quark-zju, singhsrb
Differential Revision: D8289651
fbshipit-source-id: d08b141b4d9996e3b899ac0604225ad34f863990
Summary:
This is just refactoring to improve code quality.
The main improvement is that I extracted TcpReceiver into a separate service;
any other service can register callbacks with the TcpReceiver service.
For WorkspaceSubscriberService, the callbacks are implemented using an mpsc channel that notifies the main WorkspaceSubscriberService thread, plus a single atomic flag that tells running subscriptions to join.
Another improvement is that I added logic to run cloud sync on the first keep-alive after connection errors.
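The channel-plus-flag wiring can be sketched as below; `handle_command` and the `"restart"` command are hypothetical names standing in for the real callback API.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::mpsc;

// The main subscriber thread reacts to a command by raising one atomic
// flag, which running subscription threads poll so they know to join.
fn handle_command(cmd: &str, interrupt: &AtomicBool) {
    if cmd == "restart" {
        interrupt.store(true, Ordering::Relaxed);
    }
}

fn main() {
    let (tx, rx) = mpsc::channel::<String>();
    let interrupt = AtomicBool::new(false);

    // A TcpReceiver callback fires when a command arrives...
    tx.send("restart".to_string()).unwrap();

    // ...and the subscriber service's main thread drains the channel.
    while let Ok(cmd) = rx.try_recv() {
        handle_command(&cmd, &interrupt);
    }
    assert!(interrupt.load(Ordering::Relaxed));
}
```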
Reviewed By: markbt
Differential Revision: D8226109
fbshipit-source-id: 3fe513da9273b28b2262948ecdf620821e7ab313
Summary:
Added logic to control the logging rate: for the empty messages that come in to confirm the subscription is alive, for errors while we are offline, and while we are running in standby with no active subscriptions.
Also, I made a simple cross-platform API so that hg can trigger a restart of the subscriptions in two lines of code. It is a simple request-response API over a TCP socket using JSON.
If a human runs `hg cloud join`, hg will add a subscriber file to the directory the Scm Daemon reads subscribers from and will send the restart command; the same happens for any `hg cloud leave` run.
Another advantage is that the client (hg) can very easily check whether the Scm Daemon is alive (in two lines of code, cross-platform, without any pid logic or other platform-specific ifs).
Another advantage is that we can use it to receive some stats from the Scm Daemon.
I decided not to go with any directory-watching logic, because changes are really rare events, and it is better if the client (hg) just notifies the service to restart subscriptions when needed.
Also, I verified that hg and the Scm Daemon use the same config options and logic for detecting the home directory on different platforms and for reading the token.
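The request-response exchange can be sketched as below; the command name, JSON shape, and reply are illustrative, not the real wire format.

```rust
use std::io::{BufRead, BufReader, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

// Hypothetical request payload for restarting subscriptions.
fn restart_request() -> String {
    "{\"command\": \"restart_subscriptions\"}\n".to_string()
}

fn main() {
    // Daemon side: answer one JSON-line request on a local TCP socket.
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap();
    thread::spawn(move || {
        let (mut stream, _) = listener.accept().unwrap();
        let mut line = String::new();
        BufReader::new(stream.try_clone().unwrap())
            .read_line(&mut line)
            .unwrap();
        if line.trim() == restart_request().trim() {
            stream.write_all(b"{\"status\": \"ok\"}\n").unwrap();
        }
    });

    // Client (hg) side: essentially two lines -- send the command, read
    // the reply. A successful reply doubles as an "is the daemon alive?"
    // check, with no pid files or platform-specific code.
    let mut stream = TcpStream::connect(addr).unwrap();
    stream.write_all(restart_request().as_bytes()).unwrap();
    let mut reply = String::new();
    BufReader::new(stream).read_line(&mut reply).unwrap();
    assert_eq!(reply.trim(), "{\"status\": \"ok\"}");
}
```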
Reviewed By: markbt
Differential Revision: D8162237
fbshipit-source-id: 3cb48b90f5e065ce4dc7fdc7215c3ce6ad57fb9a
Summary: This change supports the Scm Daemon on dev machines
Reviewed By: farnz
Differential Revision: D8139892
fbshipit-source-id: b6df53d6ce6615d24822b739d4d1705e0f572660
Summary: Initial implementation of the Scm Daemon, which currently just listens to Commit Cloud Live Notifications and triggers `hg cloud sync` on each notification
Reviewed By: markbt
Differential Revision: D8119768
fbshipit-source-id: a0d86624fe4b81b3adc89990640916d3da279b8c