Summary:
Pull in SegmentedChangelogConfig and build a SegmentedChangelog instance.
This ties the config with the object that we build on the servers.
This separates the instantiation of the sql connections from building any kind
of segmented changelog structure. The primary reason is that multiple objects
may get instantiated, and for that it is useful to be able to pass this object
around.
Reviewed By: krallin
Differential Revision: D26708175
fbshipit-source-id: 90bc22eb9046703556381399442117d13b832392
Summary:
This was lost somehow. I probably incorrectly resolved some conflict when
rebasing a previous change.
Reviewed By: quark-zju
Differential Revision: D27146022
fbshipit-source-id: 13bb0bb3df565689532b2ab5299cd757f278f26e
Summary:
Finding a parent that was previously found signals that we want to assign
that changeset sooner if it was not already assigned.
Reviewed By: quark-zju
Differential Revision: D27092205
fbshipit-source-id: ed39a91460ff2f91a458236cdab8018341ec618b
Summary:
While seeding fbsource I found that loading the commits from sql took longer
than expected: around 90 minutes where I was expecting around 10 minutes.
I added more logging to validate that commits were actively being loaded rather
than something being stuck.
Reviewed By: krallin
Differential Revision: D27084739
fbshipit-source-id: 07972707425ecccd4458eec849c63d6d9ccd923d
Summary:
Pretty big bug here with the "Overlay" when we are updating both stores. It
turns out that we don't really want a standard Overlay. We want the loaded
iddag to operate with the Ids in the shared IdMap, and we want whatever is
updated to use the in-process IdMap. The problem with the overlay is that the
shared IdMap may have more data than the in-process IdMap; the shared IdMap is
always updated by the tailer, after all. This means that when we query the
overlay, we may get data from the shared store even if this is the first time
we are trying to update a changeset for the current process.
The solution here is to specify which vertexes are fetched from each store.
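As an illustration (not the actual Mononoke code), the fix can be sketched as routing each lookup by which store owns the vertex, instead of falling through. `SplitIdMap`, the `cutoff` field, and the string changeset ids are simplified stand-ins for the real types:

```rust
use std::collections::HashMap;

// Simplified sketch: vertexes at or above `cutoff` were assigned by this
// process and must come from the in-process map; everything below comes
// from the shared, tailer-updated store. No fall-through between the two.
struct SplitIdMap {
    cutoff: u64,
    mem: HashMap<u64, String>,
    shared: HashMap<u64, String>,
}

impl SplitIdMap {
    fn find_changeset_id(&self, vertex: u64) -> Option<&String> {
        if vertex >= self.cutoff {
            // A standard overlay would fall back to `shared` here, wrongly
            // returning data the tailer wrote for vertexes this process has
            // not assigned yet.
            self.mem.get(&vertex)
        } else {
            self.shared.get(&vertex)
        }
    }
}
```

With this routing, a vertex that exists only in the shared store but belongs to the in-process range correctly reads as missing.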
Reviewed By: quark-zju
Differential Revision: D27028367
fbshipit-source-id: e09f003d94100778eabd990724579c84b0f86541
Summary:
Using the generic load function from SegmentedChangelogManager. This loads the
SegmentedChangelog that is consistent with the specified configuration.
I wanted to have another look at ArcSwap to understand whether
`Arc<ArcSwap<Arc<dyn SegmentedChangelog>>>` was the type that it recommends
for our situation, and indeed it is.
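A rough sketch of the shape involved, using a std `RwLock` in place of the `arc_swap` crate (which offers the same load/store pattern with lock-free reads); the trait and the `Disabled` type are hypothetical stand-ins:

```rust
use std::sync::{Arc, RwLock};

trait SegmentedChangelog: Send + Sync {
    fn name(&self) -> &'static str;
}

struct Disabled;
impl SegmentedChangelog for Disabled {
    fn name(&self) -> &'static str {
        "disabled"
    }
}

// Approximates Arc<ArcSwap<Arc<dyn SegmentedChangelog>>>: many readers share
// the outer Arc; the inner Arc<dyn ...> can be swapped for a freshly loaded
// instance without invalidating readers that already cloned it.
type SwappableDag = Arc<RwLock<Arc<dyn SegmentedChangelog>>>;

fn load(dag: &SwappableDag) -> Arc<dyn SegmentedChangelog> {
    dag.read().unwrap().clone()
}

fn store(dag: &SwappableDag, new: Arc<dyn SegmentedChangelog>) {
    *dag.write().unwrap() = new;
}
```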
Reviewed By: quark-zju
Differential Revision: D27028369
fbshipit-source-id: 7c601d0c664f2be0eef782700ef4dcefa9b5822d
Summary:
Scuba stats provide a lot of context around the workings of the service.
The most interesting operation for segmented changelog is the update.
Reviewed By: krallin
Differential Revision: D26770846
fbshipit-source-id: a5250603f74930ef4f86b4167d43bdd1790b3fce
Summary:
STATS!!!
Count, success, failure, duration. Per instances, per repo.
I wavered on what to name the stats. I wondered whether it was worth being
more specific than "mononoke.segmented_changelog.update" with something like
"inprocess". In my view the in-process stats are more important than the tailer
stats because the tailer is simpler and thus easier to understand. So I add
extra qualifications to the tailer stats and keep the name short for the
in-process stats.
Reviewed By: krallin
Differential Revision: D26770845
fbshipit-source-id: 8e02ec3e6b84621327e665c2099abd7a034e43a5
Summary: Currently unused. Will add stats that reference it.
Reviewed By: krallin
Differential Revision: D26770847
fbshipit-source-id: d5694cd221c90ba3adaf89345ffeb06fa46b9e7b
Summary:
I am not sure why the integration tests didn't fail for this one. I know that
a similar issue was caught last week. Probably one of those cases where not
all tests ran. Anyway. SegmentedChangelogManager requires bookmarks now.
It's not going to use them with the way the SegmentedChangelog is built; using
the bookmarks needs another code change.
I noticed this because it was failing the Tailer. It would crash Mononoke too.
Long story on why the tailer uses this codepath. Needless to say, we don't want
Mononoke crashing, so FIX :)
Reviewed By: quark-zju
Differential Revision: D26962608
fbshipit-source-id: 6efafc67f0816792b841af2cc456edc0cc579460
Summary:
Using a more specific name. Looking to differentiate between tailer update
and in process dag update.
Reviewed By: krallin
Differential Revision: D26770844
fbshipit-source-id: b35e6e705a0bfac6289c70a8e8e8cb9ba38a8d99
Summary:
Our production setup has an OnDemandUpdateSegmentedChangelog that gets updated
in various ways. With a setup where the dag is reloaded completely from saves,
we need a factory for the OnDemandUpdateSegmentedChangelog.
SegmentedChangelogManager takes the role of being the factory for our
production Dags.
At some point we will remove the SegmentedChangelog implementation for Manager.
Reviewed By: krallin
Differential Revision: D26708173
fbshipit-source-id: b3d8ea612b317af374f2c0ce6d7c512e3b09b2d2
Summary:
The manager was added as a high level abstraction for storing and loading a
SegmentedChangelog. It worked well when we had one configuration for
SegmentedChangelog. The problem now is that SegmentedChangelog has various
configurations. Storing and loading is an asymmetric operation.
In contexts where we store, we want to have used a specific configuration, one
that operates on an owned dag and has an IdMap that writes to the database.
Then, when running on the server, we never store; our writes to the idmap are
in-process only and the iddag is wrapped in layers that keep it up to date.
The manager would have to be too complicated to handle all these scenarios.
The solution here is to simplify the manager to cater to the server use case
and inline the logic for the saves where it is used (seeder and tailer).
Reviewed By: krallin
Differential Revision: D26921451
fbshipit-source-id: aedf4acf4bc8371a5d0b249f8bccd9447e85ae0a
Summary:
At the same time remove SqlIdMapFactory. Consolidate the details surrounding
building the IdMap in this factory by moving the logic for caching and in
memory construction from the Manager to the factory.
Reviewed By: krallin
Differential Revision: D26708177
fbshipit-source-id: a6a7f6270c2508adf85f529eef2c75653d002cd0
Summary:
Consolidating on the SegmentedChangelog suffix for the structures in the
`segmented_changelog` crate.
Reviewed By: quark-zju
Differential Revision: D26891996
fbshipit-source-id: 75192bed9cc073adfe7b82ac2b60516ac6629b76
Summary:
Consolidating on the SegmentedChangelog suffix for the structures in the
`segmented_changelog` crate.
Reviewed By: quark-zju
Differential Revision: D26892000
fbshipit-source-id: 47c6ece8aa7ef13e3ea51bbe558655e3f61fdedf
Summary:
Consolidating on the SegmentedChangelog suffix for the structures in the
`segmented_changelog` crate.
Reviewed By: quark-zju
Differential Revision: D26892003
fbshipit-source-id: ad1ccb8c359e7cd5b58d053aa13ed908252988b0
Summary:
Consolidating on the SegmentedChangelog suffix for the structures in the
`segmented_changelog` crate.
Reviewed By: quark-zju
Differential Revision: D26891998
fbshipit-source-id: 86576a029f851e0ac4a6d6600a8839289c9f1f93
Summary:
Consolidating on the SegmentedChangelog suffix for the structures in the
`segmented_changelog` crate.
Reviewed By: quark-zju
Differential Revision: D26892002
fbshipit-source-id: df52027a7c20684c0d46b7adc80692d262b669d4
Summary:
The macro helps with implementing SegmentedChangelog interface for the
structures that rely on another SegmentedChangelog.
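A minimal sketch of the delegation pattern such a macro enables; the trait method and macro name here are illustrative, not the real crate's API:

```rust
trait SegmentedChangelog {
    fn location_to_name(&self, id: u64) -> String;
}

// Implements the trait for a wrapper type by forwarding each method to a
// field that itself implements SegmentedChangelog.
macro_rules! delegate_segmented_changelog {
    ($type:ty, $field:ident) => {
        impl SegmentedChangelog for $type {
            fn location_to_name(&self, id: u64) -> String {
                self.$field.location_to_name(id)
            }
        }
    };
}

struct InMemoryDag;
impl SegmentedChangelog for InMemoryDag {
    fn location_to_name(&self, id: u64) -> String {
        format!("name-{}", id)
    }
}

struct OnDemandUpdateDag {
    inner: InMemoryDag,
}
delegate_segmented_changelog!(OnDemandUpdateDag, inner);
```

This keeps each wrapper focused on the one method it actually overrides, while the macro fills in the boilerplate forwarding.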
Reviewed By: quark-zju
Differential Revision: D26892001
fbshipit-source-id: 6e5f1f04b47f814cf7ed6fd67f4797c5270ba701
Summary:
Consolidating on SegmentedChangelog for the structures in the
`segmented_changelog` crate. We treat these structures as a specific kind of
dag and we name them specifically.
The `dag` crate can have the Dag structures. The `dag` crate generalizes the
graph concept. Dag for generalization, SegmentedChangelog for specific use.
The migration on the DB is simple. We will stop the tailer processes and copy
the data from `segmented_changelog_bundle` to `segmented_changelog_version`.
We will then update the jobs to an ephemeral package that uses
`segmented_changelog_version`. We will remove the old table a week later.
Reviewed By: quark-zju
Differential Revision: D26891997
fbshipit-source-id: e0061973942defa09493b4d23c89d2aaed40825a
Summary:
AsyncVfs provides an async vfs interface.
It will be used in native checkout instead of the current approach that spawns
blocking tokio tasks for VFS actions.
Reviewed By: quark-zju
Differential Revision: D26801250
fbshipit-source-id: bb26c4fc8acac82f4b55bb3f2f3964a6d0b64014
Summary: Now that the query macros are async, let's do the same with the Transaction API they expose.
Reviewed By: krallin
Differential Revision: D26730195
fbshipit-source-id: 278753a5d0401f602ce50519138164bb5e49d550
Summary: Migrate to the std futures version of sql::queries!
Reviewed By: krallin
Differential Revision: D26700360
fbshipit-source-id: 9ed2664d522bde8d0e923142357ca876a7de2613
Summary:
Async the query macros. This change also migrates most callsites, with a few
more complicated ones handled as separate diffs; those temporarily use
sql01::queries in this diff.
With this change the query string is computed lazily (async fns/blocks being
lazy), so we're not holding the extra memory of the query string as well as
the query params for quite as long. This is of most interest for queries doing
writes, where the query string can be large when large values are passed
(e.g. the Mononoke sqlblob blobstore).
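A tiny illustration (not the actual macro code) of the laziness being relied on: an async block does no work, and allocates no query string, until the future is polled. The `built` probe and function name are hypothetical:

```rust
use std::cell::Cell;
use std::future::Future;

// `built` is a probe so we can observe when the async body actually runs.
fn build_query(built: &Cell<bool>) -> impl Future<Output = String> + '_ {
    async move {
        built.set(true);
        // In the real macros, this is where the potentially large query
        // string would be formatted from the parameters.
        format!("INSERT INTO blobs VALUES ({})", "...")
    }
}
```

Because the body only runs on poll, the query string and its params are held in memory only for the duration of the actual database call.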
Reviewed By: krallin
Differential Revision: D26586715
fbshipit-source-id: e299932457682b0678734f44bb4bfb0b966edeec
Summary:
This diff adds a layer of indirection between fbinit and tokio, thus allowing
us to use fbinit with tokio 0.2 or tokio 1.x.
The way this works is that you specify the Tokio you want by adding it as an
extra dependency alongside `fbinit` in your `TARGETS` (before this, you had to
always include `tokio-02`).
If you use `fbinit-tokio`, then `#[fbinit::main]` and `#[fbinit::test]` get you
a Tokio 1.x runtime, whereas if you use `fbinit-tokio-02`, you get a Tokio 0.2
runtime.
This diff is big, because it needs to change all the TARGETS that reference
this in the same diff that introduces the mechanism. I also didn't produce it
by hand.
Instead, I scripted the transformation using this script: P242773846
I then ran it using:
```
{ hg grep -l "fbinit::test"; hg grep -l "fbinit::main" } | \
sort | \
uniq | \
xargs ~/codemod/codemod.py \
&& yes | arc lint \
&& common/rust/cargo_from_buck/bin/autocargo
```
Finally, I grabbed the files returned by `hg grep`, then fed them to:
```
arc lint-rust --paths-from ~/files2 --apply-patches --take RUSTFIXDEPS
```
(I had to modify the file list a bit: notably I removed stuff from scripts/ because
some of that causes Buck to crash when running lint-rust, and I also had to add
fbcode/ as a prefix everywhere).
Reviewed By: mitrandir77
Differential Revision: D26754757
fbshipit-source-id: 326b1c4efc9a57ea89db9b1d390677bcd2ab985e
Summary:
This diff rolls out V2 of autocargo in an atomic way, so there are quite a few things done here.
Arc lint support:
V1 used to be part of the default fbsource `arc lint` engine, but since V2 calls buck it must live in a separate lint engine. So this diff:
- Adds running `autocargo` as part of `arc lint-rust`
Mergedriver update:
- Mergedriver used in resolving conflicts on commits is now pointing to V2
- It handles files in `public_autocargo/` directories in addition to the ones containing the generation preamble
Including regeneration results of running `common/rust/cargo_from_buck/bin/autocargo`. All the differences are accounted for:
- Some sections and attributes are removed as they can be autodiscovered by Cargo (like `lib.path = "src/lib.rs"` or empty [lib] section)
- "readme" attribute is properly defined as relative to Cargo.toml location rather than as hardcoded string
- "unittest = false" on a Buck rule propagates as "test = false; doctest = false" to Cargo
- "rusqlite" is not special-cased anymore, so the "budled" feature will have to be enabled using custom configuration if required by the project (for rust-shed in order to not break windows builds a default feature section was added)
- Files generated from thrift_library rules that do not support "rust" language are removed
- Custom .bzl rules that create rust artifacts (like `rust_python_extension`) are no longer ignored
Others:
- Changed `bin/cargo-autocargo` to be a wrapper for calling V2 via `cargo autocargo`
- Updated following files to use V2:
- `common/rust/tools/reindeer/version-bump`
- `remote_execution/rust/setup.sh`
- Removed a few files from V1 that would otherwise interfere with V2 automatic regeneration/linting/testing
Reviewed By: zertosh
Differential Revision: D26728789
fbshipit-source-id: d1454e7ce658a2d3194704f8d77b12d688ec3e64
Summary:
This dag periodically reloads itself from storage.
It currently loads a simple dag that has no update logic because that is what
the manager returns. That's not relevant for this code.
This is probably the last piece before we refactor construction to take a
SegmentedChangelogConfig. It remains to be seen how much will be strict types
and how much will be Arc<dyn SegmentedChangelog>.
Reviewed By: krallin
Differential Revision: D26681458
fbshipit-source-id: 6056d00db6f25616e8158278702f9f4120b92121
Summary: There were no unit tests for SegmentedChangelogManager, so I added one.
Reviewed By: krallin
Differential Revision: D26681459
fbshipit-source-id: 40ceefe7b89043ae6d2c4d31a2adf504245161fb
Summary:
A placeholder for convenience functions.
Right now it has a proxy for the head of the dag.
Reviewed By: krallin
Differential Revision: D26681457
fbshipit-source-id: 6856abbf2685407f96701ea5a508342373503360
Summary:
An OnDemandUpdateDag can now track a bookmark. Every given period it will
query the changeset of the bookmark and incrementally build the dag.
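One polling step could be sketched as below. All names are hypothetical, and the real dag runs this on an async timer and performs a proper incremental build rather than pushing heads onto a vector:

```rust
// Stand-in for the dag state built so far.
struct TrackedDag {
    heads: Vec<String>,
}

impl TrackedDag {
    // Called once per period with the bookmark's current changeset.
    fn poll_bookmark(&mut self, bookmark_head: &str) {
        // Only do work when the bookmark actually moved.
        if self.heads.last().map(String::as_str) != Some(bookmark_head) {
            // Incrementally extend the dag up to the new head.
            self.heads.push(bookmark_head.to_string());
        }
    }
}
```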
Reviewed By: krallin
Differential Revision: D26656765
fbshipit-source-id: 95057863b5201f9632c654be5544922c7538f974
Summary:
For dependencies, V2 puts "version" as the first attribute of a dependency, or just after "package" if present.
The workspace section comes after the patch section in V2, and since V2 autoformats the patch section, the manual entries in third-party/rust/Cargo.toml had to be formatted by hand, since V1 takes them as they are.
The thrift files now have "generated by autocargo" and not only "generated" on their first line. This diff also removes some previously generated thrift files that were incorrectly left behind when the corresponding Cargo.toml was removed.
Reviewed By: ikostia
Differential Revision: D26618363
fbshipit-source-id: c45d296074f5b0319bba975f3cb0240119729c92
Summary:
Simple test that can give us an intuition for how the ConcurrentMemIdMap should
perform.
Reviewed By: krallin
Differential Revision: D26601378
fbshipit-source-id: ae8f2ada6fc08eef806f3ece72a6c1c2f011ac32
Summary: I removed tokio-compat yesterday but this landed at the same time and uses it.
Reviewed By: mitrandir77, StanislavGlebik
Differential Revision: D26605246
fbshipit-source-id: 189f485bc8bc3018abb3e9290953eba14bd178de
Summary:
Adding a new configuration that instantiates SegmentedChangelog by downloading
a dag from a prebuilt blob. It then updates in process.
Reviewed By: krallin
Differential Revision: D26508428
fbshipit-source-id: 09166a3c6de499d8813a29afafd4dfe19a19a2a5
Summary:
I am not sure if this is too abstract. It might be. This, however, has
separation of concerns :)
The goal here is to end up with an in memory IdMap that we write to and read
from first. For things that are not found in the in memory IdMap we fall back
to the SqlIdMap. We'll end up with something like:
`OverlayIdMap(ConcurrentMemIdMap, SqlIdMap)`
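The read path of that composition can be sketched as follows; this is a synchronous, simplified illustration (the real traits are async and use real changeset id types):

```rust
use std::collections::HashMap;

trait IdMapRead {
    fn find_changeset_id(&self, vertex: u64) -> Option<String>;
}

struct MemIdMap(HashMap<u64, String>);
impl IdMapRead for MemIdMap {
    fn find_changeset_id(&self, vertex: u64) -> Option<String> {
        self.0.get(&vertex).cloned()
    }
}

// Reads try the first (in-memory) map and fall back to the second (sql)
// store; writes (not shown) would only ever go to the first store.
struct OverlayIdMap<A, B>(A, B);
impl<A: IdMapRead, B: IdMapRead> IdMapRead for OverlayIdMap<A, B> {
    fn find_changeset_id(&self, vertex: u64) -> Option<String> {
        self.0
            .find_changeset_id(vertex)
            .or_else(|| self.1.find_changeset_id(vertex))
    }
}
```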
Reviewed By: quark-zju
Differential Revision: D26417642
fbshipit-source-id: b2b310306db4dc9fc3427bbf50b19366160882a9
Summary:
The `MemIdMap` is not a valid `IdMap` implementation because it takes `&mut
self` when doing inserts. Wrapping the IdMap in a `RwLock` allows us to
implement the `IdMap` trait.
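The wrapper amounts to the following sketch (simplified: the real trait is async and its methods are fallible, and these method names are illustrative):

```rust
use std::collections::HashMap;
use std::sync::RwLock;

// MemIdMap's insert takes &mut self, so it cannot back a trait whose
// methods take &self.
#[derive(Default)]
struct MemIdMap {
    map: HashMap<u64, String>,
}

impl MemIdMap {
    fn insert(&mut self, vertex: u64, cs_id: String) {
        self.map.insert(vertex, cs_id);
    }
    fn get(&self, vertex: u64) -> Option<String> {
        self.map.get(&vertex).cloned()
    }
}

// Wrapping it in a RwLock restores interior mutability: insert now takes
// &self, so the type is compatible with an Arc-shared IdMap trait object.
#[derive(Default)]
struct ConcurrentMemIdMap {
    inner: RwLock<MemIdMap>,
}

impl ConcurrentMemIdMap {
    fn insert(&self, vertex: u64, cs_id: String) {
        self.inner.write().unwrap().insert(vertex, cs_id);
    }
    fn get(&self, vertex: u64) -> Option<String> {
        self.inner.read().unwrap().get(vertex)
    }
}
```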
Reviewed By: krallin
Differential Revision: D26417643
fbshipit-source-id: cb5e3513841fa1dd7c8b8004ce7b2fe1467983d7
Summary:
The on demand update code we have is the most basic logic that we could have.
The main problem is that it has long and redundant write locks. This change
reduces the write lock strictly to the section that has to update the in memory
IdDag.
Updating the Dag has 3 phases:
* loading the data that is required for the update;
* updating the IdMap;
* updating the IdDag;
The Dag can function well for serving requests as long as the commits involved
have been built so we want to have easy read access to both the IdMap and the
IdDag. The IdMap is a very simple structure and because it's described as an
Arc<dyn IdMap> we push the update locking logic to the storage. The IdDag is a
complicated structure that we ask to update itself. Those functions take
mutable references. Updating the storage of the iddag to hide the complexities
of locking is more difficult. We deal with the IdDag directly by wrapping it in
a RwLock. The RwLock allows for easy read access which we expect to be the
predominant access pattern.
Updates to the dag are not completely stable, so racing updates can have
conflicting results. In case of conflicts, one of the update processes would
have to restart. It's easier to reason about the process if we just allow one
"thread" to start an update process. The update process is guarded by a sync
mutex. The "threads" that lose the race to update are asked to wait until the
ongoing update is complete. The waiters poll on a shared future that tracks
the ongoing dag update. After the update is complete, the waiters go back to
checking whether the data they need is available in the dag. It is possible
that the dag is updated between determining that an update is needed and
acquiring the ongoing_update lock. This is fine because the update process
checks the state of the dag before updating and only does what is necessary,
if anything.
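The locking scheme can be approximated with std primitives as in the sketch below. This is a synchronous stand-in: a Condvar plays the role of the shared future, and the whole "update" is reduced to inserting a vertex; all names are illustrative:

```rust
use std::collections::HashSet;
use std::sync::{Condvar, Mutex, RwLock};

struct OnDemandDag {
    iddag: RwLock<HashSet<u64>>, // stand-in for the in-memory IdDag
    ongoing_update: Mutex<bool>, // true while some thread is updating
    update_done: Condvar,        // real code: a shared future
}

impl OnDemandDag {
    fn new() -> Self {
        Self {
            iddag: RwLock::new(HashSet::new()),
            ongoing_update: Mutex::new(false),
            update_done: Condvar::new(),
        }
    }

    fn ensure_built(&self, vertex: u64) {
        loop {
            // Fast path: shared read access only.
            if self.iddag.read().unwrap().contains(&vertex) {
                return;
            }
            let mut ongoing = self.ongoing_update.lock().unwrap();
            if *ongoing {
                // Lost the race: wait for the ongoing update, then re-check
                // whether it already covered our vertex.
                let _guard = self.update_done.wait(ongoing).unwrap();
                continue;
            }
            *ongoing = true;
            drop(ongoing);
            // Load/prepare data without holding the write lock, then take
            // the write lock strictly for the in-memory IdDag mutation.
            self.iddag.write().unwrap().insert(vertex);
            *self.ongoing_update.lock().unwrap() = false;
            self.update_done.notify_all();
        }
    }
}
```

The key property mirrored from the description: the write lock covers only the IdDag mutation, a single updater runs at a time, and waiters always re-check the dag after waking since it may already contain what they need.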
Reviewed By: krallin
Differential Revision: D26508430
fbshipit-source-id: cd3bceed7e0ffb00aee64433816b5a23c0508d3c
Summary:
This structure is going to be useful for implementing the SegmentedChangelog
functionality for the OnDemandDag as we move forward with separate objects for
the iddag and the idmap rather than a direct dependency on a Dag object.
Reviewed By: quark-zju
Differential Revision: D26508429
fbshipit-source-id: 9116f1c82d301e8e5b726966abd2add2e32765d6
Summary:
Moving all update logic to `dag::update`.
Additional minor changes: removing Dag::build and splitting build_incremental
around the mutable update section of the iddag.
Reviewed By: krallin
Differential Revision: D26508427
fbshipit-source-id: 984259d2f199792fcf0635dd3100ec39260fd3ed