Summary:
Include the full error source chain on any internal errors. This will improve
debuggability when internal errors are encountered, as the outermost context
might not be enough to describe the origin of the error.
Detect when backtraces are disabled, and only include them on the error if a backtrace was captured.
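A minimal std-only sketch of what rendering the full source chain looks like. The `Inner`/`Outer` types and the `full_chain` helper are illustrative stand-ins, not the actual Mononoke error types:

```rust
use std::error::Error;
use std::fmt;

#[derive(Debug)]
struct Inner;
impl fmt::Display for Inner {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "connection refused")
    }
}
impl Error for Inner {}

#[derive(Debug)]
struct Outer(Inner);
impl fmt::Display for Outer {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "failed to load blob")
    }
}
impl Error for Outer {
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        Some(&self.0)
    }
}

// Render the outermost message plus every cause in the source chain,
// so the origin of the error is visible, not just the outer context.
fn full_chain(err: &dyn Error) -> String {
    let mut out = err.to_string();
    let mut cur = err.source();
    while let Some(cause) = cur {
        out.push_str(&format!(": {}", cause));
        cur = cause.source();
    }
    out
}

fn main() {
    let rendered = full_chain(&Outer(Inner));
    assert_eq!(rendered, "failed to load blob: connection refused");
    println!("{}", rendered);
}
```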
Reviewed By: StanislavGlebik
Differential Revision: D19813785
fbshipit-source-id: c28ffc0c44050a82cb7050afa57e976f37962432
Summary: create a bounded traversal crate with the 0.1 version of bounded_traversal_stream functionality, prior to updating to futures 0.3
Reviewed By: farnz
Differential Revision: D19804873
fbshipit-source-id: e98e00111fee5b1a9fcfc20bb054eeae1263fb26
Summary: replace use of spawn_future with tokio 0.2 join handle
Reviewed By: krallin
Differential Revision: D19770171
fbshipit-source-id: e0b7bf3da58896b223149b339a72bfec997215ee
Summary:
Update the walker step methods to use new futures, and combine them with async fn
Later in the stack, planning to:
* remove use of spawn_future and replace it with the tokio 0.2 join handles
* port bounded_traversal_stream to new futures so all these 0.3 futures don't immediately get compat'd back to 0.1
Reviewed By: farnz
Differential Revision: D19767902
fbshipit-source-id: 10fd6236a064efbb7d0815fadbdd32036bcafead
Summary:
D19767626 added an original_timestamp column to the
blobstore_sync_queue. Update the sqlite schema to keep it in sync.
Reviewed By: krallin
Differential Revision: D19787488
fbshipit-source-id: ad576e2ec99349953e2ab69e3defb73d1ff556c0
Summary:
Modify the multiplexed blobstore implementation so that the
multiplex_id is written to the healer queue after a put. Further, update the
blobstore healer to only look at entries with the same multiplex ID as it's
configured to run with.
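A rough sketch of the healer-side filtering described above; the `QueueEntry` struct and its field names are hypothetical stand-ins for the real blobstore_sync_queue schema:

```rust
// Hypothetical queue entry; the real blobstore_sync_queue schema differs.
#[derive(Debug, PartialEq)]
struct QueueEntry {
    blobstore_key: String,
    multiplex_id: i32,
}

// The healer only picks up entries written by the multiplex configuration
// it is running with, skipping entries from other multiplexes.
fn entries_to_heal(queue: &[QueueEntry], my_multiplex_id: i32) -> Vec<&QueueEntry> {
    queue
        .iter()
        .filter(|e| e.multiplex_id == my_multiplex_id)
        .collect()
}

fn main() {
    let queue = vec![
        QueueEntry { blobstore_key: "key1".into(), multiplex_id: 1 },
        QueueEntry { blobstore_key: "key2".into(), multiplex_id: 2 },
        QueueEntry { blobstore_key: "key3".into(), multiplex_id: 1 },
    ];
    let mine = entries_to_heal(&queue, 1);
    assert_eq!(mine.len(), 2);
    assert_eq!(mine[0].blobstore_key, "key1");
}
```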
Reviewed By: ahornby
Differential Revision: D19770057
fbshipit-source-id: 41db19f0b0f84c048d49ab9e6258cccc89cf4195
Summary:
This test deliberately races itself. Unfortunately, tokio's scheduler is sufficiently quick that if we spawn all the futures as they're created, they sometimes don't race each other.
Fix this by spawning in the join line instead.
Reviewed By: ahornby
Differential Revision: D19812651
fbshipit-source-id: 86685bbb71c451e9c2a96100c83ddff28d0718dd
Summary:
Before we start blocking generation of derived data let's start with logging if
derived data is not specified.
Reviewed By: farnz
Differential Revision: D19791523
fbshipit-source-id: 15bed8463f8a021de76a2878f06ec95d9fef877f
Summary:
See D19787960 for more details on why we need to do it.
This diff just adds a struct in BlobRepo
Reviewed By: HarveyHunt
Differential Revision: D19788395
fbshipit-source-id: d609638432db3061f17aaa6272315f0c2efe9328
Summary:
Looks like most of our tests got slower recently, and this particular test
started failing as a result since it's sensitive to timing. Unlike when this
test was written, we can now get a little more info from Mononoke LFS by
looking at Scuba logs, to know if we went to upstream or not. So, let's do
that.
Reviewed By: HarveyHunt
Differential Revision: D19790000
fbshipit-source-id: 5617b088595c911018166d2c13eb43dc6adca60b
Summary: Make the EdenAPI server report that it is exiting when asked to shut down. This ensures that Proxygen will stop sending traffic to servers that are about to be shut down by Tupperware. This diff is basically the same as krallin's diff D17626009 for the LFS server.
Reviewed By: quark-zju
Differential Revision: D19782432
fbshipit-source-id: 41b9e6761145402e7dcf18c53a2b33799588594c
Summary: This diff sets up the Mononoke API (from the `mononoke_api` crate) in the EdenAPI server, and makes it available to route handlers by adding it to a new `EdenApiContext` struct that is maintained as part of the server's global state. The server's route handlers should use the Mononoke API for accessing source control data, and any new source control business logic should be incorporated into that crate rather than being part of the EdenAPI server itself.
Reviewed By: xavierd
Differential Revision: D19778441
fbshipit-source-id: bc2efb82e0276d75c49980c52fa0d3017a4ce2f1
Summary:
Currently existing validation won't catch a bug where commits `a <- b` get
replayed as `b <- a` as long as they don't touch the same files. Let's add
such a check.
Reviewed By: StanislavGlebik
Differential Revision: D19723150
fbshipit-source-id: ddc15063b9ae4fc38416ab9b96681da302fec8d4
Summary:
In order to uniquely identify a blobstore multiplexer configuration,
add an ID.
Reviewed By: krallin
Differential Revision: D19770058
fbshipit-source-id: 8e09d5531d1d27b337cf62a6126f88ce15de341b
Summary:
Follow up from D19718839 - let's add a function that will safely sync a commit
from one repo to another. Other functions that sync a commit are prefixed with
`unsafe`.
Reviewed By: krallin
Differential Revision: D19769762
fbshipit-source-id: 844da3e2c1cc39ef3cd86d282d275d860be55f44
Summary:
If we e.g. receive a getpack for a path like "foo\"bar", then we can't decode it into a
`&str` because we need to allocate a new `String` to hold it. At the same time,
if the path is "foo bar", then having a reference into the JSON we received is
nicer.
Right now, we expect a `&str`, which only handles the latter case. But if we
encounter command args from the first case, we can't deserialize them. To fix this, let's use
`Cow<...>`, which lets us either have a referenced or an owned string.
Also, let's add tests to confirm this works.
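A std-only sketch of the borrow-or-own distinction: the `decode` function below is a toy stand-in for the JSON string deserialization, not the actual code, but it shows why `Cow` fits:

```rust
use std::borrow::Cow;

// A toy JSON-ish string decoder: returns a borrowed slice when the input
// needs no unescaping, and allocates an owned String only when it
// contains an escaped quote (`\"`).
fn decode(raw: &str) -> Cow<'_, str> {
    if raw.contains('\\') {
        Cow::Owned(raw.replace("\\\"", "\""))
    } else {
        Cow::Borrowed(raw)
    }
}

fn main() {
    // "foo bar" needs no allocation: we keep a reference into the input.
    assert!(matches!(decode("foo bar"), Cow::Borrowed(_)));

    // "foo\"bar" must be unescaped, so an owned String is required.
    let decoded = decode("foo\\\"bar");
    assert!(matches!(decoded, Cow::Owned(_)));
    assert_eq!(decoded, "foo\"bar");
}
```

Either way, callers just see a `Cow<str>` and deref it as `&str`.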
Reviewed By: ikostia
Differential Revision: D19767689
fbshipit-source-id: bf9e06d4a885638073c819a25a68810ff44f2546
Summary:
Fetching things from MySQL sequentially in a buffered fashion is a bad
practice, since we might end up saturating the underlying MySQL pool, and
starving other MySQL clients.
Instead, let's make fewer, bigger queries.
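The batching idea can be sketched as follows; `plan_queries` is a hypothetical helper standing in for the real SQL layer:

```rust
// Instead of one query per key, group keys into chunks and issue one
// query per chunk. `plan_queries` is hypothetical; the real code builds
// `WHERE key IN (...)` statements against MySQL.
fn plan_queries(keys: &[u64], batch_size: usize) -> Vec<Vec<u64>> {
    keys.chunks(batch_size).map(|c| c.to_vec()).collect()
}

fn main() {
    let keys: Vec<u64> = (0..10).collect();
    let batches = plan_queries(&keys, 4);
    // 10 keys at batch size 4 -> 3 queries instead of 10.
    assert_eq!(batches.len(), 3);
    assert_eq!(batches[0], vec![0, 1, 2, 3]);
    assert_eq!(batches[2], vec![8, 9]);
}
```

Fewer round-trips means less pressure on the shared MySQL pool and lower dispatch latency for other clients.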
Reviewed By: ahornby
Differential Revision: D19766787
fbshipit-source-id: 1cf9102eaca8cc1ab55b7b85039ca99627a86b71
Summary:
Fetching things from MySQL sequentially in a buffered fashion is a bad
practice, since we might end up saturating the underlying MySQL pool with a lot
of requests. Doing so will result in other queries being delayed as they wait
behind our batch of queries, which results in higher dispatch latency.
Instead, let's make fewer, bigger queries. Also, while we're in here, let's
update blobrepo to have an up-to-date comment.
Reviewed By: StanislavGlebik
Differential Revision: D19766788
fbshipit-source-id: 318ec4778ca259b210d431fc2add8b327bfce99a
Summary: We don't need to log so many blob fetches. Let's not.
Reviewed By: HarveyHunt
Differential Revision: D19766017
fbshipit-source-id: 674dee276234f96938a9459af18dd78d09243350
Summary: This will let us lower Scuba utilization from Fastreplay.
Reviewed By: HarveyHunt
Differential Revision: D19766018
fbshipit-source-id: 4eac19b929914db910ed13096b2a5910c134ed3a
Summary:
If the user requests blame information for a file where the blame was rejected
(either because the file is too big, or because it is binary), this should be
considered a request error.
Reviewed By: farnz
Differential Revision: D19768261
fbshipit-source-id: 7f0d7ba53fe1087b68f4432ec0c6de0353dc3885
Summary: They are not used much - let's use new futures instead
Reviewed By: krallin
Differential Revision: D19767952
fbshipit-source-id: c04bcf5efc6f8ee6f1d31254fcb2cb4603769b91
Summary:
Suggestions are included in the error message, as currently implemented in the
Mercurial code, and the format of the suggestions stays the same:
we give the hash, time, author and title.
All suggestions are ordered (most recent go first).
We don't show them if there are too many.
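The ordering and cap can be sketched like this; `MAX_SUGGESTIONS` and the tuple shape are hypothetical, not the actual Mercurial-side representation:

```rust
// Hypothetical cap; the real threshold lives elsewhere.
const MAX_SUGGESTIONS: usize = 5;

// Each suggestion carries (timestamp, rendered "hash time author title" line).
// Returns None when there are too many candidates to be useful.
fn format_suggestions(mut suggestions: Vec<(u64, String)>) -> Option<Vec<String>> {
    if suggestions.len() > MAX_SUGGESTIONS {
        return None;
    }
    // Most recent go first.
    suggestions.sort_by(|a, b| b.0.cmp(&a.0));
    Some(suggestions.into_iter().map(|(_, line)| line).collect())
}

fn main() {
    let out = format_suggestions(vec![
        (10, "abc 10 alice Fix foo".to_string()),
        (30, "def 30 bob Add bar".to_string()),
        (20, "ghi 20 carol Tweak baz".to_string()),
    ])
    .unwrap();
    assert_eq!(out[0], "def 30 bob Add bar");

    // Too many suggestions: show nothing rather than noise.
    let many: Vec<(u64, String)> = (0..10).map(|i| (i, String::new())).collect();
    assert!(format_suggestions(many).is_none());
}
```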
Reviewed By: krallin
Differential Revision: D19732053
fbshipit-source-id: b94154cbc5a4f440a0053fc3fac2bca2ae0b7119
Summary:
Useful for debugging.
I also fixed how we open a SqlSyncedCommitMapping, because we used incorrect path for that.
Reviewed By: ikostia
Differential Revision: D19767148
fbshipit-source-id: baf67bceceb7b22429b05b41020cf4350e3c87bd
Summary:
This is the api that will be used by Sandcastle to remap a commit from one repo
to another.
Previously the implementation was just looking in the commit mapping table,
but that's not enough - draft commit cloud commits are not in this table, so we
actually need to sync them.
There's a caveat though - we allow syncing public commits from a large repo to
a small repo, but not the other way around. Comment in the code has more info
about it.
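The large-to-small-only rule for public commits could be sketched as follows; the enum names and the draft-commit behavior shown here are hypothetical simplifications of what the real code models:

```rust
// Hypothetical types; the real crate models this differently.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Direction {
    LargeToSmall,
    SmallToLarge,
}

#[derive(Clone, Copy, PartialEq, Debug)]
enum Phase {
    Public,
    Draft,
}

// Draft (commit cloud) commits need to be synced on demand; public
// commits may only go from the large repo to the small repo.
fn can_sync(direction: Direction, phase: Phase) -> bool {
    match (phase, direction) {
        (Phase::Draft, _) => true,
        (Phase::Public, Direction::LargeToSmall) => true,
        (Phase::Public, Direction::SmallToLarge) => false,
    }
}

fn main() {
    assert!(can_sync(Direction::LargeToSmall, Phase::Public));
    assert!(!can_sync(Direction::SmallToLarge, Phase::Public));
    assert!(can_sync(Direction::SmallToLarge, Phase::Draft));
}
```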
Reviewed By: ikostia
Differential Revision: D19718839
fbshipit-source-id: 9939530f818fafd22bc3838b4647dd9cbc1c8c07
Summary:
The jump from "generating filenodes while generating the hg changeset" to
"generating filenodes separately" is tricky to do without breaking production. This diff
adds additional logic in IncompleteFilenodes that should make this transition
smoother. See code comment for more details.
Reviewed By: krallin
Differential Revision: D19741913
fbshipit-source-id: 48987c15fc4144c50afcee7ae34072f6cd634271