Summary:
Not sure if this is the proper solution, but the template engine refuses to work
with anything but utf8 strings, while the diff operation only works on bytes.
Let's convert the return value of diff to utf-8.
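A minimal sketch of the idea (variable names are illustrative, not the actual code): decode the bytes that the diff operation produces before handing them to the template engine.

```python
# The diff operation yields bytes; the template engine wants utf8 str,
# so decode the result before passing it along.
diff_bytes = b"--- a/f\n+++ b/f\n@@ -1 +1 @@\n-old\n+new\n"
diff_text = diff_bytes.decode("utf-8")
```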
Reviewed By: DurhamG
Differential Revision: D20927680
fbshipit-source-id: 25c2947cac417448ca3521c2d5478fa8eebef04f
Summary:
When cloning a Mercurial repository, default to checking out the `master`
bookmark, if it exists. Continue using `.` in case the repository does not
have a `master` bookmark.
Reviewed By: pkaush
Differential Revision: D20876461
fbshipit-source-id: 57fa12e4c713bd50c15f59eb9281e0511c3cfe88
Summary:
The `tm_mon` field returned by `localtime_r()` has a range of 0 to 11.
We want to show human-readable month numbers of 1 to 12 in the fsck directory
name and log timestamps. Fix the formatting by adding 1 to the `tm_mon`
value.
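The off-by-one can be sketched like this (hypothetical helper; in C's `struct tm`, `tm_mon` is 0-based and `tm_year` counts years since 1900):

```python
def fsck_dir_name(tm_year, tm_mon, tm_mday):
    # tm_mon is 0-11 and tm_year counts from 1900, per C's struct tm;
    # add 1 to tm_mon to get the human-readable month number.
    return "fsck-%04d%02d%02d" % (1900 + tm_year, tm_mon + 1, tm_mday)
```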
Reviewed By: fanzeyi
Differential Revision: D20909591
fbshipit-source-id: 8625d09306b625e4e71dab9e0679fed3abc7bcf6
Summary:
In Python 3, the error raised by binascii.unhexlify changed from a generic
TypeError to binascii.Error. Therefore, wrap the binascii function and catch
binascii.Error before raising a TypeError.
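A minimal sketch of such a wrapper (the wrapper's name is illustrative):

```python
import binascii

def unhexlify(data):
    # Python 3 raises binascii.Error where Python 2 raised TypeError;
    # re-raise as TypeError so existing callers keep working.
    try:
        return binascii.unhexlify(data)
    except binascii.Error as e:
        raise TypeError(e)
```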
Reviewed By: DurhamG
Differential Revision: D20924129
fbshipit-source-id: 33f852ea97396af715ef73630e0dd1b4324eb707
Summary:
I can't say that I understand `mdiff.splitnewlines`. In my test it does not
behave the way it reads, and I don't know why. The stripping that it's supposed
to do doesn't happen for some reason. It behaves like splitlines.
I believe that rcutil used '\n' for line termination because it was relying on
Python to do the conversion to the system line ending. I updated it to use
os.linesep now that we encode the contents.
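A sketch of the encoding side (illustrative, not the actual rcutil code): once the contents are encoded to bytes, Python no longer translates `'\n'` on write, so the platform separator must be joined in explicitly.

```python
import os

# Writing bytes bypasses text-mode newline translation, so join lines
# with os.linesep explicitly before encoding.
lines = ["[section]", "key = value"]
data = (os.linesep.join(lines) + os.linesep).encode("utf-8")
```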
Reviewed By: quark-zju
Differential Revision: D20935377
fbshipit-source-id: 0958fdff03950ab0a4b2da02e4333b5438ac5c70
Summary:
Do not put a `README_EDEN.txt` file in the checkout root on Windows. On Linux
& Mac the EdenFS mount hides this directory, so the README file is not visible
while the checkout is mounted and running normally. However on Windows
anything present in this directory is visible to the user, so this results in
`README_EDEN.txt` incorrectly showing up in the checkout root.
Reviewed By: genevievehelsel
Differential Revision: D20929408
fbshipit-source-id: 9994524041f22fd8922c531f0185186b04c54821
Summary:
In the Windows main.cpp file, print an exception if one is thrown while
running EdenFS. This doesn't report any backtrace information, but at least
prints the exception message itself.
Previously, if an exception was thrown, EdenFS would exit with a non-zero status
code, but the actual exception message wasn't printed anywhere.
Reviewed By: fanzeyi
Differential Revision: D20928827
fbshipit-source-id: f9397f9688ef25b38f23421213058c417ddefaf9
Summary:
Mostly bytes vs. str usages. `__iter__` seems to have been incorrectly
converted to Python 3 previously.
Reviewed By: xavierd
Differential Revision: D20933166
fbshipit-source-id: 10e63e90bd83c70a51dd808e9b5073ab8d766e71
Summary: We should read/write it as utf8.
Reviewed By: DurhamG
Differential Revision: D20923404
fbshipit-source-id: 86cdc329395d60c88637f24d3c7c5caedcc7111a
Summary:
Make sure the function that decides if we should use systemd always returns
False on non-Linux platforms, even if it is explicitly enabled via the config
file or environment variable. systemd is Linux-specific, and it doesn't make
sense to try and use it on other platforms.
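The check can be sketched as follows (hypothetical function name; the real code also consults the config file and environment variable):

```python
import sys

def should_use_systemd(explicitly_enabled):
    # systemd is Linux-specific: never use it on other platforms, even
    # when explicitly enabled via config file or environment variable.
    if sys.platform != "linux":
        return False
    return explicitly_enabled
```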
Reviewed By: fanzeyi
Differential Revision: D20925344
fbshipit-source-id: cee67f607809da15f584de1eb12a2c4a243b0c91
Summary:
Sometimes the Rust io::Error is generated without an errno (e.g. pipe
0.2 would generate a BrokenPipe error without an errno). Python code
uses errno to check the error type (Python does not have io::ErrorKind).
Therefore, attempt to translate ErrorKind to a Python errno. Without this,
exiting the Rust pager early would crash like:
StdioError: [Errno None] pipe reader has been dropped
abort: pipe reader has been dropped
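A hedged sketch of the translation, in Python for brevity (the mapping and names are illustrative, not the actual bindings table):

```python
import errno

# Map Rust io::ErrorKind names to errno values so Python callers can
# keep checking e.errno (illustrative subset, not the real table).
ERRORKIND_TO_ERRNO = {
    "BrokenPipe": errno.EPIPE,
    "NotFound": errno.ENOENT,
    "PermissionDenied": errno.EACCES,
    "ConnectionRefused": errno.ECONNREFUSED,
}

def errno_for_kind(kind):
    return ERRORKIND_TO_ERRNO.get(kind)
```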
Reviewed By: markbt
Differential Revision: D20898559
fbshipit-source-id: ef863617e0e500d878ea0f9aeac06b4d87ffbcf2
Summary:
This makes sense to have when running locally. If you're running Mononoke LFS
locally, then implicitly your access is governed by whether you have access to
the underlying data. If you are on the source control team and you do have
access, it makes sense to let you run without ACL checks (since you could
rebuild from source anyway).
Reviewed By: farnz
Differential Revision: D20897249
fbshipit-source-id: 43e8209952f22aa68573c9b94a34e83f2c88f11b
Summary:
When a client requests a blob that is redacted, we should tell them that,
instead of returning a 500. This does that: we now return a `410 Gone` when
redacted content is accessed.
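The status-code choice can be sketched like this (hypothetical handler, in Python for brevity; the real service is Rust):

```python
def status_for_blob(blob):
    # Redacted content is deliberately unavailable: 410 Gone, not 500.
    if blob is None:
        return 404  # never existed
    if blob.get("redacted"):
        return 410  # exists but is redacted
    return 200
```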
Reviewed By: farnz
Differential Revision: D20897251
fbshipit-source-id: fc6bd75c82e0cc92a5dbd86e95805d0a1c8235fb
Summary:
If a blob is redacted, we shouldn't crash in batch. Instead, we should return
that the blob exists, and let the download path inform the client that the
blob is redacted. This diff does that.
Reviewed By: HarveyHunt
Differential Revision: D20897247
fbshipit-source-id: 3f305dfd9de4ac6a749a9eaedce101f594284d16
Summary:
502 made a bit of sense since we can occasionally proxy things to upstream, but
it's not very meaningful because our inability to service a batch request is
never fully upstream's fault (it would not be a failure if we had everything
internally).
So, let's just return a 500, which makes more sense.
Reviewed By: farnz
Differential Revision: D20897250
fbshipit-source-id: 239c776d04d2235c95e0fc0c395550f9c67e1f6a
Summary:
I noticed this while doing some unrelated work on this code. Basically, if we
get an error from upstream, then we shouldn't return an error to the client
*unless* upstream being down means we are unable to satisfy their request
(meaning, we are unable to say whether a particular piece of content is
definitely present or definitely missing).
This diff fixes that. Instead of checking for success when hearing from
upstream _then_ running our routing logic, let's instead only fail if, in the
course of trying to route the client, we discover that we need a URL from
upstream AND upstream has failed.
Concretely, this means that if upstream blew up but internal has all the data
we want, we ignore the fact that upstream is down. In practice, internal is
usually very fast (because it's typically all locally-cached) so this is
unlikely to really occur in real life, but it's still a good idea to account
for this failure scenario.
Reviewed By: HarveyHunt
Differential Revision: D20897252
fbshipit-source-id: f5a8598e8a9da382d0d7fa6ea6a61c2eee8ae44c
Summary:
Right now we have a couple of functions, but they're not easily composable. I'd
like to make the redacted blobs configurable when creating a test repo, but I
also don't want to have 2 new variants, so let's create a little builder for
test repos.
This should make it easier to extend in the future to add more customizability
to test repos, which should in turn make it easier to write unit tests :)
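The shape of such a builder, sketched in Python for brevity (the real code is Rust; all names here are illustrative):

```python
class TestRepoBuilder:
    # Accumulate options, then build once: easier to extend than adding
    # a new constructor variant per combination of options.
    def __init__(self):
        self.redacted = {}

    def with_redacted(self, blobs):
        self.redacted.update(blobs)
        return self

    def build(self):
        return {"redacted": dict(self.redacted)}
```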
Reviewed By: HarveyHunt
Differential Revision: D20897253
fbshipit-source-id: 3cb9b52ffda80ccf5b9a328accb92132261616a1
Summary:
This asyncifies the internals of `subcommand_tail`, which
loops over a stream, by taking the operation performed in
the loop and making it an async function.
The resulting code saves a few heap allocations by reducing
clones, and is also *much* less indented, which helps with
readability.
Reviewed By: krallin
Differential Revision: D20664511
fbshipit-source-id: 8e81a1507e37ad2cc59e616c739e19574252e72c
Summary: These hooks behave the same way in Mercurial and Bonsai form. Port them over to operating on the Bonsai form.
Reviewed By: krallin
Differential Revision: D20891165
fbshipit-source-id: cbcdf217398714642d2f2d6669376defe8b944d7
Summary: Running on Mercurial hooks isn't scalable long term - move the consumers of hooks to run on both forms for a transition period
Reviewed By: krallin
Differential Revision: D20879136
fbshipit-source-id: 4630cafaebbf6a26aa6ba92bd8d53794a1d1c058
Summary: To use Bonsai-based hooks, we need to be able to load them. Make it possible.
Reviewed By: krallin
Differential Revision: D20879135
fbshipit-source-id: 9b44d7ca83257c8fc30809b4b65ec27a8e9a8209
Summary: We want all hooks to run against the Bonsai form, not a Mercurial form. Create a second form of hooks (currently not used) which acts on Bonsai changesets. Later diffs in the stack will move us over to Bonsai only, and remove support for Mercurial-changeset-derived hooks.
Reviewed By: krallin
Differential Revision: D20604846
fbshipit-source-id: 61eece8bc4ec5dcc262059c19a434d5966a8d550
Summary:
Thanks to StanislavGlebik for this idea: we can turn the loop over
uploaded changesets into straightforward imperative code instead
of using `.and_then` + `.fold` by taking the next chunk in a
while loop.
The resulting code is probably easier to understand (depends whether
you come from a functional background, I guess), and it's less indented,
which is definitely more readable.
Reviewed By: StanislavGlebik
Differential Revision: D20881862
fbshipit-source-id: 7ecf76a2fae3eb0e6c24a1ee14e0684b6334b087
Summary:
A couple of minor improvements, removing some overhead:
- We don't need to pass cloned structs to `derive_data_for_csids`,
refs work just fine
- We can strip out one of the boxing blocks by directly assigning
an `async` block to `globalrevs_work`
- We can't do the same for `synced_commit_mapping_work` because
we have to iterate over `chunk` in synchronous code, so that
`chunk` can later be consumed by the line defining `changesets`.
Reviewed By: StanislavGlebik
Differential Revision: D20863304
fbshipit-source-id: 14cad3324978a66bcf325b77df7803d77468d30b
Summary:
This wound up being a little tricky, because `async move` blocks
capture any data they use, and most of the fields of the `Blobimport`
struct are values rather than refs.
The easiest solution that I came up with, which looks
a little weird but works better than anything else
I tried, is to just inject a little block of code
(which I commented so it will hopefully be clear to
future readers) taking refs of anything that we need
to use in an async block but also have available later.
In the process, we are able to strip out a layer of
clones, which should improve efficiency a bit.
Reviewed By: StanislavGlebik
Differential Revision: D20862358
fbshipit-source-id: 186bf9939b9496c432ff0d9a01e602da47f4b5d4
Summary:
The computation of commit obsolescence is inconsistent. If we compute the full
set of obsolete commits in `mutation.obsoletecache.obsoletenodes`, then we
correctly ignore public commits as they cannot be obsolete.
However, if we compute the obsolescence state for a single public commit with
`mutation.obsoletecache.isobsolete`, and that commit somehow has a visible
successor, then we will incorrectly consider the commit as obsolete.
Similarly, `allpredecessors` and `allsuccessors` should stop when they hit a
public commit.
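The consistent rule can be sketched as follows (hypothetical helper and argument names, in Python for brevity):

```python
def isobsolete(node, public_nodes, successors):
    # Public commits can never be obsolete, even if a visible
    # successor somehow exists; check the phase first.
    if node in public_nodes:
        return False
    return bool(successors.get(node))
```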
Reviewed By: quark-zju
Differential Revision: D20892778
fbshipit-source-id: 223cb8b2bc9f2f08124df6ff51c2eb208bb8eb5f
Summary: Some methods that were unused or barely used outside of the cmdlib crate were made non-public (parse_caching, CachelibSettings, init_cachelib_from_settings).
Reviewed By: krallin
Differential Revision: D20671251
fbshipit-source-id: 232e786fa5af5af543239aca939cb15ca2d6bc10
Summary: This makes the tracing features easier to use.
Reviewed By: DurhamG
Differential Revision: D19797703
fbshipit-source-id: fb5cb17cd389575cf0134a708bcd9df3b90e9ab4
Summary:
Somehow, enabling VT can fail when writing to the pager, but this doesn't
mean that the pager doesn't support VT mode, so let's just ignore the error
when the pager is active.
Reviewed By: DurhamG
Differential Revision: D20906374
fbshipit-source-id: 7cba52817bc8e4dc91d5d50e856ad8af7fc9542c
Summary: Print out a command that can be copied and executed to make `--keeptmp` more handy.
Reviewed By: sfilipco
Differential Revision: D20829140
fbshipit-source-id: 7976e3f64fd423425ec29634a53a34f7b5e091d0
Summary:
Add a command to print visibleheads. This was part of my attempt to check if
visibleheads can accidentally include public commits and if there is a way
to remove them. I ended up thinking D20808884 might actually solve the only case
that visibleheads include public heads.
I also tried to add strong verification so that the visibility layer never
writes public nodes. That's for non-narrow-heads use-cases. However, the
phasescache + repoview layer is messy: inside a transaction, only one
"repo" has the right in-memory, dirty "phasescache", and other repos will
load the (stale, wrong) phasescache from disk. That means if we test phases
in visibility before the transaction flushes, we won't be able to access the
latest phases information correctly. So I
gave up this approach too.
Anyway, I wasn't able to add a new interesting test, but the utility built for
the test seems useful. Therefore this change.
Reviewed By: sfilipco
Differential Revision: D20829136
fbshipit-source-id: 5ebafefac820ebb4044db63b7892ffaa341c0573
Summary:
Make the error cleaner and more actionable. We don't autopull the commit
because the revset layer might not be ready for it (ex. it expects the commit
graph to be immutable and might have done some calculations based on the
old graph already).
Reviewed By: sfilipco
Differential Revision: D20845159
fbshipit-source-id: c51f2f52c612ff14a88fb891c10d1faad1094635