This will help cases where a broken Node installation causes Copilot to fail to
start; because it doesn't install via NPM, we would not have caught this
previously.
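For illustration only, here's a rough sketch of what a check along these lines could look like; the `node_looks_healthy` helper and the `node --version` probe are assumptions for the example, not necessarily the actual detection this change adds:

```rust
use std::process::Command;

// Hypothetical sketch: treat a Node binary as broken unless `node --version`
// runs successfully and prints a plausible version string.
fn node_looks_healthy(node_binary: &str) -> bool {
    match Command::new(node_binary).arg("--version").output() {
        // A healthy install exits successfully and prints something like "v18.17.0".
        Ok(output) => output.status.success() && output.stdout.starts_with(b"v"),
        // Failing to spawn at all (missing binary, bad permissions, broken
        // runtime) also counts as a broken installation.
        Err(_) => false,
    }
}

fn main() {
    println!("node healthy: {}", node_looks_healthy("node"));
}
```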
Release Notes:
- Improved detection of broken Node installation impacting Copilot
([#1551](https://github.com/zed-industries/community/issues/1551)).
Co-Authored-By: Antonio Scandurra <me@as-cii.com>
Code snippet
```rust
fn main() {
//√√√√√√√√√√√√√√√√√√√√√√√√√√√√√√√√√√√√√√√√√√√√√√√√√√√√
}
```
has a length of 191 bytes but consists of only 87 chars, and the debug code
calling `.truncate(100)` panicked because byte index 100 fell in the middle of
a multi-byte character (`String::truncate` requires the new length to lie on a
char boundary).
Fixed that issue, cc @KCaverly
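For reference, a minimal sketch of boundary-safe truncation, assuming a hypothetical `truncate_to_boundary` helper rather than the exact fix that landed:

```rust
// Hypothetical helper: truncate to at most `max_len` bytes without panicking.
fn truncate_to_boundary(s: &mut String, max_len: usize) {
    if s.len() <= max_len {
        return;
    }
    // Walk back to the nearest char boundary; `String::truncate` panics if
    // the new length lands inside a multi-byte character.
    let mut idx = max_len;
    while !s.is_char_boundary(idx) {
        idx -= 1;
    }
    s.truncate(idx);
}

fn main() {
    // "√" is 3 bytes in UTF-8, so byte index 100 lands mid-character here
    // and a plain `s.truncate(100)` would panic.
    let mut s = format!("fn main() {{\n//{}\n}}", "√".repeat(52));
    truncate_to_boundary(&mut s, 100);
    assert!(s.len() <= 100);
}
```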
Release Notes:
- N/A
Fixes:
https://linear.app/zed-industries/issue/Z-2416/improvements-to-feedback-submission
We get a lot of duplicate messages through our in-app feedback. My best guess
is that because we do not tell the user we are doing anything, and because
submission takes a while, users are hitting the submit button multiple times.
This PR blocks the submission code once an initial submission is sent. If the
original submission fails, we unblock the submission code. The submit button is
disabled and enabled accordingly as well.
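Roughly the shape of that guard as a standalone sketch; the `FeedbackEditor` struct and `send_feedback` call are stand-ins for illustration, not the actual Zed types:

```rust
// Hypothetical sketch of an in-flight flag guarding against duplicate submissions.
struct FeedbackEditor {
    submission_in_flight: bool,
}

impl FeedbackEditor {
    fn handle_submit(&mut self) {
        // Ignore repeated clicks while a submission is already in flight;
        // the real UI also disables the submit button at this point.
        if self.submission_in_flight {
            return;
        }
        self.submission_in_flight = true;
        if send_feedback().is_err() {
            // Unblock (and re-enable the button) so the user can retry.
            self.submission_in_flight = false;
        }
    }
}

// Stand-in for the real network request.
fn send_feedback() -> Result<(), String> {
    Ok(())
}

fn main() {
    let mut editor = FeedbackEditor { submission_in_flight: false };
    editor.handle_submit();
    editor.handle_submit(); // second click is a no-op while the first is pending
}
```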
Release Notes:
- N/A
Spent a bit of time on a deep dive into how to handle this, and honestly the
situation is rather unfortunate. The core problem is that when we have a panic
anywhere, we need to tear down the app, and we'd like to do that as cleanly as
possible, avoiding throwing any other panics along the way.
We've been seeing a number of reported panics that are nonsensical, seemingly
fallout from a worker thread panicking: at that point we would write multiple
panics to the panic file and could upload either both or the wrong one, sending
us on a wild goose chase. Unfortunately I've been entirely unable to reproduce
the specific panic we've been seeing, but I was able to read through the code
responsible and confirm that under specific situations a panic on one worker
can cause another worker or the main thread to also panic.
An easy solution to this is to just ignore any panics after the first one. I'm
thinking that *hopefully* we can trust the first panic to reach the panic hook
first, so that the flag doesn't accidentally filter out the panic we actually
care about.
That said, we were already expecting the first panic to be the first one
written to the panic file (and the first one in the file is the one we upload),
which doesn't seem to have been the case. I'm hoping it was IO silliness
causing that and that the flag shouldn't be racy; still, this is a shot in the
dark. 🤞
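For illustration, a minimal sketch of the first-panic-wins flag; this is not the actual hook (the real one writes the panic and a backtrace to the panic file rather than stderr):

```rust
use std::panic;
use std::sync::atomic::{AtomicBool, Ordering};

// Only the first thread to reach the hook records its panic; fallout panics
// from other threads are dropped so they can't pollute the panic file.
static PANIC_RECORDED: AtomicBool = AtomicBool::new(false);

fn main() {
    panic::set_hook(Box::new(|info| {
        // `swap` returns the previous value, so only the first caller sees `false`.
        if PANIC_RECORDED.swap(true, Ordering::SeqCst) {
            return;
        }
        // A real hook would serialize `info` plus a backtrace to the panic file.
        eprintln!("recorded panic: {info}");
    }));

    // The first panic is recorded; the second is silently ignored by the hook.
    let _ = std::thread::spawn(|| panic!("worker panic")).join();
    let _ = panic::catch_unwind(|| panic!("fallout panic"));
}
```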
As for cleanly shutting down, there's not really much we can do. One thread
physically cannot cause another to unwind without somehow sending it a message,
which isn't super useful. The only way for a thread to shut down all threads
and the process is to go nuclear and abort/exit the process. This will never
unwind the other threads, effectively having the same effect on them as
compiling with `panic = "abort"` would.
With some (mis)use of `std::panic::resume_unwind` we can at least say that
whatever thread actually panicked will unwind, and any other threads that panic
as a result will probably get at least partway through unwinding. This is
weird, almost a combination of panic unwinding and aborting, and may actually
be worse than just biting the bullet and aborting immediately.
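Here's a rough standalone sketch of the `resume_unwind` idea; `run_worker` is a hypothetical wrapper, not Zed's actual shutdown path, and a real implementation would still have to abort/exit the process afterwards since the other threads can't be unwound from here:

```rust
use std::panic::{self, AssertUnwindSafe};

// Hypothetical wrapper: catch the panic on the thread that actually panicked,
// record it, then resume the original unwind so this thread tears down normally.
fn run_worker(work: impl FnOnce()) {
    if let Err(payload) = panic::catch_unwind(AssertUnwindSafe(work)) {
        // Record/upload the panic here (first-panic flag omitted for brevity),
        // then continue unwinding this thread with the original payload.
        panic::resume_unwind(payload);
    }
}

fn main() {
    let handle = std::thread::spawn(|| run_worker(|| panic!("worker panic")));
    // The worker thread unwound; `join` surfaces its panic payload here.
    assert!(handle.join().is_err());
}
```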
I'm really not a fan of where I've ended up, but it does seem to be, at the
very least, an improvement. The main question in my mind at this point is
whether it would be better to attempt to unwind what we can or to go all in on
abort. I'd love some input on that.
Release Notes:
- Improved panic reporting when a background thread panics.