Store a reference to the newly created execution context in an aptly
named variable, rename global_this_value to just this_value, and only
call set_global_object() in a single place.
It makes no sense to require passing a global object and doing a stack
space check in cases where running out of stack is highly unlikely, where
we can't recover from errors, and where we currently ignore the result
anyway. This is most commonly the case in constructors and when setting
things up, rather than in regular function calls.
This is taken from the abandoned error stacks proposal, which
already serves as the source of truth for the setter. It only requires
the this value to be an object - if it's not an Error object, the getter
returns undefined.
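A minimal sketch of the observable behavior, assuming stack is installed
as an accessor on Error.prototype as the proposal describes:

```
// Sketch: assumes the accessor-based stack from the error stacks proposal.
const { get } = Object.getOwnPropertyDescriptor(Error.prototype, "stack");

typeof get.call(new Error("boom")); // "string" - a backtrace for Error objects
get.call({});                       // undefined - an object, but not an Error
typeof new Error("boom").stack;     // "string" - same getter via property access
```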
I have not compared this behavior to the non-standard implementations of
the stack property in other engines, but presumably the spec authors
already did that work.
This change gets the Sentry browser SDK working to a point where it can
actually send uncaught exceptions via the API :^)
By using the same NativeFunction constructor as plain ErrorConstructor
and passing the name, TypeError & co. will now include their name in
backtraces and such.
Eventually we should probably rely on [[InitialName]] for this, but for
now that's how it works.
In object binding patterns, we would attempt to get a
NonnullRefPtr<Identifier> from alias on the alias.has<Empty>() code path.
In this case, we need to get it from name instead.
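For illustration (my reading of the alias.has<Empty>() case): shorthand
properties in an object binding pattern have no alias, so the binding's
identifier has to come from name rather than alias:

```
// Hypothetical examples of the two code paths:
const { foo } = { foo: 1 };      // no alias - identifier comes from "name"
const { foo: bar } = { foo: 2 }; // aliased  - identifier comes from "alias"
```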
The update block can generate bytecode that refers to the lexical
environment, so we have to end the scope after it has been generated.
Previously the Jump to the update block would terminate the block,
causing us to leave the lexical environment just before jumping to the
update block.
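A small example of the situation (my own, not taken from the change):
the update expression refers to the loop's lexical binding, so the scope
has to stay open until the update block's bytecode has been generated:

```
for (let i = 0; i < 3; ++i) {
    // The update block's "++i" reads and writes the lexical binding "i",
    // so leaving the lexical environment before it would be wrong.
}
```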
After we terminate a block (e.g. break, continue), we cannot generate
any more bytecode for the block. This caused us to crash with this
example code:
```
a = 0;
switch (a) {
    case 0:
        break;
        console.log("hello world");
}
```
Anything after a block terminating instruction is considered
unreachable code, so we can safely skip any statements after it.
Listing all the registers will lead to the inability to allocate enough
space in one basic block (as there can be an arbitrary number of
registers used). Instead, switch to specifying the range of registers
used and save a lot of space in the process.
This follows how the regular AST interpreter creates arrays, as
Array::create_from uses create_data_property_or_throw, which will crash
when it encounters an empty value. We require empty values to represent
array holes.
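For reference, the holes in question come from elisions in array
literals; they are distinct from elements whose value is undefined:

```
const arr = [1, , 3]; // index 1 is a hole, not the value undefined
1 in arr;             // false - no property exists at index 1
arr.length;           // 3
```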
This will leave any lexical/variable environments on the way to the
closest unwind context boundary.
This will not leave the closest unwind context, as we still need the
unwind context to perform the Throw instruction correctly.
When we reach a block terminating instruction (e.g. Break, Throw),
we cannot generate any more instructions after it, which previously left
no way to emit the instructions that leave lexical/variable environments.
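A sketch of the kind of code this matters for (my own minimal example):
a break out of nested lexical scopes has to leave those environments
before the jump can be taken:

```
for (let i = 0; i < 3; ++i) {
    {
        let inner = i;   // a nested lexical environment
        if (inner === 1)
            break;       // must leave the block scope and the loop's
                         // per-iteration scope before jumping out
    }
}
```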
This uses the mechanism introduced in ba9c49 to unwind environments
when we encounter these instructions.
For example, a try/catch block with no finally. The try block and catch
block do not need to unwind to a finally block, so the unwind context
is no longer needed when we jump to the catch block.
If we threw an exception in a catch block of a try/catch, there will be
no handler or finalizer and the unit would continue on as if nothing
happened.
This would subsequently crash with the `m_saved_exception.is_null()`
assertion failure when we next call a non-native function.
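A hedged reproduction of the failure mode as described (my own example):

```
try {
    throw new Error("first");
} catch (e) {
    throw new Error("second"); // previously swallowed: no handler or
                               // finalizer left, so the unit carried on
                               // with a stale saved exception
}
```

Any subsequent call to a non-native function would then hit the
m_saved_exception.is_null() assertion.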
Previously we would only end these scopes if the block was not
terminated. If the block was terminated, we would not end the scope
and would generate other bytecode with these scopes still open.
These functions do not generate any code, so they can be used even if
the current block is terminated. The enter and end scope functions are
only used to track where to unwind to when break/continue are used.
The steps for creating a DeclarativeEnvironment for each iteration of a
for-loop can be done equivalently to the spec without following the spec
directly. For each binding created in the loop's init expression, we:
1. Create a new binding in the new environment.
2. Grab the current value of the binding in the old environment.
3. Set the value in the new environment to the old value.
This can be replaced by initializing the bindings vector in the new
environment directly with the bindings in the old environment (but only
copying the bindings of the init statement).
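For context, the observable semantics both approaches implement (a
standard example, not from the change): each iteration gets its own copy
of the init bindings, carried over from the previous iteration:

```
const callbacks = [];
for (let i = 0; i < 3; ++i)
    callbacks.push(() => i); // each closure captures that iteration's "i"

callbacks.map((f) => f());   // [0, 1, 2], not [3, 3, 3]
```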
This was not handling the nullary call case correctly; remove the whole
nullary check as there's nothing particularly expensive in the catch-all
case anyway.
On the code path where we are setting a TypedArray from another
TypedArray of the same type, we forgo the spec text and simply do a
memmove between the two ArrayBuffers. However, we forgot to apply
the source's byte offset on this code path.
This meant if we tried setting a TypedArray from a TypedArray we got
from .subarray(), we would still copy from the start of the subarray's
ArrayBuffer.
This is because .subarray() returns a new TypedArray with the same
ArrayBuffer, but with a smaller length and a byte offset that the rest
of the codebase is responsible for applying.
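A small reproduction of the bug as described (values chosen for
illustration only):

```
const source = new Uint8Array([1, 2, 3, 4, 5, 6, 7, 8]);
const tail = source.subarray(4); // same ArrayBuffer, byteOffset 4, length 4

const target = new Uint8Array(4);
target.set(tail);

// Expected [5, 6, 7, 8]; before the fix, the copy started at the beginning
// of the shared ArrayBuffer and produced [1, 2, 3, 4] instead.
Array.from(target);
```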
This affected pako when it was decompressing a zlib stream that has
multiple zlib chunks in it. To read from the second chunk, it would
set the zlib window TypedArray from the .subarray() of the chunk offset
in the stream's TypedArray. This effectively made the decompressed data
from the second chunk a mishmash of old data that looked completely
scrambled. It would also cause all future decompression using the same
pako Inflate instance to also appear scrambled.
As a pako comment aptly puts it:
> Call updatewindow() to create and/or update the window state.
> Note: a memory error from inflate() is non-recoverable.
This allows us to properly decompress the large compressed payloads
that Discord Gateway sends down to the Discord client. For example,
for an account that's only in the Serenity Discord, one of the payloads
is a 20 KB zlib compressed blob that has two chunks in it.
Surprisingly, this is not covered by test262! I imagine this would have
been caught earlier if there was such a test :^)