LibLine should ultimately not care about what a "token" means in the
context of its user, so force the user to split the buffer itself.
This also allows users to pick up contextual clues, since they have to
lex the line themselves.
This commit patches Shell and the JS repl to better handle completions,
so certain incorrect behaviours are now fixed as well:
- JS repl can now complete "Object . getOw<tab>"
- Shell can now complete "echo | ca<tab>" and paths inside strings
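As a rough illustration of what clients now do themselves (this is a
hypothetical helper, not the actual LibLine API): given the raw buffer
and the cursor offset, the client locates the span of the token under
the cursor and offers completions for just that span.

#include <AK/String.h>
#include <ctype.h>

// Hypothetical sketch: walk backwards from the cursor to the previous
// whitespace; everything in between is the token being completed.
static String token_under_cursor(const String& line, size_t cursor)
{
    size_t start = cursor;
    while (start > 0 && !isspace(line[start - 1]))
        --start;
    return line.substring(start, cursor - start);
}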
Make sure that userspace is always referencing "system" headers in a way
that would build on target :). This means removing the explicit
include_directories of Libraries/LibC in favor of having it export its
headers as SYSTEM. Also remove a redundant include_directories of
Libraries in the 'serenity build' part of the build script. It's already
set at the top.
This causes issues for the Kernel, and for crt0.o. These special cases
are handled individually.
This patch is unfortunately rather large and might make some things feel
bloated, but it is necessary to fix a few flaws in LibJS, primarily
blindly coercing values to numbers without exception checks, e.g.:
interpreter.argument(0).to_i32(); // can fail!!!
Some examples where the interpreter would actually crash:
var o = { toString: () => { throw Error() } };
+o;
o - 1;
"foo".charAt(o);
"bar".repeat(o);
To fix this, we now have the following...
to_double(Interpreter&)
to_i32()
to_i32(Interpreter&)
to_size_t()
to_size_t(Interpreter&)
...and a whole lot of exception checking.
There's intentionally no to_double(); use as_double() directly instead.
This way we still can use these convenient utility functions but don't
need to check for exceptions if we are sure the value already is a
number.
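A typical call site then follows this pattern (a sketch, using the
interpreter.exception() check as elsewhere in LibJS):

auto count = interpreter.argument(0).to_size_t(interpreter);
if (interpreter.exception())
    return {};
// Safe to use 'count' from here on; the coercion did not throw.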
Fixes #2267.
Passing a Heap& to it only to then call interpreter() on that is weird.
Let's just give it the Interpreter& directly, like some of the other
to_something() functions.
This was supposed to be the foundation for some kind of pre-kernel
environment, but nobody is working on it right now, so let's move
everything back into the kernel and remove all the confusion.
We will now actually use MIME types for the clipboard. The default type is now
"text/plain" (instead of just "text").
This also fixes some issues in copy(1) and paste(1).
There are now two APIs on Value:
- Value::to_string(Interpreter&) -- may throw.
- Value::to_string_without_side_effects() -- will never throw.
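Roughly, call sites choose between the two like this (a sketch; the
exception check mirrors the numeric conversions):

// May run user code (e.g. a toString() method), so check for an exception.
auto string = value.to_string(interpreter);
if (interpreter.exception())
    return {};

// Never runs user code; suitable for debug output and error messages.
dbg() << "Value is " << value.to_string_without_side_effects();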
These are some pretty big sweeping changes, so it's possible that I did
some part the wrong way. We'll work it out as we go. :^)
Fixes #2123.
Giving the lexer the ability to generate errors adds unnecessary
complexity; besides, it only calls its syntax_error() function in one
place anyway ("unterminated string literal"). But since the lexer *also* emits
tokens like Eof or UnterminatedStringLiteral, it should be up to the
consumer of these tokens to decide what to do.
Also remove the option to not print errors to stderr as that's not
relevant anymore.
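For example, a consumer can now react to the token kind itself; a rough
sketch (the error-reporting call stands in for whatever mechanism the
consumer has):

auto token = lexer.next();
if (token.type() == TokenType::UnterminatedStringLiteral) {
    // It's now the parser's (or REPL's) decision what this means.
    syntax_error("Unterminated string literal");
}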
The unzip command will unzip a zip file passed as an argument into the
current working directory, with the following syntax:
unzip file.zip
This implementation is pretty barebones, as it does not support things
like file access times, compression, or even compression detection, so
a user who tries to unzip a compressed zip will most likely find
corrupted data inside the extracted files.
However, it's a starting point :^)
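For reference, a minimal sketch of the kind of structure involved (not
the actual implementation): each stored entry is preceded by a local
file header whose compression method field is 0 for uncompressed data.

#include <AK/Types.h>

struct [[gnu::packed]] LocalFileHeader {
    u32 signature;          // 0x04034b50
    u16 minimum_version;
    u16 flags;
    u16 compression_method; // 0 means "stored", i.e. no compression
    u16 modification_time;
    u16 modification_date;
    u32 crc32;
    u32 compressed_size;
    u32 uncompressed_size;
    u16 name_length;
    u16 extra_field_length;
    // followed by the file name, the extra field, and the file data
};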
Moves DirectoryServices out of LibCore (because we need to link with
LibIPC), renames it Desktop::Launcher (because Desktop::DesktopServices
doesn't scan right), and ports it to use the LaunchServer, which is now
responsible for starting programs for a given file.
Adds fully functioning template literals. Because template literals
contain expressions, most of the work has to be done in the Lexer rather
than the Parser. And because of the complexity of template literals
(expressions, nesting, escapes, etc), the Lexer needs to have some
template-related state.
When entering a new template literal, a TemplateLiteralStart token is
emitted. When inside a literal, all text will be parsed up until a '${'
or '`' (or EOF, but that's a syntax error) is seen, and then a
TemplateLiteralExprStart token is emitted. At this point, the Lexer
proceeds as normal, however it keeps track of the number of opening
and closing curly braces it has seen in order to determine the close
of the expression. Once it finds a matching curly brace for the '${',
a TemplateLiteralExprEnd token is emitted and the state is updated
accordingly.
When the Lexer is inside of a template literal, but not an expression,
and sees a '`', this must be the closing backtick: a TemplateLiteralEnd
token is emitted.
The state required to correctly parse template strings consists of a
vector (for nesting) of two pieces of information: whether or not we
are in a template expression (as opposed to a template string); and
the count of the number of unmatched open curly braces we have seen
(only applicable if the Lexer is currently in a template expression).
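In other words, the per-nesting-level state boils down to something
like this (a sketch of the shape, not the exact member names):

struct TemplateState {
    bool in_expression { false }; // inside '${ ... }' rather than literal text
    size_t open_brackets { 0 };   // unmatched '{' seen inside the expression
};
Vector<TemplateState> m_template_states; // one entry per nesting level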
TODO: Add support for newlines in template literals in the JS REPL (this
currently causes a syntax error):
> `foo
> bar`
'foo
bar'
We're starting with a very basic decoding API and only ISO-8859-1 and
UTF-8 decoding (and UTF-8 decoding is really a no-op, since String is
expected to be UTF-8).
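ISO-8859-1 is the simple case: every byte maps directly to the Unicode
code point of the same value, so decoding is just re-encoding each byte
as UTF-8. A rough sketch (the function name is illustrative):

#include <AK/String.h>
#include <AK/StringBuilder.h>
#include <AK/Types.h>

String decode_latin1(const u8* data, size_t length)
{
    StringBuilder builder;
    for (size_t i = 0; i < length; ++i) {
        u8 byte = data[i];
        if (byte < 0x80) {
            builder.append((char)byte);
        } else {
            // Two-byte UTF-8 sequence for U+0080..U+00FF.
            builder.append((char)(0xc0 | (byte >> 6)));
            builder.append((char)(0x80 | (byte & 0x3f)));
        }
    }
    return builder.to_string();
}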
We now store the response headers in a download object on the protocol
server side and pass it to the client when finishing up a download.
Response headers are passed as an IPC::Dictionary. :^)
Only being able to complete enumerable properties is annoying,
especially since we updated everything to use the correct attributes.
Most properties of standard built-in objects are *not* enumerable.