---
layout: developer-doc
title: Runtime Guide
category: summary
tags: [contributing, guide, graal, truffle]
order: 7
---

# Runtime Guide

## GraalVM and Truffle

### Papers

  1. One VM To Rule Them All: a high-level overview of what GraalVM is and how it works.
  2. Practical Partial Evaluation for High-Performance Dynamic Language Runtimes: an introduction to basic Truffle concepts, including Polymorphic Inline Caches and other frequently used techniques.
  3. Fast, Flexible, Polyglot Instrumentation Support for Debuggers and other Tools: an introduction to the Truffle instrumentation framework; this is what Enso's runtime server for the IDE uses.
  4. Cross-Language Interoperability in a Multi-Language Runtime: an introduction to how Truffle cross-language interop works.
  5. The whole list of publications, because something there may be useful at some point.

### Tutorials

  1. The list of Truffle docs on specific topics: certain more advanced topics are covered in these; use them as needed.
    1. Optimizing Tutorial: you'll want to read this one for sure.
    2. TruffleLibrary Tutorial: this is an important architectural concept for building Truffle interpreters. We wish we had known about it sooner, and we recommend structuring the interpreter around it in the future.
  2. A tutorial on building a LISP in Truffle: it's a 4-part tutorial; the link points to part 4, but start with part 2 (part 1 is not about Truffle). This one is important: even though it is old and uses stale APIs, it still highlights the most important concepts, in particular the way Enso deals with lexical scopes and Tail Call Optimization.
  3. Simple Language: this is an implementation of a very simple toy language. Read it for a basic understanding of core Truffle concepts.

### Tips and Tricks

  1. Familiarize yourself with IGV. It's a horrible tool. It's clunky, ugly, and painful to use. It has also saved us more times than we can count, so it is definitely worth investing the time to understand it. Download Enso Language Support for IGV. Use this tutorial (and the follow-up post) to familiarize yourself with the graph representation.
  2. Use our sbt withDebug utility. Familiarize yourself with its different options. It is a useful helper for running your programs and microbenchmarks with different Truffle debugging options.
  3. Use hsdis for printing the generated assembly; you can often spot obvious problems with compilations. That being said, IGV (with Enso Language Support) is usually the better tool if you take a look at the later compilation stages.
  4. Pay attention to making things final and @CompilationFinal. This is the most important way Graal does constant-folding. Whenever a loop bound can be compilation final, take advantage of it (and use @ExplodeLoop); see the first sketch after this list.
  5. Read the code generated for DSL nodes. Learning the DSL is quite difficult and the documentation is sorely lacking. It is best to experiment with different kinds of @Specialization and read the generated code; a small node to experiment with is sketched after this list. Without this understanding, it's way too easy to introduce very subtle bugs into the language semantics.
  6. Join the GraalVM Slack server. All the authors are there and they will happily help and answer any questions.
  7. Be aware that Truffle Instrumentation is more constrained than it could be, because it wants to be language-agnostic. The Enso runtime server is Enso-specific, and therefore you may be better served in the future by rolling your own instrumentation. Read the instrumentation sources; it will help you understand how non-magical it actually is.
  8. Clone the sources of Truffle and TruffleRuby. Set them up as projects in your IDE. Read the code when in doubt. Truffle documentation is sometimes really lacking, even though it is improving.
  9. Understand the boundary between the language-side APIs (see e.g. InteropLibrary) and the embedder side (see Value). You want to make sure you use the proper APIs in the proper places in the codebase. As a rule of thumb: all code in the runtime project is language/instrumentation-side; all code elsewhere is embedder-side. In particular, the only Graal dependency in embedder code should be graal-sdk. If you find yourself pulling in things like truffle-api, you've done something wrong. Similarly, if you ever import anything from org.graalvm.polyglot in the language code, you're doing something wrong.
  10. Avoid deoptimizations. Understanding IGV graphs can be a very time-consuming and complex process. Sometimes it is sufficient to only look at the compilation traces to discover repeated or unnecessary deoptimizations, which can significantly affect the overall performance of your program. You can tell the runner to generate compilation traces via additional options:
    JAVA_OPTS="-Dpolygot.engine.TracePerformanceWarnings=all -Dpolyglot.engine.TraceTransferToInterpreter=true -Dpolyglot.engine.TraceDeoptimizeFrame=true -Dpolyglot.engine.TraceCompilation=true -Dpolyglot.engine.TraceCompilationDetails=true"
    
    Make sure you print trace logs by using --log-level TRACE.
  11. Occasionally a piece of code runs slower than anticipated. Analyzing Truffle inlining traces may reveal locations that one thought would be inlined but that Truffle decided otherwise. Rewriting such locations as builtin methods, or into a more inliner-friendly representation, can significantly improve performance. You can tell the runner to generate inlining traces via additional options:
    JAVA_OPTS="-Dpolyglot.engine.TraceInlining=true -Dpolyglot.engine.TraceInliningDetails=true"
    
    Make sure you print trace logs by using --log-level TRACE. See the documentation for an explanation of the inlining decisions.
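
To make item 4 concrete, here is a minimal, hypothetical sketch (the node and field names are invented, not taken from the Enso codebase) of how a compilation-final loop bound and @ExplodeLoop work together:

```java
import com.oracle.truffle.api.frame.VirtualFrame;
import com.oracle.truffle.api.nodes.ExplodeLoop;
import com.oracle.truffle.api.nodes.Node;

// Hypothetical node that sums a fixed set of child expressions. The children
// array is a final field, so its length is a constant for the compiler
// (@CompilationFinal gives the same guarantee for fields that cannot be
// declared final), and @ExplodeLoop lets Graal fully unroll the loop in
// compiled code, turning each child call into a direct, inlinable call.
final class SumChildrenNode extends Node {

  abstract static class ExpressionNode extends Node {
    abstract long executeLong(VirtualFrame frame);
  }

  @Children private final ExpressionNode[] children;

  SumChildrenNode(ExpressionNode[] children) {
    this.children = children;
  }

  @ExplodeLoop
  long executeSum(VirtualFrame frame) {
    long result = 0;
    // The loop bound is compilation final, which is what makes @ExplodeLoop
    // safe here; a non-constant bound must not be exploded.
    for (int i = 0; i < children.length; i++) {
      result += children[i].executeLong(frame);
    }
    return result;
  }
}
```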
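
For item 5, here is a tiny, hypothetical node (not from the Enso codebase) to experiment with; run the annotation processor and read the generated AddNodeGen class to see what the specialization dispatch actually looks like:

```java
import com.oracle.truffle.api.dsl.Fallback;
import com.oracle.truffle.api.dsl.Specialization;
import com.oracle.truffle.api.nodes.Node;

// The Truffle DSL turns this abstract node into a generated AddNodeGen class
// whose execute method dispatches between the specializations below,
// rewriting itself as new argument shapes are seen at runtime. Reading that
// generated code is the best way to understand which specialization "wins"
// and when the node goes generic.
abstract class AddNode extends Node {

  abstract Object execute(Object left, Object right);

  @Specialization
  long addLongs(long left, long right) {
    return left + right;
  }

  @Specialization
  double addDoubles(double left, double right) {
    return left + right;
  }

  @Fallback
  Object addOther(Object left, Object right) {
    // Slow path for argument combinations not covered above.
    throw new UnsupportedOperationException("cannot add " + left + " and " + right);
  }
}

// Usage (in some parent node): @Child private AddNode add = AddNodeGen.create();
```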

Code & Internal Documentation Map

Other than the subsections here, go through the existing documentation.

### Entry Points

  1. See Main in engine-runner and Language in runtime. The former is the embedder-side entry point, the latter the language-side one. They do a bit of ping-pong through the polyglot APIs. That is unfortunate, as this API is stringly typed. Therefore, chase the usages of method-name constants to jump between the language-side implementations and the embedder-side calls. Alternatively, step through the flow in a debugger. A minimal embedder-side sketch follows this list.
  2. Look at the MainModule in language-server and RuntimeServerInstrument in runtime. This is the entry point for the IDE, with the language/embedder boundary as usual, but with a server-like message exchange instead of polyglot API use.
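
To make the embedder side concrete, here is a minimal, illustrative sketch using only graal-sdk classes (Context, Source, Value); the program text and the "main" member lookup are placeholders, not the actual constants used by Main:

```java
import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Source;
import org.graalvm.polyglot.Value;

// Embedder-side code depends only on graal-sdk, never on truffle-api.
public final class EmbedderSketch {
  public static void main(String[] args) {
    try (Context context = Context.newBuilder("enso").allowAllAccess(true).build()) {
      Source source = Source.create("enso", "main = 1 + 2");
      Value module = context.eval(source);
      // Members are looked up by string name; this is the "stringly typed"
      // part of the polyglot API that the method-name constants refer to.
      Value result = module.invokeMember("main");
      System.out.println(result);
    }
  }
}
```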

### Compiler

Look at Compiler in runtime. It is the main compiler class and the flow should be straightforward. A high-level overview: the compiler alternates between running module-local passes (currently in 3 groups) and global join points, where information flows between modules.

### Interpreter

There are a few very convoluted spots in the interpreter, with non-trivial design choices. Here's a list with some explanations:

  1. Function Call Flow: It is quite difficult to call an Enso function efficiently. Enso allows passing arguments by name, supports currying and eta-expansion, and has defaulted argument values. The call logic also has to deal with polyglot method calls, and it has to be instrumentable to enable the "enter a method via call site" functionality of the IDE. Start reading from ApplicationNode and follow the execute methods (or @Specializations). There are a lot of them, but don't get too scared. The flow is also outlined here.
  2. Polyglot Code: While for some languages (Java, Ruby and Python) it is straightforward and very Truffle-like, for others (JS and R) it becomes tricky. The reason is that Truffle places strong limitations on threading in these languages, and it is impossible to call JS and R from a multithreaded language context (like Enso's). For this reason, we have a special, internal sub-language, running on 2 separate Truffle contexts, exposing the single-threaded languages in a safe way (through a GIL). The language is called EPB (Enso Polyglot Bridge) and lives in this subtree. To really understand it, you'll need to familiarize yourself with what a TruffleContext is and how it relates to polyglot and language contexts (oh, and also get ready to work with about 7 different meanings of the word Context...).
  3. Threading & Safepoints: Enso has its own safepointing system and a thread manager. The job of the thread manager is to halt all the executing threads when needed. Safepoints are polled during normal code execution (usually at the start of every non-inlined method call and at each iteration of a TCO loop); a small polling sketch follows this list. See the source.
  4. Resource Finalization: Enso exposes a system for automatic resource finalization. This is non-trivial on the JVM and is handled in the ResourceManager.
  5. Builtin Definitions: Certain basic functions and types are exposed directly from the interpreter. They are currently all bundled in a virtual module called Standard.Builtins. See the Builtins class to see how that module is constructed. There's also a Java-side, annotation-driven DSL for automatic generation of builtin method boilerplate. See the nodes in this tree to get an idea of how it works. Also read the doc.
  6. Standard Library Sources: These are very non-magical: just plain old Enso projects that get shipped with every compiler release. They live in this tree and are tested through these projects. The standard library also makes heavy use of host interop; the Java methods it uses are located in this directory.
  7. Microbenchmarks: There are microbenchmarks covering tiny Enso programs that exercise basic language constructs. They are located in this directory and can be run through sbt runtime/bench. Each run will generate (or append to) the bench-report.xml file, and the suite fails if any benchmark is more than 20% slower than the fastest recorded run. Don't use your computer for anything else while running these. They can also be run through the withDebug utility, which allows you to test Truffle compilations (and e.g. watch the graphs in IGV with Enso Language Support).
  8. Tests: There are Scalatest suites that comprehensively test all of the language semantics and compiler passes. These are run with sbt runtime/test. For newer functionality, we prefer adding tests to the Tests project in the standard library tests; at this point, Enso is mature enough to self-test.
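
For the safepoints item above, here is a minimal sketch of cooperative polling using the Truffle safepoint API (TruffleSafepoint.poll); the node and its logic are invented for illustration and are not how Enso's own safepointing system is implemented:

```java
import com.oracle.truffle.api.TruffleSafepoint;
import com.oracle.truffle.api.frame.VirtualFrame;
import com.oracle.truffle.api.nodes.Node;

// Hypothetical loop node. Long-running interpreter loops (such as a TCO
// trampoline) should poll a safepoint on every iteration so that pending
// thread-local actions (for example, a request to halt all threads) get a
// chance to run between iterations.
final class CountingLoopNode extends Node {

  long execute(VirtualFrame frame, long iterations) {
    long acc = 0;
    for (long i = 0; i < iterations; i++) {
      acc += i;
      // Cooperative safepoint poll, attributed to this node for tooling.
      TruffleSafepoint.poll(this);
    }
    return acc;
  }
}
```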

### Language Server

Talk to Dmitry! He's the main maintainer of this part.