Merge pull request #1748 from rtfeldman/wasm_stack_memory

Wasm stack memory
Brian Carroll 2021-09-29 19:05:46 +01:00 committed by GitHub
commit 1fe20b0422
7 changed files with 1037 additions and 526 deletions


@ -3,48 +3,57 @@
## Plan
- Initial bringup
- Get a wasm backend working for some of the number tests.
- Use a separate `gen_wasm` directory for now, to avoid trying to do bringup and integration at the same time.
- Improve the fundamentals
- [x] Get a wasm backend working for some of the number tests.
- [x] Use a separate `gen_wasm` directory for now, to avoid trying to do bringup and integration at the same time.
- Get the fundamentals working
- [x] Come up with a way to do control flow
- [x] Flesh out the details of value representations between local variables and stack memory
- [ ] Set up a way to write tests with any return value rather than just i64 and f64
- [ ] Figure out relocations for linking object files
- [ ] Think about the Wasm module builder library we're using, are we happy with it?
- [x] Set up a way to write tests with any return value rather than just i64 and f64
- [x] Implement stack memory
- [x] Push and pop stack frames
- [x] Deal with returning structs
- [x] Distinguish which variables go in locals, own stack frame, caller stack frame, etc.
- [ ] Ensure early Return statements don't skip stack cleanup
- [ ] Vendor-in parity_wasm library so that we can use `bumpalo::Vec`
- [ ] Implement relocations
- Requires knowing the _byte_ offset of each call site. This is awkward as the backend builds a `Vec<Instruction>` rather than a `Vec<u8>`. It may be worth serialising each instruction as it is inserted.
- Refactor for code sharing with CPU backends
- [ ] Implement a `scan_ast` pre-pass like `Backend` does, but for reusing Wasm locals rather than CPU registers
- [ ] Extract a trait from `WasmBackend` that looks as similar as possible to `Backend`, to prepare for code sharing
- [ ] Refactor to actually share code between `WasmBackend` and `Backend` if it seems feasible
- Integration
- Move wasm files to `gen_dev/src/wasm`
- Share tests between wasm and x64, with some way of saying which tests work on which backends, and dispatching to different eval helpers based on that.
- Get `build_module` in object_builder.rs to dispatch to the wasm generator (adding some Wasm options to the `Triple` struct)
- Get `build_module` to write to a file, or maybe return `Vec<u8>`, instead of returning an Object structure
- Code sharing
- Try to ensure that both Wasm and x64 use the same `Backend` trait so that we can share code.
- We need to work towards this after we've progressed a bit more with Wasm and gained more understanding and experience of the differences.
- We will have to think about how to deal with the `Backend` code that doesn't apply to Wasm. Perhaps we will end up with more traits like `RegisterBackend` / `StackBackend` or `NativeBackend` / `WasmBackend`, and perhaps even some traits to do with backends that support jumps and those that don't.
## Structured control flow
🚨 **This is an area that could be tricky** 🚨
One of the security features of WebAssembly is that it does not allow unrestricted "jumps" to anywhere you like. It does not have an instruction for that. All of the [control instructions][control-inst] can only implement "structured" control flow, and have names like `if`, `loop`, `block` that you'd normally associate with high-level languages. There are branch (`br`) instructions that can jump to labelled blocks within the same function, but the blocks have to be nested in sensible ways.
[control-inst]: https://webassembly.github.io/spec/core/syntax/instructions.html#control-instructions
Implications:
This way of representing control flow is similar to parts of the Roc AST like `When`, `If` and `LetRec`. But Mono IR converts this to jumps and join points, which are more of a Control Flow Graph than a tree. We need to map back from graph to a tree again in the Wasm backend.
Roc, like most modern languages, is already enforcing structured control flow in the source program. Constructs from the Roc AST like `When`, `If` and `LetRec` can all be converted straightforwardly to Wasm constructs.
Our solution is to wrap all joinpoint/jump graphs in an outer `loop`, with nested `block`s inside it.
However the Mono IR converts this to jumps and join points, which are more of a Control Flow Graph than a tree. That doesn't map so directly to the Wasm structures. This is such a common issue for compiler back-ends that the WebAssembly compiler toolkit `binaryen` has an [API for control-flow graphs][cfg-api]. We're not using `binaryen` right now. It's a C++ library, though it does have a (very thin and somewhat hard-to-use) [Rust wrapper][binaryen-rs]. We should probably investigate this area sooner rather than later. If relooping turns out to be necessary or difficult, we might need to switch from parity_wasm to binaryen.
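As a rough sketch of that wrapping scheme (hypothetical output, not necessarily what the backend emits today): each join point becomes the code just after the `end` of a `block`, each jump to it becomes a `br`/`br_if` targeting that block, and the outer `loop` is there so that a jump can also branch backwards if needed.

```
;; Sketch: a join point J with two jumps to it, using labelled blocks.
;; Branch labels are relative: br 0 targets the innermost enclosing block.
(func (param $x i32) (result i32)
  (local $r i32)
  loop           ;; outer loop: allows backward branches back to the top
    block        ;; exiting this block lands at join point J
      local.get $x
      i32.eqz
      br_if 0    ;; jump to J if x == 0
      local.get $x
      local.set $r
      br 0       ;; unconditional jump to J
    end
    ;; join point J: this code runs after either br above
  end
  local.get $r)
```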
### Possible future optimisations
> By the way, it's not obvious how to pronounce "binaryen" but apparently it rhymes with "Targaryen", the family name from the "Game of Thrones" TV series
There are other algorithms available that may result in more optimised control flow. We are not focusing on that for our development backend, but here are some notes for future reference.
The WebAssembly compiler toolkit `binaryen` has an [API for control-flow graphs][cfg-api]. We're not using `binaryen` right now. It's a C++ library, though it does have a (very thin and somewhat hard-to-use) [Rust wrapper][binaryen-rs]. Binaryen's control-flow graph API implements the "Relooper" algorithm developed by the Emscripten project and described in [this paper](https://github.com/emscripten-core/emscripten/blob/main/docs/paper.pdf).
> By the way, apparently "binaryen" rhymes with "Targaryen", the family name from the "Game of Thrones" TV series
There is also an improvement on Relooper called ["Stackifier"](https://medium.com/leaningtech/solving-the-structured-control-flow-problem-once-and-for-all-5123117b1ee2). It can reorder the joinpoints and jumps to make code more efficient. (It also has features Roc wouldn't need but C++ does, such as support for "irreducible" graphs that contain `goto`.)
[cfg-api]: https://github.com/WebAssembly/binaryen/wiki/Compiling-to-WebAssembly-with-Binaryen#cfg-api
[binaryen-rs]: https://crates.io/crates/binaryen
Binaryen's control-flow graph API implements the "Relooper" algorithm developed by the Emscripten project and described in [this paper](https://github.com/emscripten-core/emscripten/blob/main/docs/paper.pdf).
There is an alternative algorithm that is supposed to be an improvement on Relooper, called ["Stackifier"](https://medium.com/leaningtech/solving-the-structured-control-flow-problem-once-and-for-all-5123117b1ee2).
## Stack machine vs register machine
Wasm's instruction set is based on a stack-machine VM. Whereas CPU instructions operate on named registers, Wasm instructions contain no register names at all. Instructions can only operate on whatever data is at the top of the stack.
@ -113,6 +122,7 @@ $ wasm-opt --simplify-locals --reorder-locals --vacuum example.wasm > opt.wasm
```
The optimised functions have no local variables, and the code shrinks to about 60% of its original size.
```
(func (;0;) (param i64 i64) (result i64)
local.get 0
@ -143,7 +153,7 @@ When we are talking about how we store values in _memory_, I'll use the term _st
Of course our program can use another area of memory as a heap as well. WebAssembly doesn't mind how you divide up your memory. It just gives you some memory and some instructions for loading and storing.
## Function calls
## Calling conventions & stack memory
In WebAssembly you call a function by pushing arguments to the stack and then issuing a `call` instruction, which specifies a function index. The VM knows how many values to pop off the stack by examining the _type_ of the function. In our example earlier, `Num.add` had the type `[i64 i64] → [i64]` so it expects to find two i64's on the stack and pushes one i64 back as the result. Remember, the runtime engine will validate the module before running it, and if your generated code is trying to call a function at a point in the program where the wrong value types are on the stack, it will fail validation.
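For instance (the function index and locals here are illustrative), calling a function of type `[i64 i64] → [i64]` looks like this:

```
;; Assume function index 0 has type [i64 i64] -> [i64], e.g. an add function
local.get 0   ;; push the first argument
local.get 1   ;; push the second argument
call 0        ;; pops two i64 values, pushes the i64 result
```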
@ -151,11 +161,17 @@ Function arguments are restricted to the four value types, `i32`, `i64`, `f32` a
That's all great for primitive values but what happens when we want to pass more complex data structures between functions?
Well, remember, "stack memory" is not a special kind of memory in WebAssembly, it's just an area of our memory where we _decide_ that we want to implement a stack data structure. So we can implement it however we want. A good choice would be to make our stack frame look the same as it would when we're targeting a CPU, except without the return address (since there's no need for one). We can also decide to pass numbers through the machine stack rather than in stack memory, since that takes fewer instructions.
Well, remember, "stack memory" is not a special kind of memory in WebAssembly, and is separate from the VM stack. It's just an area of our memory where we implement a stack data structure. But there are some conventions that it makes sense to follow so that we can easily link to Wasm code generated from Zig or other languages.
The only other thing we need is a stack pointer. On CPU targets, there is often a dedicated "stack pointer" register. WebAssembly has no equivalent to that, but we can use a `global` variable.
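A typical prologue/epilogue using such a global might look like this (a sketch following the conventions observed below: `global 0` holds the stack pointer, stack memory grows downwards, and the frame size of 16 bytes and local index 2 are made up for illustration):

```
;; Prologue: allocate a 16-byte stack frame
global.get 0   ;; load the stack pointer
i32.const 16
i32.sub        ;; stack grows downwards, so subtract the frame size
local.tee 2    ;; keep a copy in a local, to use as the frame pointer
global.set 0   ;; store the new stack pointer

;; ... body: address stack-allocated values as offsets from local 2 ...

;; Epilogue: free the frame
local.get 2
i32.const 16
i32.add
global.set 0
```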
### Observations from compiled C code
The system I've outlined above is based on my experience of compiling C to WebAssembly via the Emscripten toolchain (which is built on top of clang). It's also in line with what the WebAssembly project describes [here](https://github.com/WebAssembly/design/blob/main/Rationale.md#locals).
- `global 0` is used as the stack pointer, and its value is normally copied to a `local` as well (presumably because locals tend to be assigned to CPU registers)
- Stack memory grows downwards
- If a C function returns a struct, the compiled WebAssembly function has no return value, but instead has an extra _argument_: an `i32` pointer to space allocated in the caller's stack frame, which the called function can write to.
- There is no maximum number of arguments for a WebAssembly function, and arguments are not passed via _stack memory_. This makes sense because the _VM stack_ has no size limit. It's like having a CPU with an unlimited number of registers.
- Stack memory is only used for allocating local variables, not for passing arguments. And it's only used for values that cannot be stored in one of WebAssembly's primitive values (`i32`, `i64`, `f32`, `f64`).
These observations are based on experiments compiling C to WebAssembly via the Emscripten toolchain (which is built on top of clang). It's also in line with what the WebAssembly project describes [here](https://github.com/WebAssembly/design/blob/main/Rationale.md#locals).
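For example, a C function like `struct Pair make_pair(int a, int b)` typically compiles to a Wasm function of type `[i32 i32 i32] → []`. The names, field offsets, and the position of the pointer argument below are illustrative:

```
;; $ret points into space reserved in the caller's stack frame
(func (param $ret i32) (param $a i32) (param $b i32)
  local.get $ret
  local.get $a
  i32.store            ;; pair.first  at offset 0
  local.get $ret
  local.get $b
  i32.store offset=4   ;; pair.second at offset 4
)
```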
## Modules vs Instances


@ -1,5 +1,5 @@
use parity_wasm::builder;
use parity_wasm::builder::{CodeLocation, ModuleBuilder};
use parity_wasm::builder::{CodeLocation, FunctionDefinition, ModuleBuilder, SignatureBuilder};
use parity_wasm::elements::{
BlockType, Instruction, Instruction::*, Instructions, Local, ValueType,
};
@ -8,147 +8,27 @@ use roc_collections::all::MutMap;
use roc_module::low_level::LowLevel;
use roc_module::symbol::Symbol;
use roc_mono::ir::{CallType, Expr, JoinPointId, Literal, Proc, Stmt};
use roc_mono::layout::{Builtin, Layout, UnionLayout};
use roc_mono::layout::{Builtin, Layout};
use crate::*;
use crate::layout::WasmLayout;
use crate::storage::SymbolStorage;
use crate::{
allocate_stack_frame, copy_memory, free_stack_frame, round_up_to_alignment, LocalId, PTR_TYPE,
};
// Don't allocate any constant data at address zero or near it. Would be valid, but bug-prone.
// Follow Emscripten's example by using 1kB (4 bytes would probably do)
const UNUSED_DATA_SECTION_BYTES: u32 = 1024;
#[derive(Clone, Copy, Debug)]
struct LocalId(u32);
#[derive(Clone, Copy, Debug)]
struct LabelId(u32);
#[derive(Debug)]
struct SymbolStorage(LocalId, WasmLayout);
// See README for background information on Wasm locals, memory and function calls
#[derive(Debug)]
pub enum WasmLayout {
// Most number types can fit in a Wasm local without any stack memory.
// Roc i8 is represented as an i32 local. Store the type and the original size.
LocalOnly(ValueType, u32),
// A `local` pointing to stack memory
StackMemory(u32),
// A `local` pointing to heap memory
HeapMemory,
}
impl WasmLayout {
fn new(layout: &Layout) -> Self {
use ValueType::*;
let size = layout.stack_size(PTR_SIZE);
match layout {
Layout::Builtin(Builtin::Int128) => Self::StackMemory(size),
Layout::Builtin(Builtin::Int64) => Self::LocalOnly(I64, size),
Layout::Builtin(Builtin::Int32) => Self::LocalOnly(I32, size),
Layout::Builtin(Builtin::Int16) => Self::LocalOnly(I32, size),
Layout::Builtin(Builtin::Int8) => Self::LocalOnly(I32, size),
Layout::Builtin(Builtin::Int1) => Self::LocalOnly(I32, size),
Layout::Builtin(Builtin::Usize) => Self::LocalOnly(I32, size),
Layout::Builtin(Builtin::Decimal) => Self::StackMemory(size),
Layout::Builtin(Builtin::Float128) => Self::StackMemory(size),
Layout::Builtin(Builtin::Float64) => Self::LocalOnly(F64, size),
Layout::Builtin(Builtin::Float32) => Self::LocalOnly(F32, size),
Layout::Builtin(Builtin::Str) => Self::StackMemory(size),
Layout::Builtin(Builtin::Dict(_, _)) => Self::StackMemory(size),
Layout::Builtin(Builtin::Set(_)) => Self::StackMemory(size),
Layout::Builtin(Builtin::List(_)) => Self::StackMemory(size),
Layout::Builtin(Builtin::EmptyStr) => Self::StackMemory(size),
Layout::Builtin(Builtin::EmptyList) => Self::StackMemory(size),
Layout::Builtin(Builtin::EmptyDict) => Self::StackMemory(size),
Layout::Builtin(Builtin::EmptySet) => Self::StackMemory(size),
Layout::LambdaSet(lambda_set) => WasmLayout::new(&lambda_set.runtime_representation()),
Layout::Struct(_) => Self::StackMemory(size),
Layout::Union(UnionLayout::NonRecursive(_)) => Self::StackMemory(size),
Layout::Union(UnionLayout::Recursive(_)) => Self::HeapMemory,
Layout::Union(UnionLayout::NonNullableUnwrapped(_)) => Self::HeapMemory,
Layout::Union(UnionLayout::NullableWrapped { .. }) => Self::HeapMemory,
Layout::Union(UnionLayout::NullableUnwrapped { .. }) => Self::HeapMemory,
Layout::RecursivePointer => Self::HeapMemory,
}
}
fn value_type(&self) -> ValueType {
match self {
Self::LocalOnly(type_, _) => *type_,
_ => PTR_TYPE,
}
}
fn stack_memory(&self) -> u32 {
match self {
Self::StackMemory(size) => *size,
_ => 0,
}
}
#[allow(dead_code)]
fn load(&self, offset: u32) -> Result<Instruction, String> {
use crate::backend::WasmLayout::*;
use ValueType::*;
match self {
LocalOnly(I32, 4) => Ok(I32Load(ALIGN_4, offset)),
LocalOnly(I32, 2) => Ok(I32Load16S(ALIGN_2, offset)),
LocalOnly(I32, 1) => Ok(I32Load8S(ALIGN_1, offset)),
LocalOnly(I64, 8) => Ok(I64Load(ALIGN_8, offset)),
LocalOnly(F64, 8) => Ok(F64Load(ALIGN_8, offset)),
LocalOnly(F32, 4) => Ok(F32Load(ALIGN_4, offset)),
// LocalOnly(F32, 2) => Ok(), // convert F16 to F32 (lowlevel function? Wasm-only?)
// StackMemory(size) => Ok(), // would this be some kind of memcpy in the IR?
HeapMemory => {
if PTR_TYPE == I64 {
Ok(I64Load(ALIGN_8, offset))
} else {
Ok(I32Load(ALIGN_4, offset))
}
}
_ => Err(format!(
"Failed to generate load instruction for WasmLayout {:?}",
self
)),
}
}
#[allow(dead_code)]
fn store(&self, offset: u32) -> Result<Instruction, String> {
use crate::backend::WasmLayout::*;
use ValueType::*;
match self {
LocalOnly(I32, 4) => Ok(I32Store(ALIGN_4, offset)),
LocalOnly(I32, 2) => Ok(I32Store16(ALIGN_2, offset)),
LocalOnly(I32, 1) => Ok(I32Store8(ALIGN_1, offset)),
LocalOnly(I64, 8) => Ok(I64Store(ALIGN_8, offset)),
LocalOnly(F64, 8) => Ok(F64Store(ALIGN_8, offset)),
LocalOnly(F32, 4) => Ok(F32Store(ALIGN_4, offset)),
// LocalOnly(F32, 2) => Ok(), // convert F32 to F16 (lowlevel function? Wasm-only?)
// StackMemory(size) => Ok(), // would this be some kind of memcpy in the IR?
HeapMemory => {
if PTR_TYPE == I64 {
Ok(I64Store(ALIGN_8, offset))
} else {
Ok(I32Store(ALIGN_4, offset))
}
}
_ => Err(format!(
"Failed to generate store instruction for WasmLayout {:?}",
self
)),
}
}
enum LocalKind {
Parameter,
Variable,
}
// TODO: use Bumpalo Vec once parity_wasm supports general iterators (>=0.43)
pub struct WasmBackend<'a> {
// Module: Wasm AST
pub builder: ModuleBuilder,
@ -160,12 +40,12 @@ pub struct WasmBackend<'a> {
// Functions: Wasm AST
instructions: std::vec::Vec<Instruction>,
ret_type: ValueType,
arg_types: std::vec::Vec<ValueType>,
locals: std::vec::Vec<Local>,
// Functions: internal state & IR mappings
stack_memory: u32,
stack_memory: i32,
stack_frame_pointer: Option<LocalId>,
symbol_storage_map: MutMap<Symbol, SymbolStorage>,
/// how many blocks deep are we (used for jumps)
block_depth: u32,
@ -185,12 +65,12 @@ impl<'a> WasmBackend<'a> {
// Functions: Wasm AST
instructions: std::vec::Vec::with_capacity(256),
ret_type: ValueType::I32,
arg_types: std::vec::Vec::with_capacity(8),
locals: std::vec::Vec::with_capacity(32),
// Functions: internal state & IR mappings
stack_memory: 0,
stack_frame_pointer: None,
symbol_storage_map: MutMap::default(),
block_depth: 0,
joinpoint_label_map: MutMap::default(),
@ -205,48 +85,18 @@ impl<'a> WasmBackend<'a> {
// Functions: internal state & IR mappings
self.stack_memory = 0;
self.stack_frame_pointer = None;
self.symbol_storage_map.clear();
// joinpoint_label_map.clear();
self.joinpoint_label_map.clear();
assert_eq!(self.block_depth, 0);
}
pub fn build_proc(&mut self, proc: Proc<'a>, sym: Symbol) -> Result<u32, String> {
let ret_layout = WasmLayout::new(&proc.ret_layout);
if let WasmLayout::StackMemory { .. } = ret_layout {
return Err(format!(
"Not yet implemented: Returning values to callee stack memory {:?} {:?}",
proc.name, sym
));
}
self.ret_type = ret_layout.value_type();
self.arg_types.reserve(proc.args.len());
for (layout, symbol) in proc.args {
let wasm_layout = WasmLayout::new(layout);
self.arg_types.push(wasm_layout.value_type());
self.insert_local(wasm_layout, *symbol);
}
let signature_builder = self.build_signature(&proc);
self.build_stmt(&proc.body, &proc.ret_layout)?;
let signature = builder::signature()
.with_params(self.arg_types.clone()) // requires std::Vec, not Bumpalo
.with_result(self.ret_type)
.build_sig();
// functions must end with an End instruction/opcode
let mut instructions = self.instructions.clone();
instructions.push(Instruction::End);
let function_def = builder::function()
.with_signature(signature)
.body()
.with_locals(self.locals.clone())
.with_instructions(Instructions::new(instructions))
.build() // body
.build(); // function
let function_def = self.finalize_proc(signature_builder);
let location = self.builder.push_function(function_def);
let function_index = location.body;
self.proc_symbol_map.insert(sym, location);
@ -255,16 +105,139 @@ impl<'a> WasmBackend<'a> {
Ok(function_index)
}
fn insert_local(&mut self, layout: WasmLayout, symbol: Symbol) -> LocalId {
self.stack_memory += layout.stack_memory();
let index = self.symbol_storage_map.len();
if index >= self.arg_types.len() {
self.locals.push(Local::new(1, layout.value_type()));
fn build_signature(&mut self, proc: &Proc<'a>) -> SignatureBuilder {
let ret_layout = WasmLayout::new(&proc.ret_layout);
let signature_builder = if let WasmLayout::StackMemory { .. } = ret_layout {
self.arg_types.push(PTR_TYPE);
builder::signature()
} else {
builder::signature().with_result(ret_layout.value_type())
};
for (layout, symbol) in proc.args {
self.insert_local(WasmLayout::new(layout), *symbol, LocalKind::Parameter);
}
signature_builder.with_params(self.arg_types.clone())
}
fn finalize_proc(&mut self, signature_builder: SignatureBuilder) -> FunctionDefinition {
let mut final_instructions = Vec::with_capacity(self.instructions.len() + 10);
if self.stack_memory > 0 {
allocate_stack_frame(
&mut final_instructions,
self.stack_memory,
self.stack_frame_pointer.unwrap(),
);
}
final_instructions.extend(self.instructions.drain(0..));
if self.stack_memory > 0 {
free_stack_frame(
&mut final_instructions,
self.stack_memory,
self.stack_frame_pointer.unwrap(),
);
}
final_instructions.push(Instruction::End);
builder::function()
.with_signature(signature_builder.build_sig())
.body()
.with_locals(self.locals.clone())
.with_instructions(Instructions::new(final_instructions))
.build() // body
.build() // function
}
fn insert_local(
&mut self,
wasm_layout: WasmLayout,
symbol: Symbol,
kind: LocalKind,
) -> SymbolStorage {
let local_index = (self.arg_types.len() + self.locals.len()) as u32;
let local_id = LocalId(local_index);
let storage = match kind {
LocalKind::Parameter => {
// Already stack-allocated by the caller if needed.
self.arg_types.push(wasm_layout.value_type());
match wasm_layout {
WasmLayout::LocalOnly(value_type, size) => SymbolStorage::ParamPrimitive {
local_id,
value_type,
size,
},
_ => SymbolStorage::ParamPointer {
local_id,
wasm_layout,
},
}
}
LocalKind::Variable => {
self.locals.push(Local::new(1, wasm_layout.value_type()));
match wasm_layout {
WasmLayout::LocalOnly(value_type, size) => SymbolStorage::VarPrimitive {
local_id,
value_type,
size,
},
WasmLayout::HeapMemory => SymbolStorage::VarHeapMemory { local_id },
WasmLayout::StackMemory {
size,
alignment_bytes,
} => {
let offset =
round_up_to_alignment(self.stack_memory, alignment_bytes as i32);
self.stack_memory = offset + size as i32;
// TODO: if we're creating the frame pointer just reuse the same local_id!
let frame_pointer = self.get_or_create_frame_pointer();
// initialise the local with the appropriate address
// TODO: skip this the first time, no point generating code to add zero offset!
self.instructions.extend([
GetLocal(frame_pointer.0),
I32Const(offset),
I32Add,
SetLocal(local_index),
]);
SymbolStorage::VarStackMemory {
local_id,
size,
offset: offset as u32,
alignment_bytes,
}
}
}
}
};
self.symbol_storage_map.insert(symbol, storage.clone());
storage
}
fn get_or_create_frame_pointer(&mut self) -> LocalId {
match self.stack_frame_pointer {
Some(local_id) => local_id,
None => {
let local_index = (self.arg_types.len() + self.locals.len()) as u32;
let local_id = LocalId(local_index);
self.stack_frame_pointer = Some(local_id);
self.locals.push(Local::new(1, ValueType::I32));
local_id
}
}
let local_id = LocalId(index as u32);
let storage = SymbolStorage(local_id, layout);
self.symbol_storage_map.insert(symbol, storage);
local_id
}
fn get_symbol_storage(&self, sym: &Symbol) -> Result<&SymbolStorage, String> {
@ -276,10 +249,15 @@ impl<'a> WasmBackend<'a> {
})
}
fn load_from_symbol(&mut self, sym: &Symbol) -> Result<(), String> {
let SymbolStorage(LocalId(local_id), _) = self.get_symbol_storage(sym)?;
let id: u32 = *local_id;
self.instructions.push(GetLocal(id));
fn local_id_from_symbol(&self, sym: &Symbol) -> Result<LocalId, String> {
let storage = self.get_symbol_storage(sym)?;
Ok(storage.local_id())
}
fn load_symbol(&mut self, sym: &Symbol) -> Result<(), String> {
let storage = self.get_symbol_storage(sym)?;
let index: u32 = storage.local_id().0;
self.instructions.push(GetLocal(index));
Ok(())
}
@ -306,17 +284,28 @@ impl<'a> WasmBackend<'a> {
fn build_stmt(&mut self, stmt: &Stmt<'a>, ret_layout: &Layout<'a>) -> Result<(), String> {
match stmt {
// This pattern is a simple optimisation to get rid of one local and two instructions per proc.
// If we are just returning the expression result, then don't SetLocal and immediately GetLocal
// Simple optimisation: if we are just returning the expression, we don't need a local
Stmt::Let(let_sym, expr, layout, Stmt::Ret(ret_sym)) if let_sym == ret_sym => {
let wasm_layout = WasmLayout::new(layout);
if let WasmLayout::StackMemory { .. } = wasm_layout {
// Map this symbol to the first argument (pointer into caller's stack)
// Saves us from having to copy it later
let storage = SymbolStorage::ParamPointer {
local_id: LocalId(0),
wasm_layout,
};
self.symbol_storage_map.insert(*let_sym, storage);
}
self.build_expr(let_sym, expr, layout)?;
self.instructions.push(Return);
self.instructions.push(Return); // TODO: branch instead of return so we can clean up stack
Ok(())
}
Stmt::Let(sym, expr, layout, following) => {
let wasm_layout = WasmLayout::new(layout);
let local_id = self.insert_local(wasm_layout, *sym);
let local_id = self
.insert_local(wasm_layout, *sym, LocalKind::Variable)
.local_id();
self.build_expr(sym, expr, layout)?;
self.instructions.push(SetLocal(local_id.0));
@ -326,16 +315,41 @@ impl<'a> WasmBackend<'a> {
}
Stmt::Ret(sym) => {
if let Some(SymbolStorage(local_id, _)) = self.symbol_storage_map.get(sym) {
self.instructions.push(GetLocal(local_id.0));
self.instructions.push(Return);
Ok(())
} else {
Err(format!(
"Not yet implemented: returning values with layout {:?}",
ret_layout
))
use crate::storage::SymbolStorage::*;
let storage = self.symbol_storage_map.get(sym).unwrap();
match storage {
VarStackMemory {
local_id,
size,
alignment_bytes,
..
}
| ParamPointer {
local_id,
wasm_layout:
WasmLayout::StackMemory {
size,
alignment_bytes,
..
},
} => {
let from = *local_id;
let to = LocalId(0);
copy_memory(&mut self.instructions, from, to, *size, *alignment_bytes, 0)?;
}
ParamPrimitive { local_id, .. }
| VarPrimitive { local_id, .. }
| ParamPointer { local_id, .. }
| VarHeapMemory { local_id, .. } => {
self.instructions.push(GetLocal(local_id.0));
self.instructions.push(Return); // TODO: branch instead of return so we can clean up stack
}
}
Ok(())
}
Stmt::Switch {
@ -355,15 +369,12 @@ impl<'a> WasmBackend<'a> {
}
// the LocalId of the symbol that we match on
let matched_on = match self.symbol_storage_map.get(cond_symbol) {
Some(SymbolStorage(local_id, _)) => local_id.0,
None => unreachable!("symbol not defined: {:?}", cond_symbol),
};
let matched_on = self.local_id_from_symbol(cond_symbol)?;
// then, we jump whenever the value under scrutiny is equal to the value of a branch
for (i, (value, _, _)) in branches.iter().enumerate() {
// put the cond_symbol on the top of the stack
self.instructions.push(GetLocal(matched_on));
self.instructions.push(GetLocal(matched_on.0));
self.instructions.push(I32Const(*value as i32));
@ -398,7 +409,9 @@ impl<'a> WasmBackend<'a> {
let mut jp_parameter_local_ids = std::vec::Vec::with_capacity(parameters.len());
for parameter in parameters.iter() {
let wasm_layout = WasmLayout::new(&parameter.layout);
let local_id = self.insert_local(wasm_layout, parameter.symbol);
let local_id = self
.insert_local(wasm_layout, parameter.symbol, LocalKind::Variable)
.local_id();
jp_parameter_local_ids.push(local_id);
}
@ -429,12 +442,8 @@ impl<'a> WasmBackend<'a> {
// put the arguments on the stack
for (symbol, local_id) in arguments.iter().zip(locals.iter()) {
let argument = match self.symbol_storage_map.get(symbol) {
Some(SymbolStorage(local_id, _)) => local_id.0,
None => unreachable!("symbol not defined: {:?}", symbol),
};
self.instructions.push(GetLocal(argument));
let argument = self.local_id_from_symbol(symbol)?;
self.instructions.push(GetLocal(argument.0));
self.instructions.push(SetLocal(local_id.0));
}
@ -463,7 +472,7 @@ impl<'a> WasmBackend<'a> {
}) => match call_type {
CallType::ByName { name: func_sym, .. } => {
for arg in *arguments {
self.load_from_symbol(arg)?;
self.load_symbol(arg)?;
}
let function_location = self.proc_symbol_map.get(func_sym).ok_or(format!(
"Cannot find function {:?} called from {:?}",
@ -479,45 +488,116 @@ impl<'a> WasmBackend<'a> {
x => Err(format!("the call type, {:?}, is not yet implemented", x)),
},
Expr::Struct(fields) => self.create_struct(sym, layout, fields),
x => Err(format!("Expression is not yet implemented {:?}", x)),
}
}
fn load_literal(&mut self, lit: &Literal<'a>, layout: &Layout<'a>) -> Result<(), String> {
match lit {
Literal::Bool(x) => {
self.instructions.push(I32Const(*x as i32));
Ok(())
let instruction = match lit {
Literal::Bool(x) => I32Const(*x as i32),
Literal::Byte(x) => I32Const(*x as i32),
Literal::Int(x) => match layout {
Layout::Builtin(Builtin::Int64) => I64Const(*x as i64),
Layout::Builtin(
Builtin::Int32
| Builtin::Int16
| Builtin::Int8
| Builtin::Int1
| Builtin::Usize,
) => I32Const(*x as i32),
x => {
return Err(format!("loading literal, {:?}, is not yet implemented", x));
}
},
Literal::Float(x) => match layout {
Layout::Builtin(Builtin::Float64) => F64Const((*x as f64).to_bits()),
Layout::Builtin(Builtin::Float32) => F32Const((*x as f32).to_bits()),
x => {
return Err(format!("loading literal, {:?}, is not yet implemented", x));
}
},
x => {
return Err(format!("loading literal, {:?}, is not yet implemented", x));
}
Literal::Byte(x) => {
self.instructions.push(I32Const(*x as i32));
Ok(())
};
self.instructions.push(instruction);
Ok(())
}
fn create_struct(
&mut self,
sym: &Symbol,
layout: &Layout<'a>,
fields: &'a [Symbol],
) -> Result<(), String> {
let storage = self.get_symbol_storage(sym)?.to_owned();
if let Layout::Struct(field_layouts) = layout {
match storage {
SymbolStorage::VarStackMemory { local_id, size, .. }
| SymbolStorage::ParamPointer {
local_id,
wasm_layout: WasmLayout::StackMemory { size, .. },
} => {
if size > 0 {
let mut relative_offset = 0;
for (field, _) in fields.iter().zip(field_layouts.iter()) {
relative_offset += self.copy_symbol_to_pointer_at_offset(
local_id,
relative_offset,
field,
)?;
}
} else {
return Err(format!("Not supported yet: zero-size struct at {:?}", sym));
}
}
_ => {
return Err(format!(
"Cannot create struct {:?} with storage {:?}",
sym, storage
));
}
}
Literal::Int(x) => {
let instruction = match layout {
Layout::Builtin(Builtin::Int64) => I64Const(*x as i64),
Layout::Builtin(
Builtin::Int32
| Builtin::Int16
| Builtin::Int8
| Builtin::Int1
| Builtin::Usize,
) => I32Const(*x as i32),
x => panic!("loading literal, {:?}, is not yet implemented", x),
};
self.instructions.push(instruction);
Ok(())
}
Literal::Float(x) => {
let instruction = match layout {
Layout::Builtin(Builtin::Float64) => F64Const((*x as f64).to_bits()),
Layout::Builtin(Builtin::Float32) => F32Const((*x as f32).to_bits()),
x => panic!("loading literal, {:?}, is not yet implemented", x),
};
self.instructions.push(instruction);
Ok(())
}
x => Err(format!("loading literal, {:?}, is not yet implemented", x)),
} else {
// Struct expression but not Struct layout => single element. Copy it.
let field_storage = self.get_symbol_storage(&fields[0])?.to_owned();
self.copy_storage(&storage, &field_storage)?;
}
Ok(())
}
fn copy_symbol_to_pointer_at_offset(
&mut self,
to_ptr: LocalId,
to_offset: u32,
from_symbol: &Symbol,
) -> Result<u32, String> {
let from_storage = self.get_symbol_storage(from_symbol)?.to_owned();
from_storage.copy_to_memory(&mut self.instructions, to_ptr, to_offset)
}
fn copy_storage(&mut self, to: &SymbolStorage, from: &SymbolStorage) -> Result<(), String> {
let has_stack_memory = to.has_stack_memory();
debug_assert!(from.has_stack_memory() == has_stack_memory);
if !has_stack_memory {
debug_assert!(from.value_type() == to.value_type());
self.instructions.push(GetLocal(from.local_id().0));
self.instructions.push(SetLocal(to.local_id().0));
Ok(())
} else {
let (size, alignment_bytes) = from.stack_size_and_alignment();
copy_memory(
&mut self.instructions,
from.local_id(),
to.local_id(),
size,
alignment_bytes,
0,
)
}
}
@ -528,7 +608,7 @@ impl<'a> WasmBackend<'a> {
return_layout: &Layout<'a>,
) -> Result<(), String> {
for arg in args {
self.load_from_symbol(arg)?;
self.load_symbol(arg)?;
}
let wasm_layout = WasmLayout::new(return_layout);
self.build_instructions_lowlevel(lowlevel, wasm_layout.value_type())?;
@ -546,7 +626,7 @@ impl<'a> WasmBackend<'a> {
// For those, we'll need to pre-process each argument before the main op,
// so simple arrays of instructions won't work. But there are common patterns.
let instructions: &[Instruction] = match lowlevel {
// Wasm type might not be enough, may need to sign-extend i8 etc. Maybe in load_from_symbol?
// Wasm type might not be enough, may need to sign-extend i8 etc. Maybe in load_symbol?
LowLevel::NumAdd => match return_value_type {
ValueType::I32 => &[I32Add],
ValueType::I64 => &[I64Add],


@@ -0,0 +1,82 @@
use parity_wasm::elements::ValueType;
use roc_mono::layout::{Layout, UnionLayout};
use crate::{PTR_SIZE, PTR_TYPE};
// See README for background information on Wasm locals, memory and function calls
#[derive(Debug, Clone)]
pub enum WasmLayout {
// Primitive number value. Just a Wasm local, without any stack memory.
// For example, Roc i8 is represented as Wasm i32. Store the type and the original size.
LocalOnly(ValueType, u32),
// Local pointer to stack memory
StackMemory { size: u32, alignment_bytes: u32 },
// Local pointer to heap memory
HeapMemory,
}
impl WasmLayout {
pub fn new(layout: &Layout) -> Self {
use roc_mono::layout::Builtin::*;
use UnionLayout::*;
use ValueType::*;
let size = layout.stack_size(PTR_SIZE);
let alignment_bytes = layout.alignment_bytes(PTR_SIZE);
match layout {
Layout::Builtin(Int32 | Int16 | Int8 | Int1 | Usize) => Self::LocalOnly(I32, size),
Layout::Builtin(Int64) => Self::LocalOnly(I64, size),
Layout::Builtin(Float32) => Self::LocalOnly(F32, size),
Layout::Builtin(Float64) => Self::LocalOnly(F64, size),
Layout::Builtin(
Int128
| Decimal
| Float128
| Str
| Dict(_, _)
| Set(_)
| List(_)
| EmptyStr
| EmptyList
| EmptyDict
| EmptySet,
)
| Layout::Struct(_)
| Layout::LambdaSet(_)
| Layout::Union(NonRecursive(_)) => Self::StackMemory {
size,
alignment_bytes,
},
Layout::Union(
Recursive(_)
| NonNullableUnwrapped(_)
| NullableWrapped { .. }
| NullableUnwrapped { .. },
)
| Layout::RecursivePointer => Self::HeapMemory,
}
}
pub fn value_type(&self) -> ValueType {
match self {
Self::LocalOnly(type_, _) => *type_,
_ => PTR_TYPE,
}
}
#[allow(dead_code)]
pub fn stack_memory(&self) -> u32 {
match self {
Self::StackMemory { size, .. } => *size,
_ => 0,
}
}
}


@@ -1,9 +1,11 @@
mod backend;
pub mod from_wasm32_memory;
mod layout;
mod storage;
use bumpalo::Bump;
use parity_wasm::builder;
use parity_wasm::elements::{Instruction, Internal, ValueType};
use parity_wasm::elements::{Instruction, Instruction::*, Internal, ValueType};
use roc_collections::all::{MutMap, MutSet};
use roc_module::symbol::{Interns, Symbol};
@@ -22,6 +24,10 @@ pub const ALIGN_4: u32 = 2;
pub const ALIGN_8: u32 = 3;
pub const STACK_POINTER_GLOBAL_ID: u32 = 0;
pub const STACK_ALIGNMENT_BYTES: i32 = 16;
#[derive(Clone, Copy, Debug)]
pub struct LocalId(pub u32);
pub struct Env<'a> {
pub arena: &'a Bump, // not really using this much, parity_wasm works with std::vec a lot
@@ -104,3 +110,84 @@ pub fn build_module_help<'a>(
Ok((backend.builder, main_function_index))
}
fn encode_alignment(bytes: u32) -> u32 {
match bytes {
1 => ALIGN_1,
2 => ALIGN_2,
4 => ALIGN_4,
8 => ALIGN_8,
_ => panic!("{:?}-byte alignment is not supported", bytes),
}
}
fn copy_memory(
instructions: &mut Vec<Instruction>,
from_ptr: LocalId,
to_ptr: LocalId,
size: u32,
alignment_bytes: u32,
offset: u32,
) -> Result<(), String> {
let alignment_flag = encode_alignment(alignment_bytes);
let mut current_offset = offset;
while size - current_offset >= 8 {
instructions.push(GetLocal(to_ptr.0));
instructions.push(GetLocal(from_ptr.0));
instructions.push(I64Load(alignment_flag, current_offset));
instructions.push(I64Store(alignment_flag, current_offset));
current_offset += 8;
}
if size - current_offset >= 4 {
instructions.push(GetLocal(to_ptr.0));
instructions.push(GetLocal(from_ptr.0));
instructions.push(I32Load(alignment_flag, current_offset));
instructions.push(I32Store(alignment_flag, current_offset));
current_offset += 4;
}
while size - current_offset > 0 {
instructions.push(GetLocal(to_ptr.0));
instructions.push(GetLocal(from_ptr.0));
instructions.push(I32Load8U(alignment_flag, current_offset));
instructions.push(I32Store8(alignment_flag, current_offset));
current_offset += 1;
}
Ok(())
}
/// Round up to alignment_bytes (assumed to be a power of 2)
pub fn round_up_to_alignment(unaligned: i32, alignment_bytes: i32) -> i32 {
let mut aligned = unaligned;
aligned += alignment_bytes - 1; // if lower bits are non-zero, push it over the next boundary
aligned &= -alignment_bytes; // mask with a flag that has upper bits 1, lower bits 0
aligned
}
pub fn allocate_stack_frame(
instructions: &mut Vec<Instruction>,
size: i32,
local_frame_pointer: LocalId,
) {
let aligned_size = round_up_to_alignment(size, STACK_ALIGNMENT_BYTES);
instructions.extend([
GetGlobal(STACK_POINTER_GLOBAL_ID),
I32Const(aligned_size),
I32Sub,
TeeLocal(local_frame_pointer.0),
SetGlobal(STACK_POINTER_GLOBAL_ID),
]);
}
pub fn free_stack_frame(
instructions: &mut Vec<Instruction>,
size: i32,
local_frame_pointer: LocalId,
) {
let aligned_size = round_up_to_alignment(size, STACK_ALIGNMENT_BYTES);
instructions.extend([
GetLocal(local_frame_pointer.0),
I32Const(aligned_size),
I32Add,
SetGlobal(STACK_POINTER_GLOBAL_ID),
]);
}


@@ -0,0 +1,159 @@
use crate::{copy_memory, layout::WasmLayout, LocalId, ALIGN_1, ALIGN_2, ALIGN_4, ALIGN_8};
use parity_wasm::elements::{Instruction, Instruction::*, ValueType};
#[derive(Debug, Clone)]
pub enum SymbolStorage {
ParamPrimitive {
local_id: LocalId,
value_type: ValueType,
size: u32,
},
ParamPointer {
local_id: LocalId,
wasm_layout: WasmLayout,
},
VarPrimitive {
local_id: LocalId,
value_type: ValueType,
size: u32,
},
VarStackMemory {
local_id: LocalId,
size: u32,
offset: u32,
alignment_bytes: u32,
},
VarHeapMemory {
local_id: LocalId,
},
}
impl SymbolStorage {
pub fn local_id(&self) -> LocalId {
match self {
Self::ParamPrimitive { local_id, .. } => *local_id,
Self::ParamPointer { local_id, .. } => *local_id,
Self::VarPrimitive { local_id, .. } => *local_id,
Self::VarStackMemory { local_id, .. } => *local_id,
Self::VarHeapMemory { local_id, .. } => *local_id,
}
}
pub fn value_type(&self) -> ValueType {
match self {
Self::ParamPrimitive { value_type, .. } => *value_type,
Self::VarPrimitive { value_type, .. } => *value_type,
Self::ParamPointer { .. } => ValueType::I32,
Self::VarStackMemory { .. } => ValueType::I32,
Self::VarHeapMemory { .. } => ValueType::I32,
}
}
pub fn has_stack_memory(&self) -> bool {
match self {
Self::ParamPointer {
wasm_layout: WasmLayout::StackMemory { .. },
..
} => true,
Self::ParamPointer { .. } => false,
Self::VarStackMemory { .. } => true,
Self::ParamPrimitive { .. } => false,
Self::VarPrimitive { .. } => false,
Self::VarHeapMemory { .. } => false,
}
}
pub fn stack_size_and_alignment(&self) -> (u32, u32) {
match self {
Self::VarStackMemory {
size,
alignment_bytes,
..
}
| Self::ParamPointer {
wasm_layout:
WasmLayout::StackMemory {
size,
alignment_bytes,
..
},
..
} => (*size, *alignment_bytes),
_ => (0, 0),
}
}
pub fn copy_to_memory(
&self,
instructions: &mut Vec<Instruction>,
to_pointer: LocalId,
to_offset: u32,
) -> Result<u32, String> {
match self {
Self::ParamPrimitive {
local_id,
value_type,
size,
..
}
| Self::VarPrimitive {
local_id,
value_type,
size,
..
} => {
let store_instruction = match (value_type, size) {
(ValueType::I64, 8) => I64Store(ALIGN_8, to_offset),
(ValueType::I32, 4) => I32Store(ALIGN_4, to_offset),
(ValueType::I32, 2) => I32Store16(ALIGN_2, to_offset),
(ValueType::I32, 1) => I32Store8(ALIGN_1, to_offset),
(ValueType::F32, 4) => F32Store(ALIGN_4, to_offset),
(ValueType::F64, 8) => F64Store(ALIGN_8, to_offset),
_ => {
return Err(format!(
"Cannot store {:?} with size of {:?} bytes",
value_type, size
));
}
};
instructions.push(GetLocal(to_pointer.0));
instructions.push(GetLocal(local_id.0));
instructions.push(store_instruction);
Ok(*size)
}
Self::ParamPointer {
local_id,
wasm_layout:
WasmLayout::StackMemory {
size,
alignment_bytes,
},
}
| Self::VarStackMemory {
local_id,
size,
alignment_bytes,
..
} => {
copy_memory(
instructions,
*local_id,
to_pointer,
*size,
*alignment_bytes,
to_offset,
)?;
Ok(*size)
}
Self::ParamPointer { local_id, .. } | Self::VarHeapMemory { local_id, .. } => {
instructions.push(GetLocal(to_pointer.0));
instructions.push(GetLocal(local_id.0));
instructions.push(I32Store(ALIGN_4, to_offset));
Ok(4)
}
}
}
}


@@ -1,11 +1,15 @@
use parity_wasm::builder;
use parity_wasm::builder::ModuleBuilder;
use parity_wasm::elements::{Instruction, Instruction::*, Instructions, Internal, ValueType};
use parity_wasm::elements::{
Instruction, Instruction::*, Instructions, Internal, Local, ValueType,
};
use roc_gen_wasm::from_wasm32_memory::FromWasm32Memory;
use roc_gen_wasm::*;
use roc_std::{RocDec, RocList, RocOrder, RocStr};
const STACK_POINTER_LOCAL_ID: u32 = 0;
pub trait Wasm32TestResult {
fn insert_test_wrapper(
module_builder: &mut ModuleBuilder,
@@ -16,9 +20,11 @@ pub trait Wasm32TestResult {
let signature = builder::signature().with_result(ValueType::I32).build_sig();
let stack_frame_pointer = Local::new(1, ValueType::I32);
let function_def = builder::function()
.with_signature(signature)
.body()
.with_locals(vec![stack_frame_pointer])
.with_instructions(Instructions::new(instructions))
.build() // body
.build(); // function
@@ -35,22 +41,15 @@ pub trait Wasm32TestResult {
fn build_wrapper_body(main_function_index: u32) -> Vec<Instruction>;
}
fn build_wrapper_body_prelude(stack_memory_size: usize) -> Vec<Instruction> {
vec![
GetGlobal(STACK_POINTER_GLOBAL_ID),
I32Const(stack_memory_size as i32),
I32Sub,
SetGlobal(STACK_POINTER_GLOBAL_ID),
]
}
macro_rules! build_wrapper_body_primitive {
($store_instruction: expr, $align: expr) => {
fn build_wrapper_body(main_function_index: u32) -> Vec<Instruction> {
const MAX_ALIGNED_SIZE: usize = 16;
let mut instructions = build_wrapper_body_prelude(MAX_ALIGNED_SIZE);
let size: i32 = 8;
let mut instructions = Vec::with_capacity(16);
allocate_stack_frame(&mut instructions, size, LocalId(STACK_POINTER_LOCAL_ID));
instructions.extend([
GetGlobal(STACK_POINTER_GLOBAL_ID),
// load result address to prepare for the store instruction later
GetLocal(STACK_POINTER_LOCAL_ID),
//
// Call the main function with no arguments. Get primitive back.
Call(main_function_index),
@@ -59,9 +58,10 @@ macro_rules! build_wrapper_body_primitive {
$store_instruction($align, 0),
//
// Return the result pointer
GetGlobal(STACK_POINTER_GLOBAL_ID),
End,
GetLocal(STACK_POINTER_LOCAL_ID),
]);
free_stack_frame(&mut instructions, size, LocalId(STACK_POINTER_LOCAL_ID));
instructions.push(End);
instructions
}
};
@@ -76,18 +76,28 @@ macro_rules! wasm_test_result_primitive {
}
fn build_wrapper_body_stack_memory(main_function_index: u32, size: usize) -> Vec<Instruction> {
let mut instructions = build_wrapper_body_prelude(size);
let mut instructions = Vec::with_capacity(16);
allocate_stack_frame(
&mut instructions,
size as i32,
LocalId(STACK_POINTER_LOCAL_ID),
);
instructions.extend([
//
// Call the main function with the allocated address to write the result.
// No value is returned to the VM stack. This is the same as in compiled C.
GetGlobal(STACK_POINTER_GLOBAL_ID),
GetLocal(STACK_POINTER_LOCAL_ID),
Call(main_function_index),
//
// Return the result address
GetGlobal(STACK_POINTER_GLOBAL_ID),
End,
GetLocal(STACK_POINTER_LOCAL_ID),
]);
free_stack_frame(
&mut instructions,
size as i32,
LocalId(STACK_POINTER_LOCAL_ID),
);
instructions.push(End);
instructions
}
@@ -163,3 +173,106 @@ where
)
}
}
impl<T, U, V, W> Wasm32TestResult for (T, U, V, W)
where
T: Wasm32TestResult + FromWasm32Memory,
U: Wasm32TestResult + FromWasm32Memory,
V: Wasm32TestResult + FromWasm32Memory,
W: Wasm32TestResult + FromWasm32Memory,
{
fn build_wrapper_body(main_function_index: u32) -> Vec<Instruction> {
build_wrapper_body_stack_memory(
main_function_index,
T::ACTUAL_WIDTH + U::ACTUAL_WIDTH + V::ACTUAL_WIDTH + W::ACTUAL_WIDTH,
)
}
}
impl<T, U, V, W, X> Wasm32TestResult for (T, U, V, W, X)
where
T: Wasm32TestResult + FromWasm32Memory,
U: Wasm32TestResult + FromWasm32Memory,
V: Wasm32TestResult + FromWasm32Memory,
W: Wasm32TestResult + FromWasm32Memory,
X: Wasm32TestResult + FromWasm32Memory,
{
fn build_wrapper_body(main_function_index: u32) -> Vec<Instruction> {
build_wrapper_body_stack_memory(
main_function_index,
T::ACTUAL_WIDTH + U::ACTUAL_WIDTH + V::ACTUAL_WIDTH + W::ACTUAL_WIDTH + X::ACTUAL_WIDTH,
)
}
}
impl<T, U, V, W, X, Y> Wasm32TestResult for (T, U, V, W, X, Y)
where
T: Wasm32TestResult + FromWasm32Memory,
U: Wasm32TestResult + FromWasm32Memory,
V: Wasm32TestResult + FromWasm32Memory,
W: Wasm32TestResult + FromWasm32Memory,
X: Wasm32TestResult + FromWasm32Memory,
Y: Wasm32TestResult + FromWasm32Memory,
{
fn build_wrapper_body(main_function_index: u32) -> Vec<Instruction> {
build_wrapper_body_stack_memory(
main_function_index,
T::ACTUAL_WIDTH
+ U::ACTUAL_WIDTH
+ V::ACTUAL_WIDTH
+ W::ACTUAL_WIDTH
+ X::ACTUAL_WIDTH
+ Y::ACTUAL_WIDTH,
)
}
}
impl<T, U, V, W, X, Y, Z> Wasm32TestResult for (T, U, V, W, X, Y, Z)
where
T: Wasm32TestResult + FromWasm32Memory,
U: Wasm32TestResult + FromWasm32Memory,
V: Wasm32TestResult + FromWasm32Memory,
W: Wasm32TestResult + FromWasm32Memory,
X: Wasm32TestResult + FromWasm32Memory,
Y: Wasm32TestResult + FromWasm32Memory,
Z: Wasm32TestResult + FromWasm32Memory,
{
fn build_wrapper_body(main_function_index: u32) -> Vec<Instruction> {
build_wrapper_body_stack_memory(
main_function_index,
T::ACTUAL_WIDTH
+ U::ACTUAL_WIDTH
+ V::ACTUAL_WIDTH
+ W::ACTUAL_WIDTH
+ X::ACTUAL_WIDTH
+ Y::ACTUAL_WIDTH
+ Z::ACTUAL_WIDTH,
)
}
}
impl<T, U, V, W, X, Y, Z, A> Wasm32TestResult for (T, U, V, W, X, Y, Z, A)
where
T: Wasm32TestResult + FromWasm32Memory,
U: Wasm32TestResult + FromWasm32Memory,
V: Wasm32TestResult + FromWasm32Memory,
W: Wasm32TestResult + FromWasm32Memory,
X: Wasm32TestResult + FromWasm32Memory,
Y: Wasm32TestResult + FromWasm32Memory,
Z: Wasm32TestResult + FromWasm32Memory,
A: Wasm32TestResult + FromWasm32Memory,
{
fn build_wrapper_body(main_function_index: u32) -> Vec<Instruction> {
build_wrapper_body_stack_memory(
main_function_index,
T::ACTUAL_WIDTH
+ U::ACTUAL_WIDTH
+ V::ACTUAL_WIDTH
+ W::ACTUAL_WIDTH
+ X::ACTUAL_WIDTH
+ Y::ACTUAL_WIDTH
+ Z::ACTUAL_WIDTH
+ A::ACTUAL_WIDTH,
)
}
}


@@ -307,94 +307,13 @@ mod wasm_records {
// ()
// );
// }
//
// #[test]
// fn i64_record1_literal() {
// assert_evals_to!(
// indoc!(
// r#"
// { x: 3 }
// "#
// ),
// 3,
// i64
// );
// }
// #[test]
// fn i64_record2_literal() {
// assert_evals_to!(
// indoc!(
// r#"
// { x: 3, y: 5 }
// "#
// ),
// (3, 5),
// (i64, i64)
// );
// }
// // #[test]
// // fn i64_record3_literal() {
// // assert_evals_to!(
// // indoc!(
// // r#"
// // { x: 3, y: 5, z: 17 }
// // "#
// // ),
// // (3, 5, 17),
// // (i64, i64, i64)
// // );
// // }
// #[test]
// fn f64_record2_literal() {
// assert_evals_to!(
// indoc!(
// r#"
// { x: 3.1, y: 5.1 }
// "#
// ),
// (3.1, 5.1),
// (f64, f64)
// );
// }
// // #[test]
// // fn f64_record3_literal() {
// // assert_evals_to!(
// // indoc!(
// // r#"
// // { x: 3.1, y: 5.1, z: 17.1 }
// // "#
// // ),
// // (3.1, 5.1, 17.1),
// // (f64, f64, f64)
// // );
// // }
// // #[test]
// // fn bool_record4_literal() {
// // assert_evals_to!(
// // indoc!(
// // r#"
// // record : { a : Bool, b : Bool, c : Bool, d : Bool }
// // record = { a: True, b: True, c : True, d : Bool }
// // record
// // "#
// // ),
// // (true, false, false, true),
// // (bool, bool, bool, bool)
// // );
// // }
#[test]
fn i64_record1_literal() {
assert_evals_to!(
indoc!(
r#"
{ a: 3 }
{ x: 3 }
"#
),
3,
@@ -402,31 +321,86 @@ mod wasm_records {
);
}
// // #[test]
// // fn i64_record9_literal() {
// // assert_evals_to!(
// // indoc!(
// // r#"
// // { a: 3, b: 5, c: 17, d: 1, e: 9, f: 12, g: 13, h: 14, i: 15 }
// // "#
// // ),
// // (3, 5, 17, 1, 9, 12, 13, 14, 15),
// // (i64, i64, i64, i64, i64, i64, i64, i64, i64)
// // );
// // }
#[test]
fn i64_record2_literal() {
assert_evals_to!(
indoc!(
r#"
{ x: 3, y: 5 }
"#
),
(3, 5),
(i64, i64)
);
}
// // #[test]
// // fn f64_record3_literal() {
// // assert_evals_to!(
// // indoc!(
// // r#"
// // { x: 3.1, y: 5.1, z: 17.1 }
// // "#
// // ),
// // (3.1, 5.1, 17.1),
// // (f64, f64, f64)
// // );
// // }
#[test]
fn i64_record3_literal() {
assert_evals_to!(
indoc!(
r#"
{ x: 3, y: 5, z: 17 }
"#
),
(3, 5, 17),
(i64, i64, i64)
);
}
#[test]
fn f64_record2_literal() {
assert_evals_to!(
indoc!(
r#"
{ x: 3.1, y: 5.1 }
"#
),
(3.1, 5.1),
(f64, f64)
);
}
#[test]
fn f64_record3_literal() {
assert_evals_to!(
indoc!(
r#"
{ x: 3.1, y: 5.1, z: 17.1 }
"#
),
(3.1, 5.1, 17.1),
(f64, f64, f64)
);
}
#[test]
fn bool_record4_literal() {
assert_evals_to!(
indoc!(
r#"
record : { a : Bool, b : Bool, c : Bool, d : Bool }
record = { a: True, b: False, c : False, d : True }
record
"#
),
[true, false, false, true],
[bool; 4]
);
}
#[test]
fn i64_record9_literal() {
assert_evals_to!(
indoc!(
r#"
{ a: 3, b: 5, c: 17, d: 1, e: 9, f: 12, g: 13, h: 14, i: 15 }
"#
),
[3, 5, 17, 1, 9, 12, 13, 14, 15],
[i64; 9]
);
}
#[test]
fn bool_literal() {
@@ -667,135 +641,135 @@ mod wasm_records {
// );
// }
// #[test]
// fn return_record_2() {
// assert_evals_to!(
// indoc!(
// r#"
// { x: 3, y: 5 }
// "#
// ),
// [3, 5],
// [i64; 2]
// );
// }
#[test]
fn return_record_2() {
assert_evals_to!(
indoc!(
r#"
{ x: 3, y: 5 }
"#
),
[3, 5],
[i64; 2]
);
}
// #[test]
// fn return_record_3() {
// assert_evals_to!(
// indoc!(
// r#"
// { x: 3, y: 5, z: 4 }
// "#
// ),
// (3, 5, 4),
// (i64, i64, i64)
// );
// }
#[test]
fn return_record_3() {
assert_evals_to!(
indoc!(
r#"
{ x: 3, y: 5, z: 4 }
"#
),
(3, 5, 4),
(i64, i64, i64)
);
}
// #[test]
// fn return_record_4() {
// assert_evals_to!(
// indoc!(
// r#"
// { a: 3, b: 5, c: 4, d: 2 }
// "#
// ),
// [3, 5, 4, 2],
// [i64; 4]
// );
// }
#[test]
fn return_record_4() {
assert_evals_to!(
indoc!(
r#"
{ a: 3, b: 5, c: 4, d: 2 }
"#
),
[3, 5, 4, 2],
[i64; 4]
);
}
// #[test]
// fn return_record_5() {
// assert_evals_to!(
// indoc!(
// r#"
// { a: 3, b: 5, c: 4, d: 2, e: 1 }
// "#
// ),
// [3, 5, 4, 2, 1],
// [i64; 5]
// );
// }
#[test]
fn return_record_5() {
assert_evals_to!(
indoc!(
r#"
{ a: 3, b: 5, c: 4, d: 2, e: 1 }
"#
),
[3, 5, 4, 2, 1],
[i64; 5]
);
}
// #[test]
// fn return_record_6() {
// assert_evals_to!(
// indoc!(
// r#"
// { a: 3, b: 5, c: 4, d: 2, e: 1, f: 7 }
// "#
// ),
// [3, 5, 4, 2, 1, 7],
// [i64; 6]
// );
// }
#[test]
fn return_record_6() {
assert_evals_to!(
indoc!(
r#"
{ a: 3, b: 5, c: 4, d: 2, e: 1, f: 7 }
"#
),
[3, 5, 4, 2, 1, 7],
[i64; 6]
);
}
// #[test]
// fn return_record_7() {
// assert_evals_to!(
// indoc!(
// r#"
// { a: 3, b: 5, c: 4, d: 2, e: 1, f: 7, g: 8 }
// "#
// ),
// [3, 5, 4, 2, 1, 7, 8],
// [i64; 7]
// );
// }
#[test]
fn return_record_7() {
assert_evals_to!(
indoc!(
r#"
{ a: 3, b: 5, c: 4, d: 2, e: 1, f: 7, g: 8 }
"#
),
[3, 5, 4, 2, 1, 7, 8],
[i64; 7]
);
}
// #[test]
// fn return_record_float_int() {
// assert_evals_to!(
// indoc!(
// r#"
// { a: 3.14, b: 0x1 }
// "#
// ),
// (3.14, 0x1),
// (f64, i64)
// );
// }
#[test]
fn return_record_float_int() {
assert_evals_to!(
indoc!(
r#"
{ a: 3.14, b: 0x1 }
"#
),
(3.14, 0x1),
(f64, i64)
);
}
// #[test]
// fn return_record_int_float() {
// assert_evals_to!(
// indoc!(
// r#"
// { a: 0x1, b: 3.14 }
// "#
// ),
// (0x1, 3.14),
// (i64, f64)
// );
// }
#[test]
fn return_record_int_float() {
assert_evals_to!(
indoc!(
r#"
{ a: 0x1, b: 3.14 }
"#
),
(0x1, 3.14),
(i64, f64)
);
}
// #[test]
// fn return_record_float_float() {
// assert_evals_to!(
// indoc!(
// r#"
// { a: 6.28, b: 3.14 }
// "#
// ),
// (6.28, 3.14),
// (f64, f64)
// );
// }
#[test]
fn return_record_float_float() {
assert_evals_to!(
indoc!(
r#"
{ a: 6.28, b: 3.14 }
"#
),
(6.28, 3.14),
(f64, f64)
);
}
// #[test]
// fn return_record_float_float_float() {
// assert_evals_to!(
// indoc!(
// r#"
// { a: 6.28, b: 3.14, c: 0.1 }
// "#
// ),
// (6.28, 3.14, 0.1),
// (f64, f64, f64)
// );
// }
#[test]
fn return_record_float_float_float() {
assert_evals_to!(
indoc!(
r#"
{ a: 6.28, b: 3.14, c: 0.1 }
"#
),
(6.28, 3.14, 0.1),
(f64, f64, f64)
);
}
// #[test]
// fn return_nested_record() {
@@ -851,20 +825,20 @@ mod wasm_records {
// );
// }
#[test]
fn update_single_element_record() {
assert_evals_to!(
indoc!(
r#"
rec = { foo: 42}
// #[test]
// fn update_single_element_record() {
// assert_evals_to!(
// indoc!(
// r#"
// rec = { foo: 42}
{ rec & foo: rec.foo + 1 }
"#
),
43,
i64
);
}
// { rec & foo: rec.foo + 1 }
// "#
// ),
// 43,
// i64
// );
// }
// #[test]
// fn booleans_in_record() {