<h1>Bend</h1>
<p>A high-level, massively parallel programming language</p>
## Index
1. [Introduction](#introduction)
2. [Important Notes](#important-notes)
3. [Install](#install)
4. [Getting Started](#getting-started)
5. [Speedup Examples](#speedup-examples)
6. [Additional Resources](#additional-resources)

## Introduction

Bend offers the feel and features of expressive languages like Python and Haskell, including fast object allocation, full support for higher-order functions with closures, unrestricted recursion, and even continuations.

Bend scales like CUDA: it runs on massively parallel hardware like GPUs, with nearly linear speedup in the number of cores and no explicit parallelism annotations: no thread creation, locks, mutexes, or atomics.

Bend is powered by the [HVM2](https://github.com/higherorderco/hvm) runtime.
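For a small taste of that higher-order style, here is a sketch in Bend's Python-like syntax (the names `inc` and `apply_twice` are illustrative, not part of Bend or its standard library; these particular lines also happen to be valid Python, so you can sanity-check them either way):

```py
# A function is an ordinary value: pass it around, apply it twice.
def inc(x):
  return x + 1

def apply_twice(f, x):
  return f(f(x))

def main():
  return apply_twice(inc, 3)
```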
## Important Notes

* Bend is designed to excel at scaling performance with core count, supporting over 10,000 concurrent threads.
* The current version may have lower single-core performance.
* You can expect substantial performance improvements as we advance our code generation and optimization techniques.
* We are still working on Windows support; meanwhile, use [WSL2](https://learn.microsoft.com/en-us/windows/wsl/install) as an alternative.
* [Only NVIDIA GPUs are currently supported](https://github.com/HigherOrderCO/Bend/issues/341).

## Install

### Install dependencies

#### On Linux
```sh
# Install Rust if you haven't already.
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# For the C version of Bend, use GCC. We recommend version 12.x or earlier.
sudo apt install gcc
```
For the CUDA runtime, [install the CUDA toolkit for Linux](https://developer.nvidia.com/cuda-downloads?target_os=Linux), version 12.x.

#### On Mac
```sh
# Install Rust if you haven't already.
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# For the C version of Bend, use GCC. We recommend version 12.x or earlier.
brew install gcc
```
### Install Bend
1. Install HVM2 by running:
```sh
# HVM2 is HOC's massively parallel Interaction Combinator evaluator.
cargo install hvm
# This ensures HVM is correctly installed and accessible.
hvm --version
```
2. Install Bend by running:
```sh
# This command will install Bend
cargo install bend-lang
# This ensures Bend is correctly installed and accessible.
bend --version
```
## Getting Started
#### Running Bend Programs
```sh
bend run <file.bend> # uses the Rust interpreter (sequential)
bend run-c <file.bend> # uses the C interpreter (parallel)
bend run-cu <file.bend> # uses the CUDA interpreter (massively parallel)

# Notes:
# You can also compile Bend to standalone C/CUDA files using gen-c and gen-cu for maximum performance.
# The code generator is still in its early stages and not as mature as compilers like GCC and GHC.
# You can use the -s flag to print more information about the run:
#   - Reductions
#   - Time the code took to run
#   - Interactions per second (in millions)
```
#### Testing Bend Programs
The example below sums all the numbers in the range from `start` to `target`. It can be written in two different ways: one that is inherently sequential (and thus cannot be parallelized), and another that is easily parallelizable. (We will be using the `-s` flag in most examples, for the sake of visibility.)
#### Sequential version:
First, create a file named `sequential_sum.bend`
```sh
# Write this command on your terminal
touch sequential_sum.bend
```

Then, with your text editor, open `sequential_sum.bend`, copy the code below and paste it into the file:
```py
# Defines the function Sum with two parameters: start and target
def Sum(start, target):
  if start == target:
    # If the value of start is the same as target, return start.
    return start
  else:
    # If start is not equal to target, recursively call Sum with
    # start incremented by 1, and add the result to start.
    return start + Sum(start + 1, target)

def main():
  # This translates to (1 + (2 + (3 + (... + (999999 + 1000000)))))
  # Note that this will overflow the maximum value of a number in Bend
  return Sum(1, 1_000_000)
```
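The overflow comment above exists because Bend's default numbers are 24-bit (the unsigned type `u24`), so arithmetic wraps around modulo 2^24. A plain-Python model of that wrapping behavior (a sketch of the idea, not of Bend's runtime itself):

```python
# Model 24-bit unsigned arithmetic: every addition wraps modulo 2^24.
U24_MASK = (1 << 24) - 1

def wrapping_sum(start, target):
    total = 0
    for n in range(start, target + 1):
        total = (total + n) & U24_MASK
    return total

# The true sum 1 + 2 + ... + 1_000_000 is 500_000_500_000,
# far beyond the u24 maximum of 16_777_215, so the wrapped
# result differs from the mathematical one.
```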
##### Running the file
You can run it using the Rust interpreter (Sequential):
```sh
bend run sequential_sum.bend -s
```
Or you can run it using the C interpreter (Sequential):
```sh
bend run-c sequential_sum.bend -s
```
If you have an NVIDIA GPU, you can also run it in CUDA (Sequential):
```sh
bend run-cu sequential_sum.bend -s
```
In this version, the next value to be calculated depends on the previous sum, meaning that it cannot proceed until the current computation is complete. Now, let's look at the easily parallelizable version.
#### Parallelizable version:
First, close the old file, then go back to your terminal and create `parallel_sum.bend`:
```sh
# Write this command on your terminal
touch parallel_sum.bend
```

Then, with your text editor, open `parallel_sum.bend`, copy the code below and paste it into the file:
```py
# Defines the function Sum with two parameters: start and target
def Sum(start, target):
  if start == target:
    # If the value of start is the same as target, return start.
    return start
  else:
    # If start is not equal to target, calculate the midpoint (half),
    # then recursively call Sum on both halves.
    half = (start + target) / 2
    left = Sum(start, half)        # (start -> half)
    right = Sum(half + 1, target)  # (half + 1 -> target)
    return left + right

# A parallelizable sum of the numbers from 1 to 1000000
def main():
  # This translates to (((1 + 2) + (3 + 4)) + ... + (999999 + 1000000))
  return Sum(1, 1_000_000)
```
In this example, the (3 + 4) sum does not depend on the (1 + 2) sum, so both computations can happen at the same time.
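The two recursion shapes can be compared side by side in plain Python (a sketch for intuition only; `//` stands in for Bend's integer division):

```python
def seq_sum(start, target):
    # Sequential: each addition depends on the previous recursive call,
    # forming a chain that must be evaluated one step at a time.
    if start == target:
        return start
    return start + seq_sum(start + 1, target)

def par_sum(start, target):
    # Divide and conquer: the two halves share no data,
    # so a parallel runtime can evaluate them at the same time.
    if start == target:
        return start
    half = (start + target) // 2
    return par_sum(start, half) + par_sum(half + 1, target)
```

Both return the same value; only the shape of the dependency graph differs, and that shape is what Bend exploits.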
##### Running the file
You can run it using the Rust interpreter (Sequential):
```sh
bend run parallel_sum.bend -s
```
Or you can run it using the C interpreter (Parallel):
```sh
bend run-c parallel_sum.bend -s
```
If you have an NVIDIA GPU, you can also run it in CUDA (Massively parallel):
```sh
bend run-cu parallel_sum.bend -s
```
In Bend, code is parallelized by just changing the run command. If your code **can** run in parallel, it **will** run in parallel.
## Speedup Examples
The code snippet below implements a [bitonic sorter](https://en.wikipedia.org/wiki/Bitonic_sorter) with *immutable tree rotations*. It's not the type of algorithm you would expect to run fast on GPUs. However, since it uses a divide-and-conquer approach, which is inherently parallel, Bend will execute it on multiple threads, with no thread creation and no explicit lock management.
2024-05-06 20:55:33 +03:00
#### Bitonic Sorter Benchmark
- `bend run`: CPU, Apple M3 Max: 12.15 seconds
- `bend run-c`: CPU, Apple M3 Max: 0.96 seconds
- `bend run-cu`: GPU, NVIDIA RTX 4090: 0.21 seconds
<details>
<summary><b>Click here for the Bitonic Sorter code</b></summary>
```py
# Sorting Network = just rotate trees!
def sort(d, s, tree):
  switch d:
    case 0:
      return tree
    case _:
      (x,y) = tree
      lft = sort(d-1, 0, x)
      rgt = sort(d-1, 1, y)
      return rots(d, s, (lft, rgt))

# Rotates sub-trees (Blue/Green Box)
def rots(d, s, tree):
  switch d:
    case 0:
      return tree
    case _:
      (x,y) = tree
      return down(d, s, warp(d-1, s, x, y))

# Swaps distant values (Red Box)
def warp(d, s, a, b):
  switch d:
    case 0:
      return swap(s ^ (a > b), a, b)
    case _:
      (a.a, a.b) = a
      (b.a, b.b) = b
      (A.a, A.b) = warp(d-1, s, a.a, b.a)
      (B.a, B.b) = warp(d-1, s, a.b, b.b)
      return ((A.a,B.a),(A.b,B.b))

# Propagates downwards
def down(d, s, t):
  switch d:
    case 0:
      return t
    case _:
      (t.a, t.b) = t
      return (rots(d-1, s, t.a), rots(d-1, s, t.b))

# Swaps a single pair
def swap(s, a, b):
  switch s:
    case 0:
      return (a,b)
    case _:
      return (b,a)

# Testing
# -------

# Generates a big tree
def gen(d, x):
  switch d:
    case 0:
      return x
    case _:
      return (gen(d-1, x * 2 + 1), gen(d-1, x * 2))

# Sums a big tree
def sum(d, t):
  switch d:
    case 0:
      return t
    case _:
      (t.a, t.b) = t
      return sum(d-1, t.a) + sum(d-1, t.b)

# Sorts a big tree
def main:
  return sum(20, sort(20, 0, gen(20, 0)))
```
</details>
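Because the algorithm is pure (no mutation, no shared state), it transcribes almost line for line into ordinary Python, which is handy for studying the logic sequentially. A sketch of that transcription (`tree_sum` replaces Bend's `sum` to avoid shadowing Python's builtin, and `flatten` is an added helper for inspecting the leaves; it gains nothing performance-wise):

```python
# Sequential Python transcription of the bitonic sorter above.
# Trees are nested pairs (tuples); leaves are numbers.

def sort(d, s, tree):
    if d == 0:
        return tree
    x, y = tree
    return rots(d, s, (sort(d - 1, 0, x), sort(d - 1, 1, y)))

def rots(d, s, tree):
    if d == 0:
        return tree
    x, y = tree
    return down(d, s, warp(d - 1, s, x, y))

def warp(d, s, a, b):
    if d == 0:
        return swap(s ^ (a > b), a, b)
    aa, ab = a
    ba, bb = b
    Aa, Ab = warp(d - 1, s, aa, ba)
    Ba, Bb = warp(d - 1, s, ab, bb)
    return ((Aa, Ba), (Ab, Bb))

def down(d, s, t):
    if d == 0:
        return t
    ta, tb = t
    return (rots(d - 1, s, ta), rots(d - 1, s, tb))

def swap(s, a, b):
    return (a, b) if s == 0 else (b, a)

def gen(d, x):
    # Generates a tree whose 2^d leaves are 0 .. 2^d - 1, unsorted.
    if d == 0:
        return x
    return (gen(d - 1, x * 2 + 1), gen(d - 1, x * 2))

def tree_sum(d, t):
    if d == 0:
        return t
    ta, tb = t
    return tree_sum(d - 1, ta) + tree_sum(d - 1, tb)

def flatten(t):
    # Collects leaves left to right.
    return [t] if not isinstance(t, tuple) else flatten(t[0]) + flatten(t[1])
```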

If you are interested in other algorithms, check out our [examples folder](https://github.com/HigherOrderCO/Bend/tree/main/examples).
## Additional Resources
- To understand the technology behind Bend, check out the HVM2 [paper](https://docs.google.com/viewer?url=https://raw.githubusercontent.com/HigherOrderCO/HVM/main/paper/PAPER.pdf).
- We are working on official documentation; meanwhile, for a more in-depth explanation, check out [GUIDE.md](https://github.com/HigherOrderCO/Bend/blob/main/GUIDE.md).
- Read about our features at [FEATURES.md](https://github.com/HigherOrderCO/Bend/blob/main/FEATURES.md)
- Bend is developed by [HigherOrderCO](https://higherorderco.com/) - join our [Discord](https://discord.higherorderco.com)!