-- juvix/app/GlobalOptions.hs
module GlobalOptions
  ( module GlobalOptions,
    module Juvix.Data.Effect.TaggedLock,
  )
where
import CommonOptions
import Juvix.Compiler.Core.Options qualified as Core
import Juvix.Compiler.Internal.Pretty.Options qualified as Internal
import Juvix.Compiler.Pipeline
import Juvix.Compiler.Pipeline.Root
import Juvix.Data.Effect.TaggedLock
import Juvix.Data.Error.GenericError qualified as E
import Juvix.Data.Field
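
-- | Global options shared by every @juvix@ subcommand, populated from the
-- command line by 'parseGlobalFlags' below.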
data GlobalOptions = GlobalOptions
  { _globalNoColors :: Bool,
    _globalShowNameIds :: Bool,
    _globalBuildDir :: Maybe (AppPath Dir),
    _globalIdeEndErrorChar :: Maybe Char,
    _globalStdin :: Bool,
    _globalNoTermination :: Bool,
    _globalNoPositivity :: Bool,
    _globalNoCoverage :: Bool,
    _globalNoStdlib :: Bool,
    _globalUnrollLimit :: Int,
    _globalNumThreads :: NumThreads,
    _globalFieldSize :: Maybe Natural,
    _globalOffline :: Bool,
    _globalLogLevel :: LogLevel,
    _globalDevShowThreadIds :: Bool
  }
  deriving stock (Eq, Show)

makeLenses ''GlobalOptions
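
-- The 'CanonicalProjection' instances below let each compiler subsystem
-- extract just the options it needs from the global ones via 'project'.
-- A minimal sketch of the intended use (assuming a value
-- @globalOpts :: GlobalOptions@ is in scope):
--
-- > internalOpts :: Internal.Options
-- > internalOpts = project globalOpts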
instance CanonicalProjection GlobalOptions Internal.Options where
  project g =
    Internal.defaultOptions
      { Internal._optShowNameIds = g ^. globalShowNameIds
      }
instance CanonicalProjection GlobalOptions E.GenericOptions where
  project GlobalOptions {..} =
    E.GenericOptions
      { E._showNameIds = _globalShowNameIds
      }
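
-- Coverage checking, the unroll limit, and the field size come straight
-- from the global flags; the optimization level and inlining depth are
-- filled in with their library defaults.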
instance CanonicalProjection GlobalOptions Core.CoreOptions where
  project GlobalOptions {..} =
    Core.CoreOptions
      { Core._optCheckCoverage = not _globalNoCoverage,
        Core._optUnrollLimit = _globalUnrollLimit,
        Core._optFieldSize = fromMaybe defaultFieldSize _globalFieldSize,
        Core._optOptimizationLevel = defaultOptimizationLevel,
        Core._optInliningDepth = defaultInliningDepth
      }
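
-- | Default values for 'GlobalOptions', used when the corresponding flag
-- is not given on the command line.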
defaultGlobalOptions :: GlobalOptions
defaultGlobalOptions =
  GlobalOptions
    { _globalNoColors = False,
      _globalNumThreads = defaultNumThreads,
      _globalShowNameIds = False,
      _globalIdeEndErrorChar = Nothing,
      _globalNoTermination = False,
      _globalBuildDir = Nothing,
      _globalStdin = False,
      _globalNoPositivity = False,
      _globalLogLevel = LogLevelProgress,
      _globalNoCoverage = False,
      _globalNoStdlib = False,
      _globalUnrollLimit = defaultUnrollLimit,
      _globalFieldSize = Nothing,
      _globalDevShowThreadIds = False,
      _globalOffline = False
    }
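
-- Individual defaults can be overridden through the lenses generated by
-- 'makeLenses' above; a minimal sketch (assuming the standard lens 'set';
-- @testOpts@ is a hypothetical name):
--
-- > testOpts :: GlobalOptions
-- > testOpts = set globalNoColors True defaultGlobalOptions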
-- | Parser for the global options and flags shared by all subcommands.
parseGlobalFlags :: Parser GlobalOptions
parseGlobalFlags = do
  _globalNoColors <-
    switch
      ( long "no-colors"
          <> help "Disable ANSI formatting"
      )
  _globalBuildDir <-
    optional
      ( parseBuildDir
          ( long "internal-build-dir"
              <> help "Directory for compiler internal output"
          )
      )
  _globalStdin <-
    switch
      ( long "stdin"
          <> help "Read from Stdin"
      )
  _globalIdeEndErrorChar <-
    optional $
      option
        readMChar
        ( long "ide-end-error-char"
            <> help "End error message with the given character in order to facilitate parsing"
        )
  _globalNoTermination <-
    switch
      ( long "no-termination"
          <> help "Disable termination checking"
      )
  _globalNoPositivity <-
    switch
      ( long "no-positivity"
          <> help "Disable positivity checking for inductive types"
      )
  _globalNoCoverage <-
    switch
      ( long "no-coverage"
          <> help "Disable coverage checking for patterns"
      )
  _globalNoStdlib <-
    switch
      ( long "no-stdlib"
          <> help "Do not use the standard library"
      )
  _globalFieldSize <-
    option
      fieldSizeOpt
      ( long "field-size"
          <> value Nothing
          <> help "Field type size [cairo,small,11] (default: small)"
      )
  _globalUnrollLimit <-
    option
      (fromIntegral <$> naturalNumberOpt)
      ( long "unroll"
          <> value defaultUnrollLimit
          <> help ("Recursion unrolling limit (default: " <> show defaultUnrollLimit <> ")")
      )
  _globalOffline <-
    switch
      ( long "offline"
          <> help "Disable access to network resources"
      )
  _globalLogLevel <-
    option
      (enumReader Proxy)
      ( long "log-level"
          <> metavar "LOG_LEVEL"
          <> completer (enumCompleter @LogLevel Proxy)
          <> value defaultLogLevel
          <> help
            ( "Determines how much logging the compiler produces. "
                <> intercalate " < " [show l | l <- allElements @LogLevel]
            )
      )
  _globalShowNameIds <-
    switch
      ( long "show-name-ids"
          <> help "[DEV] Show the unique number of each identifier when pretty printing"
      )
Parallel pipeline (#2779) This pr introduces parallelism in the pipeline to gain performance. I've included benchmarks at the end. - Closes #2750. # Flags: There are two new global flags: 1. `-N / --threads`. It is used to set the number of capabilities. According to [GHC documentation](https://hackage.haskell.org/package/base-4.20.0.0/docs/GHC-Conc.html#v:setNumCapabilities): _Set the number of Haskell threads that can run truly simultaneously (on separate physical processors) at any given time_. When compiling in parallel, we create this many worker threads. The default value is `-N auto`, which sets `-N` to half the number of logical cores, capped at 8. 2. `--dev-show-thread-ids`. When given, the thread id is printed in the compilation progress log. E.g. ![image](https://github.com/anoma/juvix/assets/5511599/9359fae2-0be1-43e5-8d74-faa82cba4034) # Parallel compilation 1. I've added `src/Parallel/ParallelTemplate.hs` which contains all the concurrency related code. I think it is good to keep this code separated from the actual compiler code. 2. I've added a progress log (only for the parallel driver) that outputs a log of the compilation progress, similar to what stack/cabal do. # Code changes: 1. I've removed the `setup` stage where we were registering dependencies. Instead, the dependencies are registered when the `pathResolver` is run for the first time. This way it is safer. 1. Now the `ImportTree` is needed to run the pipeline. Cycles are detected during the construction of this tree, so I've removed `Reader ImportParents` from the pipeline. 3. For the package pathresolver, we do not support parallelism yet (we could add support for it in the future, but the gains will be small). 4. When `-N1`, the pipeline remains unchanged, so performance should be the same as in the main branch (except there is a small performance degradation due to adding the `-threaded` flag). 5. I've introduced `PipelineOptions`, which are options that are used to pass options to the effects in the pipeline. 6. `PathResolver` constraint has been removed from the `upTo*` functions in the pipeline due to being redundant. 7. I've added a lot of `NFData` instances. They are needed to force the full evaluation of `Stored.ModuleInfo` in each of the threads. 2. The `Cache` effect uses [`SharedState`](https://hackage.haskell.org/package/effectful-core-2.3.0.1/docs/Effectful-State-Static-Shared.html) as opposed to [`LocalState`](https://hackage.haskell.org/package/effectful-core-2.3.0.1/docs/Effectful-Writer-Static-Local.html). Perhaps we should provide different versions. 3. I've added a `Cache` handler that accepts a setup function. The setup is triggered when a miss is detected. It is used to lazily compile the modules in parallel. # Tests 1. I've adapted the smoke test suite to ignore the progress log in the stderr. 5. I've had to adapt `tests/positive/Internal/Lambda.juvix`. Due to laziness, a crash happening in this file was not being caught. The problem is that in this file we have a lambda function with different number of patterns in their clauses, which we currently do not support (https://github.com/anoma/juvix/issues/1706). 6. I've had to comment out the definition ``` x : Box ((A : Type) → A → A) := box λ {A a := a}; ``` From the test as it was causing a crash (https://github.com/anoma/juvix/issues/2247). # Future Work 1. It should be investigated how much performance we lose by fully evaluating the `Stored.ModuleInfo`, since some information in it will be discarded. 
It may be possible to be more fine-grained when forcing evaluation.
2. The scanning of imports to build the import tree is sequential. Currently, we build the import tree starting from the entry point module, so only modules transitively imported from it appear in the tree. However, we have discussed that at some point we should make a distinction between `juvix` _the compiler_ and `juvix` _the build tool_. When using `juvix` as a build tool it makes sense to typecheck/compile (to stored core) all modules in the project. When/if we do this, scanning the imports of all modules in parallel becomes trivial.
3. The implementation of the `ParallelTemplate` uses low-level primitives such as [forkIO](https://hackage.haskell.org/package/base-4.20.0.0/docs/Control-Concurrent.html#v:forkIO). At some point it should be refactored to use safer functions from the [`Effectful.Concurrent.Async`](https://hackage.haskell.org/package/effectful-2.3.0.0/docs/Effectful-Concurrent-Async.html) module.
4. The number of cores and worker threads that we spawn is determined by the command line. Ideally, we could use the import tree to compute an upper bound on the number of cores worth using.
5. We could add an animation that displays which modules are being compiled in parallel and which have finished compiling.

# Benchmarks

On some benchmarks, I include the GHC runtime option [`-A`](https://downloads.haskell.org/ghc/latest/docs/users_guide/runtime_control.html#rts-flag--A%20%E2%9F%A8size%E2%9F%A9), which can have a significant impact on performance. Thanks to @paulcadman for pointing this out. I found a good combination of `-N` and `-A` through trial and error (though this obviously depends on the CPU and the Juvix project).

## Typecheck the standard library

### Clean run (88% faster than main):

```
hyperfine --warmup 1 --prepare 'juvix clean' 'juvix -N 4 typecheck Stdlib/Prelude.juvix +RTS -A33554432' 'juvix -N 4 typecheck Stdlib/Prelude.juvix' 'juvix-main typecheck Stdlib/Prelude.juvix'

Benchmark 1: juvix -N 4 typecheck Stdlib/Prelude.juvix +RTS -A33554432
  Time (mean ± σ):     444.1 ms ±   6.5 ms    [User: 1018.0 ms, System: 77.7 ms]
  Range (min … max):   432.6 ms … 455.9 ms    10 runs

Benchmark 2: juvix -N 4 typecheck Stdlib/Prelude.juvix
  Time (mean ± σ):     628.3 ms ±  23.9 ms    [User: 1227.6 ms, System: 69.5 ms]
  Range (min … max):   584.7 ms … 670.6 ms    10 runs

Benchmark 3: juvix-main typecheck Stdlib/Prelude.juvix
  Time (mean ± σ):     835.9 ms ±  12.3 ms    [User: 788.5 ms, System: 31.9 ms]
  Range (min … max):   816.0 ms … 853.6 ms    10 runs

Summary
  juvix -N 4 typecheck Stdlib/Prelude.juvix +RTS -A33554432 ran
    1.41 ± 0.06 times faster than juvix -N 4 typecheck Stdlib/Prelude.juvix
    1.88 ± 0.04 times faster than juvix-main typecheck Stdlib/Prelude.juvix
```

### Cached run (43% faster than main):

```
hyperfine --warmup 1 'juvix -N 4 typecheck Stdlib/Prelude.juvix +RTS -A33554432' 'juvix -N 4 typecheck Stdlib/Prelude.juvix' 'juvix-main typecheck Stdlib/Prelude.juvix'

Benchmark 1: juvix -N 4 typecheck Stdlib/Prelude.juvix +RTS -A33554432
  Time (mean ± σ):     241.3 ms ±   7.3 ms    [User: 538.6 ms, System: 101.3 ms]
  Range (min … max):   231.5 ms … 251.3 ms    11 runs

Benchmark 2: juvix -N 4 typecheck Stdlib/Prelude.juvix
  Time (mean ± σ):     235.1 ms ±  12.0 ms    [User: 405.3 ms, System: 87.7 ms]
  Range (min … max):   216.1 ms … 253.1 ms    12 runs

Benchmark 3: juvix-main typecheck Stdlib/Prelude.juvix
  Time (mean ± σ):     336.7 ms ±  13.3 ms    [User: 269.5 ms, System: 67.1 ms]
  Range (min … max):   316.9 ms … 351.8 ms    10 runs

Summary
  juvix -N 4 typecheck Stdlib/Prelude.juvix ran
    1.03 ± 0.06 times faster than juvix -N 4 typecheck Stdlib/Prelude.juvix +RTS -A33554432
    1.43 ± 0.09 times faster than juvix-main typecheck Stdlib/Prelude.juvix
```

## Typecheck the test suite of the containers library

At the moment this is the biggest Juvix project that we have.

### Clean run (105% faster than main)

```
hyperfine --warmup 1 --prepare 'juvix clean' 'juvix -N 6 typecheck Main.juvix +RTS -A67108864' 'juvix -N 4 typecheck Main.juvix' 'juvix-main typecheck Main.juvix'

Benchmark 1: juvix -N 6 typecheck Main.juvix +RTS -A67108864
  Time (mean ± σ):      1.006 s ±  0.011 s    [User: 2.171 s, System: 0.162 s]
  Range (min … max):    0.991 s …  1.023 s    10 runs

Benchmark 2: juvix -N 4 typecheck Main.juvix
  Time (mean ± σ):      1.584 s ±  0.046 s    [User: 2.934 s, System: 0.149 s]
  Range (min … max):    1.535 s …  1.660 s    10 runs

Benchmark 3: juvix-main typecheck Main.juvix
  Time (mean ± σ):      2.066 s ±  0.010 s    [User: 1.939 s, System: 0.089 s]
  Range (min … max):    2.048 s …  2.077 s    10 runs

Summary
  juvix -N 6 typecheck Main.juvix +RTS -A67108864 ran
    1.57 ± 0.05 times faster than juvix -N 4 typecheck Main.juvix
    2.05 ± 0.03 times faster than juvix-main typecheck Main.juvix
```

### Cached run (54% faster than main)

```
hyperfine --warmup 1 'juvix -N 6 typecheck Main.juvix +RTS -A33554432' 'juvix -N 4 typecheck Main.juvix' 'juvix-main typecheck Main.juvix'

Benchmark 1: juvix -N 6 typecheck Main.juvix +RTS -A33554432
  Time (mean ± σ):     551.8 ms ±  13.2 ms    [User: 1419.8 ms, System: 199.4 ms]
  Range (min … max):   535.2 ms … 570.6 ms    10 runs

Benchmark 2: juvix -N 4 typecheck Main.juvix
  Time (mean ± σ):     636.7 ms ±  17.3 ms    [User: 1006.3 ms, System: 196.3 ms]
  Range (min … max):   601.6 ms … 655.3 ms    10 runs

Benchmark 3: juvix-main typecheck Main.juvix
  Time (mean ± σ):     847.2 ms ±  58.9 ms    [User: 710.1 ms, System: 126.5 ms]
  Range (min … max):   731.1 ms … 890.0 ms    10 runs

Summary
  juvix -N 6 typecheck Main.juvix +RTS -A33554432 ran
    1.15 ± 0.04 times faster than juvix -N 4 typecheck Main.juvix
    1.54 ± 0.11 times faster than juvix-main typecheck Main.juvix
```
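As a rough sketch of the worker-thread pattern described above (illustrative only, not the actual `ParallelTemplate` code: `compileModule` and the job list are placeholders, and scheduling that respects the import DAG is omitted):

```
import Control.Concurrent (forkIO, getNumCapabilities)
import Control.Concurrent.Chan (newChan, readChan, writeChan)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.DeepSeq (NFData, force)
import Control.Exception (evaluate)
import Control.Monad (forM, forever, replicateM_)

-- One worker per capability; each worker forces its result to normal
-- form (hence the NFData constraint), so the compilation work happens
-- on the worker thread rather than lazily in the consumer.
runWorkers :: (NFData b) => (a -> IO b) -> [a] -> IO [b]
runWorkers compileModule jobs = do
  n <- getNumCapabilities
  jobChan <- newChan
  -- Enqueue every job together with the MVar that will hold its result.
  resVars <- forM jobs $ \job -> do
    var <- newEmptyMVar
    writeChan jobChan (job, var)
    pure var
  -- Spawn the workers; each repeatedly pulls a job and publishes a
  -- fully evaluated result.
  replicateM_ n $ forkIO $ forever $ do
    (job, var) <- readChan jobChan
    res <- evaluate . force =<< compileModule job
    putMVar var res
  -- Collect the results in the original job order.
  mapM takeMVar resVars
```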
2024-05-31 14:41:30 +03:00
_globalDevShowThreadIds <-
switch
( long "dev-show-thread-ids"
<> help "[DEV] Show the thread id when compiling a module"
)
_globalNumThreads <- parseNumThreads
2022-09-14 17:16:15 +03:00
return GlobalOptions {..}
parseBuildDir :: Mod OptionFields (Prepath Dir) -> Parser (AppPath Dir)
parseBuildDir m = do
_pathPath <-
option
somePreDirOpt
( metavar "BUILD_DIR"
<> action "directory"
<> m
)
pure AppPath {_pathIsInput = False, ..}
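A usage sketch for `parseBuildDir`: callers supply the flag-specific modifiers. The flag name and help text below are illustrative assumptions, not necessarily the real CLI:

```
-- Illustrative caller of parseBuildDir; the concrete flag name and
-- help text are assumptions.
parseInternalBuildDir :: Parser (AppPath Dir)
parseInternalBuildDir =
  parseBuildDir
    ( long "internal-build-dir"
        <> help "Directory for compiler-generated intermediate files"
    )
```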
Make `compile` targets a subcommand instead of a flag (#2700)

# Changes

The main goal of this PR is to remove the `--target` flag for `juvix compile` and use subcommands instead. The targets that are relevant to normal users are found in `juvix compile --help`. Targets that are relevant only to developers are found in `juvix dev compile --help`. Below I list some of the changes in more detail.

## Compile targets for user-facing languages

- `juvix compile native`
- `juvix compile wasi`. I wasn't sure what to call this: `wasm`, `wasm32-wasi`, etc. In the end I thought `wasi` was short and accurate, but we can change it.
- `juvix compile vampir`
- `juvix compile anoma`
- `juvix compile cairo`

## *New* compile targets for internal languages

See `juvix dev compile --help`.

1. `dev compile core` has the same behaviour as `dev core from-concrete`, which is therefore redundant at the moment.
2. `dev compile tree` compiles to Tree and prints the InfoTable to the output file without any additional checks.
3. `dev compile reg` compiles to Reg and prints the InfoTable to the output file without any additional checks.
4. `dev compile asm` compiles to Asm and prints the InfoTable to the output file without any additional checks.
5. `dev compile casm` compiles to Casm and prints the Result to the output file without any additional checks. TODO: should the Result be printed or something else? At the moment the Result lacks a pretty instance.

## Optional input file

The input file for commands that expect a .juvix file as input is now optional. If the argument is omitted, the main file given in the package.yaml will be used. This applies to the following commands:

1. `juvix compile [native|wasi|geb|vampir|anoma|cairo]`
2. `juvix dev compile [core|reg|tree|casm|asm]`
3. `juvix html`
4. `juvix markdown`
5. `juvix dev internal [typecheck|pretty]`
6. `juvix dev [parse|scope]`

Note that `juvix format` has not changed its behaviour.

## Refactor some C-like compiler flags

Both `juvix compile native` and `juvix compile wasi` support `--only-c` (`-C`), `--only-preprocess` (`-E`), and `--only-assemble` (`-S`). I propose to deviate from the `gcc` style and instead use a flag with a single argument: `--cstage [source|preprocess|assembly|exec(default)]`. I'm open to suggestions. For now, I've kept the legacy flags but marked them as deprecated in the help message. A sketch of such a parser follows this description.

## Remove code duplication

I've tried to reduce code duplication. This is sometimes in tension with code readability, so I've tried to find a good balance, making it so we don't have to jump to many different files to understand what a single command is doing. I'm sure there is still room for improvement.

## Other refactors

I've implemented other small refactors that I considered improved the quality of the code.

## TODO/Future work

We should refactor the commands (under `compile dev`) that still use `module Commands.Extra.Compile`, and then remove it.
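A minimal sketch of what the parser for the proposed `--cstage` flag could look like with optparse-applicative (the `CStage` type, reader, and default are hypothetical, not the actual implementation):

```
-- Hypothetical type and parser for the proposed --cstage flag.
data CStage = CSource | CPreprocess | CAssembly | CExec
  deriving (Eq, Show)

parseCStage :: Parser CStage
parseCStage =
  option
    (eitherReader readCStage)
    ( long "cstage"
        <> metavar "CSTAGE"
        <> value CExec
        <> showDefaultWith (const "exec")
        <> help "C compilation stage: source|preprocess|assembly|exec"
    )
  where
    readCStage :: String -> Either String CStage
    readCStage s = case s of
      "source" -> Right CSource
      "preprocess" -> Right CPreprocess
      "assembly" -> Right CAssembly
      "exec" -> Right CExec
      _ -> Left ("unknown stage: " <> s)
```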
2024-04-09 14:29:07 +03:00
entryPointFromGlobalOptionsPre ::
(Members '[TaggedLock, EmbedIO] r) =>
Root ->
Improve performance of formatting a project (#2863)

Currently, formatting a project is equivalent to running `juvix format` on each individual file, so the performance is quadratic in the number of modules in the project. This PR fixes that: now we only process each module once.

# Benchmark (1236% faster :rocket:)

Checking the standard library:

```
hyperfine --warmup 1 'juvix format --check' 'juvix-main format --check'

Benchmark 1: juvix format --check
  Time (mean ± σ):     450.6 ms ±  33.7 ms    [User: 707.2 ms, System: 178.7 ms]
  Range (min … max):   396.0 ms … 497.0 ms    10 runs

Benchmark 2: juvix-main format --check
  Time (mean ± σ):      6.019 s ±  0.267 s    [User: 9.333 s, System: 1.512 s]
  Range (min … max):    5.598 s …  6.524 s    10 runs

Summary
  juvix format --check ran
    13.36 ± 1.16 times faster than juvix-main format --check
```

# Other changes:

1. The `EntryPoint` field `entryPointModulePath` is now optional.
2. I've introduced a new type `TopModulePathKey`, which is analogous to `TopModulePath` but without location information. It is used in hashmap keys where the location in the key is never used. This is useful as we can now get a `TopModulePathKey` from a `Path Rel File`.
3. I've refactored the `_formatInput` field in `FormatOptions` so that it doesn't need to be a special case anymore.
4. I've introduced a new effect, `Forcing`, that allows forcing individual fields of a record type with a convenient syntax.
5. I've refactored some of the constraints in scoping so that they only require `Reader Package` instead of `Reader EntryPoint`.
6. I've introduced a new type family so that local modules are no longer required to carry a `ModuleId` in their type. Before, they were assigned one, but it was never used.

# Future work:

1. For project-wise formatting, the compilation is done in parallel, but the formatting is still done sequentially. That should be improved.
2024-07-01 19:05:24 +03:00
Maybe (Prepath File) ->
GlobalOptions ->
Sem r EntryPoint
entryPointFromGlobalOptionsPre root premainFile opts = do
mainFile <- mapM (prepathToAbsFile (root ^. rootInvokeDir)) premainFile
entryPointFromGlobalOptions root mainFile opts
entryPointFromGlobalOptions ::
(Members '[TaggedLock, EmbedIO] r) =>
Root ->
Maybe (Path Abs File) ->
GlobalOptions ->
Sem r EntryPoint
entryPointFromGlobalOptions root mainFile opts = do
mabsBuildDir :: Maybe (Path Abs Dir) <- liftIO (mapM (prepathToAbsDir cwd) optBuildDir)
pkg <- readPackageRootIO root
let def :: EntryPoint
def = defaultEntryPoint pkg root mainFile
return
def
{ _entryPointNoTermination = opts ^. globalNoTermination,
_entryPointNoPositivity = opts ^. globalNoPositivity,
_entryPointNoCoverage = opts ^. globalNoCoverage,
_entryPointNoStdlib = opts ^. globalNoStdlib,
_entryPointUnrollLimit = opts ^. globalUnrollLimit,
_entryPointGenericOptions = project opts,
_entryPointBuildDir = maybe (def ^. entryPointBuildDir) (CustomBuildDir . Abs) mabsBuildDir,
_entryPointOffline = opts ^. globalOffline,
_entryPointFieldSize = fromMaybe defaultFieldSize $ opts ^. globalFieldSize
}
where
optBuildDir :: Maybe (Prepath Dir)
optBuildDir = fmap (^. pathPath) (opts ^. globalBuildDir)
cwd = root ^. rootInvokeDir
entryPointFromGlobalOptionsNoFile ::
(Members '[TaggedLock, EmbedIO] r) =>
Root ->
GlobalOptions ->
Sem r EntryPoint
entryPointFromGlobalOptionsNoFile root opts = do
mabsBuildDir :: Maybe (Path Abs Dir) <- mapM (prepathToAbsDir cwd) optBuildDir
pkg <- readPackageRootIO root
let def :: EntryPoint
def = defaultEntryPointNoFile pkg root
return
def
{ _entryPointNoTermination = opts ^. globalNoTermination,
_entryPointNoPositivity = opts ^. globalNoPositivity,
_entryPointNoCoverage = opts ^. globalNoCoverage,
_entryPointNoStdlib = opts ^. globalNoStdlib,
_entryPointUnrollLimit = opts ^. globalUnrollLimit,
_entryPointGenericOptions = project opts,
_entryPointBuildDir = maybe (def ^. entryPointBuildDir) (CustomBuildDir . Abs) mabsBuildDir,
_entryPointOffline = opts ^. globalOffline,
_entryPointFieldSize = fromMaybe defaultFieldSize $ opts ^. globalFieldSize
}
where
optBuildDir :: Maybe (Prepath Dir)
optBuildDir = fmap (^. pathPath) (opts ^. globalBuildDir)
cwd = root ^. rootInvokeDir
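Note that `entryPointFromGlobalOptions` and `entryPointFromGlobalOptionsNoFile` perform the same record update. A minimal sketch of a shared helper that would remove the duplication (the helper name is hypothetical):

```
-- Hypothetical helper shared by entryPointFromGlobalOptions and
-- entryPointFromGlobalOptionsNoFile: applies the global CLI options
-- to a default EntryPoint.
applyGlobalEntryPointOptions :: GlobalOptions -> Maybe (Path Abs Dir) -> EntryPoint -> EntryPoint
applyGlobalEntryPointOptions opts mabsBuildDir def =
  def
    { _entryPointNoTermination = opts ^. globalNoTermination,
      _entryPointNoPositivity = opts ^. globalNoPositivity,
      _entryPointNoCoverage = opts ^. globalNoCoverage,
      _entryPointNoStdlib = opts ^. globalNoStdlib,
      _entryPointUnrollLimit = opts ^. globalUnrollLimit,
      _entryPointGenericOptions = project opts,
      _entryPointBuildDir = maybe (def ^. entryPointBuildDir) (CustomBuildDir . Abs) mabsBuildDir,
      _entryPointOffline = opts ^. globalOffline,
      _entryPointFieldSize = fromMaybe defaultFieldSize (opts ^. globalFieldSize)
    }
```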