{- An automated benchmark built around the simple experiment described in:

   > https://neilmitchell.blogspot.com/2020/05/fixing-space-leaks-in-ghcide.html

   As an example project, it unpacks Cabal-3.2.0.0 in the local filesystem and
   loads the module 'Distribution.Simple'. The rationale for this choice is:

   - It's convenient to download with `cabal unpack Cabal-3.2.0.0`
   - It has very few dependencies, and all of them are already needed to build ghcide
   - Distribution.Simple has 235 transitive module dependencies, so it is a non-trivial workload

   The experiments are sequences of lsp commands scripted using lsp-test
   (a sketch of one such experiment follows this comment).
   A more refined approach would be to record and replay real IDE interactions,
   once the replay functionality is available in lsp-test.
   A more declarative approach would be to reuse ide-debug-driver:

   > https://github.com/digital-asset/daml/blob/master/compiler/damlc/ide-debug-driver/README.md

   The result of an experiment is a total duration in seconds after a preset
   number of iterations. There is ample room for improvement:

   - Statistical analysis to detect outliers and to automatically infer the
     number of iterations needed
   - GC stats analysis (currently -S is printed as part of the experiment)
   - Analysis of performance over the commit history of the project

   How to run:

   1. `cabal bench`
   2. `cabal exec cabal run ghcide-bench -- -- ghcide-bench-options`

   Note that the package database influences the response times of certain
   actions, e.g. code actions, and therefore the two methods above do not
   necessarily produce the same results.
 -}
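
{- A minimal sketch of one such lsp-test experiment, assuming the lsp-test API
   circa version 0.11: the functions quoted below ('openDoc',
   'waitForDiagnostics', 'getHover', 'runSession', 'fullCaps') exist in
   Language.Haskell.LSP.Test, but 'hoverExperiment' itself is hypothetical;
   the real experiments live in the Experiments module:

> import Language.Haskell.LSP.Test
> import Language.Haskell.LSP.Types (Position (..))
>
> -- Open a module in the example project, wait for it to be loaded,
> -- then request hover information at an arbitrary position.
> hoverExperiment :: Session ()
> hoverExperiment = do
>   doc <- openDoc "Distribution/Simple.hs" "haskell"
>   _ <- waitForDiagnostics
>   _ <- getHover doc (Position 10 5)
>   return ()
>
> -- Run it against ghcide from the root of the unpacked example project:
> -- runSession "ghcide --lsp" fullCaps "Cabal-3.2.0.0" hoverExperiment
-}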

{-# LANGUAGE ImplicitParams #-}

import Control.Exception.Safe
import Experiments
import Options.Applicative

main :: IO ()
main = do
  -- Parse the benchmark configuration from the command line
  config <- execParser $ info (configP <**> helper) fullDesc
  -- Make the configuration available to all helpers as an implicit parameter
  let ?config = config

  output "starting test"

  -- Set up the example project, obtaining an action that tears it down
  cleanUp <- setup

  -- Ensure the teardown runs even if a benchmark throws
  runBenchmarks experiments `finally` cleanUp
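
-- The ImplicitParams binding in 'main' presumably threads the configuration
-- to helpers such as 'output' and 'runBenchmarks' without explicit argument
-- passing. A minimal sketch of the pattern, with a hypothetical Config shape
-- (the real definitions live in the Experiments module):
--
-- > {-# LANGUAGE ImplicitParams #-}
-- > import Control.Monad (when)
-- >
-- > data Config = Config { verbose :: Bool }
-- >
-- > -- Callers in scope of `let ?config = ...` need not pass it explicitly
-- > output :: (?config :: Config) => String -> IO ()
-- > output msg = when (verbose ?config) (putStrLn msg)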