-- semantic/bench/evaluation/Main.hs

{-# LANGUAGE DataKinds, FlexibleContexts, TypeFamilies, TypeApplications #-}
module Main where

import Control.Monad
import Criterion.Main
import qualified Data.Language as Language
import Data.Proxy
import Parsing.Parser
import Semantic.Config (defaultOptions)
import Semantic.Task (withOptions)
import Semantic.Util hiding (evalRubyProject, evalPythonProject, evaluateProject)
-- These definitions duplicate their Semantic.Util counterparts, but with logging shut off.
evalRubyProject = justEvaluating <=< evaluateProject (Proxy :: Proxy 'Language.Ruby) rubyParser
evalPythonProject = justEvaluating <=< evaluateProject (Proxy :: Proxy 'Language.Python) pythonParser

evaluateProject proxy parser paths = withOptions defaultOptions $ \ config logger statter ->
evaluateProject' (TaskConfig config logger statter) proxy parser paths
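
-- These helpers take a list of fixture paths. For an ad-hoc run outside of
-- criterion (e.g. from GHCi), an invocation might look like this (hypothetical
-- session, using one of the fixture paths from the benchmarks below):
--
-- > evalPythonProject ["bench/bench-fixtures/python/simple-assignment.py"]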

-- We use criterion's `nfIO` to ensure that all the parts of the result of
-- evaluation are themselves fully evaluated, which relies on the `NFData`
-- instances defined throughout the project. The previous approach, coercing
-- the result to a string with `fmap show`, forced evaluation too, but threw
-- off the memory allocation results a bit.
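--
-- A minimal sketch of how such an `NFData` instance can be derived generically
-- with the deepseq package (hypothetical datatype, not part of this module):
--
-- > {-# LANGUAGE DeriveAnyClass, DeriveGeneric #-}
-- > import Control.DeepSeq (NFData)
-- > import GHC.Generics (Generic)
-- >
-- > data BenchResult = BenchResult { resultName :: String, resultSize :: Int }
-- >   deriving (Generic, NFData)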
pyEval :: FilePath -> Benchmarkable
pyEval p = nfIO . evalPythonProject $ ["bench/bench-fixtures/python/" <> p]

rbEval :: FilePath -> Benchmarkable
rbEval p = nfIO . evalRubyProject $ ["bench/bench-fixtures/ruby/" <> p]

pyCall :: FilePath -> Benchmarkable
pyCall p = nfIO $ callGraphProject pythonParser (Proxy @'Language.Python) defaultOptions ["bench/bench-fixtures/python/" <> p]

rbCall :: FilePath -> Benchmarkable
rbCall p = nfIO $ callGraphProject rubyParser (Proxy @'Language.Ruby) defaultOptions ["bench/bench-fixtures/ruby/" <> p]

main :: IO ()
main = defaultMain
[ bgroup "python" [ bench "assignment" $ pyEval "simple-assignment.py"
, bench "function def" $ pyEval "function-definition.py"
, bench "if + function calls" $ pyEval "if-statement-functions.py"
, bench "call graph" $ pyCall "if-statement-functions.py"
]
, bgroup "ruby" [ bench "assignment" $ rbEval "simple-assignment.rb"
, bench "function def" $ rbEval "function-definition.rb"
, bench "if + function calls" $ rbEval "if-statement-functions.rb"
, bench "call graph" $ rbCall "if-statement-functions.rb"
]
]
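
-- The suite can then be run with `stack bench`; criterion's command-line
-- options can be forwarded to filter benchmarks, e.g. (assuming stack's
-- --benchmark-arguments flag):
--
-- > stack bench --benchmark-arguments '--match glob python/*'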