Create a datatype `ColorMode`, which is stored in `Config`
and `Output`. It is passed to all of the functions that handle color
processing/output.
Currently this only has one possible value, which gives us the default
vty color-handling behavior.
Factor out color drawing code in TerminfoBased.hs
Add datatype for 24-bit colors
Add 24-bit color support
We just write the colors straight to the terminal without using
terminfo.
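Concretely, this path amounts to emitting the standard truecolor SGR
sequences. A rough sketch (the helper name is illustrative, not the one
in the code):
```
import Data.Word (Word8)

-- Standard 24-bit SGR sequences: ESC[38;2;R;G;Bm sets the foreground,
-- ESC[48;2;R;G;Bm the background.
setTrueColorFg :: Word8 -> Word8 -> Word8 -> String
setTrueColorFg r g b =
    concat ["\ESC[38;2;", show r, ";", show g, ";", show b, "m"]
```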
Move all color-handling info to `ColorMode`
- Add terms for 1/8/16/240 colors to `ColorMode`
- Add function `detectColorMode` to detect color mode
This removes all color-handling information outside of the `ColorMode`
datatype, giving us a single source for color-support information.
This emulates the old color-handling code pretty much perfectly, with
one exception: for users with >256 colors listed, we don't do anything
special when outputting `Color240`s. The old behavior was absolutely
baffling to me, and seemed like it might cause future issues, so I just
removed it.
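For reference, the shape `ColorMode` ends up taking is roughly the
following (a sketch; the constructor names are illustrative rather than
the exact ones in the module):
```
import Data.Word (Word8)

-- One constructor per supported palette size, plus full 24-bit output.
-- The Word8 is assumed to record how many palette colors the terminal
-- actually advertises.
data ColorMode
    = NoColor             -- monochrome
    | ColorMode8          -- the 8 ANSI colors
    | ColorMode16         -- 8 ANSI colors plus bright variants
    | ColorMode240 Word8  -- palette terminals reporting up to 256 colors
    | FullColor           -- 24-bit color
    deriving (Eq, Show)
```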
Add functions for creating 24-bit colors
- linearColor is just a polymorphic synonym for `RGBColor`
- srgbColor converts inputs from sRGB to linear before making an `RGBColor`
- color240 is the same as rgbColor, but with a less confusing name
This is all based on the assumption that outputs will be linear, which
is true for gnome-terminal, but I haven't tested it with anything else.
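A rough sketch of the per-channel sRGB-to-linear step that srgbColor is
described as doing (the helper name and channel type are illustrative;
the standard sRGB transfer function is assumed):
```
-- Standard sRGB decoding: small values are divided by 12.92, larger ones
-- follow the ((c + 0.055) / 1.055) ^ 2.4 curve; channels are in [0, 1].
srgbToLinear :: Double -> Double
srgbToLinear c
    | c <= 0.04045 = c / 12.92
    | otherwise    = ((c + 0.055) / 1.055) ** 2.4
```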
This change modifies the Input type's event channel API to produce
InternalEvents, not Events. The new InternalEvent either wraps Event
with the InputEvent constructor (the previous behavior) or indicates
that Vty resumed after handling a signal using the ResumeAfterSignal
constructor. This change avoids the previous use of EvResize with lazy
exception arguments as a sentinel value for ResumeAfterSignal.
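Concretely, the new type is along these lines (a sketch based on the
description above, with `Event` being the existing event type):
```
-- Either a normal input event, or a marker that Vty resumed after
-- handling a signal.
data InternalEvent
    = InputEvent Event
    | ResumeAfterSignal
```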
The current code is `O(n^2)` because it processes the entire input each
time a new chunk arrives. This gets very expensive for large inputs.
This commit changes the code to process each chunk exactly twice: once
to see if it contains the end of the bracketed paste, and another time
to parse the whole paste and return it as an event.
The code can now process more than 1MB/sec. At this point my tests are
probably network bound, so these timings may be measuring my network
throughput rather than the parser. Good enough for me anyway :).
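A sketch of what the first pass over a chunk looks like (helper names
are illustrative; the real code also has to cope with a terminator split
across chunk boundaries):
```
import qualified Data.ByteString.Char8 as BS8

-- The bracketed-paste terminator, ESC [ 2 0 1 ~.
bracketedPasteEnd :: BS8.ByteString
bracketedPasteEnd = BS8.pack "\ESC[201~"

-- First pass over a newly arrived chunk: does it close the paste?
chunkEndsPaste :: BS8.ByteString -> Bool
chunkEndsPaste chunk = bracketedPasteEnd `BS8.isInfixOf` chunk
```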
Timing:
* 1MB: 1s
* 4MB: 3s
* 7MB: 6s
* 10MB: 7s
* 13MB: 10s
* 15MB: 12s
* 20MB: 16s
NOTE: This breaks lots of internal APIs, many of which are actually
exposed as public API. `brick` does not break, but client code that
depends on constants containing escape sequences will need to adapt.
There's really no point in using `[Char]` here. The first thing the code
did was convert all the bytes individually to `Char` and put them into a
linked list. Then, when actually parsing UTF-8, it turned those `Char`s
back into `Word8`s. All the other code is generally concerned with bytes
and had to go out of its way to deal with `[Char]` instead. The code is
now simpler and much faster.
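Roughly, the old shape versus the new one (illustrative only, not the
actual parser code):
```
import           Data.ByteString (ByteString)
import qualified Data.ByteString as BS
import           Data.Char (chr, ord)
import           Data.Word (Word8)

-- Old shape: widen every byte to Char and build a linked list, only to
-- turn each Char back into a Word8 inside the UTF-8 decoder.
oldShape :: ByteString -> [Word8]
oldShape = map (fromIntegral . ord . chr . fromIntegral) . BS.unpack

-- New shape: stay with Word8/ByteString the whole way through.
newShape :: ByteString -> [Word8]
newShape = BS.unpack
```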
Overall this gives another order-of-magnitude speedup on top of the
previous one, totalling roughly a 200x speedup over two commits ago for
the 300KB case (it's less for smaller cases and much more for larger
ones, because we already turned one `O(n^2)` algorithm into `O(n)`).
This does not fix the polynomial time complexity, but at this point we
can comfortably paste a few megabytes into the terminal and process it
reasonably quickly. This is sufficient to support small file uploads via
bracketed paste.
Some timing:
* 100KB: unmeasurable by hand (basically instant)
* 1000KB: 2 seconds
* 1500KB: 3 seconds
* 2000KB: 5 seconds
* 3000KB: 10 seconds
* 4000KB: 17 seconds
Small, unboxable types like Char and Int benefit a lot from UNPACK. For
these, there is also very little reason to use lazy fields, especially in
this codebase, where most computations don't need laziness.
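For example, with a hypothetical record like the one below, GHC stores
the Int and Char payloads inline in the constructor instead of behind
pointers to possibly unevaluated thunks:
```
-- Strict, unpacked fields; the field names are made up for illustration.
data ParsePos = ParsePos
    { ppOffset   :: {-# UNPACK #-} !Int
    , ppLastChar :: {-# UNPACK #-} !Char
    }
```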
The current algorithm has polynomial time complexity. The new algorithm
runs in linear time and is generally much more efficient because it
operates on packed byte strings instead of linked lists of Char.
Some timing:
* 100KB: 1 second
* 200KB: 2.5 seconds
* 300KB: 4 seconds
* 400KB: 7 seconds
* 500KB: 12 seconds
* 600KB: 16 seconds
* 700KB: 22 seconds
As we can see, it's still `O(n^2)` overall, probably because of the
calls to `bracketedPasteFinished`. I'll investigate that next. The
constant factor overall is much lower now:
```
Before: 2.866E-6n^3 - 1.784E-4n^2 + 0.114n - 2.622
After: -1.389E-8n^3 + 5.53E-5n^2 - 1.604E-3n + 0.273
```
This change removes the aforementioned instances because they were
ill-behaved: merging Attr and MaybeDefault values with these instances
could silently lose field values. For example, before this change,
(defAttr `withForeColor` blue) <> (defAttr `withBackColor` green)
would result in just
(defAttr `withBackColor` green)
because the instances were designed to favor the right-hand arguments'
fields even if they had not been explicitly set (a consequence of the
MaybeDefault Semigroup instance). While that behavior was sensible
specifically in the context of Graphics.Vty.Inline, it wasn't a useful
user-facing API and it made for surprising instance behavior. Since
there is actually no good way to handle this in a Semigroup instance for
Attr -- some choices have to be made about how to merge two attributes'
foreground colors, and that won't be much better than what we had -- the
instance was just removed. I suspect that the risk of this impacting
users negatively is very low, given that the instance behavior was not
very useful.
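For illustration, a simplified stand-in for the removed behavior (the
constructors are abbreviated; the point is the unconditional right bias):
```
data MaybeDefault v = Default | KeepCurrent | SetTo v
    deriving (Eq, Show)

-- The removed behavior: the right-hand field wins even when it is
-- Default, so an explicitly set left-hand value gets discarded.
instance Semigroup (MaybeDefault v) where
    _ <> b = b

-- e.g. SetTo "blue" <> Default == Default, which is why the foreground
-- in the example above is lost.
```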